Why software-based control of applications and infrastructure is becoming a necessity

In my previous blog, we looked at how and why infrastructure has been changing. In this installment, we will look at the current state of software-based control and why it is becoming critical to today's enterprises.

A software-based control necessity

To manage the complexity created by huge volumes of managed assets, the increased frequency and speed of change, dynamic configurations, and the sophisticated, descriptive, cloud-agnostic offerings of the future, it should be clear that human-based processes are not enough. All of this needs to be automated and driven by software applications.

As humans, what we must do in this situation is use the power of computers to manage and control that complexity. Once computers come to the rescue of computers, it’s then a matter of directing software intelligence to trigger development activity, make changes, measure deviations, react to problems or correct policy violations.

In fact, as explained earlier, a commoditization bar is rising from the underlying infrastructure to the upper levels of the application stack, with cloud evolving from IaaS services to PaaS, CaaS and serverless functions-as-a-service. Although large organizations have been relatively slow to adopt these new technologies, the ease and efficiency of the latest cloud service offerings is accelerating the pace, as is the realization that pure “lift-and-shift” approaches were not producing optimal results.

Similarly, we know that SaaS remains one of the most successful ways to consume services in the cloud. In fact, we are seeing a SaaSification of independent software vendor (ISV) offerings, which expose APIs instead of user interfaces and enable enterprises to integrate ISV products into their own IT ecosystems.

There is no reason that the commoditization of infrastructure services will stop where it is today. It’s likely that everything-as-a-service offerings will become the norm among cloud service providers, reflecting the demands of the IT market. Application developers and users should be able to deploy applications and services where they make sense, without being concerned with the “how” or being constrained by a particular console or user interface.

If we attempt to extrapolate the future of this evolution, we can envision cloudless computing, where public clouds and the edge provide a true continuum of experience for compute, data and network. It would act as a gigantic virtual computer, running an intelligent operating system (a kind of cross-cloud control plane) able to dispatch, load and execute attribute-based computing assets that developers describe with the required constraints.

The intelligent cross-cloud operating system would be able to make decisions about application component locations and behaviors. Such attributes could be:

- Horizontal / vertical scalability constraints (cost/performance)
- Power consumption capping
- Proximity with a particular data source
- Geo-fencing constraints due to regulations or privacy concerns
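As a thought experiment, such an attribute-based dispatcher could be sketched as follows. This is a minimal, hypothetical Python illustration of the idea only; the names, attributes and scoring logic are assumptions, not a real cross-cloud API:

```python
from dataclasses import dataclass, field

@dataclass
class Placement:
    """A candidate location offered by a cloud or edge provider."""
    name: str
    region: str
    cost_per_hour: float
    max_power_watts: int
    # Measured latency (ms) from this location to each known data source
    latency_ms_to: dict = field(default_factory=dict)

@dataclass
class WorkloadSpec:
    """Attribute-based constraints declared by the developer."""
    allowed_regions: set      # geo-fencing (regulation / privacy)
    power_cap_watts: int      # power consumption capping
    data_source: str          # proximity requirement
    max_latency_ms: int

def choose_placement(spec: WorkloadSpec, candidates: list):
    """Drop locations that violate any hard constraint, then pick the cheapest."""
    eligible = [
        p for p in candidates
        if p.region in spec.allowed_regions
        and p.max_power_watts <= spec.power_cap_watts
        and p.latency_ms_to.get(spec.data_source, float("inf")) <= spec.max_latency_ms
    ]
    return min(eligible, key=lambda p: p.cost_per_hour, default=None)
```

A real cross-cloud operating system would of course weigh many more signals, but the core pattern is the same: the developer declares constraints, and software (not a human) resolves them against the live landscape.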

NOTE: To support that cross-cloud vision, I would hope major cloud providers enter discussions to collaborate with east / west interfaces and agree on standards allowing them to dispatch and balance workloads with each other, or to have common event notification formats. This is essentially what is happening with Kubernetes cluster federation APIs. As time goes by, competitive technologies eventually begin to commoditize and aggregate back into standards.
Case in point: Today, who remembers the various incompatible network protocol stacks from Microsoft, Novell and OSI that competed with TCP/IP in the 1980s?

Software-control sophistication

Even though we now understand the need to use software for the task, it is interesting to think about the nature of the control software. Achieving control over this volume of small, dynamic objects is far beyond the scope and power of the old-fashioned shell-script-based approach, which very quickly becomes too complex and unable to scale. More modern and robust approaches are needed.

More modern and sophisticated software is needed to manage IT assets like servers, storage, networks, containers and functions. With this power comes responsibility: like any controlling software, it must be designed to high quality standards and must be able to withstand the complexity and growth it will face over time.

Software relies on underlying data models; it can only manage things that are described as objects in some form of data entities and relationships, like an ontology definition. For software to come to the rescue and manage assets and applications at scale, it first needs to know what to manage. Therefore, developers need a way to describe their cloud-native applications in terms of the underlying elements that compose them. These lower-level elements are moving further and further away from the usual concepts of databases, servers and middleware, and are becoming more abstract.
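To make the ontology idea concrete, here is a minimal, hypothetical Python sketch (the asset names and kinds are invented for illustration): assets are entities with typed relationships, and the control software can reason over those relationships, for instance to compute which assets are impacted by a change:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """A managed asset in a minimal ontology: a named entity of some kind,
    with "depends_on" relationships to other entities."""
    name: str
    kind: str                     # e.g. "microservice", "volume", "ingress"
    depends_on: list = field(default_factory=list)

def impacted_by(asset: Asset, inventory: list) -> set:
    """Walk the relationships to find every asset that transitively
    depends on `asset` (the blast radius of changing it)."""
    impacted = set()
    frontier = {asset.name}
    while frontier:
        frontier = {a.name for a in inventory
                    if a.name not in impacted
                    and any(dep in frontier for dep in a.depends_on)}
        impacted |= frontier
    return impacted
```

Once assets are modeled this way, questions like “what breaks if this volume goes away?” become queries that software can answer at any scale, instead of tribal knowledge.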

For instance, Kubernetes goes quite far in offering a description model and a set of abstract service objects which practically hide the cloud and infrastructure layer. As part of the application description, a Kubernetes application developer must “claim” the use of (meaning describe the content of) their deployment to their cluster in terms of abstract objects like microservices, pod replicas, persistent storage and ingress traffic flows. Beyond the list of predefined object types, we see a growing list of operators and custom resource definition (CRD)-based extensions which extend that programming model to other types of conceptual objects. Some have not yet been invented, but all are powered by this software-defined-everything layer.
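For illustration, here is the kind of declarative claim a developer makes in a standard Kubernetes Deployment manifest: the desired replica count, the container image and the persistent storage claim are all described as abstract objects, with no mention of any VM, disk or network card (the application name, image and claim name below are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-svc
spec:
  replicas: 3                  # the developer "claims" three pod replicas
  selector:
    matchLabels: {app: orders}
  template:
    metadata:
      labels: {app: orders}
    spec:
      containers:
        - name: orders
          image: example.com/orders:1.4   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/orders
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: orders-data        # abstract persistent storage claim
```

Where those three replicas run, and what the storage physically is, are decisions the platform makes; the developer only describes intent.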

For the engineer in charge of deploying an application, it becomes far more important to master the YAML language and the abstract model of a cloud-native application than to understand what a VM, a disk or a network card is. These application-underpinning objects have little to do with the usual infrastructure elements we once knew. However, they are all codified in text files, so we are just one step away from dynamically generating YAML application manifest files from a higher level of abstraction.
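To make that last point concrete, here is a hypothetical Python sketch of such a generator: a few high-level parameters are expanded into a full Kubernetes Deployment manifest as a plain dict (the function name and parameters are illustrative; serializing the result to YAML is then a one-liner with any YAML library):

```python
def to_deployment_manifest(name: str, image: str, replicas: int) -> dict:
    """Expand a tiny high-level description into a complete
    Kubernetes Deployment manifest, ready to be dumped as YAML."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

This is exactly what higher-level tools (templating engines, operators, internal developer platforms) do at scale: the manifest stops being hand-written and becomes the output of other software.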

So, software now manages software objects in an automated way, covering not just provisioning but also Day 2 lifecycle management actions. Automated provisioning creates a consistent and predictable landscape, and that consistency is what lets us safely apply automated Day 2 management actions. With the help of AI-assisted automation, it is even possible to improve the tolerance of automated management, allowing it to react to differences between environments.
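At the heart of such Day 2 automation sits a reconciliation loop: compare the desired state with the observed state and derive the corrective actions needed to converge them. A minimal, hypothetical Python sketch of that idea (not any particular product's API):

```python
def reconcile(desired: dict, observed: dict) -> list:
    """Compare desired state with observed state and return the Day 2
    actions needed to converge them: create what is missing, update
    what has drifted, delete what should no longer exist."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name))
        elif observed[name] != spec:
            actions.append(("update", name))   # drift detected
    for name in observed:
        if name not in desired:
            actions.append(("delete", name))   # orphaned object
    return actions
```

Run continuously, this loop is what turns a one-off provisioning script into ongoing lifecycle management: any deviation, whatever its cause, is detected and corrected automatically.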

Similarly, we see a trend towards evolving the IT support world from a classic segmentation between development and operations teams to a more integrated DevSecOps model, or even a Site Reliability Engineering (SRE) model. With SRE, IT problems on enterprise-scale applications are solved by writing software that fixes them and even prevents future problems. Even IT support is turning into a job that requires software mastery.

We need the help of computer software to manage the complex landscape of applications and digital assets


Let’s sum up what is happening here. We need the help of computer software to manage the
complex landscape of applications and IT assets because:

- The number of “things” to manage and the number of locations they use have been dramatically increasing
- The speed at which things need to be managed has increased to practically real-time
- Applications are themselves described in manifest text files that describe their underlying needs in sets of abstract software concept objects, not infrastructure
- Soon, cloud-native (Kubernetes-based) applications should be able to use the API of a cross-cloud operating system, which will abstract away the underlying complexity of infrastructure elements and heterogeneous cloud providers
- Applications are deployed by smart software-based orchestrators
- Automation is expected everywhere, from provisioning to Day 2 lifecycle management and even issue remediation

Follow this space to read the next chapter, where I will explore the impact on liability and risk exposure, as well as the required skillsets and techniques to mitigate these risks. Tune in next week!

By Alexis Mermet-Grandfille, Atos Group CTO Strategic Technology Advisor

Posted on: October 1st, 2021


About Alexis Mermet-Grandfille
Atos Group CTO Strategic Technology Advisor Distinguished Expert of the Atos Expert Community
Alexis has a software engineering background with over 30 years of experience in bringing technology and innovation to the business of customers. As an ENSIMAG engineer, he has international experience in product and service-oriented businesses in a global context. After experiences at Network General corp (CA, U.S.A) and 13 years at Hewlett Packard in various architect and management positions in the Network and Global PC Business division, he joined Atos in 2013 where he has held management positions as Director of IT Service Management Development organization, Global Technical Services Architecture and CTO of the Atos/Google Alliance. He is now the Strategic Technical Advisor for the Atos Group CTO. Alexis is a member of Atos Expert Community as a Distinguished Expert.
