The change behind the numbers
In my previous blog, I set the scene by exploring how the infrastructure management world is turning into a software engineering business. Today, let's focus on cloud transformation and how infrastructure management is evolving.
Charting the evolution of infrastructure management
Managing an infrastructure business, like many other IT domains, has changed dramatically over time. The evolution can be summed up in the following eras:
- The onsite-centric phase, when IT teams had to be physically on site, dedicated to providing IT support services.
- The dawn of remote infrastructure, when IT support could be provided from remote locations or externalized to third parties (the birth of managed services). We saw the growth of remote access tools in various forms: graphic-oriented remote desktop solutions, command line-based access such as SSH, global file systems and similar technologies.
- The beginning of digitalization and automation, as it became increasingly possible to automate the tasks needed to build, operate and manage the lifecycle of remote assets. This started with a first generation of prescriptive workflow orchestration tools, followed by cloud-era approaches offering descriptive language capabilities and full programmatic API access, such as Chef, Puppet and Ansible.
- The cloud and post-cloud era, during which the infrastructure layers of applications moved into various cloud domains, such as the well-known hyperscaler cloud providers and private cloud offerings.
- Hybrid computing, which emerged as the answer to protect data in sanctuary perimeters for regulatory or safety reasons and to enable local/edge computing for low-latency processing in use cases such as IoT and intelligent cars. It has also helped streamline massive data volumes for real-time processing, such as video analysis, which must be performed locally; post-processing can then be done in the public cloud to benefit from the scalability and elasticity of these environments.
Changing the game
Over time, something slightly more subtle and silent happened. Crucial changes in the volume and volatility of workloads have changed the magnitude of some key numbers, disrupting business and the workforce. Here’s how this affected infrastructure management:
- The volume of IT assets to be managed grew, as these assets evolved from monolithic applications running on a server to hundreds of microservices running on a multitude of servers federated in a cluster.
- The time required to execute changes and/or management actions went from weeks (for a classic change process) to seconds or milliseconds (for a smart cluster reacting to a new infrastructure demand).
- The operating expense (Opex) pricing model and the dynamic nature of cloud assets unlocked new cost-saving tactics for cloud customers. For instance, it became possible to save the cost of entire Hadoop compute clusters by dismissing a complete environment and its unused infrastructure, then recreating it on demand in minutes. Near real-time right-sizing of virtual machines (VMs) or Kubernetes Pod replicas became a reality. Infrastructure costs became pay-as-you-go, charged by the second and adjustable at will.
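To make this concrete, here is a minimal sketch, in Python with the official Kubernetes client, of the kind of on-demand right-sizing described above. The deployment name, namespace and replica counts are illustrative assumptions, and it presumes a reachable cluster with valid kubeconfig credentials.

```python
# A minimal right-sizing sketch (not production code) using the official
# Kubernetes Python client. Deployment name, namespace and replica counts
# are illustrative assumptions.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch a Deployment's replica count so the cluster only runs (and bills) what is needed."""
    config.load_kube_config()                  # use local kubeconfig credentials
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},  # declare the desired replica count
    )

if __name__ == "__main__":
    # Scale a hypothetical "batch-workers" deployment down outside peak hours,
    # then back up on demand, paying only for what actually runs.
    scale_deployment("batch-workers", "analytics", replicas=0)
    scale_deployment("batch-workers", "analytics", replicas=20)
```

Scaling to zero when idle and back up in seconds is precisely the pay-as-you-go behavior that statically provisioned infrastructure could never offer.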
A parallel evolution in the abstraction level of the value-added services provided on the cloud is underway. The progression from standard Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) and serverless capabilities (with containers and function-as-a-service) completely hides the concept of a server from the program.
Despite what the name suggests, serverless deployments (or container-based deployments in Kubernetes clusters) actually mean that applications now engage fleets of servers, grouped in clusters. Similarly, microservices-based applications run hundreds of functions with enormous flexibility in scaling and fault tolerance. Kubernetes cluster federation brought the ability to manage multiple cloud container-orchestrated stacks at scale from a common central console.
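As a small illustration of how far this abstraction goes, below is a minimal function-as-a-service sketch shaped like an AWS Lambda Python handler. The event fields and the greeting logic are assumptions; the point is that the code contains no notion of a server, cluster or scaling policy, even though fleets of servers execute it behind the scenes.

```python
# A minimal function-as-a-service sketch, shaped like an AWS Lambda Python handler.
# The event shape is an illustrative assumption; the platform provisions and
# scales the underlying servers, invoking this function once per event.
import json

def handler(event, context):
    """Handle one event; no server, cluster or scaling logic appears anywhere."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```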
Descriptive technologies such as Terraform and Ansible, to name a few, are now very popular. Their success lies in their descriptive nature, which lets operators express what they need rather than how to achieve it. The operator simply describes the desired infrastructure and application configuration, and an intelligent internal orchestration engine performs actions until reality matches the requested situation. Kubernetes is built on the same descriptive concepts: the developer documents a desired state and lets the Kubernetes control loops constantly evaluate the current cluster conditions against that expectation, automatically triggering the work needed to reach the desired state.
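The following toy Python sketch, which is not real Kubernetes code and whose names and in-memory "cluster" are purely assumptions, illustrates the declarative pattern: the operator only declares a desired state, and a control loop repeatedly reconciles reality towards it.

```python
# Toy illustration of the declarative pattern: declare a desired state,
# then let a control loop converge the current state towards it.
import time

desired_state = {"web": 3, "worker": 5}   # what the operator declares
current_state = {"web": 1}                # what actually runs right now

def reconcile(desired: dict, current: dict) -> None:
    """One pass of the control loop: compute the diff and act on it."""
    for name, want in desired.items():
        have = current.get(name, 0)
        if have != want:
            print(f"{name}: have {have}, want {want} -> adjusting")
            current[name] = want          # stand-in for creating/deleting replicas
    for name in list(current):
        if name not in desired:
            print(f"{name}: not desired -> removing")
            del current[name]

if __name__ == "__main__":
    while current_state != desired_state:
        reconcile(desired_state, current_state)
        time.sleep(1)                     # real controllers also watch for events
    print("current state matches desired state")
```

Real controllers also watch for events and handle failures, but the structure is the same: observe, diff, act, repeat.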
In a nutshell, large enterprises or infrastructure service providers should consider the following factors when deciding on the best course of action:
- Earlier, the number of assets to manage was in the range of hundreds; it is now tens of thousands of assets of many different kinds.
- The order-to-deployment turnaround time for applications and servers has gone from weeks to seconds, with high volatility.
In this ever-changing landscape, descriptive technologies may be the best recourse, but they require a skilled operator who knows how to express a specific end state rather than outline a method to reach it.
Follow me in this series of articles exploring how this transformation is triggering a massive shift toward software engineering technologies. Previous installments are available here:
Is the infrastructure management world turning into a software engineering business?
In my next article, I will explore how applications and infrastructure are evolving into software-controlled objects, a shift that is becoming an unavoidable necessity.
By Alexis Mermet-Grandfille, Atos Group CTO Strategic Technology Advisor
Posted on: September 23, 2021