The change behind numbers

In my previous blog, I set the scene exploring how the infrastructure management world is turning into a software engineering business. Let’s focus today on cloud transformation and how infrastructure management is evolving.

Charting the evolution of infrastructure management

Managing an infrastructure business, like many other IT domains, has changed dramatically over time. Its evolution can be summed up in the following eras:

  • The onsite-centric phase, when IT teams had to be onsite, dedicated to providing IT support services.
  • The dawn of remote infrastructure, where IT support could be provided from remote locations or externalized to third parties (the birth of managed services). We saw the growth of remote access tools in various forms: graphical remote desktop solutions, command-line access over SSH, global file systems and the like.
  • The beginning of digitalization and automation, as it became increasingly possible to automate tasks to build, operate and manage the lifecycle of remote assets. This was the first generation of prescriptive workflow orchestration tools, followed by cloud and modern approaches with descriptive language capabilities and full API programmatic access like Chef, Puppet and Ansible.
  • The cloud and post cloud era, during which infrastructure layers of applications were located in various cloud domains, like the famous hyperscaler cloud providers and private cloud offerings.
  • Hybrid computing emerged as the answer for protecting data within sanctuary perimeters for regulatory or safety reasons, and for enabling local/edge computing for low-latency processing in use cases such as IoT and connected cars. It has also helped streamline massive data volumes for real-time processing, such as video analysis, which must be performed locally. Some post-processing can then be done in the public cloud to benefit from the scalability and elasticity of these environments.

Changing the game

Over time, something slightly more subtle and silent happened. Crucial changes in the volume and volatility of workloads have changed the magnitude of some key numbers, disrupting business and the workforce. Here’s how this affected infrastructure management:

  • The volume of IT assets to be managed grew, as these assets evolved from monolithic applications running on a server to hundreds of microservices running on a multitude of servers federated in a cluster.
  • The time required to execute changes and/or management actions shrank from weeks (for a classic change process) to seconds or milliseconds (for a smart cluster reacting to a new infrastructure demand).

The operating expense (Opex) pricing model and the dynamic nature of cloud assets unlocked new cost-saving tactics for cloud services customers. For instance, it became possible to eliminate the cost of entire Hadoop compute clusters by tearing down an environment and its unused infrastructure, then recreating it on demand in minutes. Near real-time right-sizing of virtual machines (VMs) or Kubernetes Pod replicas became a reality. Infrastructure costs became pay-as-you-go, charged by the second and adjustable at will.
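This right-sizing pattern can be sketched with a Kubernetes HorizontalPodAutoscaler. The fragment below is an illustrative sketch, not drawn from any particular deployment, and all resource names are hypothetical:

```yaml
# Sketch of near real-time right-sizing with a Kubernetes
# HorizontalPodAutoscaler: the replica count follows actual CPU demand,
# so pay-as-you-go cost tracks the real load. Names are hypothetical.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend          # hypothetical Deployment to right-size
  minReplicas: 1                # scale down to a minimum when idle
  maxReplicas: 20               # cap the spend at peak load
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # add/remove replicas around 70% CPU
```

With a manifest like this, capacity (and therefore cost) is adjusted automatically within the stated bounds, rather than through a weeks-long change process.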


A parallel evolution in the abstraction level of value-added cloud services is underway. The progression from standard Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) or serverless capabilities (with containers and function-as-a-service) completely hides the concept of a server from the program.

Contrary to what its name suggests, serverless deployment (or container-based deployment in Kubernetes clusters) means applications now engage fleets of servers grouped in clusters. Similarly, microservices-based applications run hundreds of functions with enormous flexibility in scaling and fault tolerance. Kubernetes cluster federation brought the ability to manage multiple cloud container-orchestrated stacks at scale from a common central console.

Descriptive technologies like Terraform and Ansible, to name a few, are now very popular. Their success lies in their declarative nature, which lets an operator express what is needed rather than how to achieve it. The operator simply describes the desired infrastructure and application configuration, and an intelligent internal orchestration engine performs actions until reality matches the requested state. Kubernetes is built on these descriptive concepts, too: the developer documents a desired state, and the Kubernetes admission controllers and control loops constantly evaluate the current cluster conditions against expectations, automatically triggering the work needed to reach the desired state.
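As a minimal sketch of this declarative model, a Kubernetes Deployment manifest states only the desired end state; the control loop then works to make the cluster match it. The names and container image below are hypothetical:

```yaml
# Desired state: three replicas of a (hypothetical) web service.
# The operator never says *how* to create or replace Pods;
# the Kubernetes control loop reconciles reality against this spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend                          # hypothetical name
spec:
  replicas: 3                                 # a desired state, not a procedure
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0   # hypothetical image
```

If a node fails and a replica disappears, the controller detects the drift from `replicas: 3` and schedules a replacement automatically; no operator-written procedure is involved.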

In a nutshell, large enterprises or infrastructure service providers should consider the following factors when deciding on the best course of action:

  • Earlier, the number of assets to manage was in the range of hundreds; it is now tens of thousands of assets of different kinds.
  • Application and server order-to-deployment turnaround time has gone from weeks to seconds, with high volatility.

In this ever-changing landscape, descriptive technologies may be the best recourse, but they require a skilled operator who knows how to express a specific end state rather than outline a method to reach it.

 

Follow me in this series of articles exploring how this transformation is triggering a massive shift toward software engineering technologies. Previous installments are available here:
Is the infrastructure management world turning into a software engineering business?

In my next article, I will explore how applications and infrastructure are evolving into software-controlled objects, which are becoming an unavoidable necessity.



About Alexis Mermet-Grandfille
Atos Group CTO Strategic Technology Advisor, Distinguished Expert of the Atos Expert Community
Alexis has a software engineering background with over 30 years of experience in bringing technology and innovation to customers' businesses. An ENSIMAG engineer, he has international experience in product- and service-oriented businesses in a global context. After roles at Network General Corp. (CA, USA) and 13 years at Hewlett Packard in various architect and management positions in the Network and Global PC Business division, he joined Atos in 2013, where he has held management positions including Director of the IT Service Management Development organization, Global Technical Services Architecture, and CTO of the Atos/Google Alliance. He is now the Strategic Technical Advisor for the Atos Group CTO. Alexis is a member of the Atos Expert Community as a Distinguished Expert.
