The Software-Defined Data Center


Posted on: May 28, 2014 by Guy Lidbetter

The Software-Defined Data Center is expected to transform how IT infrastructure is deployed and managed. I am delighted to share the insights of Joe Baguley, EMEA CTO for Atos global partner VMware.

The Software-Defined Data Center (or SDDC) represents possibly the most fundamental shift in this era of IT. In the simplest terms, SDDC refers to an IT facility in which all layers of the stack (storage, computing and networking) are virtualized, ultimately enabling them to be delivered as an agile, on-demand service. For businesses and public sector organisations the technology is an integral stepping stone to becoming a Software-Defined Enterprise, and it’s no overstatement to say that this will profoundly change the way organisations acquire and consume their IT infrastructure.

I’m proud to say that VMware has been a key player in its development (we’re even credited with coining the term back in 2012) and since then we’ve seen the value of the market continue to rise – a recent report by MarketsandMarkets forecast that by 2018 the industry would be worth over $5.41bn.

How did we get here?

In their earliest incarnation, data centers – or mainframes, as they were more commonly known at the time – were largely inefficient and expensive beasts, requiring teams of engineers to manage cooling and maintenance, and their use was mainly limited to the military and government.

However, with the rise of commercial computing in the 1970s and 80s, and as organisations started to take control of their own IT, the reliance on mainframes gave way to inexpensive low-end servers, which proliferated within organisations. At the time, each server typically ran a single application at 15-20 per cent of its capacity[1]. Unsurprisingly, concerns over space and power quickly became a factor.

It wasn’t until the advent of x86 virtualization some fifteen years ago that the way we thought about how the data center could function changed forever. By abstracting the compute layer from the hardware, businesses were able to run multiple workloads on a single server, saving space, energy and manpower.

IT departments began adopting a virtual-first attitude en masse, with the uptake of virtualization growing steadily until, by 2011, more than half of server workloads in data centers were virtualized.

But as we know, IT never sits still, and although it took almost a decade and a half to virtualize the other layers of the data center, we have now succeeded. Compute, storage, networking and security resources can now be implemented in software, pooled for efficient allocation and managed centrally in an integrated and automated manner. This on-demand consumption approach delivers even greater savings across the data center, with resources used as efficiently as possible.

On a purely infrastructure level, SDDC underpins the journey to IT-as-a-Service and offers the ideal building blocks for creating any kind of cloud environment (whether hybrid, public or private). It’s also important to understand that SDDC is more than simply abstracting hardware: it creates a single toolkit that allows entire data centers to be controlled from a single pane of glass.
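To make that idea a little more concrete, here is a minimal Python sketch of what requesting pooled compute, storage and network capacity through a single software control point might look like. The SddcController class, WorkloadSpec fields and method names are purely illustrative assumptions, not VMware’s actual API.

```python
from dataclasses import dataclass

# Hypothetical, simplified model of a software-defined resource request.
# Names (SddcController, provision) are illustrative only, not a real API.

@dataclass
class WorkloadSpec:
    name: str
    vcpus: int          # compute, drawn from a shared pool
    memory_gb: int
    storage_gb: int     # storage carved out of a virtual datastore
    network: str        # logical network, defined entirely in software

class SddcController:
    """Single control point: all three layers are provisioned as software."""
    def __init__(self):
        self.inventory = []

    def provision(self, spec: WorkloadSpec) -> str:
        # In a real SDDC the controller would place the workload on pooled
        # hardware and program storage and network policy automatically.
        self.inventory.append(spec)
        return (f"{spec.name}: {spec.vcpus} vCPU / {spec.memory_gb} GB RAM / "
                f"{spec.storage_gb} GB on '{spec.network}' provisioned")

if __name__ == "__main__":
    controller = SddcController()
    print(controller.provision(
        WorkloadSpec("web-tier", vcpus=4, memory_gb=16,
                     storage_gb=200, network="dmz-logical-net")))
```

The point of the sketch is simply that every layer is expressed as data handed to one controller, which is what makes single-pane-of-glass management possible.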

The Business Case: old problems need new solutions

The adoption of SDDC cannot come soon enough. Once again, scale is becoming an uphill struggle: moving forward, it will not be uncommon for a single IT employee to manage 10,000 servers. This, coupled with the emergence of the mobile cloud era, has created a need to virtualize the entire data center, so that all infrastructure services become as inexpensive and easy to provision and manage as virtual machines.

Data has become the lifeblood of business, and organisations across the world are generating it at an unprecedented rate. While this offers huge opportunities to tap into new insights and expand into new regions, it also raises worrying questions around data center resourcing, capacity and space. Once again, virtualization provides the solution: with an SDDC, organisations can maximise their data center’s efficiency and ensure resources are being used properly.

Agility is also a big issue for businesses: the market has become significantly faster, and organisations need to respond quickly to potential revenue opportunities or risk losing ground to competitors. Under the traditional model, many businesses had to wait months to spin up new production environments: although virtualized servers could be spun up in minutes, provisioning the hardware and the supporting networking and storage took much longer. With the SDDC, months become minutes, and businesses can launch applications with a few clicks and jump on every market gap as soon as it emerges.
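As a purely illustrative sketch of why months become minutes (again using hypothetical names rather than any real VMware or Atos API), an entire environment can be described as data and stood up by a short run of API calls against the software-defined control plane, with no manual hardware steps in between.

```python
# Hypothetical illustration: a production environment described as data and
# provisioned by software. The provision() call stands in for a request to
# an assumed SDDC control plane; it is not a real vendor API.
import time

ENVIRONMENT = [
    # (workload name, vCPUs, RAM GB, disk GB, logical network)
    ("app-server-1",   8, 32,  500, "prod-app-net"),
    ("app-server-2",   8, 32,  500, "prod-app-net"),
    ("db-primary",    16, 64, 2000, "prod-db-net"),
    ("load-balancer",  2,  4,   20, "prod-edge-net"),
]

def provision(name, vcpus, ram_gb, disk_gb, network):
    # Stand-in for a call that allocates from pooled compute, storage and
    # network resources and applies policy automatically.
    return f"{name} ready: {vcpus} vCPU, {ram_gb} GB RAM, {disk_gb} GB, {network}"

if __name__ == "__main__":
    start = time.time()
    for spec in ENVIRONMENT:
        print(provision(*spec))
    print(f"Environment described and requested in {time.time() - start:.2f}s "
          "of API calls rather than a hardware procurement cycle.")
```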

Some businesses are already taking advantage of such advances in data center technology, including a major outdoor sports clothing manufacturer that can now deliver resources to business users in minutes rather than weeks, while saving hundreds of thousands of dollars through reduced energy consumption and increased efficiency. Yet many businesses still need to explore this future data center model.

Retiring the ‘Museum of IT’

Back in 2012, when the Software-Defined Data Center was first being discussed, VMware CEO Pat Gelsinger referred to the industry’s approach to data centers at the time as ‘a museum of IT’, with huge mainframes and legacy hardware still in place. As pressure grows on CIOs to drive revenues within the business, the need for flexible and robust infrastructure will continue to rise up the agenda. The market demand is already there, and over the next year (and beyond) I believe we will see a rapid transformation in the way businesses provision infrastructure.

________________

[1] According to Tony Iams, Senior Analyst at D.H. Brown Associates Inc. in Port Chester, NY.



About Guy Lidbetter

Chief Technology Officer, Infrastructure & Data Management. Atos Fellow and member of the Scientific Community
With over 30 years of experience in the IT services industry, as CTO for Atos Infrastructure & Data Management, Guy is responsible for setting technical and innovation strategy across the IT infrastructure stack in both cloud and non-cloud delivery models. He is also responsible for senior-level relationships with technology leaders of strategic partners. Previously, he held numerous technical and management positions in Sema Group, SchlumbergerSema and Atos Origin. In 2017 Guy was appointed an Atos Fellow; he is also a founder member of the Atos Scientific Community, most recently sitting on the Editorial Board for the latest Ascent magazine, “Imagining our Quantum Future”. He has a passion for sport, particularly Chelsea Football Club, baseball’s Atlanta Braves, rugby union and cricket. He also walks and cycles, and his more leisurely pursuits include photography, reading, music and attempting cryptic crosswords with varying degrees of success.
