Data Centers in the Financial Services Industry: are you ready?
Entering 2020 brings a sensitive new subject to every CIO's agenda: Data Centers.
Having discussed this topic with many CIOs and I&O managers from banks and insurance organizations across Europe in 2019, I strongly feel that many Data Centers (DCs) must radically evolve. It is about reconsidering DC designs, which still follow last century's legacy architectures, and deploying for the future so they can answer new stakes and adjust to a new context.
The focus on environmental considerations is significant, and Data Center footprints are increasingly highlighted in Corporate Social & Environmental strategies and announcements. With a Power Usage Effectiveness (PUE) greater than 1.5, "classic" private Data Centers (which qualified back in the early 2000s as "advanced") now appear as anomalies compared to the giant Data Centers of hyperscale providers, which not only announce a PUE of approximately 1.1 but also claim that their carbon footprint is neutral thanks to renewable energy and financial offsets.
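For readers less familiar with the metric, PUE is simply total facility energy divided by the energy consumed by IT equipment alone. The sketch below illustrates the calculation; the energy figures are purely hypothetical and chosen only to reproduce the 1.5 vs. 1.1 contrast mentioned above.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT equipment energy.

    A PUE of 1.0 would mean every kWh goes to IT equipment; cooling and
    power-distribution overhead push real-world values higher.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures: a "classic" private DC vs. a hyperscale facility
classic_dc = pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000)
hyperscale_dc = pue(total_facility_kwh=1_100_000, it_equipment_kwh=1_000_000)
print(f"classic: {classic_dc:.1f}, hyperscale: {hyperscale_dc:.1f}")
```

The gap may look small in absolute terms, but at data-center scale a drop from 1.5 to 1.1 means roughly 27% less total energy for the same IT load.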
One can also highlight other stakes related to business resilience and continuity, particularly in the Financial Services industry. Since the 2008 crisis, the stability of financial markets has clearly become a fundamental concern. This has led to several new and strengthened regulations, including the designation of numerous "too big to fail" financial institutions. At the same time, banks and insurers are now more dependent than ever on IT, something that becomes widely visible whenever the news reports on a major IT outage or breach. All of this requires us to reconsider the traditional DC architecture (typically, two active-active DCs in the same area) in order to cover the famous "extreme but plausible" scenarios, such as a whole region impacted by a meteorological disaster, a health crisis, a major industrial accident, or social and political unrest.
Remember the architecture and components of the last century Data Centers:
- one or several mainframes,
- a big farm of Unix servers,
- another farm of x86 servers, possibly including a private cloud,
- centralized and shared storage and backup solutions,
- all that interconnected by significant networking devices.
Twenty years later, the mainframes are still there, but with Digital Transformation their footprint has started to shrink. Consolidation strategies have been conducted and selective outsourcing initiatives have been successfully implemented in order to take advantage of shared resources, available skills and pay-per-use models. We can now clearly imagine that ten years from now - which is half the life of a typical new Data Center to be built or renovated - the DC floor space required by mainframes will be significantly reduced.
In the last 20 years, several other evolutions have disrupted IT technology:
- Consolidation strategies for specific workloads (databases, content management…),
- Near-elimination of Unix-based client-server systems,
- Significant growth of private cloud solutions and hyperconverged infrastructures,
- Evolution of backup solutions and networking infrastructure.
However, the main change is definitely the development of public cloud within every IT landscape, even within Financial Services: starting with SaaS solutions (O365, Salesforce…), public cloud is more than ever becoming a standard for workloads in banks and insurance organizations, particularly thanks to hybrid and multi-cloud architectures.
There is no doubt that the need for private Data Centers will be significantly reduced within ten years due to the adoption of public cloud. And indeed, public cloud is certainly the relevant answer to the new stakes described above.
It is now the right time to put this "Data Center subject" on the table. My next blog post will focus precisely on the "Data Center of the future", which should be designed to meet key objectives such as flexibility, dynamic adaptation and automation.