Introducing energy-efficient benefits while defining your network


Posted on: July 11, 2014 by Adam Dolman

Last week I had a very interesting conversation with a colleague about a previous blog post of mine on energy-efficient coding, and I thought it was time to revisit the topic from a slightly different angle: how can we use Software Defined Networking (SDN) to improve energy utilisation in our networks and data centres? Just as we can improve programs by making them as efficient as possible, or a server farm by maximising the use of its resources, we can use SDN to ensure our network is running as efficiently as possible.

In a traditional network each device sits there drawing power, with consumption varying a little as load comes and goes, regardless of whether it is taking an active part in the flow of traffic. A typical top-of-rack switch draws hundreds of watts, while a big modular switch can draw thousands. Half of the network may be consuming power while doing little more than providing redundancy, drawing a sizeable fraction of the power of the active path.

In a software-defined network, however, each device no longer has to act in isolation: the controller has awareness of the bigger picture and can introduce significant power savings. We can even build in an awareness of trends, allowing the network to pre-empt traffic changes, with learning algorithms making it more efficient over time.
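As a rough sketch of what that trend awareness could look like, the snippet below uses a simple exponentially weighted moving average to forecast near-term load and decide whether a standby path should be pre-warmed before demand outgrows the active path. It is plain Python, not any real SDN controller API, and the capacity, threshold and sample figures are purely illustrative assumptions.

```python
# Minimal sketch: forecast near-term load with an exponentially weighted
# moving average (EWMA) and pre-warm a standby path ahead of a predicted peak.
# The capacity, threshold and traffic samples are illustrative assumptions.

ACTIVE_PATH_CAPACITY_GBPS = 10.0
PREWARM_THRESHOLD = 0.8          # pre-warm standby when forecast > 80% of capacity
ALPHA = 0.3                      # EWMA smoothing factor


def ewma_forecast(samples_gbps, alpha=ALPHA):
    """Return a one-step-ahead load forecast from recent utilisation samples."""
    forecast = samples_gbps[0]
    for sample in samples_gbps[1:]:
        forecast = alpha * sample + (1 - alpha) * forecast
    return forecast


def should_prewarm_standby(samples_gbps):
    """Decide whether to wake the standby path ahead of predicted demand."""
    forecast = ewma_forecast(samples_gbps)
    return forecast > PREWARM_THRESHOLD * ACTIVE_PATH_CAPACITY_GBPS


if __name__ == "__main__":
    recent_load = [5.2, 6.1, 7.4, 8.3, 8.9]   # Gbit/s, most recent last
    print("Forecast:", round(ewma_forecast(recent_load), 2), "Gbit/s")
    print("Pre-warm standby path:", should_prewarm_standby(recent_load))
```

A real controller would of course use far richer forecasting, but the principle is the same: act before the traffic arrives rather than after.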

With this, we can use SDN to route traffic in the way that is most efficient for power utilisation, rather than tuned purely for performance. By viewing the network as a whole, we can put some paths into a 'power saving mode', where devices power down large parts of themselves into standby and draw far less power, and ensure the paths that remain active are utilised at the maximum efficiency we can. If load builds too much, we can bring other paths back into use, still choosing them in the way that makes the most efficient use of power.
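To make that concrete, here is a minimal sketch of the kind of decision the controller could take when placing a new flow: among the candidate paths, pick the one that wakes the fewest additional devices, i.e. the lowest incremental power cost. The switch names and wattages are invented for illustration and this is plain Python rather than a real controller interface.

```python
# Minimal sketch of energy-aware path selection: prefer the candidate path
# that adds the least extra power draw, given which switches are already awake.
# Switch names and wattages below are illustrative assumptions.

SWITCH_POWER_WATTS = {
    "tor-1": 250, "tor-2": 250,
    "agg-1": 900, "agg-2": 900,
    "core-1": 2500, "core-2": 2500,
}


def incremental_power(path, awake):
    """Extra watts needed to carry traffic over `path`, counting only the
    switches that would have to be woken from standby."""
    return sum(SWITCH_POWER_WATTS[sw] for sw in path if sw not in awake)


def pick_path(candidate_paths, awake):
    """Choose the candidate path with the lowest incremental power cost and
    return it together with the updated set of awake switches."""
    best = min(candidate_paths, key=lambda p: incremental_power(p, awake))
    return best, awake | set(best)


if __name__ == "__main__":
    awake = {"tor-1", "agg-1", "core-1"}                  # the currently active path
    candidates = [
        ["tor-1", "agg-1", "core-1", "agg-2", "tor-2"],   # reuses the awake devices
        ["tor-1", "agg-2", "core-2", "agg-1", "tor-2"],   # would wake a second core
    ]
    path, awake = pick_path(candidates, awake)
    print("Chosen path:", path)
    print("Awake switches:", sorted(awake))
```

The design choice here is the interesting part: a performance-tuned controller would spread flows across paths, whereas an energy-tuned one deliberately concentrates them until it has to spill over.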

Some of this may need new hardware: efficient standby states that draw far less power than a 'ready' state yet can come back online in sub-second time in the event of a failure or a need to shift traffic, and a control-plane OS running on the most energy-efficient hardware available, while itself being written with energy efficiency in mind.
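As a back-of-the-envelope illustration of what such a standby state could be worth, the small sketch below checks whether a parked device meets a sub-second wake budget and estimates the power it saves while parked. The wattages, wake time and switch name are hypothetical assumptions, not measurements of any real hardware.

```python
# Minimal sketch of a standby-state check: can the parked device be woken
# within a sub-second budget, and how much power does parking it save?
# All figures and names are illustrative assumptions.

from dataclasses import dataclass

WAKE_BUDGET_SECONDS = 1.0        # the sub-second target discussed above


@dataclass
class StandbySwitch:
    name: str
    standby_watts: float         # power drawn while parked in standby
    ready_watts: float           # power drawn when fully active
    wake_seconds: float          # time needed to come back online

    def can_cover_failover(self):
        """True if this device can be woken within the sub-second budget."""
        return self.wake_seconds < WAKE_BUDGET_SECONDS

    def standby_saving_watts(self):
        """Watts saved by parking this device instead of keeping it ready."""
        return self.ready_watts - self.standby_watts


if __name__ == "__main__":
    spare_core = StandbySwitch("core-2", standby_watts=300,
                               ready_watts=2500, wake_seconds=0.4)
    print("Usable for failover:", spare_core.can_cover_failover())
    print("Saving while parked:", spare_core.standby_saving_watts(), "W")
    # Rough annual saving in kWh (8,760 hours in a year)
    print("Annual saving:",
          round(spare_core.standby_saving_watts() * 8760 / 1000), "kWh")
```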

When I look at the data centre of the future, I can see a situation where applications are developed in an energy-efficient manner and balanced across highly efficient server hardware by algorithms that ensure they run together in the most energy-efficient way, with a network moving traffic around as efficiently as possible, resulting in a data centre that uses far less power than today. Indeed, all of this balancing of applications, servers and traffic could sit under an overall control plane that manages the whole piece as a single entity, using learning algorithms to get better over time.



About Adam Dolman

Head of Public Cloud, Atos Global IDM ESO and member of the Scientific Community
Adam Dolman is currently the Head of Public Cloud for Atos Global IDM ESO, a member of the Atos Scientific Community, and an Atos Expert in the Cloud Domain. He is responsible for the engineering and development of Atos' public cloud offerings, including Azure, Azure Stack and AWS, as part of the Atos Orchestrated Hybrid Cloud. Previously he worked for Atos Major Events for seven years, most recently as the Technical Operations Manager for the Rio 2016 Olympic Games, responsible for the architecture, security, deployment and management of the Rio 2016 Games infrastructure. He has worked at Atos since 2005, with particular interests in networking, cloud and digital transformation. Adam holds an MA in Computer Science from the University of Oxford and numerous professional qualifications.
