Is your voice the new DevOps power tool?

Posted on: March 31, 2017 by Darren Ratcliffe

We all know the evolving role of DevOps in an organization: the function removes the barriers, real or artificial, between the IT creative and the IT functionalist, moulding the two into Zen-like harmony. To enable this, new tools have come to the fore to make the vision a reality.

Generally these tools take the form of platforms, and platforms seem to be everywhere these days: platforms with a specific purpose such as ServiceNow and Slack, application platforms such as Cloud Foundry and OpenShift, and container platforms such as Kubernetes and Mesos. The one key element they all have in common is making the people within the DevOps function super-efficient by driving out mundane manual tasks and introducing (continuous) automation by default.

So you're one of these advanced organizations: DevOps is in place, you have streamlined your IT function and made it super-efficient, but you know that sooner or later one of the higher-ups will ask, "So what now?" At face value it may seem that all avenues are exhausted and everything is automated, but now that you are on this journey there will be a continuous focus on improving and differentiating.

One option we have been considering is whether the primary operations commands can move from the fingertips to the voice. Using voice commands, especially ones backed by Natural Language Processing (NLP), to trigger actions and status reporting in a DevOps context may seem a far-fetched proposition. However, it is not uncommon these days to see teams configuring and training bots that integrate with the likes of Slack for a similar purpose, albeit ones that still require fingertips! With the advent of the platform economy has come the increasing maturity of the API, and with it, creating bots has become a relatively straightforward task.

The emergence of 'personal assistants' is interesting: Amazon has created Alexa, Microsoft has Cortana, Apple has Siri and Google has its own Assistant. These are all at various stages of maturity and release, with the primary focus at present being serving an individual. Further, 'service assistants' are starting to emerge as part of a wider service automation and robotics play, but their focus is very much on the end user or consumer.

The next logical evolution, in my opinion, is the emergence of the 'DevOps assistant', which could either complement or replace the DevOps bots, so we decided to conduct an experiment.

For the experiment we chose Amazon's Alexa, as it already includes a published voice service and skills kit. We also elected to use AWS Lambda, as it allowed us to execute only the functions required by the real-time Alexa triggers, rather than having a long-running process (LRP) hanging around, costing money while waiting for a request.

With this in place we built the VIM (Voice Interaction Model) and the skills for Alexa, wrote a few lines (and I mean a few lines) of Node.js to interact with our Compose API, and loaded them into Lambda. This was all incredibly easy to do; it required no deep developer knowledge and in the main was a sequence of configuration steps.

The results: we could ask Alexa for the status of the Compose service (Compose is our multi-cloud application orchestration and management platform), which application blueprints were listed in the catalog, and which application services were currently running, on and across which clouds, and their status; we could also ask Alexa to deploy a new application service. Given the minimal effort involved, the results were pleasing, with Alexa capable of interpreting the instructions and executing the tasks.
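The queries above map onto a voice interaction model: an intent schema plus sample utterances. A sketch of such a schema is shown below; the intent and slot names are assumptions for illustration, not the actual Compose skill definition.

```json
{
  "intents": [
    { "intent": "GetServiceStatus" },
    { "intent": "ListBlueprints" },
    {
      "intent": "DeployService",
      "slots": [
        { "name": "Blueprint", "type": "LIST_OF_BLUEPRINTS" }
      ]
    }
  ]
}
```

Each intent is then paired with sample utterances, for example "GetServiceStatus what is the status of compose" or "DeployService deploy a {Blueprint} application service", so that Alexa can resolve spoken phrases to the Lambda-backed actions.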

As stated, this was an experiment and is not intended for production use anytime soon. It has, however, produced an interesting demo, and one that triggers a lot of interest and thinking about its potential.

The potential as I see it has two main possibilities:

  • It could drive DevOps efficiency further by overlaying tactile keyboard commands with more nuanced conversational instructions
  • Interacting vocally introduces the potential for greater accessibility in performing DevOps roles

This experiment, and others like it to follow, will improve our understanding of this potential as the supporting tool sets evolve and we look to integrate the possibilities offered by more advanced techniques in cognitive computing, machine learning and, eventually, AI.


About Darren Ratcliffe
Distinguished Expert & Cloud Domain lead
Darren Ratcliffe became the Head of technical strategy, Business & Platform Services, Centres of Excellence in July 2016. Previously he was the Cloud CTO of Atos and Canopy. Darren has led the creation of many innovative cloud services. His objective is to support Atos customers on their digital journey. He is passionate about open innovation and helping the enterprise take a fresh look at their business models supported by cloud, digital and emerging technologies.
