Alexa, deploy the month-end workload
Alexa seems to be all the rage at the moment. Amazon’s interactive personal virtual assistant is a cloud-based service which uses devices like the “Echo” and “Echo Dot” to listen for commands and react accordingly. In the home you talk to Alexa to play music, listen to the news, buy things (from Amazon of course) and perform many more activities.
It’s a very well executed package, though not without its challenges. There was a somewhat famous incident at the beginning of this year when a TV broadcast triggered Amazon Echo devices in the US to place orders for a dolls’ house. I experienced something similar myself when BBC correspondent Rory Cellan-Jones turned my heating on at home during a technology news programme.
Amazon have made available a software development kit so that organisations (and indeed anyone) can provide services that can be accessed by Alexa. So when I said “many more activities” I really meant it – ordering pizza, seeing what’s on at the cinema, ordering a taxi (Uber) – and the list is growing every day.
How hard can it be to do this? Well, it turns out, not very hard at all. I set up my own little Alexa service one lunchtime in less than an hour. I already have an Amazon Web Services (AWS) account. What CTO wouldn’t? You can run one for free as long as you stay within the resource limits of the free tier. Then you can deploy some sample code in Lambda (part of AWS), an event-driven, serverless computing platform that runs code in response to events and automatically manages the compute resources. The code in this case is just a function that receives some input parameters and gives a random text response, which Alexa reads out.
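The function itself needs only a few lines. Here is a minimal sketch in Python (the fact list and function names are my own invention; the JSON envelope is the standard shape the Alexa service expects back from a custom skill):

```python
import random

# A few canned facts for Alexa to read out
FACTS = [
    "A bolt of lightning is five times hotter than the surface of the sun.",
    "Honey never spoils.",
    "Octopuses have three hearts.",
]

def build_response(speech_text):
    """Wrap plain text in the JSON envelope the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context):
    """Entry point invoked by Lambda for each Alexa request."""
    return build_response(random.choice(FACTS))
```

Lambda calls `lambda_handler` for each request; everything else (servers, scaling, patching) is Amazon’s problem.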
Once this is set up you can create a “Skill” in the Alexa developer portal, defining the voice commands that Alexa should respond to, and then hook this up to the function in Lambda, and that’s it. “Alexa, open Mike’s App and tell me a fact.”
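On the Lambda side, the voice commands you define in the Skill arrive as typed requests, and the handler simply dispatches on them. A sketch, assuming an intent I have named `GetFactIntent` (the `LaunchRequest` and `IntentRequest` types are the standard Alexa request types):

```python
import random

FACTS = ["Honey never spoils.", "Octopuses have three hearts."]

def speak(text):
    """Plain-text Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context):
    """Route each Alexa request to the right response."""
    request = event["request"]
    if request["type"] == "LaunchRequest":  # "Alexa, open Mike's App"
        return speak("Welcome. Ask me for a fact.")
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "GetFactIntent":
        return speak(random.choice(FACTS))  # "... tell me a fact"
    return speak("Sorry, I didn't catch that.")
```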
Of course instead of simply responding with a random piece of text the Lambda function can instead access other APIs and services. This is where the power of Alexa grows. Atos is currently working on a number of API integrations together with our customers to provide services through the Alexa experience.
We have a proof of concept that we have constructed to deploy workloads on Atos’s Canopy Cloud infrastructure using voice commands. This has been made possible by integrating Alexa, through the Lambda function, with the Canopy Compose API.
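The integration follows the same pattern as the toy skill: instead of returning a canned string, the handler reads a slot value from the voice request and calls out to a REST endpoint. A sketch under assumptions – the URL, payload shape and `Workload` slot name below are invented for illustration, not the real Canopy Compose API:

```python
import json
import urllib.request

# Hypothetical endpoint for illustration only
COMPOSE_API_URL = "https://compose.example.com/v1/deployments"

def build_deployment_request(workload_name):
    """Build the JSON payload for a (hypothetical) deployment call."""
    return {"workload": workload_name, "action": "deploy"}

def handle_deploy_intent(event, context):
    """Lambda entry point: extract the workload slot and call the API."""
    slots = event["request"]["intent"]["slots"]
    workload = slots["Workload"]["value"]  # e.g. "month end"
    payload = build_deployment_request(workload)
    req = urllib.request.Request(
        COMPOSE_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # fire the deployment call
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"Deploying the {workload} workload now.",
            },
            "shouldEndSession": True,
        },
    }
```

Authentication, error handling and confirmation prompts are deliberately omitted; in a real deployment flow you would want Alexa to confirm before anything is provisioned.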
Using voice for control is one thing, but what I think is more interesting is that Alexa (and solutions like it) is becoming a trusted platform for the provision of multiple services to users. A single, simple, natural interface point through which the end user interacts with many service providers … meaning that they no longer need to go to separate websites or even apps to make use of those services. The natural language interfaces of Siri (Apple) and Cortana (Microsoft) provide similar access to information sources, but I think two things make Alexa particularly interesting: a) the Echo devices sitting around the home (and possibly, in future, the workplace), always waiting for your command (no pushing buttons), and b) the easy extensibility of Alexa, which is making available so many more interesting interactive services.
So the likes of Siri and Alexa are now starting to own the customer interface, and the world of service-specific apps is being relegated to a series of API interfaces. This model is something we are also predicting will come to the business-to-business world: trusted industrial data platforms where information and services will be shared and consumed. You can find more about this in our Journey 2020 publication.
“Alexa, put the kettle on.”