Coding a Chat Bot
Artificial Intelligence and Bots (of many types) have been attracting some major attention in recent months. There is a great deal of hype in the industry about what has already been achieved, the potential, and the possible threats of these technologies. From the exciting prospect of diagnosing diseases and producing medicines tailored to an individual, to the ominous menace of publishing and promoting falsehoods, overthrowing governments, or worse, there is certainly a lot to think about.
We ourselves have jumped on the bandwagon, selecting the topic of AI and Bots for our Atos IT Challenge this year. Now in its 7th iteration, this global engineering competition for university students has grown significantly since its launch. The topic certainly seems to have struck a chord, with over 200 student teams entering the 2018 competition – triple the number seen in previous years.
But what is involved in producing such a solution? In a previous blog, I explained how I constructed an Alexa-based conversational interface one lunchtime. That’s one example of how bots can use AI (more specifically, Machine Learning) algorithms to interpret requests and provide appropriate responses. To keep me on my technical toes I decided to revisit the topic and see what else I could do. After all, we are asking students to undertake this challenge, so I’d better be sure I know how to do it too.
Instead of taking the voice route this time, I decided to take a look at an interactive chat bot. We are increasingly deploying these types of solutions for customer interaction. You may have experienced something like this yourself with an online help desk or a financial institution, for instance.
In our product portfolio we have a WebRTC-based collaboration tool called Circuit. You can use it for text chat, voice calls, video, screen-sharing and many other forms of collaboration. We use it extensively internally within Atos, as well as offering it to our clients. There’s a real hidden gem in Circuit: the API.
Using the API you can programmatically create and interact with collaborations: enhancing a discussion with additional information, automatically responding to questions, or querying other systems and feeding the results back into the conversation. Really, the sky’s the limit.
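As a rough sketch of what this looks like in practice – the client options, event name and `addTextItem` call follow the published circuit-sdk examples, while the sandbox domain, environment variable names and the `buildReply` helper are placeholders of my own:

```javascript
// Sketch of a minimal Circuit bot using the circuit-sdk npm package.
// Credentials are read from the environment; real values come from
// the Circuit sandbox application registration.

// A trivial reply builder so the wiring has something to say.
function buildReply(text) {
  return `You said: ${text}`;
}

async function main() {
  const Circuit = require('circuit-sdk');
  const client = new Circuit.Client({
    client_id: process.env.CIRCUIT_CLIENT_ID,
    client_secret: process.env.CIRCUIT_CLIENT_SECRET,
    domain: 'circuitsandbox.net'
  });

  await client.logon();

  // Fires whenever a new item (e.g. a text post) appears in a
  // conversation the bot participates in.
  client.addEventListener('itemAdded', async evt => {
    const item = evt.item;
    if (item.type === 'TEXT' && item.text) {
      await client.addTextItem(item.convId, buildReply(item.text.content));
    }
  });
}

// Only connect when credentials are actually configured.
if (process.env.CIRCUIT_CLIENT_ID) {
  main().catch(console.error);
}
```

With that little wiring in place, the bot echoes back whatever is posted; everything more interesting is a matter of what you do inside the event listener.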
So I decided to create the CTO Bot.
I happen to have recently acquired an ultrabook – a very small and lightweight laptop – with the intention of using it whilst travelling, instead of lugging around a larger and heavier machine. This pristine (Windows 10) device would be the ideal blank canvas on which to record what I needed to do to get my Bot up and running. I decided to write the Bot in node.js, a JavaScript runtime very much in vogue for such activities at the moment, so I downloaded and installed a number of pieces of software to get me going:
- Visual Studio Code – a free code editor that has built-in support (and plugins) for all sorts of languages and environments
- node.js together with npm, the node package manager
- Python, just in case it is needed for some ancillary scripts
- MongoDB – I anticipated storing Bot commands and responses here
- Git – for source code version control
- The Circuit SDK – the API library which enables you to interact with Circuit
Having applied for and received access to the Circuit sandbox environment, and credentials to use for my application, I was able to begin my journey. Some of my Circuit developer colleagues have already published examples and articles which were very useful to get me started.
To cut a long story short, I was able to use these examples to create a working CTO Bot. I learned a bunch of really interesting things along the way, including the use of bunyan for formatted application logging, which is really nice.
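For anyone who hasn’t met it, bunyan emits structured JSON log lines, with extra fields carried alongside each message – a minimal sketch (the logger name is my own; the fallback is just so the snippet also runs where bunyan isn’t installed):

```javascript
// bunyan writes one JSON object per log line; each call can attach
// structured fields alongside the message. Fall back to the console
// if the package is not installed.
let log;
try {
  const bunyan = require('bunyan');
  log = bunyan.createLogger({ name: 'cto-bot', level: 'info' });
} catch (e) {
  log = { info: console.log, warn: console.warn, error: console.error };
}

log.info({ convId: 'abc123' }, 'command received');
```

During development, piping the raw JSON output through the `bunyan` command-line tool pretty-prints it for human reading.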
So I have a working Bot. It automatically responds to commands issued by users in any discussion that the Bot is participating in. I define a command simply by prefixing it with an exclamation mark and the Bot recognizes this. One example command is “!commands”, which lists the available commands!
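The command handling itself can be as simple as a lookup table keyed on the word after the exclamation mark. A minimal sketch – the commands beyond `!commands` are made up for illustration, not what the real CTO Bot supports:

```javascript
// Minimal "!" command dispatcher, written as a plain function so it
// can be wired into any chat event handler. Command names other than
// "commands" are illustrative placeholders.
const commands = {
  commands: () => 'Available commands: ' +
    Object.keys(commands).map(c => '!' + c).join(', '),
  ping: () => 'pong',
  time: () => new Date().toISOString()
};

// Returns a reply string for a recognized command, or null for
// ordinary chat text the bot should ignore.
function dispatch(text) {
  if (!text || !text.startsWith('!')) return null;
  const name = text.slice(1).trim().split(/\s+/)[0].toLowerCase();
  const handler = commands[name];
  return handler ? handler() : `Unknown command: !${name}`;
}
```

So `dispatch('hello')` yields `null` (the bot stays quiet), while `dispatch('!commands')` lists every registered command; adding a new command is just another entry in the table.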
Moving into Production
So, what next? The Bot runs on my laptop in a test environment, but that laptop will not be turned on all the time, of course. I need to productionize it. There are several options: in my previous explorations for Alexa we used Amazon’s Lambda serverless compute solution. This time I decided to deploy the node.js application to Azure.
Setting up the Azure environment is fairly straightforward, though somewhat involved as everything needs to be linked together. You set up a resource group in the region of the world you want to run the application in, then define the web application using an “App Service Plan” which sets the resources and scalability desired. I also created a Cosmos database with the MongoDB API. I then defined a local git repository in Azure which permitted me to set it up as a remote repo on my test machine, and I could push code to it for deployment. This is a simplified description as there are various database and deployment credentials you need to set up too, but this is all quite well documented with examples provided by Microsoft.
Getting the deployment to work and application to launch required some additional magic in the node configuration files (package.json, and .deployment).
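For package.json, the pieces Azure’s deployment engine looks for are roughly these – the entry-point name `app.js` and the node version here are assumptions from a typical setup, not necessarily what the real Bot uses:

```json
{
  "name": "cto-bot",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js"
  },
  "engines": {
    "node": "8.x"
  }
}
```

The `start` script tells Azure how to launch the application, and `engines` pins the node version the platform should provision; the exact contents of the `.deployment` file depend on your project layout.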
I repeated the Azure configuration a couple of times to get my naming conventions aligned, but again once you know what you are doing this is very quick (using either the command line on an Azure Console or the Dashboard). Enhancing the Bot with MongoDB functionality is still a work in progress, but the connection is there.
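The MongoDB side mostly comes down to having the right Cosmos-style connection string. A sketch of how the Bot might write to the database – the host and port convention follows Cosmos DB’s MongoDB API of the time, while the account name, key, database and collection names are all placeholders:

```javascript
// Builds a Cosmos DB (MongoDB API) connection string. The
// documents.azure.com host, port 10255 and ssl=true follow the
// Cosmos Mongo API convention; account and key are placeholders.
function cosmosMongoUri(account, key) {
  return `mongodb://${account}:${encodeURIComponent(key)}` +
         `@${account}.documents.azure.com:10255/?ssl=true`;
}

// Records a command/response pair, e.g. for later analysis.
// Database and collection names are illustrative.
async function saveResponse(command, reply) {
  const { MongoClient } = require('mongodb');
  const client = await MongoClient.connect(
    cosmosMongoUri(process.env.COSMOS_ACCOUNT, process.env.COSMOS_KEY));
  await client.db('ctobot').collection('responses')
    .insertOne({ command, reply, at: new Date() });
  await client.close();
}
```

Note the `encodeURIComponent` on the key: Cosmos account keys are base64 and can contain characters that would otherwise break the URI.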
Finally, that small low-power laptop is more than ample for lightweight development like this.
I haven’t delved into the AI aspects with this particular project, but have demonstrated that getting a simple Bot functioning is a rapid activity these days, even if you’re starting from scratch. I look forward to seeing how the students do … and the team I am coaching in particular!