Could you believe a virtual humanoid was actually human?


Posted on: November 20, 2017 by Tibor Bosse

As Artificial Intelligence (AI) systems become increasingly sophisticated, academia, entertainment and industry alike are exploring how human behaviour is influenced by smart technology. This concept was turned upside down in Ex Machina, a film that brought complex ethical issues to life when a humanoid named Ava, created by tech guru Nathan, developed her own feelings and intentions – with fatal repercussions.

Thankfully, my own line of research is a little safer (!), and I am specifically focused on the potential of virtual humanoids – human-like ‘characters’ in virtual environments that communicate with humans or with each other using natural signals. Such systems can be used to support people in many different areas of everyday life – from education and healthcare to the training of social skills in different professional environments. Here, I explore how these fascinating ‘beings’ are brought to life in virtual scenarios as well as what needs to be addressed before we will see widespread cultural acceptance of them.

The core components of a virtual humanoid

Research into virtual agents is an interdisciplinary endeavour that combines expertise from social signal processing with programming – connecting various software systems to create the humanoid itself.

These virtual agents are roughly composed of three distinct parts:

  • Input – analysis and processing of incoming information about the human counterpart’s behaviour, including language, facial expressions and gestures
  • Output – generation of behaviour for the virtual human itself, including language, smiling and animations
  • Bridging the gap between input and output – generating a response from the virtual character based on what the human user has said or conveyed with non-verbal signals. This is the most difficult component to get right, as it requires the agent to possess goals and a more human-like inner world, such as motivations and emotions
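To make the three parts concrete, here is a minimal sketch of such an agent loop in Python. All names (`Percept`, `Action`, `VirtualAgent`) and the crude "arousal" number are illustrative assumptions, not part of any real framework – the point is only to show how the "bridge" maps processed input to generated output via an inner state.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """Input: processed observations of the human counterpart."""
    utterance: str
    facial_expression: str  # e.g. "neutral", "angry"
    gesture: str            # e.g. "none", "pointing"

@dataclass
class Action:
    """Output: behaviour for the virtual human to render."""
    utterance: str
    expression: str
    animation: str

class VirtualAgent:
    """The 'bridge': a goal plus a (very) simple emotional inner state."""

    def __init__(self):
        self.goal = "keep the conversation calm"
        self.arousal = 0.0  # crude stand-in for the agent's emotional state

    def deliberate(self, percept: Percept) -> Action:
        # Update the inner world from the percept (input -> inner state).
        if percept.facial_expression == "angry":
            self.arousal = min(1.0, self.arousal + 0.3)
        else:
            self.arousal = max(0.0, self.arousal - 0.1)
        # Choose output behaviour from the goal and inner state.
        if self.arousal > 0.5:
            return Action("I understand this is frustrating.",
                          "concerned", "open_palms")
        return Action("How can I help you?", "smile", "nod")

agent = VirtualAgent()
action = agent.deliberate(Percept("This is unacceptable!", "angry", "pointing"))
```

A real system would replace each piece with heavy machinery – speech and vision pipelines for the input, animation engines for the output, and rich cognitive-affective models for the bridge – but the division of labour stays the same.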

The benefits of virtual humanoids for society

There are already several use cases for humanoids being introduced into society, one of which is in therapy for children with autism. Trials are already in progress, with robots being used to help autistic children explore basic human communication and emotions. Elsewhere, we could soon see virtual systems being used in mental healthcare. While sensitivities remain around ensuring human interaction is provided wherever needed in patient care, some people prefer a more anonymous type of intervention when discussing embarrassing or difficult situations, and may be quicker to share their problems with a virtual agent than with a real doctor.

Virtual humanoids can also be used in training scenarios to develop certain social skills. In 2015, we worked on a trial project with the Dutch public transport company GVB to help tram drivers deal with aggressive passenger behaviour – an issue of significant concern in the Netherlands. The company reports over 500 aggressive incidents (insults, threats, physical violence) against employees every year, so there was a need to better prepare these individuals through dedicated training. Traditional methods involving actors in role play are expensive, so we designed a computer-based training system instead. Users were placed in a virtual scenario that included a dialogue with a character who suddenly began shouting, becoming increasingly angry. The user’s task was to de-escalate the character’s aggressive behaviour by applying the appropriate communication techniques.
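The core of such a training scenario can be sketched as a virtual passenger whose aggression level rises or falls with the trainee's choice of technique. This is an illustrative toy model, not GVB's actual system; the technique names and thresholds are invented for the example.

```python
# De-escalating techniques lower the passenger's aggression;
# escalating ones raise it. (Names are illustrative.)
DEESCALATING = {"acknowledge", "empathise", "offer_solution"}
ESCALATING = {"contradict", "threaten", "ignore"}

class VirtualPassenger:
    def __init__(self):
        self.aggression = 5  # 0 = calm ... 10 = physical violence

    def react(self, technique: str) -> str:
        """Update aggression from the trainee's technique, return behaviour."""
        if technique in DEESCALATING:
            self.aggression = max(0, self.aggression - 2)
        elif technique in ESCALATING:
            self.aggression = min(10, self.aggression + 2)
        if self.aggression >= 8:
            return "shouting and threatening"
        if self.aggression >= 4:
            return "raising their voice"
        return "calming down"

passenger = VirtualPassenger()
passenger.react("acknowledge")   # 5 -> 3: the passenger calms down
passenger.react("contradict")    # 3 -> 5: the passenger flares up again
```

The training value lies in the feedback loop: the character's animated behaviour makes the consequences of each communication choice immediately visible, without the cost of hiring actors for every session.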

Understanding and plausibility

While the potential for virtual humanoids in all walks of life is huge, there are still certain challenges that must be solved. Firstly, we need to address natural language processing and ensure that the virtual agents we develop actually understand what the human user is saying. Of course, we’ve become accustomed to talking to virtual assistants like Siri and Cortana on our smartphones over the past few years, but these interactions tend to be very simple and transactional, with single responses to questions. In reality, conversational dialogue is much more complicated: you must keep track of what the other participant has said and the context in which it was said, which is very difficult for a virtual humanoid to process.
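A tiny sketch shows why history matters: the same utterance can require completely different answers depending on what came before. The class and responses below are invented for illustration – real dialogue systems use far richer representations of context.

```python
class DialogueManager:
    def __init__(self):
        self.history = []  # list of (speaker, utterance) pairs

    def respond(self, utterance: str) -> str:
        self.history.append(("user", utterance))
        reply = self._interpret(utterance)
        self.history.append(("agent", reply))
        return reply

    def _interpret(self, utterance: str) -> str:
        # A pronoun like "it" only makes sense relative to the history:
        # without context, the agent cannot know what "it" refers to.
        if "it" in utterance.split():
            for speaker, past in reversed(self.history[:-1]):
                if speaker == "user" and "ticket" in past:
                    return "Your ticket is valid for one hour."
            return "Sorry, what are you referring to?"
        return "Noted: " + utterance

dm = DialogueManager()
dm.respond("I just bought a ticket.")
dm.respond("How long is it valid?")  # "it" resolved via the history
```

A transactional assistant answers each question in isolation; conversational dialogue needs exactly this kind of memory, scaled up to topics, intentions and emotional tone rather than a single pronoun.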

To be effective, virtual agents must also be believable and realistic, with human users almost forgetting that they are interacting with a machine. This is something that we explored for GVB’s trial project, questioning whether we could include some level of physical threat in the virtual environment to make the training more realistic – although we were testing the boundaries of what was ethically possible here!

Ultimately, we will be more open to virtual humanoids if we can see hard evidence that they are being used to improve society with good effect. Countries such as Japan are making significant inroads in this field, and there will be more widespread cultural acceptance of virtual agents once the benefits are clearly communicated to a global audience.



About Tibor Bosse

Associate Professor in the Behavioural Informatics Group at VU University Amsterdam
Dr Tibor Bosse is an Associate Professor in the Behavioural Informatics Group (previously called Agent Systems Research Group) at VU University Amsterdam. The Behavioural Informatics Group studies the design, implementation and evaluation of intelligent computational systems that analyse, simulate or influence human behaviour. Examples of such systems include smartphone applications to help people adopt a healthier lifestyle, virtual reality-based training environments for security personnel, and social robots to support patients with mental health disorders. Within this context, Tibor’s current line of research focuses on the development of Intelligent Virtual Agents (IVAs), i.e., human-like characters in virtual environments that communicate with humans or with each other using natural human modalities. His work has an emphasis on the use of IVAs for training of social skills such as aggression de-escalation and cultural awareness. His main interest is to enhance both the believability and the effectiveness of IVAs, by endowing them with dynamic computational models of human behaviour, which are rooted in psychological and social theories. Such models enable IVAs to generate human-like behaviour as well as to understand it. Tibor is also the vice-chair of the Benelux Association for Artificial Intelligence and the co-founder of the Amsterdam Applied Gaming Research Community.