Artificial Intelligence: do we need an ethical code?


Posted on: February 28, 2019 by Fabio De Pasquale

2018 was the year of AI. All the big technology companies are expanding their projects dedicated to artificial intelligence, and many of these projects are very ambitious and have an international focus.

AI has demonstrated its full potential in various areas of science and technology. It has been used for gathering and analyzing data in fields such as medical research, military missions, citizens' safety and space missions, as well as in more common and tangible applications like new smart cities and semi- or fully autonomous vehicles.

The challenge

AI also has a negative side: if the power of these systems is badly managed, they can become very dangerous.

And we are not talking about the risk of killer robots roaming our cities, but about more "subtle and just as dangerous" uses of applications that already exist.

It is therefore necessary to highlight the importance of a solid ethical framework around the use of this technology, given the real impact it already has on people's lives. When we talk about ethics, we mean transparency, security, equality, inclusion and privacy; these are the values we try to reflect in our work and plans.

It is important that these tools are safe when used, and that those responsible for developing data-driven systems do so according to ethical criteria. This implies teaching people so that they are able to train the machines. Ethics must be injected into the algorithms, so that a person requesting a mortgage is not discriminated against because of their age, sex or ethnicity. An algorithm must never be biased by race or social status. Nobody learns without input, least of all a machine, so it must be fed with information and data that adhere to these principles.
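As an illustration, a bias check of this kind can start very simply: before a credit-scoring model goes live, compare its approval rates across the groups it must not discriminate against. The sketch below is purely illustrative and assumes a hypothetical list of anonymised past decisions; the field names and the 80% threshold are our own placeholders, not part of the Commission's guidelines.

    from collections import defaultdict

    # Hypothetical, anonymised record of past mortgage decisions.
    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": True},
    ]

    def approval_rates(records):
        """Share of approved applications per protected group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            approved[r["group"]] += r["approved"]
        return {g: approved[g] / totals[g] for g in totals}

    rates = approval_rates(decisions)
    # Rough screen: flag the model when one group's approval rate falls below
    # 80% of the best-served group's rate (the so-called "four-fifths" rule).
    if min(rates.values()) < 0.8 * max(rates.values()):
        print("Potential disparate impact - review the training data:", rates)

A real system would of course run such checks on far richer data and alongside many other tests, but it captures the spirit of the point: bias has to be measured, not assumed away.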

European legislation

In 2018 the European Commission selected 52 experts, its "Human Intelligences", to face the ethical challenge of artificial intelligence, and on December 18, 2018 it published a draft ethical code containing numerous useful indications for the practical application of the fundamental principles of European law to the development of intelligent systems.

The guidelines also include a checklist that can be used at the design stage to measure adherence to the European Commission's ethical recommendations. This is certainly a step forward compared with similar initiatives around the world; however, it still remains too far removed from the actual technological reality.

The guidelines included in this 36-page draft outline two fundamental factors to which artificial intelligence must conform:

  • Ethical purpose: AI must respect, as noted above, human rights and current regulations.
  • Technical robustness: AI must guarantee that, even when used with good intentions, a lack of technological expertise in its management does not cause unintentional damage.

The guidelines, in addition to recommending "robustness and security of systems", focus primarily on the centrality of human beings in their relationship with artificial intelligence: human dignity and freedom must come first, especially when algorithms come into play.

According to the European Commission, these guidelines seek to ensure that European AI, and that of foreign companies offering their services on European soil, demonstrates "responsible competitiveness", while the guidelines themselves do not intend to "stifle innovation".

The text covers both AI's usefulness in protecting the rule of law and its more controversial applications, such as autonomous weaponry and mass surveillance.

Just as Europe has established itself as a global reference in the protection of personal data, forcing large non-EU companies to adopt European legal standards in this field, the European executive hopes to achieve a similar impact in the field of AI ethics.

Conclusion

Human autonomy must always prevail over artificial autonomy; people must therefore be guaranteed the power to supervise the machines, so as to limit the decisions the latter can take.

The "super-system administrator" must therefore remain human.



About Fabio De Pasquale

UX Consultant, Worldline
Graduated in architecture at the University of Rome "La Sapienza", since 2007 I have specialized in web graphic design and, since 2011, in the design of mobile applications. I joined Atos in 2015 as User Experience Consultant at the Worldline Mobile Competence Center in Barcelona, where I'm also the UX & Design deputy team leader. My specializations are user research, accessibility and user experience definition for mobile applications, IoT, wearables, voice assistants and chatbots. I'm part of the Worldline and Atos expert networks at a global level. I have also been a member of the Worldline Juniors' Group, an international, cross-functional network of talents.
