AI Ethical Concerns

The history behind AI

In 1950, Alan Turing, a British mathematician and scientist, proposed a unique experiment comparing machine intelligence with human intelligence. His experiment, called the imitation game, was designed to determine whether a machine can pretend to be a human being during a short conversation. The initial question was: can machines think?

This test was a kind of threshold which, if crossed, would mean that the machine is equivalent to a human. In 1966, ELIZA became the first program credited with passing the Turing test: it analyzed a text, searched for keywords, then answered its interlocutor in a completely coherent way. The program managed to convince several interlocutors that it was a real person.

However, the imitation game described by Turing only tests the ability of a machine to deceive an examiner, while human intelligence covers a significant number of other facets of emotional intelligence.

Emotional intelligence: a key element to define ethical AI

Many skills contribute to emotional intelligence, such as self-awareness, self-regulation, motivation, empathy, and social skills. Moreover, intelligent behavior is not always in line with our logic. The Turing test evaluates the efficiency of a program, but certainly not human intelligence. Passing the Turing test doesn't mean that your program is intelligent. The problem is far more complex.

Therefore, it is not simply a matter of addressing AI from a technical point of view but from a legal, moral, and — more importantly — ethical point of view. These elements must not be dissociated from each other, because together they allow us to initiate a discussion about the right and wrong uses of technology.

On the other hand, we can define AI in the simplest terms as a machine imitating intelligent human behavior. Thus, when we have ethical concerns about AI, we must consider whether the behavior it is imitating violates our concepts of right and wrong. As long as AI is not capable of processing abstract ethical ideas, we can't hold it accountable for its actions. For that reason, this responsibility rests with humans.

Trustworthy AI: what are the main guidelines?

Arguably, the most popular current initiative to give guidelines on ethical behavior in AI systems was set up by the European Commission. Ethics Guidelines for Trustworthy AI is a document created by a high-level expert group on artificial intelligence. It describes what an AI solution must provide in order to be classified as trustworthy.

First, the solution should be lawful, complying with all applicable laws and regulations. In addition, it should be ethical, ensuring adherence to ethical principles and values. Lastly, it should be robust, both from a technical and a social perspective, since even with good intentions, AI systems can cause unintentional harm.

Atos is integrating best practices in delivering ethical AI, and we also assess an organization's AI concerns from different angles.

Going beyond the technology framework

Ethics by design is a multidisciplinary approach to developing intelligent systems that have ethics as a foundational principle of the entire solution lifecycle. Atos follows an ethics by design framework built on five key principles — tools, methods, governance, regulation, and culture — leveraging the framework laid out by the IEEE for implementation assessment.

As has been demonstrated, getting AI ethics right isn't just a moral responsibility, but a business one as well. Therefore, adhering to its norms is vital and shouldn't be taken for granted.

About the authors

Tomas Pinjušić

Associate Cybersecurity Consulting Group, Atos

Tomas is an Associate Consultant at Atos. As such, he has been working closely with senior cybersecurity consultants and assisting them in their initiatives. The main goal of his activities is to help global companies build a secure ecosystem and go much further in their security aspirations than mere compliance.

He developed the AI Business & Cybersecurity Maturity Assessment offer, which helps companies discover how proficient they are at securing their AI models and utilizing cybersecurity solutions with advanced capabilities. In addition, he has been working on the Partners in the Spotlight webinar initiative and coordinating activities for The Forrester Wave Q3 2021.

Nemanja Krivokapic

Principal Cybersecurity Consulting Group, Atos

Nemanja is a CyS Global Principal Consultant and experienced cybersecurity practitioner with 20 years of professional experience: a committed, proactive, and creative mind in an ever-changing cybersecurity landscape. His focus areas are InfoSec governance and strategy, GRC, management consulting, and project transformation programs. He has successfully managed several engagements and is one of the key contributors to the overall global practice initiatives. He is PMP, CISM, and data protection certified, and is currently finalizing a master's in information security.

Interested in our next publications?
Register for our newsletter and receive a notification when new articles are published.