Regulation and AI
Interestingly, last weekend I watched the Terminator trilogy again. Almost 40 years ago, the idea of Skynet and machines taking control over humanity seemed very futuristic. Personally, I have the feeling that the rapid emergence of generative AI technologies is fuelling our paranoia about AI. It may be this same paranoia that creates such a rush for legislators to regulate AI, and especially generative AI.
Now, does this mean that regulations should anticipate every future implication of a new technology and place limitations on innovation? Or can we imagine that such new regulations could trigger new innovations?
Something I have noticed is that we are seeing more and more conversations about ethics and AI. This may be new for IT technologies, but in the life sciences — consider biotechnology, for example — regulations, and especially ethical limitations on research, were developed in parallel with and in anticipation of technical progress, precisely to avoid such ethical issues.
Without going into too much detail: human cloning, for example, is prohibited by various legislations around the world, and gene therapies are regulated. Other regulations also exist for GMOs (genetically modified organisms). With AI, the technology is more accessible, so monitoring, as well as the enforcement of AI ethics regulations, will be more difficult; yet regulators are eager to anticipate potential evolutions and prevent them, rather than regulating ex post.
With the GDPR and privacy regulations, we had our first experience of how IT technologies can impact our lives, and therefore of the need to regulate in order to protect us as human beings. Now, with AI, the impact can be deeper: it is probably the first time an IT technology is not only about new capabilities and technological advances but can profoundly affect our lives.
Restrictions on innovation come first from GenAI licenses
In software, twenty years ago, open source revolutionized licensing, with intellectual property mechanisms ensuring the openness of the code and the collaboration around it. With GenAI, we are seeing new types of conditions on the usage of the models and their output that touch on ethics.
This clearly demonstrates that the innovators themselves are concerned about their own GenAI innovations. For example, OpenAI limits the use of ChatGPT through its usage policies. Obviously, illegal activities are restricted, but the restrictions go beyond this: for instance, you cannot use the output for the management of critical infrastructure, for automated determination of eligibility for credit or employment… nor for any automated financial, legal or medical advice.
Obviously, all of these licenses and their specific restrictions will evolve over time, and it is likely that some restrictions will be lifted when you use the commercial version of such GenAI, as some of them are driven more by liability considerations.
The AI Act and how it impacts innovation
In June 2023, leaders of major European businesses such as Renault, Siemens and Airbus expressed serious concerns in an open letter about the EU AI Act, especially about how it would affect the ability to innovate in Europe compared with other regions — in particular the United States, which is inclined to adopt a lighter approach. As it stands today, the draft AI Act is likely to impose extra costs on the development and deployment of AI solutions in Europe, with some parts of the legislation still to be clarified (for example, the standards applicable to the datasets used for training). This may affect investment in Europe, which would in turn affect our economic development as well as our technological sovereignty.
The open source community has also voiced concerns, as the AI Act will apply to open source development and will create an unnecessary burden on it. This is very surprising when you consider that such open source developments are likely to be a source of transparency and of fair access when developing large foundation models.
Regulatory sandboxes – are they the solution?
A recent paper published by the OECD explains how governments could leverage regulatory sandboxes. A regulatory sandbox allows innovators to develop innovative solutions under a waiver of certain legal or compliance requirements, so they can test their solutions without the extra burden and costs associated with those requirements.
Obviously, such sandboxes will be temporary and limited in size, but they will provide a unique opportunity for innovators to test their innovations in the market, while policy makers will be able to make more informed decisions. Interestingly, the authors of the OECD paper indicate that Spain has kicked off an EU AI regulatory sandbox pilot program with the objective of testing the future EU AI Act.
In conclusion, let’s hope that the EU AI Act will evolve favourably for innovators over the coming months; the text is likely to become final by the end of the year.
OECD (2023), “Regulatory sandboxes in artificial intelligence”, OECD Digital Economy Papers, No. 356, OECD Publishing, Paris, https://doi.org/10.1787/8f80a0e6-en.
About the author
Yann DIETRICH is Head of Intellectual Property / R&D.
In his role, Yann and his team manage all intellectual property at Eviden (patents, software, trade secrets and trademarks) and are increasingly involved in new forms of intellectual property and legal issues around data and AI. His main priority is to secure all intellectual property developed at Eviden and to leverage it to ensure the freedom to innovate of Eviden, its customers and its partners.
He is also a lecturer at several French universities and is very involved in IP and AI initiatives at the international level.