The latest stage (for now) in the race for regulation is Artificial Intelligence
Well ahead of time, we anticipated that 2023 would be the year of legislation – and compliance. And our expectations have proved more than well-founded, in the financial context as well as in several other areas. The speed and reach of constant technological, social and environmental innovation is forcing, and will keep forcing, the implementation of regulatory frameworks that allow innovation while protecting all stakeholders. In such a context, the need for new regulations and guidelines affects the technology sector most of all. Suffice it to mention the MiCA (Markets in Crypto-Assets) Regulation, concerning the crypto world, or the DORA (Digital Operational Resilience Act) Regulation, both issued in the EU. The EU confirmed its role as a forerunner in this race for regulation with the European Parliament’s approval, on 14 June, of the AI Act, a document with which the European Union intends to “regulate Artificial Intelligence in order to ensure better conditions for the development and use of this innovative technology”, as stated on the European Parliament’s official website.
The origins and goals of the AI Act, the world’s first Artificial Intelligence regulation
Going deeper into the content of the AI Act, it is worth looking at the document on the use of Artificial Intelligence in the EU published by the European Parliament on its official website. The drafting of the world’s first AI regulation has its roots in April 2021, when the European Commission proposed the creation of the first EU regulatory framework on Artificial Intelligence. The intention was to analyze and classify AI systems “according to the risk they pose to users”, with stricter or lighter regulation accordingly. With the AI Act, the Parliament intends to “ensure that Artificial Intelligence systems used in the EU are secure, transparent, traceable, non-discriminatory and environmentally friendly”. The new rules, recently approved by the European Parliament, therefore set obligations for providers and users based on the level of risk of each AI system, which means every system must be analyzed to determine its category.
Discovering the AI Act: the three risk levels of Artificial Intelligence systems and the related obligations
The highest level of risk has been defined as “unacceptable risk” and entails a ban on AI systems that pose such a threat. The types of AI in this category include:
- cognitive behavioral manipulation of people or specific vulnerable groups;
- social classification of people based on behavior, socio-economic status or personal characteristics;
- real-time and remote biometric identification systems, such as facial recognition.
However, rare exceptions will be allowed, such as remote biometric identification systems used to prosecute serious crimes, subject to court authorization. Next comes “high risk”, namely AI systems that adversely affect safety or fundamental rights. These are:
- AI systems used in products subject to the EU General Product Safety Directive (toys, aviation, cars, medical devices and lifts);
- AI systems that fall into eight specific areas (specified on the European Parliament website) and will have to be registered in an EU database.
These include, for instance, AI systems that could be used to influence voters and systems used by social media platforms with over 45 million users. Finally, the third risk band is that of “limited risk” AI systems, for which only “minimum transparency requirements must be met to enable users to make informed decisions”, such as explicitly disclosing any interaction with an AI. Importantly, limited risk systems include Generative AI ones, which we will discuss more specifically in a moment. Now that the Parliament has passed the AI Act and set out its negotiating position in the plenary session of 14 June, the goal is to negotiate and seal an agreement with the EU countries in the Council so that the law can be finalized by the end of 2023.
Generative AI within the AI Act: what does the European Union say about it?
As mentioned above, within the AI Act the European Parliament has placed Generative AI systems – whether for text, such as ChatGPT, or for images, audio or video (including deep fakes) – in the category of limited risk systems. As a result, companies offering Generative AI services will have to comply with straightforward transparency requirements, namely:
- explicitly communicate that the content has been generated by an Artificial Intelligence;
- design the model in such a way as to prevent the generation of illegal content;
- publish summaries of copyrighted data used for training.
Getting to the heart of the matter, it must be said that Generative AI is still largely unexplored territory, on both a regulatory and a technical level. Generative AI models in fact expose users, and companies too, to significant risks when they are trusted too readily. MIT Technology Review agrees on this point – “this technology is not yet ready to be used on a large scale” – as do researchers from Stanford University, Georgetown and OpenAI, with reference to the manipulation of people for propaganda purposes. Nor is there any shortage of examples of the limits of Generative AI models’ factual accuracy, as we write in a post specifically dedicated to the topic. According to an interesting analysis by Reuters journalist Karen Kwok, policymakers are following different lines on AI. While the EU, China and Canada are trying to build a new regulatory architecture, India and the UK have so far spoken out against the need for dedicated AI regulation. The United States, on the other hand, seems to hold an intermediate position, with President Biden’s proposal for an AI Bill of Rights alongside internal discussions in Congress on the possible need for targeted regulations.
Is it possible to reduce the risks of Generative AI? Yes, with RAG: the Aptus.AI Chat use case
Certainly the European Parliament’s approval of the proposed regulation is a big step, as evidenced by the words of the President of the European Parliament, Roberta Metsola, who called it “legislation that will undoubtedly set the global standard for years to come”, as reported on Altalex. But the story of the first Artificial Intelligence law is still to be written, at the European and, even more so, at the global level. Our point of view at Aptus.AI, as always, is to pursue innovation wholeheartedly, but with extreme care, so that the output of our services is reliable and risk-free for end users. Within our RegTech SaaS we have in fact already integrated a Conversational AI service, based on text generation models designed not only to provide real-time information, but above all to provide factually reliable information. This matters even more in fields such as regulatory analysis, where legal information must be both up-to-date and trustworthy in order to be usable. One of the methodologies we use to make our regulatory chat’s answers reliable is Retrieval Augmented Generation (RAG), which we present in detail in a dedicated post. In short, with Aptus.AI’s chat service it is possible to instantly receive Generative AI answers that would otherwise have required hours of work and specific legal expertise.
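To give a concrete idea of how RAG works, here is a minimal sketch in Python of the general pattern – not of Aptus.AI’s actual implementation. Everything in it is a simplifying assumption for illustration: the toy regulatory passages, the word-overlap relevance score and the text generation step left as a placeholder. A production pipeline would typically use dense embeddings, a vector index and a real language model, but the core idea is the same: first retrieve the passages relevant to the question, then ask the model to answer only from those passages.

```python
# Minimal, illustrative RAG sketch. All content below is hypothetical:
# the corpus, the scoring function and the placeholder generation step
# stand in for embeddings, a vector index and a real language model.

from dataclasses import dataclass


@dataclass
class Document:
    source: str  # e.g. the regulation a passage comes from
    text: str


# Toy "knowledge base" of regulatory passages (made up for this example).
CORPUS = [
    Document("AI Act (proposal)",
             "Providers of generative AI must disclose that content was generated by AI."),
    Document("AI Act (proposal)",
             "High-risk AI systems must be registered in an EU database."),
    Document("DORA",
             "Financial entities must manage ICT risk and report major incidents."),
]


def score(query: str, doc: Document) -> int:
    """Naive relevance score: how many query words appear in the passage."""
    query_words = set(query.lower().split())
    return sum(1 for word in doc.text.lower().split() if word in query_words)


def retrieve(query: str, k: int = 2) -> list[Document]:
    """Return the k passages most relevant to the query."""
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]


def build_prompt(query: str, passages: list[Document]) -> str:
    """Ground the model: instruct it to answer only from the retrieved passages."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in passages)
    return (
        "Answer using only the passages below, citing the source in brackets.\n"
        f"Passages:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )


def answer(query: str) -> str:
    passages = retrieve(query)
    prompt = build_prompt(query, passages)
    # Placeholder: in a real system, `prompt` would be sent to a
    # text-generation model; here we just return it for inspection.
    return prompt


if __name__ == "__main__":
    print(answer("What must generative AI providers disclose?"))
```

Even in this toy form, the design choice is visible: because the model is constrained to the retrieved passages, its answers can be traced back to identifiable sources – which is exactly what makes the approach attractive for regulatory use cases, where factual grounding matters more than fluency.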