The Future of the EU Artificial Intelligence Act
On December 9, 2023, after an exhausting three-day, marathon-style trilogue, the European Union co-legislators reached a provisional agreement on the content of the future Artificial Intelligence Act.
As rightly observed, this is an agreement at the level of principles on the solutions to be adopted in the AI Act. Once the text is finalized in all its technical details, the baton will pass to the Member States' representatives, who are expected to endorse it before the co-legislators take the floor again for confirmation and formal adoption.
At the moment there is no text; only statements and press releases are available. This is why it is of the utmost importance to choose carefully which words to use, and which to avoid, in the absence of any certainty from a legal perspective.
In recent weeks, the European Commission and the European Parliament issued, respectively, a Q&A factsheet and a press release sketching out the content of the upcoming piece of legislation covered by the political agreement.
However, in the absence of a legal text, which still has to be elaborated and fine-tuned, it would be premature to attempt a purely legal analysis. The devil lies in the details, and wording matters greatly when it comes to legal provisions: the final content of the AI Act might lead to a reconsideration of the actual reach of the achievements presented by the EU institutions.
It is likewise impossible, at the moment, to predict the impact that the future, still-to-be-drafted piece of legislation will have on industry.
Some insights on the future EU regulation
That said, the agreement reached by the EU co-legislators marks significant progress and highlights at least a couple of points with notable legal implications.
The first point worth noting concerns the nature of the AI Act. In the course of the legislative process, the texts voted by the co-legislators and the one proposed by the Commission reflected a significant shift in the approaches underlying the AI Act.
As scholars have noted, these different versions range from a market-oriented regulation primarily aimed at setting rules on the safety of products and services to a fundamental rights-driven, human-centered piece of legislation.
This shifting paradigm can be perceived, among other things, in two particular components of the regulation: the rules governing biometric identification systems and those on foundation models (which did not even appear in the Commission's proposal).
With respect to the former category of rules, the political agreement reached by the co-legislators outlaws the use of real-time biometric identification in publicly accessible areas for law enforcement purposes.
According to the Q&A issued by the European Commission, however, this prohibition will be subject to a series of exceptions, including law enforcement activities related to sixteen specified crimes, targeted searches for specific victims, and other similar circumstances.
As in the case of the use of AI for post-remote biometric identification, these exceptions will require prior authorization by the competent authority, as well as notification to the market surveillance authority and the data protection authority.
In the course of the legislative process, calls emerged from academia to include a requirement to carry out a fundamental rights impact assessment (FRIA) in the text of the AI Act. According to the European Commission's factsheet, the political agreement only partially accommodates this expectation, as it provides for a requirement to conduct a FRIA for high-risk AI systems, including systems used for biometric identification where permitted.
The political agreement seems to strike a compromise among the different societal needs at issue (law enforcement and privacy, as well as other fundamental rights, but also the freedom to conduct business). An accurate evaluation from a legal perspective, however, will be possible only once the final text is adopted, as the language through which exceptions (and requirements) are carved out might have a significant impact on the actual scope of the rules and prohibitions.
With respect to foundation models, the content of the political agreement is more difficult to capture and will most likely be subject to deeper negotiations when translated into legal terms.
The language of the European Commission's Q&A reflects a further shift, toward the categories of general-purpose AI models and large generative AI models. That said, it remains to be seen to what extent the political agreement resisted the call for mandatory self-regulation advanced by the Italian, German, and French governments.
A political (and legal?) compromise
Based on this preliminary overview of the political agreement announced by the EU co-legislators, the predominant impression is that it marks a compromise designed to accommodate a variety of demands.
Against this background, it is perhaps reasonable to conclude that none of the specific approaches identified by scholars as rationales underpinning the different texts of the AI Act voted by the EU institutions guided the co-legislators in this phase, except for the fear of losing the "global battle" over AI leadership, which gained new impetus after President Biden's recent Executive Order.
Accordingly, in the absence of clear-cut conclusions to be drawn in the legal domain, it is advisable to refrain from celebrating the AI Act as a turning point or a historic achievement, as well as from proclaiming its alleged substantial failure. Let the final legal provisions speak first.
IEP@BU does not express opinions of its own. The opinions expressed in this publication are those of the authors. Any errors or omissions are the responsibility of the authors.