Which Governance for AI? The Case for an ad-hoc Authority

The establishment of an ad-hoc authority would not deprive existing authorities of their respective powers but would rather facilitate cross-sector cooperation, as required by the quickly evolving nature of AI.
Number: 22
Year: 2023
Author(s): Marco Bassini

Over recent months, much of the debate concerning the European Union AI Act proposal and its legislative path towards approval has focused on notable substantive issues, such as the definition of appropriate obligations for the actors in the AI value chain.

Discussions around Generative AI gained even more traction over 2023, as the unprecedented rise of foundation models, such as the one underlying ChatGPT, made visible the risk of a disconnect between regulation and the evolution of technology in the original AI Act proposal, which did not address general-purpose AI. Yet while discussions on Generative AI, spanning from law to economics, multiply day by day, an aspect with a remarkable impact has so far been quite neglected: which governance for AI?

One or More Authorities? 

Indeed, the ongoing conversation among the European Union institutions that started when the Commission released its proposal also features a variety of views on which (and how many) authorities, both at EU and at Member State level, should lead the governance of AI.

The AI Act proposal establishes two layers: at EU level, it provides for the establishment of a body or office; at Member State level, it requires either establishing an ad-hoc supervisory authority or designating one or more already existing authorities as competent supervisory authorities.

Whereas the definition of a European body, albeit subject to some divergences in the legislative process, does not raise much debate, the question of what Member States should do has recently gained attention since the Spanish government, ahead of the AI Act's approval, decided to establish a supervisory authority for Artificial Intelligence. It is worth noting that even the versions voted by the Council and the European Parliament diverge on whether there should be one or more authorities tasked with the governance of AI at Member State level (and on whether a new ad-hoc authority or already existing ones should take on this responsibility).

Both options present strengths and weaknesses.

Establishing a new authority with a specific mandate would offer the advantage of members with specific expertise on AI. More importantly, this option would promote a holistic approach to AI and its regulation.

This benefit would derive from the absence of the sectoral approach that distinguishes existing authorities, whose mandates focus, for instance, on the fairness of markets or on the protection of interests with a societal impact (such as privacy).

Such a global perspective would constitute an added value given the peculiarity of the context: a supervisory authority with a technology-centered mandate is indeed unprecedented. However, as I will discuss in more detail below, one could also dispute whether a supervisory authority is needed for a technology as such.

On the other hand, this option could trigger conflicts with the mandates of other authorities, most notably due to the growing convergence between AI and other domains. If significantly empowered, the new supervisory authority would be more likely to exercise competences that, even if they do not conflict with, at least affect or otherwise influence the exercise of powers by other authorities.

A simple example comes from the domain of data protection: should an ad-hoc supervisory authority have the power to enforce data protection rules vis-à-vis AI systems, or should it leave this power to the already existing and experienced data protection authorities? In the former scenario, a risk of inconsistencies could emerge; in the latter, one could wonder which powers the AI authority would concretely exercise.

The Risk of Fragmentation  

The alternative policy option consists in designating one or more already existing authorities. This scenario, too, presents both pros and cons. It would bring the advantage of leveraging the competence and experience of authorities already operating in the relevant fields.

This way, data protection authorities could enforce the GDPR and other norms on personal data also vis-à-vis AI systems (as has already happened in some jurisdictions: e.g., in Italy, with the ChatGPT resolution of the Italian Garante); competition authorities, in turn, could review commercial practices based on the use of AI models; and so on.

Such a sectoral distribution of powers, which is consistent with the current setting of supervisory authorities (as noted, there is as yet no authority dedicated to a specific technology), would deprive an ad-hoc authority of its raison d'être: in a nutshell, every administrative agency would continue to take care of its own business (including, where relevant, AI systems).

Against this background, this model could also raise critical issues: first of all, a unifying view on AI would in fact be missing; in addition, a risk of fragmentation could arise to the extent that certain activities might not be easily assigned to a specific authority.

Having outlined the theoretical advantages and disadvantages of both options, which model is to be preferred? Answering this question requires tackling a preliminary point, namely why supervisory authorities make sense in the first place.

The reason why administrative authorities have spread across Europe over the last three decades lies in the desire to limit political influence over the regulation of areas (predominantly markets) traditionally subject to State control and, more recently, to market liberalization. This trend made the principles of effectiveness and efficiency prevail over volatile political agendas and priorities in matters of a significantly technical nature. Technical regulation therefore prevailed over legislation, as it promised to better accommodate societal interests in the fair functioning of markets.

Against this background, AI supervisory authorities, in cooperation with the European body that will be established in parallel, will play a key role in the application and implementation of the AI Act.

It is, however, unusual for a supervisory authority to be in charge of a technology and its development, as in the case of AI, unless this technology is itself considered a (peculiar) market and regulation aims to govern it in light of its unprecedented societal impact.

From the “Law of the Horse” to the Law of AI  

This point mirrors, to a certain extent, the debate on the need for a branch of regulation named ‘cyberlaw’ at the time of the rise of the Internet. In 1996, former Seventh Circuit Chief Judge Frank Easterbrook famously compared cyberlaw to the ‘law of the horse’, arguing against what he saw as an unnecessary specialized branch of law. The same debate may come up with respect to the rules that will govern AI: do we need specialized rules? Can’t we instead rely on existing notions and categories?

On closer inspection, the point also concerns governance and the debate on whether a specific AI supervisory authority should be established. If the authority, as the AI Act proposal provides, is to handle pivotal technical aspects of the AI market, then the option of appointing an ad-hoc body may be well suited.

Adaptive (and thus quickly evolving) regulation is particularly needed in a fast-growing market such as AI, and this type of rule can be more properly defined by an administrative authority with a technical background.

Also in the cyberlaw debate, Lawrence Lessig challenged Easterbrook’s ‘law of the horse’ metaphor by stressing how the law of cyberspace could unveil a new potential for the role of regulation. Twenty-five years later, we can say that Lessig was right and that Easterbrook, despite making a reasonable claim, probably underestimated the disruptive nature of cyberspace and digital technology in the legal domain.

So an ad-hoc AI authority, too, can have a role and make sense, provided that each of the existing authorities preserves its powers within its respective scope of competence.

It would not make sense to designate one existing authority rather than another, since in the age of digital convergence technology can raise issues across a variety of domains: among others, data protection, competition law, and media. Thus, no single authority can claim a stronger say or greater legitimacy over AI governance than the others.

Against this background, the establishment of an ad-hoc AI supervisory authority in each Member State could constitute an added, indeed indispensable, value: it would not deprive existing authorities of their respective powers but would rather facilitate stronger cross-sector cooperation, increasingly required by the unprecedented, quickly evolving nature of AI.

 

IEP@BU does not express opinions of its own. The opinions expressed in this publication are those of the authors. Any errors or omissions are the responsibility of the authors.
