The Global Race to Regulate AI: Biden’s Executive Order Spillover Effects on the EU AI Act

The EU AI Act reflects a risk-based approach. The Biden Administration’s Executive Order promotes standards and best practices to unleash the full potential of AI
Number: 44
Year: 2023
Author(s): Marco Bassini


On October 30, 2023, the Biden Administration issued Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which marks a key development in the global race to regulate AI. The Executive Order articulates a vast array of commitments across a variety of sectors likely to be affected by the large-scale deployment of AI systems. After the adoption of the Blueprint for an AI Bill of Rights in October 2022, the US Administration seems to have adopted a more pragmatic approach, despite the predominantly programmatic nature of the Executive Order, which nevertheless embodies a clear agenda-setting ambition.

Despite the difficulties reported in recent weeks in the ongoing negotiations on the AI Act proposal, whose very adoption may now be called into question, the Executive Order provides an occasion for a more in-depth comparison between EU trends and the US approach to AI regulation.

Several of the first comments on Executive Order 14110 have focused on the dichotomy between the so-called “Brussels effect,” well captured in Anu Bradford’s 2020 book (The Brussels Effect: How the European Union Rules the World), and a possible, yet still undemonstrated, “Washington effect” in the field of AI regulation.

This perspective is partially misleading, as it fails to grasp the actual added value of the US Administration’s recent stance.

The Executive Order marks an important call to action, but its practical results are for the moment limited to the level of principles, with no remarkable legal impact beyond a few provisions.

It certainly attests to the US government’s efforts to address the challenges posed by the spread of artificial intelligence systems, efforts that had already taken shape in the Blueprint for an AI Bill of Rights; but making accurate legal predictions about its practical impact in other jurisdictions would be premature.

One result, however, can already be reported: considerable, albeit indirect, pressure on EU lawmakers, who are now facing a deadlock in the negotiations over the AI Act. It may be no coincidence that last week Italy, France, and Germany came to an agreement calling, among other things, for a softening of the rules applicable to foundation models. Might this move have been driven by pressure caused by the Executive Order?

The legal status of the AI Executive Order

Executive orders are presidential directives that do not rely on a specific constitutional provision. They result from the exercise of the executive power vested in the President, who enjoys broad enforcement authority under Article II of the Constitution.

However, if not backed by legislative measures, they are unlikely to have a significant impact. This is the main reason why comparing the Biden-Harris Executive Order with the AI Act proposal currently under discussion by the EU institutions is an interesting exercise, yet not sufficient to understand the actual implications of this landmark initiative.

The Executive Order mirrors a predominantly political ambition: to put the US at the forefront of the race to AI. The order breaks down a variety of principles and lays down some definitions, then defines a set of goals that will require further legislative action.

The Executive Order’s mandates cover a variety of areas: safety and security, innovation and competition, equity and civil rights, workers’ rights, the protection of vulnerable parties (students, patients, passengers, and consumers), privacy, and the promotion of the use of AI by the federal government. It also aims to strengthen American leadership abroad.

The Executive Order, thus, intends to pave the way for the adoption of measures that would significantly impact different key sectors, establishing the necessary safeguards to protect individual and societal interests vis-à-vis the spread of AI systems.

It aims to create the conditions for the US to lead the AI revolution by exploiting the full potential of the technology without undermining human rights and freedoms.


Comparing US and EU Efforts

Despite its lack of directly prescriptive rules and its limited enforceability as it stands, the Executive Order draws sufficiently broad guidelines and may well have a more comprehensive and far-reaching impact than the future AI Act from a social and economic perspective.

The European AI Act primarily aims to categorize AI systems into different classes of risk, which reflects the essence of the risk-based approach. It adheres to a regulatory method that focuses on the use of technology rather than on the technology per se. This methodology, however, may also lead to undesirable consequences, given its limited flexibility combined with the unprecedented pace of development of AI systems.

The rise of generative AI, for example, has caused debate among EU lawmakers on the proper legal regime to be designed for this technology.

As general-purpose AI is likely to have different downstream applications, each one having a different level of risk, commentators have pointed out that the risk-based approach should be revisited and proposed the adoption of an ad-hoc risk category for generative AI systems.

The US Executive Order takes a different approach, one that reflects its legal nature. Aimed at promoting standards and best practices, including in the context of regulated sectors, and based on extensive stakeholder consultation, it seeks to accommodate the desire of industry (but also of the federal government) to unleash the full potential of AI.

Consistent with the US understanding of the role of innovation at the intersection with fundamental rights, this goal is pursued not, as in the EU, by laying down a comprehensive and detailed set of obligations, but rather by setting binding guidelines and resorting to regulation only residually (for instance, with respect to generative AI).

This will not, in any case, prevent the sectoral authorities empowered by the Executive Order from setting more prescriptive and detailed rules and enforcing them, similarly to what could happen in the EU, where the design of a proper governance framework for AI is still debated (as I highlighted in a previous post). What can be predicted, however, is that in the US a “holistic view” of AI regulation will most likely be (at least) partially missing, given the number of authorities called to action by the Executive Order.

The US approach in any case marks a significant departure from the “regulatory anxiety” of EU lawmakers vis-à-vis disruptive technology, and follows the general skepticism in US legal culture about the role of regulation, most notably when it comes to emerging technologies and possible interferences with human rights.

Moreover, the scope of application of the Executive Order extends well beyond the AI domain alone. While the EU focuses mostly on the establishment and functioning of the internal market and on the implications of AI systems for data protection, the Executive Order takes the regulation of AI as an occasion for an American form of “gold-plating” that, if realized, will lead to substantial reforms in the social and productive spheres. In EU law, gold-plating describes the controversial practice by which Member States extend the scope of EU directives (which set objectives for Member States to fulfill but leave them room to determine the appropriate measures) when transposing them into their domestic legal systems.

In the context of the Executive Order, it is likely that US governmental agencies, and perhaps even Congress, will do something similar, taking the regulation of AI as the occasion for profound reforms mirroring the variety of economic and social sectors affected by the AI revolution.

The cross-sector approach adopted by the Executive Order in fact parallels the many diverse industries and areas where AI is expected to have an impact.

Also, despite the Executive Order’s aim to foster American leadership in the race to artificial intelligence, it does not by itself produce any “Washington effect”: unlike the AI Act proposal, the Executive Order does not target non-American entities per se, and its focus lies entirely on American companies and the federal government; however, it calls for a strong international framework to govern the development and use of AI. It goes without saying, of course, that the upcoming developments in the EU legal framework, most notably the possible changes regarding the legal status of foundation models, will also have a significant impact on US-based companies.

Room for Convergence

Having highlighted the main differences between the Executive Order and the AI Act proposal, it is worth noting that some convergences can be seen as well.

The most interesting intersection definitely concerns privacy and data protection, which also appears as the most challenging aspect as far as the US is concerned.

The Executive Order focuses on privacy both in the principles listed in Sec. 2 and in the specific Sec. 9.

On the one hand, it acknowledges that AI “is making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires,” leading to an increasing risk of the exploitation and exposure of personal data.

The Executive Order therefore makes clear the Federal Government’s commitment to ensuring that “the collection, use, and retention of data is lawful, is secure, and mitigates privacy and confidentiality risks.” Interestingly enough, it encourages the use of privacy-enhancing technologies among the available technical tools, making visible its reliance on the power of technology as a modality of regulation.

On the other hand, Sec. 9 elaborates this commitment in greater detail, reiterating the acknowledgment that AI can facilitate the collection or use of personal information, or the making of inferences about individuals.

It is difficult not to see, along these lines, an implicit but emphatic call on Congress to pass data protection legislation.

As some privacy scholars have noted, such legislation would significantly help put into practice the mandates established under the Executive Order (including mandates other than those under Sec. 9).

However, it seems unlikely that a goal as ambitious as comprehensive data privacy legislation can be achieved at this moment in history.

Moreover, the California Consumer Privacy Act has planted the seeds for a flourishing of state statutes aimed at empowering citizens against the processing of their data, most notably by businesses. Given this wave, it is unlikely that Congress will act on a matter it has traditionally been reluctant to address.

This is perhaps a key sector where EU law could actually exert an influence and produce its “Brussels effect”: not the AI Act proposal, but rather the General Data Protection Regulation, which already applies to US-based organizations to the extent that they process the data of European residents when offering them products or services, or when monitoring their behavior within the EU.



Political Pressure in Brussels

The Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is a clear warning that the Administration does not underestimate the key role of regulation in the global race to AI.

Although it is an ambitious and comprehensive call to action that goes well beyond the AI domain alone, it remains to be seen to what extent the great expectations it has raised will actually be fulfilled.

This will largely depend on Congress’s ability to translate into practice the commitments and mandates set out by the Administration.

As noted above, however, the considerable political pressure deriving from the Executive Order can already be sensed these weeks in Brussels, in a phase characterized by tension among the EU institutions, with a shadow of uncertainty surrounding the future of the AI Act proposal.

So, rather than a proper Washington effect comparable in legal terms to the Brussels effect, we can expect the first practical consequences of the Executive Order to be purely political in nature.

IEP@BU does not express opinions of its own. The opinions expressed in this publication are those of the authors. Any errors or omissions are the responsibility of the authors.
