What does the EU AI Act mean for Research?
While increased bureaucratic documentation might make the European AI research landscape less appealing, better documentation will likely result in more thorough and potentially meaningful research.
AI news arrives in waves, but the waves vary greatly in size. Now comes a big one: the European Union has agreed on the AI Act. While its final form has yet to be settled, the AI Act will regulate high-risk AI, social media providers' responsibilities, and various other issues. Many in the AI community have been following these developments closely. But what does the Act imply for the field?
This act is not a response to the recent big-wave AI news about large language models and ChatGPT. Its creation predates those applications by some time, but it is now inextricably linked to them.
ChatGPT-like tools are not exactly new: they have existed in various forms for decades. Those earlier systems, however, were usually available only to an expert audience and were based on much smaller data sets.
The public availability of ChatGpt has sparked public interest (particularly in apocalyptic scenarios) and shown what it means to put AI into the hands of the public. Many in the field were probably more impressed and surprised by the uptake and reaction than by the technology itself (which is impressive).
Most AI researchers support regulation of the field, though not for the reasons one might expect. Few experts expect an AI apocalypse (though those few have a broad platform).
It is unclear that this new technology fundamentally changes the aspects people are concerned about.
Yes, these models can instruct users on how to construct a bomb, and they can be prompted to say something racist. However, both things were already true of, and on, the internet.
While Large Language Models (LLMs) can increase misinformation, it is unclear whether this will result in a qualitative difference or whether it will be different from the human-run troll farms that were previously used to influence US elections and other events.
Researchers' concerns also extend beyond OpenAI CEO Sam Altman's dramatics in front of the US Congress, which many saw as an attempt to halt the race while one company is ahead.
The reason most people in AI support regulation is the same reason they tend not to have smart speakers: they are less wary of the technology than of the people who own it.
The proposed AI Act addresses those concerns, but it also has some implications for the field, its research, its relationship with consumers, and its place in the EU's scientific landscape.
While the proposed legislation makes exceptions for research, it would likely establish technical standards, ethical requirements that must be followed, and more extensive documentation and transparency regulations.
These would also hold researchers accountable and promote more responsible and ethical AI development.
In many fields, this move towards more transparency began some time ago, with data statements, model cards, and ethical considerations becoming part of scientific publications.
Some worry, however, that more stringent regulations may stifle research creativity and innovation while adding bureaucracy.
While increased bureaucratic documentation might make the European AI research landscape less appealing, better documentation will likely result in more thorough and potentially meaningful research.
A Blessing for AI Research
The implicit assumption in many AI discussions is that the EU must catch up to the US and China or risk falling behind. Pessimists believe that more regulation will weaken the EU's position.
Some worry that stringent regulations will discourage tech firms from establishing themselves in the EU, undermining the region's position in the global tech industry.
It is unclear, however, whether that assumption is justified. Like GDPR, the AI Act sends a signal to other political actors by setting a precedent for protecting consumers and encouraging responsible AI.
Because the EU is a desirable market, strong regulation will compel many companies to comply, making it easier for other jurisdictions to enact similar legislation.
Researchers benefit from this situation as well. First, a more regulated environment for AI can increase funding and support for research into how AI systems can adhere to these regulations.
Second, implementing the Act's requirements creates a need for innovative solutions to ensure AI technologies meet its standards, driving the development of new methodologies and fundamental research.
Finally, by prioritizing individuals' rights and privacy, the legislation can increase trust in AI technologies and improve their widespread acceptance and use in society—any scientific field benefits from such broader acceptance.
The main criticism from academics concerns the definition of systemic risk via the number of FLOPs (floating-point operations). A FLOP is a single arithmetic operation, so the FLOP count measures how much computation went into building a model.
Like all AI models, large language models are essentially a long series of arithmetic operations over a large number of variables (also called parameters). An earlier version, GPT-3, had 175 billion parameters. Training it required an enormous number of computations to find the right values for those parameters, and running the model takes billions more for every output.
A higher FLOP count indicates a larger model, and larger models tend to be more capable. Several factors, however, work against this simple metric: researchers are constantly working on shrinking models, that is, achieving the same results with fewer computations. Simultaneously, computing power is becoming increasingly affordable. Many groundbreaking AI achievements that once required supercomputers could now be quickly replicated on a standard laptop.
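To make the metric concrete, here is a back-of-the-envelope sketch in Python. It uses the widely cited heuristic that training costs roughly 6 FLOPs per parameter per token; the parameter and token counts below are illustrative assumptions for a GPT-3-scale model, compared against the 10^25-FLOP threshold the AI Act uses for systemic risk.

```python
def training_flops(parameters: float, tokens: float) -> float:
    """Rough total training compute, using the common heuristic of
    ~6 FLOPs per parameter per training token (an approximation,
    not an official measurement method)."""
    return 6 * parameters * tokens

# Illustrative assumptions: 175 billion parameters, ~300 billion tokens.
flops = training_flops(175e9, 300e9)
print(f"estimated training compute: {flops:.2e} FLOPs")  # ~3.15e23

# The AI Act presumes systemic risk above 1e25 training FLOPs.
print("crosses systemic-risk threshold:", flops > 1e25)  # False
```

Under these assumptions the run lands around 3×10^23 FLOPs, well below the threshold, which illustrates the critics' point: as models shrink and hardware cheapens, a fixed compute cutoff says less and less about capability.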
The EU has left itself a back door by allowing for a redefinition of how many FLOPs define systemic risk. Still, this moving target sets up a constant wild goose chase.
Overall, though, the AI Act promises to be a blessing for AI research in Europe and in general.
IEP@BU does not express opinions of its own. The opinions expressed in this publication are those of the authors. Any errors or omissions are the responsibility of the authors.