"The releasing of ChatGPT will be remembered as one of the most irresponsible acts of the 21st century": a conversation with Daron Acemoglu

MIT economist Daron Acemoglu shares his ideas on AI and automation at the center of the new book he co-authored, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
Number: 2
Year: 2023
Author(s): Stefano Feltri

“You can do some pretty impressive things with AI as a technological platform, but I am not necessarily an optimist because there are also some very negative paths that AI could take as a technology. We have a confluence of factors that make the negative use of this technology much more probable than positive use”, MIT economist Daron Acemoglu argues.


Daron Acemoglu co-authored a book with his MIT colleague Simon Johnson that has sparked a lively debate on the future path of artificial intelligence and automation: Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.

In their book, Acemoglu and Johnson argue that the presumed equivalence between technological progress and broadly shared growth is not consistent with our recent history.

Since the Industrial Revolution, the ultimate impact of technological change has depended on how it affects the workforce. Innovation always leads to higher productivity, but not always to increased shared prosperity: the outcome depends on whether machines complement or substitute for humans.

In this interview, Acemoglu shares his ideas on technology, AI and inequality. 

Daron Acemoglu, what is your top priority in terms of things that can go wrong with AI technology? 

I think that — for most technologies, with some exceptions like nuclear weapons — the most worrying negative use is the case where the production process deploys the technology in a way that increases inequality, weakens workers, and fails to create shared prosperity. 

A second concern is whether the new technology centralizes the control of information in a way that weakens democracy, restricts political participation, and degrades institutions.

The former is mostly associated with automation and monitoring or surveillance of workers.

The latter operates through the control of information and propaganda, but also through dysfunctional forms of communication that may arise from how information is monetized. We see many historical precedents to support our concern that these negative effects are important.

That's why we use the historical record to argue that this is not just a recent phenomenon. Moreover, in both of these cases, digital technologies of the last 40 years have already been used in ways that support these types of automation and information control.  

In that process, we have done two other things that make the future of AI more dangerous.  

One is that we have dismantled a lot of the regulatory framework that countries used to have to shepherd technology towards more socially beneficial goals.

Second, we have allowed the tech industry to monopolize to a concerning extent, and this monopolization will only exacerbate the dangers of new technologies, especially the dangers related to the control of information. 

Do you think that there is any kind of policy that the US and the EU can adopt to minimize those risks?  

Yes, but we don't think that there is an easy solution to these problems. First of all, we need to be clear about our aspirations and the realism of these aspirations.

In our opinion, the aspiration should be a more “pro-human” direction of AI’s evolution, which means using AI to help workers — to increase their productivity, to create new tasks, to give them greater agency and autonomy — and also to empower citizens.

We also want to provide better information and allow for more decentralization, so that we can have a real conversation about what we collectively want from AI.  

The US Congress has been conducting hearings on AI, but the conversation is only between senators and the heads of the largest AI companies. Workers have no say, nor do civil society organizations, journalists, or ethicists. We really need a broader conversation.

The people who are going to be affected the most by AI are workers, and so far they have had no voice in this process. In our book, we also suggest specific policies for changing the market and shaping incentives in the right direction, such as taxes on digital ads.   

Corporations like Alphabet (Google), Microsoft, Facebook, Apple, and Amazon are the largest that humanity has ever seen. Bigness gives them a lot of economic power, and also a lot of social and political power.

That's why quite a bit of the book is devoted to what happens when individuals or groups accumulate disproportionate social power. I think that, together with economic reasons, this concentration of power creates concerns around issues of antitrust and competition. 

Fiscal policy can also play a role in steering the development of AI in more pro-human directions: today, lower taxes on capital than on labor provide a strong incentive for automation.

If I interpret your “pro-human” approach correctly, you mean that if we use AI to support existing jobs, we can make workers more productive without making them unemployed. On the other hand, if we use AI to substitute for existing jobs, we may increase productivity a little, but we will damage social cohesion.

Exactly. But I would go a little bit further than that: making workers more productive in existing jobs is actually not enough.  

If you look at history, the really transformative thing for labor is when technology creates new tasks. It's not just like I make you a better writer, or I make you produce more widgets.

I also expand your creativity or the creativity of others, so that they provide new tasks for you, new technical occupations, and new abilities to provide services.  

We don't know how current occupations are going to change in the next 20 years, but we can create many more new tasks for educators, teachers, nurses, creative artists, manufacturing workers.

Technology, including automation, should not simply make workers more productive; it should also expand the menu of tasks that workers perform.


Have ChatGPT and other AI technologies already changed jobs and tasks?

There are not many examples showing that ChatGPT has transformed business, because it has not really had time to do so yet. But there are prototypical examples of how it could.

If we look at programming, AI is not necessarily creating new tasks, but some AI-driven tools are making mid- to high-level programmers more productive. For example, GitHub Copilot uses generative AI to draft code, which programmers then refine and debug, allowing these experienced coders to perform some tasks much faster.

This is not entirely revolutionary (code repositories on GitHub and online forums for sharing and discussing code already existed), but the new interface is faster and more flexible in some ways, so this implementation of AI increases productivity.

A few companies already use AI in this way. In customer service, for example, some firms are incorporating generative AI tools that give representatives better information for troubleshooting customers’ problems.

You asked ChatGPT how AI is going to impact inequality. What was the answer?  

To be more precise, we asked whether ChatGPT would reverse the trend toward greater inequality that we have observed in many areas around the globe over the last forty-plus years. ChatGPT said that we should probably not expect this to happen.

One of the critical aspects of generative AI is that the human feedback required to properly train these models means that we have to “educate” AI about what is good and what is bad, what is true and what is false.

How can we train AI to understand values if we, as human beings, do not agree on what those shared values should be?  

We cannot understand new technologies without focusing on the visions that drive them. The Industrial Revolution in Britain is inseparable from the contemporaneous rising middle class, replete with industrialists who had big aspirations and a technological focus.  

You cannot understand industrialization at the beginning of the 20th century without understanding Henry Ford's vision of mass production.

Today, AI development is driven by a very specific vision, which we believe to be faulty.

The people at the helm of this technology are incredibly homogeneous.

The vast majority tend to be the so-called “upper-middle class”: rich, white, and male, with a specific sort of libertarian worldview. I think we need a broader set of perspectives. 

This problem has been exacerbated by the arrival of ChatGPT and GPT-4. Who decides how to train large language models? Should ChatGPT be allowed to produce, if asked, Nazi propaganda or to say that Benito Mussolini was the greatest Italian politician of all time? Who decides that?  

Well, it turns out that a bunch of engineers in the basement of OpenAI decide what is true and which limits to impose (or not) on the freedom of expression.  

Some economists have been trying to model the possibility that AI can lead to human extinction. Do you believe that this is a serious issue?  

No. All the talk concerning existential risk and killer robots takes the focus away from much more urgent, admittedly more mundane, problems: excessive automation, inequality, the monopoly over information, and the repression of labor unions. Those are the sorts of topics that we have to worry about.

Sam Altman says that he is ready to cooperate with the US government to regulate AI. I believe that he actually means what he says. But he is also asking to be protected from any newcomers in the industry that could destabilize his market position.

I can't read his mind, of course, but we recognize that he's certainly much more articulate and effective than, say, Mark Zuckerberg [of Meta/Facebook] or many of the other “big tech” founders who have testified in front of US Congress. 

I think that we should be wary of his requests for additional regulation that applies only to new entrants in the market for AI technologies like ChatGPT. OpenAI still has to take responsibility for its own choices and actions, not just whichever companies come next. In 10 years' time, we may look back on what OpenAI did, releasing ChatGPT and GPT-4 to the public in this way, as one of the most irresponsible acts of the first quarter of the 21st century.

What could happen to cause us to see the ChatGPT release as so irresponsible? 

It could dismantle the whole education system, to mention just one possible negative effect. Teachers, schools, and indeed students are completely unprepared for these sorts of tools.

How many children may now rely entirely on ChatGPT to write essays or do homework? How will this affect their learning? How will teachers adjust? Will they end up spending all of their time trying to catch up with students’ use of generative AI, or introducing tweaks into homework assignments and exam questions so that they can stay one step ahead of students using ChatGPT? What happens if students who were already falling behind now learn even less? Who will be held responsible?

I don’t think we have thought enough about these questions. And there are many more when you consider the use of these tools in social media, in advertising, and in labor relations (what happens if managers threaten that they can replace workers with generative AI tools?). 


IEP@BU does not express opinions of its own. The opinions expressed in this publication are those of the authors. Any errors or omissions are the responsibility of the authors.
