In this bonus episode of the Me, Myself, and AI podcast, Nobel Prize-winning economist Daron Acemoglu joins host Sam Ransbotham to challenge some of the most common assumptions about artificial intelligence’s future. Drawing on his book Power and Progress, Daron argues that technology doesn’t have a fixed destiny — and that today’s choices will determine whether AI boosts workers or simply accelerates automation and inequality. He makes a case for focusing on new tasks that complement human skills, rather than replacing them, and warns that current incentives push AI toward centralization and automation by default. The conversation tackles productivity myths, reliability risks, and why regulation should proactively steer AI toward social good.
Subscribe to Me, Myself, and AI on Apple Podcasts or Spotify.
Transcript
Allison Ryder: Hi, everyone. We’re back with a bonus episode, profiling another thought leader in the technology research space. MIT institute professor Daron Acemoglu is a Nobel Prize-winning economist and the author of Power and Progress. He joins Sam today for a conversation spanning technology advancements, limitations, and regulation. We’re back on March 10 with more new episodes. For now, we hope you enjoy this conversation.
Daron Acemoglu: I’m Daron Acemoglu, institute professor at MIT, and you are listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast from MIT Sloan Management Review exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at MIT SMR since 2014, with research articles, annual industry reports, case studies, and now 12 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.
Hi, listeners. Thanks again to everyone for joining us. I’m excited to be talking with Daron Acemoglu, professor of economics at MIT. Daron works extensively on economic development, labor economics, and the economics of technology. In 2024, he was awarded the Nobel Prize in economics for this work. His insights on the interplay between institutions, technology change, and inequality are particularly relevant for businesses today. Of course, our listeners will be most interested in Daron’s thoughts on AI. Daron, [it’s] great to have you on the podcast.
Daron Acemoglu: My pleasure. Thanks, Sam.
Sam Ransbotham: Your work spans institutions, technology, and equality. Can you share some of the themes in general from your past research?
Daron Acemoglu: I got into economics because I was fascinated by what I saw around me in my very young teen years: very divergent economic, political, and social outcomes across countries, and huge disparities in terms of wealth and in terms of poverty. Those interests have framed my research and my focus on the institutional factors [that] determine the effects of history; the effects of how society is organized; the rules, the laws, the norms; and on technology as the prime channel via which human ingenuity and human decisions impact economic productivity and economic well-being.
Throughout, I have been fascinated by the interplay between institutions and technology and by how institutional factors and technological factors have evolved over time. So a lot of my research has focused on, for example, why there has been a huge divergence in economic fortunes of different parts of the world since the 16th century or thereabouts. It is very much related to, for example, the fact that European powers colonized the rest of the world and shaped the institutional trajectories of very different nations around the world in very diverse ways.
I’ve also been fascinated by the industrial revolution and how we started this process of using knowledge, science, and various skills in improving the way that we can actually start producing goods and services.
Sam Ransbotham: That’s all really salient for what’s going on right now. You have a recent book, Power and Progress. I think I was reading the preface of a revised edition, where you noted that things sort of shifted underfoot on you. How have recent developments changed some of your thinking?
Daron Acemoglu: I think two things are worth noting there. The main thesis of Power and Progress is that technology does, to some extent, what we want it to do. It does not have a preordained destiny that will take us in one direction or another. We have a lot of agency, a lot of choice in shaping the future of technology, and different futures correspond to different winners and losers, different benefits, different costs, different productivities.
We tried to make that point by going into history, showing how critical periods during our recent history, like the last 1,000 years, have led to sometimes big technological breakthroughs but with huge losers, and sometimes those forces have been reversed, and gains from technological betterment have been shared more equitably. That message, I think, is more relevant today than ever. AI is a particularly versatile technology. It provides so many different futures for us.
The narrative that there is a determined natural future of AI, and we are all going there whether we want it or not — and, ultimately, we’re all going to become incredibly more prosperous out of that — is just simplistic. Fighting against that narrative, I think, is very important today because that narrative lulls us into a sense of helplessness and a sense of complacency that could be quite costly. On the other hand, of course, in 2021, 2022, when we were writing, it was impossible to foresee how rapid some of the advances in generative AI would be. But those advances haven’t really changed the basic trade-offs and the basic messages that we wanted to convey in the book.
I talked at a high level about different directions of AI. What are they? Simplifying, you have a couple of poles that are pulling in different directions. In the production process, I would single out automation, which is the dream of most AI developers today, especially under the banner of artificial general intelligence (AGI), which aims for large language models or other generative AI tools to reach levels of capability comparable to the best workers across a very wide range of domains. The reason that is viewed as attractive is that, just like previous rounds of software that improved cognition in different domains, it can then be used for automating tasks. So AGI is very tightly interwoven with the automation agenda. Automation is great. It gets rid of some routine tasks, some boring tasks.
When it’s applied in the physical domain, such as with cranes or robots, it could remove the most dangerous tasks from the human work schedule, but automation also doesn’t benefit workers by itself. It takes away tasks from workers. It is beneficial to capital and capital owners and not so much for workers in general.
So at the other pole, we have things that are complementary to humans, meaning that technology enables humans to do more things or better things or completely new things. These new things [are] what I refer to as new tasks. So if you look at people around you, many of the occupations you’ll see involve things that could not even be imagined 50 or 60 years ago. As a journalist, you’re going to be making videocasts and podcasts and [using] technologies for research that require completely different skills than somebody 60 years ago going to the library and sifting through books. Those are some aspects of new tasks. So are many of the physical occupations in manufacturing that involve much more technical work. Those have generally been very good for productivity and for worker wages and employment.
That’s one dimension in which the future of technology could have very different effects depending on whether we go [in] the automation or the new task direction. I would also like to add, whether we use technology for information centralization or decentralization is also important in that many of the early hopes about computers were centered on decentralization. People could [do things] in their garages that IBM as a centralized organization couldn’t do. Personal computers enabled that to some extent, not anywhere comparable to the hopes of pioneers of computing in the ’60s and the ’70s.
But today, we are going in the opposite direction. Large language models are information centralization tools. They collect all of the information. They aim to collect all of the information of humanity ultimately, and then centralize that and process that in a centralized manner that then gives you answers. So there’s less for the decentralized human mind and human participation to do.
Centralization and automation are two different poles, but they are complementary. When I’m talking about new tasks, it is really about enabling the technology to go in a direction that can really help workers, help individuals, not just big corporations. So it’s going back to those aspirations that were already present in the late 1960s and 1970s. My work shows how new tasks, when they have been activated, have led to productivity gains and have led to wage gains and employment gains.
Sam Ransbotham: If we think about these new tasks, though, what kinds of things should businesses be looking for? If people buy this argument, and then they want to go down this path, what do they need to do?
Daron Acemoglu: Actually, I think it’s disarmingly simple. AI is really an information technology, a very powerful information tech. It’s not an automation technology. AI is not thinking anywhere like the human brain. Instead, it has some truly impressive capabilities that the human brain doesn’t have, and it lacks some of the judgmental and creativity-related capabilities that the human brain naturally has.
As an information technology, what AI is very good at is sifting through gargantuan data sets and [finding] relevant context and information for some specific task or specific context or specific application. So if you’re an electrician and you encounter equipment that is behaving in a way that you haven’t seen before, or completely new equipment that you don’t have experience with, and if you have the right AI tool, that can immediately and reliably give you information about why that sort of unexpected behavior is occurring, or what … things you need to know about this equipment and how it interacts with the particular type of electricity grid or the environment that it is situated in.
Those are the kinds of things that regular electricians would have to work for decades to acquire, and even then imperfectly. So we can significantly improve what electricians, what nurses, what educators, what journalists, what academics could do, using AI to perform more sophisticated tasks or new tasks and acquire much better information. I think that while generative AI, together with the right sort of scaffolding from good old-fashioned AI that does pattern recognition, could provide that kind of ideal tool for new human tasks, that’s not the direction in which AI is being developed. In fact, none of the big companies are pouring even a small fraction of their investment into developing AI as a pro-human, pro-worker tool.
Sam Ransbotham: Let’s connect these last two points a little bit. These are, as you say, being developed by big companies. When I think about the electrician and your scenario, wouldn’t they naturally get recommended solutions that come from, let’s say, advertising models that are built into the large language model?
Daron Acemoglu: Right. So right now, today, as an electrician, you can take ChatGPT with you and you can ask questions, but there are several problems with that.
First of all, it has not been designed or optimized for that task. Second, it’s not reliable, so a much higher degree of reliability is necessary. Third, it has not been trained on the domain-specific information about all of the relevant electrical equipment, and [it does not have the] deep understanding of the laws of electricity and electronics that would be necessary. And most importantly, it has also not been trained on use cases of the best electricians dealing with similar problems, from which AI could learn. So it is not designed for that task, and it hasn’t been trained with high-quality, domain-specific data. All of those restrict your ability to use ChatGPT or similar tools, and that’s the reason why, whenever employers are given a push toward using them, the first thing they want to do is just use them for automation, because that seems to be the path of least resistance.
Sam Ransbotham: To think about that a little bit more, there’s nothing that says we couldn’t train those models on those domain-specific knowledge bases. Maybe it’s just early days, and that could come out. I think that’s plausible, but I’m not sure if [there are] economic incentives for people to do that.
Daron Acemoglu: The economic incentives are not there, because this is not the business model of the leading corporations. That data doesn’t exist, and it won’t exist unless we have property rights in data and proper data markets. The current architecture of large language models may also create hard limits on reliability, and in situations like this, reliability could be a very important constraint.
For example, imagine we do this with nurses, and one in a thousand times the tool gives them the complete opposite of what they should do, and you poison the patient. One in a thousand seems very small, but, actually, in medical applications, that will be an unacceptably large casualty rate. So a different architecture and a different sort of preparation and training of these models may be necessary.
Sam Ransbotham: I think your error rate point is an interesting one, because I’m not really sure what I think about that. Half the nursing students graduate in the bottom half of their class — that’s just how averages work.
Daron Acemoglu: But as a result, we don’t allow nurses to make those decisions at the moment. Except in a few cases where you have highly trained nurse practitioners, nurses cannot prescribe drugs. They cannot make emergency decisions. When a patient is having problems, they have to wait for a physician to come. That’s the margin that we’re talking about. Nurse-complementary technology would expand what nurses do in those domains. Now, you couldn’t do that unless all of the nurses become even better trained than nurse practitioners, or the AI models get much better.
Sam Ransbotham: Let’s push on the nursing example a little bit more. My daughter has recently learned how to drive. I’ll make you nervous — I think we both live in the same area. She’s a good driver though. But she hasn’t seen millions of almost-wrecks yet, and I would love for her to have that experience. By analogy, the nurses may not have seen these esoteric cases in a way that we were just talking about — these AI models are fabulous at storing lots and lots of information and recalling that.
Daron Acemoglu: I think there are many things that can be done. The future of technology is rich. If you integrate AI with virtual reality, you can have personalized experiences where your daughter could experience very dangerous situations sitting in front of a computer. I can tell you from my own experience, when you get behind a wheel, you think you know, [but] you don’t.
Sam Ransbotham: We talked a little bit about incentives. Let’s talk about measurement a little bit. I think [one of] the issues we’ve always had is that we can measure the number of widgets, [but] we have a lot of trouble measuring the outputs of our knowledge economy. How important is measurement, and is there anything that we can do to try to improve that?
Daron Acemoglu: I think measurement is very important, and there are some puzzles that we should bear in mind. I think these puzzles do feed into my concerns, and also skepticism, about some of the claims. We definitely do live in an age of innovation, according to many measures.
If you look at the number of patents at the [U.S. Patent and Trademark Office], they have quadrupled over the last 40 years. We get an incredible array of new apps every day on our phones. We have much faster turnover of electronics in quite a significant way. When I use my iPhone that’s a couple of years old, everybody says, “Wow, you’re really missing out.”
When people were using rotary phones, dial phones, you could use the same model for 30 years, and nobody would bat an eye. So there is a sense in which we are getting a lot of innovations, but using the standard measures of economists, we don’t see much improvement in productivity.
In fact, we’re having slower productivity improvements today than we did in the ’50s, ’60s, ’70s, those boring pre-digital days. What’s up with that? Well, the people from Silicon Valley, and economists who are sympathetic to that perspective, would say, “That’s all a measurement problem. You’re just not making allowance for how high quality some of the products you’re getting now are, and the Bureau of Labor Statistics is overestimating inflation. You have in the palm of your hand a supercomputer, [a] superpowerful machine that allows you to access information [that was] never possible before.” So all of these things, they think, are the reasons why you shouldn’t look at macroeconomic data; you should ignore all of the economists and data sources. There’s some truth to that, but I think it can be exaggerated.
We did not measure the benefits from antibiotics that well either, but you’ve still got amazing improvements [in] many directions, in terms of GDP, in terms of output of the pharmaceutical sector and lives saved. Life expectancy increased tremendously with antibiotics. Well, life expectancy is not increasing. We’re not seeing any of the AI-facilitated pharmaceuticals do anything yet.
Perhaps time will change that, but we just don’t have objective measures that show huge gains from AI as of now. I don’t think that’s just a measurement problem, but measurement can help [us] understand where the bottlenecks are and also improve perhaps certain assessments of … the impact of AI in different sectors. But I think a lot of it, again, comes down to what I was talking about: If you overdo automation, if you overdo information centralization, you’re not actually going to get all that promised productivity boom.
Sam Ransbotham: I’m bought in. What do we need to do here? I mean, as an individual, what does an individual need to do, given that I can only shake my tiny fist at the FAANG companies? What should individuals be doing here?
Daron Acemoglu: I think a lot. At the end of the day, society consists of individuals. If a lot of individuals change their minds, that has an effect. Part of the reason why tech companies have so much power is that they have what Simon Johnson and I, in Power and Progress, called persuasion power. They have persuaded the rest of society that their intentions are benign, their technology is good, and they will not misuse it too much. There’s a lot of counterevidence to that, but we still sort of believe it. We still believe the leading AI companies when they say, “We have this amazing godlike technology; believe that it’s godlike, and that it will be used just in your service, your own personal god.”
Absolute power corrupts absolutely. I don’t know that we should really believe those claims. Different individuals will have to reach their own conclusions, but enough individuals, a critical mass of them, changing their views would have an effect through the democratic process.
Who are individuals who have a lot of say? Hundreds of thousands of people, perhaps more, who work as engineers and scientists in these corporations. They determine the direction of research. If they decided next year that they want to work not on automation and AGI but developing more pro-worker, pro-human technologies that will help workers and human decision makers and decentralization, that’s what we would get. That’s an individual decision.
Another individual decision [involves] entrepreneurs. A lot of new ideas come from startups. Right now, startups are aligned with the big companies because their dream is to be bought up by the big companies. That’s the way you become a billionaire right now. Again, that’s a choice. Different values, different priorities, different regulatory systems: perhaps if we were much more vigilant about mergers and acquisitions, that could lead to very different dynamics.
Sam Ransbotham: Good. I hope our students are listening, because I do think that most of our students are out there trying to come up with startups, with the goal of being acquired by one of these large companies.
Daron Acemoglu: If I wanted to be rich, that’s what I would do, too.
Sam Ransbotham: Maybe our measurement problem extends both to our productivity as well as to our incentives, if that’s how we’re measuring success.
Daron Acemoglu: Yeah.
Sam Ransbotham: You mentioned regulation. Let’s touch on that a minute. I tend to think we can have market forces perhaps do a better job of aligning incentives than regulation. What can regulation do here, particularly when we are dealing with goods that are not physical goods?
Daron Acemoglu: I would like to say three things about regulation. First of all, regulation is always tricky. Look at Europe. Europe is so far behind in AI and many areas of tech because its regulatory system has not been very conducive to innovation. Too much regulation, too much interference; that can be very bad. So you have to balance things.
Second, some regulation on health-critical, information-critical, democracy-critical things is absolutely necessary. You cannot let AI models pretend to be doctors without having some sort of assessment that they are actually giving adequate information.
We put up tremendous barriers to anybody becoming a quack doctor, and we should apply similar standards to AI models. But, most importantly, we may need a change in the philosophy of regulation. Regulation should not be a reactive thing where we try to stop whatever AI companies are trying to do. I think we need proactive regulation that helps the AI industry move in a more socially beneficial direction. That starts by recognizing what that socially beneficial direction is. I’ve argued it’s pro-worker, new tasks, more decentralization. It then recognizes why the current playing field is tilted against it and tries, in a soft way, without stopping or killing the market process, to correct those distortions and give the alternative directions a fighting chance.
Sam Ransbotham: That’s a moment of hope there. That’s good. Let me switch a little bit. Our show is Me, Myself, and AI. Let’s let people get to know you a little bit. How did you get interested in these things?
Daron Acemoglu: I’ve always been interested in technology as the engine of the industrial revolution and of the rapid growth process, and that brought me, together with my studies of labor markets, to focus on automation. I’ve been working on automation for over 20 years. Then, when AI models started making rapid advances in the mid-2010s, I got worried about what that would imply for the future of work, for wages and employment. And that made me invest more time and resources into understanding AI: understanding its societal implications, but also understanding the technology. And I think it’s fascinating. It’s super promising, but also super scary.
Sam Ransbotham: I think that’s a nice way to wrap up that balance that you keep coming back to. One of the things we like to do in the show is ask you a bunch of rapid-fire questions. [Tell us] the top thing that comes to mind. What did you want to be when you grew up? When you were a kid, what did you want to be when you first were thinking about a career?
Daron Acemoglu: I wanted to become a social scientist.
Sam Ransbotham: That worked out for you then. What’s the biggest misconception that people have about artificial intelligence?
Daron Acemoglu: That it will somehow completely replace humans. I think at the end, AI will be something that works alongside humans. The better we understand that and how to achieve that, the better we will be in shaping the future of work and the future of humanity.
Sam Ransbotham: How do you personally use these tools?
Daron Acemoglu: I use it just like other people. I sometimes ask questions to ChatGPT, and most of the time, I am both surprised by how good it is and disappointed that if I really trusted everything I got from it, I wouldn’t be doing so well.
Sam Ransbotham: I have to push back a little bit there. I find that if I know something about a subject and I ask a question, I’m disappointed in the results.
Daron Acemoglu: Exactly.
Sam Ransbotham: And if I don’t know much about the subject, then I’m impressed with the results.
Daron Acemoglu: That’s it.
Sam Ransbotham: That ought to worry me.
Daron Acemoglu: Even when I know about the subject, I am impressed by how well it is able to synthesize the basic knowledge there, but it always pretends to know more and gives answers that are really incorrect because it’s extrapolating too much.
Sam Ransbotham: What has moved faster than you expected with artificial intelligence?
Daron Acemoglu: The large language models. Their reasoning capabilities are truly impressive.
Sam Ransbotham: It’s been great talking to you. This has been a fascinating conversation. I love your balance of both optimism and concern, and I think that’s a nice way to wrap up this session. Thanks for taking the time to talk with us.
Daron Acemoglu: Thank you, Sam. This was a lot of fun.
Sam Ransbotham: Thanks for listening. Me, Myself, and AI Season 13 premieres on March 10. Please join us.
Allison Ryder: Thanks for listening to Me, Myself, and AI. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.