In this episode of Me, Myself, and AI, OpenAI’s chief economist Ronnie Chatterji describes how artificial intelligence is reshaping both the economy and scientific innovation. Ronnie discusses the dual economic impacts of AI — the near-term boost from infrastructure investments like chips and data centers, and the longer-term productivity gains as AI tools integrate into enterprises and consumer life. Beyond consumer convenience, he notes, the key question for economists and corporate leaders alike is when — and how — AI will unlock sustained economic value inside organizations.
Tune in for his perspective on how AI can help researchers test ideas faster, combine insights across disciplines, and make better choices about which problems to pursue.
Subscribe to Me, Myself, and AI on Apple Podcasts or Spotify.
Transcript
Allison Ryder: How will AI enable the future of interdisciplinary collaboration? Find out on today’s episode.
Ronnie Chatterji: I’m Ronnie Chatterji, chief economist of OpenAI, and you’re listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast from MIT Sloan Management Review exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at MIT SMR since 2014, with research articles, annual industry reports, case studies, and now 12 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.
Hi, listeners. Thanks for joining us. I think that anyone listening by now knows I tend to think all of our episodes are exciting, but today, I especially do. Our guest today is Ronnie Chatterji, chief economist at OpenAI. Ronnie, [it’s] great to have you on the podcast.
Ronnie Chatterji: Sam, thanks for having me. I think this will be an exciting one. I don’t know if I can beat the standard of all your other episodes, but we’re going to do our best.
Sam Ransbotham: [It’s] a lot of pressure. I read a stat that OpenAI has about 800 million weekly active users as of October. I think [in] August it was 700 million, [in] March [it was] 500 million [users]. And [there have been reports of] 600 billion tokens a minute. You throw a hundred million here, a hundred million there, it starts to add up after a while. I suspect, then, that most of our listeners are pretty familiar with OpenAI, but can you give us a quick introduction to the company and what it does?
Ronnie Chatterji: I can tell you the first time I ever heard of OpenAI was in late 2022 when I was working in government, and a bunch of folks were reciting some poems that sounded almost too catchy, too perfect to be written by a human, but there they were reading them. They were really funny; some poetry built around a particular individual or funny habits someone had. When I came back, they said, “Hey, it’s this thing called ChatGPT.”
Most people will know OpenAI through the ChatGPT product. But, of course, you know that ChatGPT was sort of an accidental consumer product because the team at OpenAI had been working on an API for developers to build on top of AI models before that. But when it came time to release this consumer product, they weren’t really sure how it’d be received. The quick adoption — almost a hundred million users in just two months — really was unprecedented, larger than any other consumer product that I’m familiar with, and really set the stage for the rapid pace of generative AI adoption we’ve seen over the last few years.
Sam Ransbotham: [The ease of innovation has] been huge. … Ronnie is also [a] professor at Duke University. He knows [the] diffusion of innovation theory. Putting that chat interface out there really checks all the boxes: relative advantage, complexity, trialability, observability, compatibility, all the things that we know from Everett Rogers.
Before, like you say, that fall of 2022, when I talked to people, they knew about AI, but they hadn’t used it. Now, practically everybody has used it. You’ve got a lot of people using it. And I think that free access got a ton of users, but it requires massive amounts of money to keep these things going. How are the economics going to start playing out with OpenAI? For context, you’ve just had the “buy now” button integrated and [are] starting to integrate apps. How are the future economics going to play out here?
Ronnie Chatterji: Well, because the adoption was so unprecedented, I think now we’re trying to figure out where AI is going to actually deliver value, both in the enterprise, which we can get to later, but also on the consumer side.
And there [have] been so many other products released; think about Sora, OpenAI’s recent video product. When you think about the economics of AI, we started at the very beginning when I got the job. They said, “Look, we need an economist to make sense of the economics of generative artificial intelligence.” I was a good choice, I think, because I’d worked a lot on these issues, as you mentioned, in my academic role at Duke University, and learned a lot from scholars around the world about how technology diffuses across organizations. That was a big part of my research and my dissertation.
At the same time, I’d worked in government and thought a lot about technology policy and how government investments and critical infrastructure like semiconductors could be really important for national competitiveness. And finally, I was just really interested in business. This gets to this point you’re making here about the “commercializability” of AI. I taught in a business school for my entire career. I’m really interested in how my MBA students are going out in the world and going to make a career for themselves. So many of them are going to be working in this AI space in some form or fashion.
The question is, “What’s the business model going to be?” I think you’re starting to see some really interesting things emerge. OpenAI’s products, like ChatGPT, have big subscription bases, so some people are already paying to use ChatGPT or the API, and that’s obviously an important source of revenue. Enterprises are increasingly signed up. A large percentage of enterprises around the world — this isn’t just in the U.S.; this is globally — are signing up for some sort of artificial intelligence product. And a lot of it is ChatGPT Enterprise. So that’s another model by which we’ll generate revenue.
But going forward, the sky is really the limit in terms of thinking about the intersection between business and AI. The thing you mentioned around helping consumers shop is really interesting to me. I bought a pair of jeans, actually, that just arrived yesterday using ChatGPT. And why’d it work for me?
I [don’t] like to go shopping. I asked Chat, I said, “Look, these are the jeans I’ve liked in the past. Here’s the colors I’m missing.” I’m at a certain age where I want to make sure I look cool but not too cool. You know what I mean, Sam?
This is really important for someone like me working inside a tech organization. I can’t [look] like I’m trying too hard. ChatGPT got all that, actually, and recommended two pairs of jeans that I purchased. And [since] they arrived yesterday, I’ve been pretty happy. I can see that facilitating shopping for someone like me, who really doesn’t like to go shopping, [offers] tremendous value for a lot of consumers and [could] create a lot of cool business models around it. And that’s just one example.
Sam Ransbotham: I don’t want to denigrate productivity. I think we’re going to have a lot of productivity, but what about something bigger? What about science? I think you’ve talked about science before. You can probably correct me here: The analogy was walking down a corridor with a whole bunch of closed doors. You don’t know what’s behind the doors. Maybe these tools can help us peek behind the doors. What’s that option?
Ronnie Chatterji: When I was in graduate school, there [were] a whole bunch of folks studying the economics of science and technology. For a little bit at the beginning of my career, I was in business school and I was talking to people at MIT Sloan and Harvard Business School and Duke’s Fuqua School of Business, where I’m a faculty member now, and they were all interested in the economics of science. And I said, “Hey, you know, why is science so important? Why should economists care so much about science?” And in the courses that I took and the professors that I worked with, I began to understand that science is sort of the ingredient to innovation, and innovation drives economic growth.
So I started to think much more deeply about the research and development process, the role of companies and universities and governments in funding scientific research. When you think about a new tool like AI and its potential to accelerate scientific research and innovation, then you’re talking about a tremendous game changer for the economy. And, yeah, when I think about science, I think about a scientist deciding where to spend her career.
I’m a social scientist, but some of us, we have to make similar decisions: What do you want to work on? For me, it was entrepreneurship and innovation. But for that scientist facing that endless corridor with doors on either side, she has to decide what she’s going to spend her life on. What lab is she going to join? What skills is she going to acquire? You make those choices early in your life. A couple of postdocs later, it’s kind of hard to switch.
I feel like what AI can do for the scientific community is try to look behind some of those doors; figure out where there might be more potential; help run quicker experiments, more revelatory experiments; and help that scientist who’s early in her career figure out what door she wants to spend her life working behind and then unleash innovations like we’ve never seen. That’s where I think the real promise is for innovation and economic growth.
Sam Ransbotham: I want to [touch upon] a couple of things there. Let’s push a little deeper. What’s it mean to look behind a door? That’s a great analogy, but how does that play out? Take me from looking behind a door as an analogy to actually sitting down in front of a computer and typing something into OpenAI.
Ronnie Chatterji: Let’s talk about that. I can take this analogy a little further, and maybe your listeners will go with me. Maybe they won’t. But suppose she opens up the door, the first door on the left. Behind there, there’s a stack of literature, scientific papers and books and things that have been written about that particular area. And then there’s a bunch of what you might call jigsaw puzzle pieces over there. Some of them have been put together in little pieces, and [it looks] like the left corner of the puzzle is complete, but there [are] other parts of the puzzle that aren’t complete at all. She’s trying to figure out which pieces fit into that puzzle to make it whole and get a sense of what’s going on in this world.
It could be that this door leads to another door, which is connecting multiple fields. Often we see innovation at the intersection between different fields, like biology and chemistry, and of course, [it goes] even deeper than that. So one of the challenges is how do you know which pieces to put together? How do you know which novel combinations are actually going to be useful? And, of course, just combining them isn’t always enough. You have to put them through a lot of different tests to figure out whether they’re durable, whether they’re giving you scientific insight. So I sort of feel like AI could help us both brainstorm which combinations might be most useful and help us run through some of those combinations and figure out which one’s most fruitful.
That’s really the promise, I think, of AI in science. Folks working in those fields are finding really important applications in all parts of the scientific process. But for me, I think about it in terms of trying to brainstorm new ideas, make new combinations, make testing and experimentation more efficient, and then when you do find things that work, scaling them faster. That’s where I really think AI can make a big difference. But it’s going to be folks in science applying this and finding these things, and I’m sure [they] will unlock lots of really interesting process innovations over the next several years.
Sam Ransbotham: That’s come up with several of our guests. We had Moderna on, we had Pirelli making tires. … One of the nice things is these tools have such amazing memories that, I think for me, when I go look for something — actually, I’m going to make fun of Ronnie here for a second. We were just looking for some headphones, and you see him pulling out the same drawer two or three times looking for some headphones. The machines don’t have to do that. They remember where they’ve looked, and that combinatorial search can really explode and narrow down what’s a good place to search: “Hey, have I already looked here?” That’s big.
Ronnie Chatterji: I agree. This is why I should have asked ChatGPT where my headphones were, right?
This would’ve been better, but I agree with you. Think about a world where you and I are working on a project together and we’re working with AI as our co-investigator, our coworker, and there’s a shared memory between us, about what you’ve worked on, what I’ve worked on. As we talked about earlier, so much of the interesting scientific discoveries are at the intersections of our knowledge bases, and AI can play a role in intermediating between those.
A lot of human collaboration is hindered by what they call tacit knowledge: I know more than I can tell. If you and I are working together in a situation where AI can help to reveal some of that tacit knowledge, maybe it’ll unlock new discoveries. That’s super interesting for me.
Sam Ransbotham: I’m getting a little philosophical here, but you’re talking about the intersections between different sciences and disciplines. What’s our advice to people? Are these tools going to be so awesome at going deep that we should be generalists? Are these tools going to be so awesome at being generalists that we need to go deep to differentiate? Those are two big different doors on different sides of the corridor.
Ronnie Chatterji: [These are] massive questions that go back to people’s debates over whether to become a generalist or a specialist, and [the idea of] T-shaped leaders, and all these different questions we’ve been thinking about in business schools for a long time. What I’ll say first is we have to have humility about this.
I wish I knew the answer to your question with a hundred percent certainty. If I did, I would share it with my children right away because they’re asking the same questions, or they will when they get a bit older.
We do live in a really uncertain time. I’m very sympathetic to young folks who are trying to figure out which direction to go. I do think, though, going deep, even if it doesn’t end up that you are going into that field and doing that exact work, is really useful because of the discipline of going deep. When you go to a place like Duke or MIT and ask an audience to raise their hands if they studied engineering [as an] undergrad, you’ll see a big number of hands go up. But when you ask people, “Keep your hand up if you still work as an engineer,” then a lot of those hands go down. Does that mean that the engineering degree, the undergrad degree, was not worth it? No. Those folks will raise their hands right back up and say, “No, I use it as a product manager. I use it as a consultant. I use it in banking. I use it in my everyday life.”
So I do think the idea that you have to choose a direction that has to be perfectly aligned with the job that’s out there, that probably wasn’t true even 10 years ago, and it’s definitely not true going forward. I wish I had the perfect advice to navigate the exact career that everyone should do. It’s very, very difficult, but I do think going deep on something and following your interests, that’s usually a good way to start. And then you can figure out how to morph that into the career opportunities that are available to you. I know that sounds kind of mushy, but I actually don’t think there’s a very direct path that I can advise anyone other than that.
Sam Ransbotham: Just for the record, I did not slip him a 20 to say that it’s OK to be a reformed engineer. I think, as listeners know, I’m a chemical engineer [who] no longer does any chemical engineering. And I didn’t also slip him a 20 to talk about the benefits of going deep, because I feel like the ability to consume information from these models depends on having depth. Otherwise you’re blindly trusting.
Ronnie Chatterji: I agree — it’s a complement. I’ve been thinking about this a lot with the early-career job market. One of the interesting things is that you’d think, ironically, that people who are more senior in their careers would be less likely to use AI tools and therefore less advantaged in the job market. If you think about AI tools as complements, though, to expertise and experience and deep learning, you’d actually expect [them] to benefit those older workers more. You know, some of the patterns we may be worried about in the job market may be about that, where AI is a real complement for people with deep expertise.
The question we have to answer is, “How do folks who are just starting off in their careers … get that expertise if the job market starts to change?” That’s something, as an economist, I think a lot about. For me, when I’m reading about economics using ChatGPT, I know the papers, I know the literatures. I can tell when something isn’t exactly right or if there’s a hallucinated citation, but for a younger economist, they might not have that. So it is maybe a stronger complement to me doing a literature review than it would be to a person just starting off. I think that’s a really important dimension of AI that’s not always talked about.
Sam Ransbotham: Let’s [go] back. One of the things I also wanted to push on a little bit is you alluded to the idea that one of these doors we’ll open will have a huge big find behind it. I think it’s super plausible that we’ll have productivity benefits. I’m faster at reading something or summarizing something, so it’s maybe hard to quantify, but I know I’m a hundred percent there.
What’s the likelihood we’re going to open a door and find a new electricity or a new nuclear power, or what’s the next GPT, or general purpose technology, out there that we may find? Is it possible that these things are going to help us do that?
Ronnie Chatterji: I think it’s possible. I think what you’re seeing now is the pathway to the kind of productivity increases that you’ve already bought into. If you look at the paper that we just wrote [on] how people use ChatGPT, you’ll see a path [to] how people are going to use this to improve their writing and ask it for help making decisions more efficiently. And a lot of us are already doing that in our personal life and at work. So you see a path to saving us time, saving us money, resulting in a lot of consumer surplus, as economists call it, coming from AI tools.
The next piece, though — and this is, I think, almost a requirement if you’re going to see the transformative economic growth that many are predicting — is through innovation and the scientific process. I think that is where the questions you asked, like, “How will AI be able to do that?” are going to become really important. I do think it’s possible. The reason I think it’s possible is because the capabilities of AI are moving really, really fast.
I think, as an economist, the one thing I didn’t realize before I joined OpenAI a year ago is how quickly the capabilities are evolving and moving. When you look at our performance on the International Mathematical Olympiad or some of the recent evaluations related to economically valuable tasks — the GDPval, as it’s called — you see quite a steep curve of improvement on these tasks.
I think as an outsider, I was sort of generally knowledgeable that things were moving in the right direction but not really seeing the shape of the curve. When you look at that and you start to think, “What would that look like if that rate of growth continued?” then you start to think about AI being able to do really amazing things, so I do feel like that’s a real possibility. [It’s] hard to put a percentage on it, to be honest. I think we’ve got to plan for either scenario, right? That it unlocks these great secrets and innovations, or that it doesn’t.
Sam Ransbotham: You alluded right there to the extrapolation problem. It’s really tempting to look at these numbers and draw a line between the Olympiad a year ago, the Olympiad now. You may be referring to some of Erik Brynjolfsson’s work recently talking about — I think 4% to 72% was one of the numbers. It’s really hard not to draw a line between 4% and 72% and keep going, but you don’t get over a hundred percent unless you’re a football coach. And then, all percentages are thrown out.
How do we think about extrapolating these things? I mean, one thing is we do the easy stuff first, so we’re going to get the biggest gains first. So that points to an idea of diminishing return. You also alluded to an idea of combinatorics where you might get super-linear returns because of benefits in two areas coming together. So how do we extrapolate?
Ronnie Chatterji: I think early on we saw the capabilities of AI evolving according to the so-called scaling laws, which is really interesting, right? The more compute, the more powerful chips, and the more data you had, the more powerful the models would continuously become as you added more of each. And then, of course, there was a reckoning about whether scaling laws applied. We saw reasoning models and other innovations that kind of pushed it forward and said, “Look, OK, now there’s a new set of scaling laws, which is if the model takes a longer time to think, it’s going to be able to solve these problems better.”
So we don’t know exactly how quickly these new innovations and micro-innovations will arrive. I think this is why labs like OpenAI exist. I think what’s happened over the last 10 years, and you see this among the AI researchers — I’m privileged to work with some of the best in the world, of course — they’re living at a time where their field has been completely transformed. It’s pretty inspiring, as a fellow researcher, obviously from a different field, to see people who are in the midst of a time when their field is utterly being transformed and that look in their eyes [that] anything is possible. I get it. I haven’t lived through that in my field. But I can get what it must feel like, and I’m kind of watching that from the outside and seeing that.
So I do think that we have to temper that, of course, with the question of how much more you can increase along some of these dimensions, which was your point earlier. And is there a sense of a general intelligence that’s going to be good at everything, or are you going to have more specific applied models? I tend to be someone, because I work so much in organizations, [who is] very practical about this and [will] say, “OK, there might be really intelligent models, but when it comes to solving problems inside organizations, I happen to think that we’re going to need some specialized solutions in finance or health care or education that are going to solve particular problems and, maybe more importantly, be aligned with regulatory and institutional realities.”
So I actually think the question is both “How capable will the models become?” but also, “Can we find ways in those verticals to adapt them, to fine-tune them for those application verticals, to make them not just smart but effective?” That to me is almost as hard as making the models smarter themselves.
Sam Ransbotham: We wrote a paper, gosh, almost a decade ago when the analytics boom was out there about how much easier it is to make models faster and more sophisticated and better, but how much harder it is to get organizations to use that. So there’s a gap that’s inevitable.
Let me switch [to] dark here for a second. You make a bunch of tools. We’ve been happy all this conversation talking about how science is going to use these tools to find amazing things. [But] so are the bad guys, right?
You’re making something that is in fact a tool, and people will use tools for nefarious purposes. What’s OpenAI going to be able to do about that? I’m not naive enough to think that we can stop everything, but I think we can slow things down.
Ronnie Chatterji: I think there’s tremendous interest, and really from the very beginning, at OpenAI around the risks to our safety that come from increasing capabilities of AI. I have numerous colleagues who are working full time to address these safety issues across a bunch of different dimensions — everything from national security to mental health, to thinking about how AI is going to affect participatory democracies and governments. All these things are things that people at OpenAI are working on.
One thing I think is really fun [about] working there, being an economist, is you find people in each area who are experts in their fields working to think about the impact of AI on their particular area of study. These are also very practical people who are thinking about what the usage data is actually telling us, how to implement some solutions in the real world, as well as building on a lot of academic expertise. It’s a really good combination.
Safety is core to the mission of OpenAI and a big, big topic of discussion internally. I think that OpenAI feels like disseminating information about the capabilities of the model — both the things it’s able to do that are really exciting and the things that are potentially problematic — is really important. So I think the transparency piece is a big part of the culture, and also the notion that we’re going to probably have to work together across organizations with governments around the world to really solve these safety issues. My economics work overlaps with that a little bit, but it’s mostly by watching these folks work in the organization [that] I can say that’s the kind of approach they’ve taken.
Sam Ransbotham: I like the idea that there [are] some economists involved to offset some of the technologists who might be pursuing things from a pure “Can we make it better?” standpoint, without thinking about some of the resulting consequences.
We have a segment where I’m going to ask you a bunch of rapid-fire questions. Just say the first thing that comes to your mind.
Is AI making you spend more or less time with technology?
Ronnie Chatterji: I think it’s less time for me because I answer a lot of questions quickly with AI where I would usually use another tool, and I now can just get the direct answer. That shopping [experience] was a good example. I’d be playing around on a lot of different websites, looking at different things. So for me, it’s a little less. I don’t know if that’s true for everybody.
Sam Ransbotham: What’s the worst use of AI? How are people using this in a way they shouldn’t?
Ronnie Chatterji: Outsourcing your critical thinking. This is something I take really seriously as a professor and someone who just loves to learn. It’ll be a shame if people are using this to avoid going deep or not engaging with really difficult questions and just having a chatbot spit it out for them. I really hope [that] my kids and their colleagues and classmates, as they get older, are not going to do it that way. We all really have to work hard on that, but that’s the worst possible use that I can think of.
Let me put aside the really deep existential safety risk that you talked about earlier; the outsourcing [of] critical thinking is really, really important to me, that we make sure that doesn’t happen.
Sam Ransbotham: What’s frustrating you about AI right now?
Ronnie Chatterji: I think the popular debate about it. In one sense it’s super exciting, and people are knocking on my door and coming up to me at the cocktail party. … So I can’t be angry about that. There’s a lot of interest. But the debate is pretty binary in a way, like “AI: good” or “AI: bad.”
There’s a lot of discussion around predictions and forecasts that are really hard to make. Everybody wants to click through some sort of list of different jobs and their likelihood [of being automated]. And the hard thing about this is you’re never going to be called to account for making a good or bad prediction. So we just have predictions. At the same time, I’m trying to do some of the work to show, “Hey, here’s how people are really using it.” But by the time you do that research and that work, it takes a while and people say, “Oh, I already know that. That’s exactly how I was using it.”
I think we have to continue to create space for real data, real analysis on how AI’s being used, not just by the labs but by outside academics. At the same time, let’s all enjoy the stories about it because we need to read something every day with our cup of coffee. But the real news gets made when we come up with new insights from analyzing the real usage data and the way people are using AI.
Sam Ransbotham: So what’s the best use, then?
Ronnie Chatterji: I don’t know if it’s the best use, but I can tell you what the use is right now. We did the biggest analysis so far: We have 1.5 million conversations from ChatGPT. And we basically find that there [are] really three big use cases. One is seeking information. You can think about that traditionally, like a web search, and that’s how I found the jeans, right?
Then you’ve got practical guidance, which is a really interesting one and something that I think economists need to think more about. A lot of people are using AI just to inform decisions, make better decisions, streamline decisions. And AI as a decision assistant is, I think, the key place it’s being used really effectively now. That creates a lot of consumer surplus but doesn’t necessarily show up in GDP.
The last piece is writing, which, [when] you think about most white-collar jobs, [is] the focus of a lot of the job discussions out there. Most of those jobs involve writing, whether you’re a consultant or a financial analyst or a tech product manager. So when it comes to writing, that’s really one of the big uses. If you double-click into that, you’ll find that most of the writing work is submitting something that you’ve already written and asking for an edit or a summary, not outsourcing the writing completely to ChatGPT. So that feels pretty good. I want to do more work on the writing piece, but those are the top three uses right now on the consumer side of the platform.
This is not the API or ChatGPT Enterprise. That’s really important. People read this stuff and they say, “Oh, so this is what everyone’s doing at work.” No, no, no. This is the consumer side of ChatGPT, the accounts that someone like you or me would use on our personal side. I will say there’s a lot of work being done on this, too, which is really interesting, but we have a whole other set of work to do to unlock how people are using it at the enterprise level.
Sam Ransbotham: That’s great. Actually, I feel like anything we’ve talked about today, we could talk for a whole episode on.
Ronnie Chatterji: I’ll be back. Let me know.
Sam Ransbotham: Well, thanks for joining us today.
Ronnie Chatterji: Sam, it’s an honor to be here. Thanks for having me.
Sam Ransbotham: Thanks for listening. Speaking of using AI in our personal lives, [I’ll be] joined next time by Will Croushorn, a product manager at Wendy’s. We’ll be talking about how Wendy’s is using AI to enhance the drive-through experience. Please join us.
Allison Ryder: Thanks for listening to Me, Myself, and AI. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.