In this episode of Me, Myself, and AI, host Sam Ransbotham speaks with Vineet Khosla, CTO of The Washington Post, about how AI is reshaping the way news is produced, delivered, and consumed. Vineet argues that journalism itself isn’t broken — but the formats people use to consume news are rapidly evolving, especially as audiences increasingly interact with information through AI. The conversation explores how the Post is experimenting with personalized AI podcasts, AI-powered research tools for journalists, and conversational news experiences that help readers understand not just what happened but why it matters and how it connects to other world events.
Behind the scenes, the Post is deploying artificial intelligence across the entire organization, and Vineet shares details about the organization’s “AI everywhere” philosophy.
Subscribe to Me, Myself, and AI on Apple Podcasts or Spotify.
Transcript
Allison Ryder: How can AI help companies meet customers where they are, especially when their behaviors and needs evolve quickly? Find out how one news outlet turns this challenge into an opportunity on today’s episode.
Vineet Khosla: I’m Vineet Khosla from The Washington Post, and you’re listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast from MIT Sloan Management Review exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at MIT SMR since 2014, with research articles, annual industry reports, case studies, and now 13 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.
Hi, listeners. Today we’re joined by Vineet Khosla, chief technology officer at The Washington Post. The Post isn’t just a newsroom. It’s a giant technology machine that delivers journalism to millions of people around the world every day. And Vineet leads the teams that build those systems behind the breaking news, the audience experience, security, and AI, all of which we’re hoping to get into in today’s discussion. So we’ll talk about how technology is shaping journalism and maybe a little bit about what audiences don’t see behind the scenes, and what the future of news might look like. Vineet, thanks for being here.
Vineet Khosla: Thanks for having me, Sam. I’ve been listening to your podcast for a while, so it’s a pleasure to be finally on the other side of it.
Sam Ransbotham: Maybe we can talk a little bit about what happens behind the scenes with the podcast. Let’s start with something that many listeners feel. I think consuming news in our modern world can be pretty overwhelming and fragmented and tough to understand. And that may be especially true for a younger audience who [was] raised in a different digital world than I was. So from your side, what’s maybe currently broken about how we’re experiencing news, and what needs to change?
Vineet Khosla: The way I view it is there is not something broken about news. If we zoom out, we should think about journalism as a discipline, not a format. When you start to think about it solely as a format, it may seem broken to the younger audience. The difference is they’re just consuming it very differently than you and me. I use this example: We used to just read the news, then came radio. We heard the news, then came TV. We watched the news, then came AI. We started talking to and asking the news. In all of these changes, the consumption of news actually increased. The value of news in our society actually increased. We are just consuming it very differently at different times of the day.
Sam Ransbotham: That consumption is a big deal. I want to know only the news that I care about. I don’t want to hear stuff I don’t care about, but I want to be aware that the stuff I don’t care about is happening. I don’t want to be in a bubble. Other industries have really struggled with this, if you think about the streaming industry and retail and music. What is personalized news going to be for The Washington Post?
Vineet Khosla: That’s a question I’ve grappled with for the last two and a half years. I’m not from the news industry. I come from outside. So when I landed here, I realized there are two things news does that [are] very important. One is it tells us what is important in the world, and then it tells us why it is important. That’s the sense making, right? The personalized aspect is taken over by social media. They already tell you what’s important. So by the time they come to us, there are very few things we are telling them [that are] different than they already know.
But the “why,” that is the core value that we provide. And that’s where I think we have to have a balance of [personalization] — you need to be data-driven, but you need to use your data almost like a compass, not a GPS. It is still the onus of the newsroom, a responsible ethical newsroom with journalistic standards, to make sure the news we give out to people is not so personalized that it becomes an echo chamber and a reinforcement of their beliefs.
It’s a hard thing to balance, because we understand looking at Big Tech outside, if you go deeply personalized, you will have [an] audience, you will have clicks, you will have money, you will have revenue. For our industry to balance both of these — meet the consumer where they are, give them the news they actually need, don’t give them too much when they’re not ready for it, but at the same time, make sure we are being very even and our perspective and our opinion is coming through — is very important.
Sam Ransbotham: I think what you’re describing is a really difficult Goldilocks problem, which is you want to do enough but not too much. It’s not too hot, not too cold, just right. We want to know about the whole wide world that’s going on, but we also care about opinions that are closer to what my prior opinion was. I try to be pretty active about keeping news sources in my life that I dislike intensely.
How do you maintain journalistic integrity in that process then, when you’re choosing … the kinds of things that you focus on and don’t? This has been going on for years, so this is not a new problem.
Vineet Khosla: I think it’s a multifaceted problem. First it actually starts with the newsroom. I do believe our newsroom, with its standards and the way they do reporting, they’re trying to put a very fair perspective out. What you will see if you come to our application is there are actually many different ways to consume [news]. You can read it. You can listen to it. We just started an AI podcast, where the AI chooses some articles that you might be interested in and turns it into a podcast. You have the option of going to the homepage, which is edited by our editors. This is the expert perspective on what is happening [in the world]. You can go to the “For you” tab and just read personalized news.
So from our side, what we ensure is we give you many options, and we educate you with good products and design [for] why these options exist. Hopefully somewhere between that, you get out of your echo chamber.
Now we want to go beyond that too. If you go to our homepage, you will see an old-style ticker at the bottom of our WashingtonPost.com, where we are letting other news organizations [show] what they’re putting on their homepage, almost for free, on our site to say, “Hey, these are other things that are happening,” because it’s quite possible we’re not going to cover everything in every perspective, and we want to keep extending that service to the nation. I really think we need to, as a news company, try and give value to everyone’s life as much as possible.
We recently started something called Ripple. So it’s WashingtonPost.com/ripple, where we are going to opinion sections across America and trying to bring their content, [through] partnerships with them, to our consumers, to our users. It’s a hard problem, but you do need people who are solving it, and you also need people on the other side who want it to be solved, people like you.
Sam Ransbotham: That’s a really fascinating idea, the idea of trying to surface those ripples from lots of different places. Let’s be frank: You’re not going to be perfect at doing that, but I think that’s inevitably part of the process. The cost of not doing it is probably more extreme than the cost of making some algorithmic problems there.
I know you’ve had trouble with the podcast in terms of personalization and trying to get that extreme personalization. Can you share with us a bit about how that project has gone?
Vineet Khosla: We realized there is a market need in the middle of [heavily] curated editorial podcasts. I almost view them as expert opinions. These are the experts of our company who are saying, “These are the important things you need to know” versus “Sometimes [these] things are not important to the world, but they’re important to me.” I’ll give you one very good example [that] really made me a fan of this product.
[Do] you remember when the Texas redistricting fight was happening, and there [were] a lot of court cases going on? At the same time in India, there were elections happening in the state of Bihar. We covered these two stories, and somehow the podcast, given my interest, talked about the redistricting, the laws, and how the party in power over there is trying to hold on to the votes. And then it contrasted with the elections of Bihar, where some of this might have already happened in the past, and therefore the party that’s winning is banking on the wins coming from those types of redistricting efforts. … Neurons fired in my brain, Sam. I’m like, “Whoa. This is so interesting. I have seen this side in India, and I see what’s happening in Texas. I kind of don’t like it, but thank you for showing me these two [stories].”
Now if you imagine an expert’s view, to 99% of [the] population of America, that second story is not relevant. And even if they’re interested in it, it’s not really going to fire the neurons in their brains the way this podcast did for me. I think that is the gap we are trying to really hit with personalized podcasts. It’s because this is all based on our reporting; this is all factual stuff we did at The Washington Post. We did it because we think this is important for the world to know.
We worked very closely with our newsroom. We tested it very well. And yes, it’s not going to be perfect. It made a few mistakes. Once we launched it, we made sure when we presented it to our consumers, with our design, with the disclaimers, with the warnings, [that] they [understood] that this is a beta experimental product. They understood that there would be mistakes that happen, and we were all as a team watching it very closely.
In terms of technical [issues,] one thing we realized was it has a lot of trouble when you have a lot of third-person references in an article. Let’s say it says, “Vineet said this, and Jennifer said that,” and the following sentences [include] “he” and then “she.” To us, it’s immediately clear who the he is and who the she is. To AI, it might not be. Once we started figuring out those types of problems, we really went back, changed our scripts, changed our prompts. [We] made sure we didn’t change the writing of the article. We just made sure on the AI side [that] we have a way of solving this problem. And the proof of that is we have published over 100,000 personalized podcasts by now. The completion rate of these podcasts is actually higher than the completion rate of [the] normal podcasts that we publish.
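Notably, the pronoun fix Vineet describes lived in the prompts, not in the articles themselves. As a rough illustration only (the Post’s actual prompts are not public, and the function name and wording here are invented), a prompt-construction step that steers a language model around ambiguous third-person references might look something like this:

```python
def build_podcast_prompt(article_text: str) -> str:
    """Hypothetical sketch of a prompt that asks the model to resolve
    pronouns before writing a podcast script, leaving the article
    text itself untouched (as Vineet describes)."""
    instructions = (
        "Rewrite the article below as a conversational podcast script. "
        "Before writing, resolve every pronoun: wherever the article "
        "says 'he', 'she', or 'they', substitute the person's name, "
        "so listeners never lose track of who said what."
    )
    # The article is appended verbatim; only the instructions change.
    return f"{instructions}\n\n---\n{article_text}"

prompt = build_podcast_prompt(
    "Vineet said this, and Jennifer said that. He later added more."
)
print(prompt)
```

The point of the design is the one Vineet emphasizes: the correction happens entirely on the AI side of the pipeline, and the journalism is never edited to accommodate the model.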
Sam Ransbotham: That’s a beautiful example because it’s going to connect some things, it’s going to miss some things, but maybe when it does, it’s going to be amazing. One of the enduring themes of our show seems to be this exact idea of improvement. One of our early podcast guests mentioned the idea that the first day is the worst day. So when you put this experiment out, you’re going to discover some stuff, like the pronoun problem you mentioned, and how it’s obvious to us which story connects to which one. But you’re going to fix those, and it’ll keep improving.
What’s your plan for this product, for this personalized podcast? I’m already quite jealous [of your 100,000 episodes]. I think we’re just over a hundred, and it’s been exhausting.
Vineet Khosla: Well, I don’t think it replaces the experts. You know, 100 is a lot of work, [and] 100,000 is still a lot of work on the team [that’s] building it because we review problems that … come in. So the work happens, I guess, on [a] different side. For us it happens on the QA side.
But I would zoom out of personalized podcasts and maybe talk more about the AI efforts we are doing over here. And then it would all make sense, right? The way we are viewing AI in our company is we call it “AI everywhere.” It’s an “AI everywhere” approach where we want it in the production of the news. There’s so much [generative AI] can do.
We have a tool called Haystacker, which can go through hours and hours of videos. What would have taken people weeks, our journalists can now do by saying, “I want to find that person with [the] red cap,” [and the AI goes] through Jan. 6 riot videos and gets that type of information.
You have probably heard all about how big data sets are now no longer a thing journalists fear anymore. They don’t have to manually read it. They can really ask it intelligent questions. So we’re building a lot of tools internally for that side. So that’s one big pillar, [using] AI to help the core mission we have of journalism.
The second … is consumer facing. That’s where [our] AI podcast, “Ask the Post AI,” [and story] summaries … come in. In the case of the AI revolution, I feel like the audience moved before we moved, right? When there was an internet revolution, people had to go buy computers, they had to learn it, they had to get on the web browsers, and then the newsrooms moved to a website. In the world of AI, the audience went overnight.
Sam Ransbotham: I want to push back a little bit on this Haystacker. I really like that name. What you’re saying is “Hey, you want to go through that haystack and do it with artificial intelligence, and find all those needles.” It’s certainly true that we’ve got a lot more content in the world to go through. It’s staggering the amount of things that are happening. We’re getting a lot more content. Are there more needles in that content? Or is there better discovery of the existing needles, or is a lot of the hay that you’re sifting through just a lot of left-tailed junk? Does that make sense?
When I think about a haystack, I think, “OK, let’s grow the whole pile, and when we grow the whole pile, we’ll have more needles because we’ve got more hay.” But we may just be hiding those other needles better.
Vineet Khosla: Both things are right. So let me start [with] the Haystacker project. The name came [from] we are finding a needle in a haystack because we actually already had a haystack. Somebody gave a reporter a lot of videos. Somebody gave a reporter a lot of data and said, “Hey, something’s going on over here,” and it would take them two, three weeks to go through it. So we just help them. We are helping them find that needle instead of them watching it frame by frame. So that’s really the origination of this tool. And this is one of the many tools. A lot of news companies are building these tools.
But going back to your bigger question [about] there is a whole lot more data, and most of it is not interesting. We don’t think it is the job of AI to find all those interesting things and serve them to you without a journalist involved in the middle. So the journalist is usually [using] their instinct, asking questions, trying to find more out of it. And I’m sure you can get to a world where you have really curated data sources. You can take Department of Labor reports out, right? And our journalists use those reports, and they create stories out of [them].
So when you go to “Ask the Post,” and you say, “Hey, what was the unemployment rate in 2013 in [the] agriculture sector?” we may or may not have written about it in a news article. But if [we have access to] one of those data sources that our journalists trust and use, I think it’s fair to use it and give the answer to the question. But once again, there is a newsroom in the loop, like that verification of data. And I think that makes for a little bit higher quality than the general-purpose internet, you know, hoovering ask engines. They have their own place; I’m not taking a dig at them. I’m just saying there’s a different place for that, and what we are trying to build over here in The Washington Post is if you are in the market for trusted news and journalism, and you want some verified facts and have confidence, you should start with us.
Sam Ransbotham: Let’s tie back to how you started this process. You started talking about why. And right now that why has to be part of that; otherwise, like you say, that’s a sharp contrast between the useful search engines, which produce a list but do not produce the why. As I say that, though, I think about modern search technology, and it seems to be trying to use artificial intelligence to move toward more of a why and more explanation. But you were pretty clear about the role of your journalists in this process.
So maybe expand a little bit on that. Where are you automating? What absolutely requires human judgment? How are you figuring out where those lines are? We could talk about individual examples, but what’s the process for figuring out how to decide?
Vineet Khosla: It goes back to AI governance and policies around how we are using AI in the company. We broke it down into three parts. The easiest one I’ll talk [about] first is infosec. We got our infosec team involved, and we said, “Listen, you need to tell us how to not mess it up really bad. You need to tell us what’s happening on the bubble in terms of security and put a policy out.” [This] is easier for us because we are using a [large language model] that we are hosting on a private instance.
Then comes the newsroom aspect: The newsroom and the journalist sat down, and they’ve decided for themselves how they want AI to show up in the work they do — how they will use it, how they will attribute to it, what are the do’s and don’ts.
And then the third aspect is the consumer. This is the tricky aspect because this is what you typically think of as a product, and the approach we have taken is using good design. We want to always inform our consumers, our audience what they are consuming, how much of this is from AI. And it’s a spectrum, right?
Let’s take the example of summaries. We still label AI summaries — “this is an AI summary” — but the way I see people use it and the number of people who are actually looking at the disclaimer or giving us a thumbs-down button on it because they didn’t like it, it’s moving down. It’s almost to the point that nobody is shocked that we have an AI summary, and none of the users are bothered about it. But I’m pretty sure if we put a full AI-generated video — which we haven’t done so far, and we don’t plan on — we would put stronger disclaimers.
So at a product level, we want to lean on design and consumer behavior to make sure we are always informing them when they are using something [that] is AI or not.
Sam Ransbotham: Let’s jump forward though. If we were sitting here together in a decade, you’ve got to be thinking about the direction that the news experience is going. And you’ve mentioned the read the news, listen to the news, watch the news progression that’s happened. You’ve thought about this a lot. Tell me what you think is going to happen in the next decade or so.
Vineet Khosla: If only I were that smart, Sam. …
Sam Ransbotham: You wouldn’t be talking to me?
Vineet Khosla: I would be somewhere in New York in the hedge fund business, making my bets.
Sam Ransbotham: OK, we can go shorter. Maybe you can give us a little hint about next month, and we can try to expand from that.
Vineet Khosla: I do sincerely believe the need for news, and quality news, has never been greater. Journalism is a discipline, not just a format. We need to keep adapting our journalism to different formats, use technology where it can help us. And that’s what we intend to keep doing at The Washington Post.
You will start to also probably hear … the ideas around liquid content. Think about the content the way we do. Typically news lasts 24 hours, right? After 24 hours, every newsroom will tell you the story dropped off. They take it off the homepage, people stop talking [about] it. You do a deep investigative piece, maybe [it lasts] seven days. We will pin it somewhere, people will share it, it will have longer legs. But no matter what, after that, it just drops off.
I see a world where people’s curiosity drives the news. News can literally live in infinite forms for a long period of time because somebody could come back and start asking [a] bunch of questions. They could start asking questions, or they could say, “Can you help me write up a report on the change in [Immigration and Customs Enforcement] tactics between [Washington, D.C.] and Minnesota? I really want to understand what was happening in the world at that time [when] it became more violent than it used to be in the past.” I do think this unlocks more news. It actually grows the market more than [the initial] fear of shrinking. And that’s always the fear, right?
When a new technology comes, [there is] first a very genuine fear of shrinking. I don’t want to deny that. Honestly, as an engineer, I see what Claude Code has done in the last two months, and I’m like, “Whoa, there goes my backup career choice. I guess I’m not going to be a super short Java programmer anymore.” But once you get past the fear, I think this grows. AI helps us grow. As long as people and their curiosity and the need to get verified news, information, facts exist, this is going to be good. So that’s the bear. What do you call it in the stock market — the positive side?
Sam Ransbotham: You [need to know] that if you’re going to switch to hedge funds.
Vineet Khosla: Bull is positive. Bear is negative. As you’re realizing, my future career choices are quite limited.
Sam Ransbotham: You better stick with Java.
Vineet Khosla: I’ll stick with Java. But I also do see there is risk around trust. When I look at the future, the thing that worries me the most is the trust of consumers used to be with the mastheads. You would read a newspaper because you trusted that there were standards and procedures and professionals. And then in our lifetime, I [saw] the trust move to creators. People started trusting creators more. They were more influenced by people on Twitter. They were more influenced by Instagram and TikTok people who were telling them the news. And I thought about it. I’m like, “What’s going on over here?”
One is our news did not adapt fast enough, right? That’s true. We did not meet the consumer where they are. But we as humans just generally trust other humans. We trust voice. We trust language. No matter what part of the world you are [in], if somebody speaks any other language, you know that you’re in [the] company of intelligence.
In fact, if I could go back to my Apple days, we had this anecdote. When Siri came, it was the first voice. It was the first voice interaction with your machines. People could talk to it. And then Apple Maps came at the same time, and we had a few incidents where we had wrong data, and people would go on dirt roads and get stuck. The consistent complaints we used to get is “Well, Siri told me to go there.” And that’s when we realized the Siri voice and the Apple voice being the same voice was actually a problem because [users] were putting more trust in it than they should. Their eyes were showing this road doesn’t exist, but they would turn right because Siri told them to.
So I think this is what happened to us: The trust moved from mastheads to people because naturally as humans we trust other humans a little bit more. What worries me is as these AIs become almost a better human than a creator, because they can talk back to you, they can be deeply personalized, they can understand you more than a creator does, I fear the trust will move to the AIs even more than it was with the humans.
Now, given that, what do we do? That’s my hypothesis. The trust to AI that people will have, the relationship we will have, will be very deep. I think the onus is on us, in the news, in the journalism world, to build equal types of experiences so the consumer doesn’t get locked in with a couple of big options that exist in the world outside. I feel hopeful when I see things like MCP protocols come out.
Sam Ransbotham: Model context protocols.
Vineet Khosla: Model context protocols. I see agent-to-agent conversations happening. I see enough companies out there, big tech, small tech startups, [that] are working down this path of saying, “Hey, if my agent needs news, I want to connect it with your agent so it can get the right verified news.” So I’m hopeful also, but I’m also very worried about the trust. I want to make sure it stays with people who deserve it.
Sam Ransbotham: Actually, there are four or five things that are pretty fascinating there. One, I had not really thought about that transfer of trust between the different Siri products. … My gut reaction, my naive approach would have been to say, “Hey, that’s good that trust transfers.” But what you pointed out is that when you have two different products with different base levels of accuracy, that you might not want to transfer that trust. That’s an interesting way of thinking about that. I naturally thought, “Hey, more trust is better.” But you can actually signal this is something that should not be trusted with a more robotic voice, for example.
You touched on Siri. Let’s back up here and talk about how you have not always been at The Washington Post. Tell us a little bit about how you got to where you are there and Siri as a part of that journey.
Vineet Khosla: Back in my undergrad days, I got introduced to AI, and I kind of got seduced by the idea of machines doing all the work for me. I was like, “This is great. I’m going to go get a master’s in artificial intelligence, so I can just sit back and relax.” That led to my first job in the mortgage industry. We used to do these AI models for loans. If you remember, the year being 2007, when the great mortgage crisis and the financial collapse happened, my entire industry got wiped out. Turns out nobody was listening to AI when it came to loans.
But that one door closed and a universe opened. I was contributing some open-source code. The founders of Siri saw my code. They invited me to apply for an interview. So I went over to Silicon Valley, and then I spent the next 10 years working with them, building Siri. We were the voice-driven AI of our time, and for the longest time, until Alexa came and Google Assistant came, and that whole universe opened up.
[After] about 10 odd years, I took a hard right turn and I went into Uber Maps. I ran the team that was building the routing algorithms. It was a whole lot of fun. It [involved] graph search. It was hardcore computer science, right? Graph search is as computer science as you get. I really loved that stint. After doing that for about four years, LLMs came on the scene. Then I was like, “OK, I’m going back to my old world of natural language processing.” And I wanted to do something over there.
So I took some time off from Uber. I thought I’m going to reeducate myself. I bought some gardening tools. My wife got really worried. She’s like, “How long are you going to reeducate yourself? You have too many tools over here.” But this Washington Post opportunity came, and all the neurons in my brain fired. I said, “Listen, this revolution is all about language. It’s all about knowledge. This is what newsrooms are. They are the repository of language. They are the masters. They are the experts. They have all the knowledge and information.” And then I interviewed with The Washington Post; they are a great team. I interviewed with [owner] Jeff Bezos, and finally I was like, “Yes, this is what I want to do as my next chapter in life.”
Sam Ransbotham: There’s a whole bunch of things to push on there. One part of that I wanted to pull on, you glossed over very quickly, was that you had made some open-source contributions, and people at Siri noticed it. And that led to [you] being involved with Siri, which led to the Apple acquisition and your involvement there. I particularly like that because I’m a very big proponent in this idea of contributing things. [When] we think about the incentive for contribution, that’s a great story for how being interested, being curious about technology and working on something, and providing evidence of that through an open-source project — there are other ways besides open-source projects, but that’s one great way — can cascade into a very interesting arc around how that developed.
Vineet Khosla: Now that’s true. I got lucky in a lot of ways because I was doing something that people were interested in, and that opened up this opportunity. You’re very right. I do think when you’re early on in your career you should dabble with things a whole lot more [and] then become an expert in [it] because you don’t know who is looking.
Sam Ransbotham: You say luck, though, and I do think that there’s a big part of that luck, but luck only combines well with working on something at the same time. I’ll also make the snide comment that one part of the story I’d like to gloss over is your master’s in artificial intelligence was from the University of Georgia, and I’m a Georgia Tech person, so I want to quickly gloss over that. You can have bad luck as well.
Vineet Khosla: No, I actually do think it’s an important one. I have deep, deep respect for Georgia Tech. Of course, you have [an] amazing computer science program, robotics program, AI program. What University of Georgia was offering uniquely at that time, and still does, is its interdisciplinary program. So I studied language, I studied philosophy, I studied the theory of mind, I studied first-order logic, and then I also studied all this statistical AI, which is basically 99.99% of the AI as people understand it now. So congratulations, you guys won.
Sam Ransbotham: One other part of that was you mentioned graph-based [work]. Why do you think that the graph-based approaches are so interesting? Why did that catch your eye?
Vineet Khosla: Well, it was a classic routing problem. We were doing maps and routing, so you have to route over graphs and edges and nodes. Those algorithms, you studied them in school, right? That’s what caught my interest.
Now for Uber, there was a twist. The twist was that routing for a transit is very different — when I say mass transit, I don’t mean buses, I mean like taxis and Ubers — than personal routing.
We settled on a metric, which was 10 meters or 10 seconds. If your map is wrong by 10 meters, or your ETAs are wrong by 10 seconds, you don’t have a great experience. If your Uber stops 10 meters farther away than where you are, you are running to catch it. You’re putting yourself in an unsafe situation. Maybe you’re crossing the street. If you didn’t reach [it] in time, and your Uber is standing over there, maybe that guy’s getting a ticket, the traffic is backed up, the cops are on the case.
So for us, the level of accuracy was actually way more than what Google and Apple do. And we had to scale nonlinearly. With Apple and Google, the number of phones they sell is the number of map directions that will happen, while we [were] trying to balance a market. So for one rider, you would probably reach out to 100 drivers to see when they can get to them. And similarly, for 100 drivers, you reach out to 100 riders. It’s possible that the driver [who’s] closest to me is five minutes away, and the driver [who’s] closest to you is one minute away. But I might switch the order of drivers so we both get a driver in two minutes, and then the market is balanced. Otherwise, I would have canceled it because mine was five minutes away.
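The driver swap Vineet describes is an instance of the classic assignment problem: minimize pickup times across the whole marketplace rather than greedily serving each rider with their nearest driver. A minimal sketch of his two-rider anecdote follows. The ETA numbers are taken from his example; the brute-force matcher is purely illustrative and bears no resemblance to Uber’s production systems:

```python
from itertools import permutations

def best_assignment(eta):
    """Find the rider-to-driver matching that minimizes total pickup
    time, by brute force. eta[r][d] is the estimated pickup time (in
    minutes) for rider r if served by driver d. Fine for tiny examples;
    real marketplaces need proper assignment algorithms."""
    n = len(eta)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):  # perm[r] = driver given to rider r
        total = sum(eta[r][perm[r]] for r in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

# Rider 0's closest driver is 5 minutes away; rider 1's is 1 minute away.
# A driver 2 minutes from each rider is also available.
eta = [
    [5, 2],  # rider 0: driver 0 in 5 min, driver 1 in 2 min
    [2, 1],  # rider 1: driver 0 in 2 min, driver 1 in 1 min
]
perm, total = best_assignment(eta)
print(perm, total)  # (1, 0) 4 -- swap the drivers; both riders wait ~2 minutes
```

Greedy matching would give rider 1 the one-minute driver and leave rider 0 waiting five minutes (six minutes total); the swap costs each rider two minutes (four total), which is exactly the balanced-market outcome in the anecdote.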
Once you start poking at [the problems], you see this is a very different routing problem. Of course, graph search and the routes and the Dijkstra [algorithm] is at the heart of it, but the layers we had to keep putting on it to get to a balanced marketplace [were] just very exciting. No one had really done that before.
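For readers who haven’t seen it since school, the Dijkstra routine at the heart of such routing can be sketched in a few lines over a toy road graph (the node names and edge weights here are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over a weighted digraph.
    graph maps each node to a list of (neighbor, edge_weight) pairs."""
    dist = {source: 0}
    pq = [(0, source)]  # min-heap of (distance so far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

road = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
print(dijkstra(road, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

The "layers" Vineet mentions (market balancing, ETA accuracy targets like the 10-meters-or-10-seconds rule) sit on top of this core, which is why the problem stays recognizably computer science even as it stops being textbook-simple.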
Sam Ransbotham: That seems fun. Actually, you mentioned Dijkstra’s algorithm and these things. It makes me happy to think that these core ideas still hold up. I mean this matching problem you just described is a classic example of the generalized assignment problem. These are some root problems in operations research and in graph theory and mathematics. It’s fun to see that not everything is statistically picking the next probable word. [I’m] glad to see some of these old-school things come through and come back.
Vineet, this has been a fascinating look at where journalism and the technology behind it, I think, may be heading. The future of news clearly seems more personalized and more AI-powered in many ways, and more complicated in many ways. And I’m glad that you and others are working on it. Thanks so much for joining us today.
Vineet Khosla: Thanks for having me, Sam.
Sam Ransbotham: Thanks for listening. On our next episode, I’ll talk with Andrew Palmer, a journalist at The Economist. We’ll learn how another news outlet is thinking about AI. Please join us.
Allison Ryder: Thanks for listening to Me, Myself, and AI. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.