A chemical engineer by training, Angela Nakalembe worked in the sciences and management consulting before landing at YouTube as the company’s engineering program manager for trust and safety.
At YouTube, Angela explains, AI has become a first line of defense against harmful content. The technology not only accelerates content moderation but also makes the process more humane by filtering out problematic content before it reaches a human reviewer. To combat the proliferation of AI-generated content that can be hard to distinguish from material created by humans, YouTube, its parent company Google, and others have joined the Coalition for Content Provenance and Authenticity (C2PA) to establish standards for verifying the origins of online content. Also on today’s episode, Angela shares some personal experiences using large language models (LLMs) and Google’s own AI tools to illustrate how she sees individuals using AI in the future.
Subscribe to Me, Myself, and AI on Apple Podcasts or Spotify.
Transcript
Allison Ryder: What inspired a chemical engineer to work in program management for trust and safety? Find out on today’s episode.
Angela Nakalembe: I’m Angela Nakalembe from YouTube, and you’re listening to Me, Myself, and AI.
Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast from MIT Sloan Management Review exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at MIT SMR since 2014, with research articles, annual industry reports, case studies, and now 12 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.
Hey, listeners. Thanks for joining us again. Today, we’ve got Angela Nakalembe, engineering program manager at YouTube, with us in the virtual studio. Angela, thanks for joining us.
Angela Nakalembe: Thanks for having me, Sam. I’m excited to be here.
Sam Ransbotham: Typically, we start with some background, but it’s really hard to imagine any listener who doesn’t know about YouTube. If you’re listening to a podcast, you probably know about YouTube. But still, can you give us a brief overview of YouTube and, in particular, what your role there is?
Angela Nakalembe: YouTube is an online video-sharing platform that started about 20 years ago as an online community for folks to share videos on any topic they were interested in. It got acquired by Google (Alphabet) shortly thereafter. And since then, it’s grown into this behemoth of an online presence, with some 2 billion to 3 billion monthly users, and created an incredible online community where creators can connect and also make a living.
What I do specifically at YouTube is work behind the scenes within YouTube’s trust and safety department, where, as an engineering program manager, I’m responsible for overseeing a lot of the tool and feature launches — a lot of them now using AI — that help keep the platform safe from hate speech, misinformation, graphic content, and the like. So it’s pretty rewarding work. In this ever-evolving world of AI, as we see this technology get ever more integrated into our lives, it’s really important that we find ways not only to use it to power our products and make it easier for us to send emails but also to protect the people who are forming online communities.
Sam Ransbotham: You teed it up. What is happening with artificial intelligence [at] YouTube right now?
Angela Nakalembe: One of the ways we’re using AI is to basically turbocharge our content moderation efforts and make it a lot faster but also a lot more humane. Imagine your job is to sit for eight hours a day, five days a week, reviewing videos filled with sexually explicit content or graphic violence or misinformation. That’s the reality that exists for a lot of the human moderators YouTube and other online social media platforms use to moderate content and keep that violative content off the platform. And doing that day in, day out can have such a huge emotional and psychological toll on our reviewers.
But now with AI and machine learning, we’re able to really change that and evolve the role of the reviewer so they work hand in hand with these AI tools rather than alone. Specifically, we’re developing tools that can act as a first line of defense to flag or catch harmful content before it ever reaches a person’s eyes, whether that person is a user on the platform or a human reviewer.
So while human reviewers are still needed as a step to verify what these AI models are doing, they no longer have to carry that weight and that burden alone. And to me, that’s one of the most powerful uses of AI: We’re not so much replacing people as, in a sense, helping protect them. We’re now reporting much higher reviewer satisfaction scores in terms of well-being.
We’re also receiving a lot of positive feedback from our users or creators who notice a much faster handle time for a lot of issues or cases that they’ve reported. So things are getting taken down a lot faster, or things are not actually making it onto the platform, so people feel it’s a lot safer. It’s been incredible to see how we’re able to use this technology to overall make it a much better experience for everyone across the board, both within YouTube and out in the world.
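What Angela describes amounts to a human-in-the-loop triage architecture: an automated classifier screens everything first, the clearest violations never reach a person, and only ambiguous cases are escalated to reviewers. The sketch below is a minimal, hypothetical illustration of that general pattern; the `Upload` type, score thresholds, and queue names are invented for this example and do not reflect YouTube’s actual models, thresholds, or policies.

```python
# Hypothetical sketch of human-in-the-loop content triage: a model score
# acts as the first line of defense, auto-removing near-certain violations,
# clearing clearly benign uploads, and routing only the ambiguous middle
# band to human reviewers. Illustrative only; not YouTube's system.

from dataclasses import dataclass


@dataclass
class Upload:
    video_id: str
    violation_score: float  # 0.0 (benign) .. 1.0 (clear violation), from a classifier

REMOVE_THRESHOLD = 0.95  # hypothetical: near-certain violations removed automatically
REVIEW_THRESHOLD = 0.40  # hypothetical: ambiguous content escalated to a person


def triage(upload: Upload) -> str:
    """Return the queue this upload should be routed to."""
    if upload.violation_score >= REMOVE_THRESHOLD:
        return "auto_remove"   # a reviewer never has to see the worst content
    if upload.violation_score >= REVIEW_THRESHOLD:
        return "human_review"  # nuanced cases still get human judgment
    return "publish"


if __name__ == "__main__":
    for u in [Upload("a", 0.98), Upload("b", 0.55), Upload("c", 0.05)]:
        print(u.video_id, "->", triage(u))
```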
Sam Ransbotham: We had a family friend who’s a policeman, and he worked in the parts of the police force that dealt with things you didn’t want to see. He talked about the turnover that job has, even though … they believe in the job. It’s a rough job, and it’s the kind of thing that they end up taking home with them inadvertently, even if they don’t want to. They can’t just forget [what] they’ve seen all day.
Your mentioning it here kind of makes it sink in, this idea that you are, in some sense, replacing human workers in a place where we really don’t want humans doing that work. Typically, with artificial intelligence, we think about the dirty, dull, and dangerous jobs.
Angela Nakalembe: There’s still an incredible need for these human moderators, right? There’s a lot of nuance that AI just can’t understand, you know? … I think AI is fantastic because it gives that first line of defense and tackles a lot of the more gory, very obvious instances of policy guideline violations, but that leaves space for humans to handle the work that needs a little bit more nuance. So there’s still a need for human support there.
Sam Ransbotham: We were talking about the time spent curating content, and the number of submissions isn’t decreasing. At the same time, artificial intelligence is helping people produce content much more quickly, so you’re getting much more content that you need to curate. And I know you’re wrestling with exactly what to do with AI-generated content. Alphabet is working on developing these models, and at the same time, you’re in the interesting position of curating what they produce. What’s the YouTube take on that?
Angela Nakalembe: That’s a really good point. I do want to underscore that one of the benefits of AI has been frictionless content creation. It’s helping people express themselves much more easily across language barriers and scale ideas in ways they previously couldn’t, right? But the downside is that when anyone can instantly generate high-quality text or video or images, we’re flooding the internet with a lot of content that looks legitimate but really isn’t.
That’s a big thing that we’ve been struggling with, not just at YouTube but in general, with the rise of AI. A good example is last year, which was a really big election season globally. We saw a huge uptick in AI-generated videos that people would create about political opponents, saying the most rage-baiting things you can think of to get a rise out of people and spread misinformation.
We’ve put a lot of time in at YouTube thinking about how we can prevent the spread of misinformation and make sure people are more aware of how the content they’re consuming was created. A really good example I want to call out here: At Google and YouTube, we’re working on a big initiative, the Coalition for Content Provenance and Authenticity, or C2PA for short.
It’s basically an open technical standard that lets publishers, creators, and consumers establish the origins of whatever content they’re observing. It’s backed by a group of tech companies from across the world: Adobe, Microsoft, and a bunch of other companies are involved as well, and they’re committed to using it. And at YouTube, we’re requiring our creators to disclose whether their content was created with a camera or with AI.
That’s one of the ways we’re mitigating it. We’re also training our models to flag misleading, low-effort, or manipulated content a lot faster. The goal is really to restore context, or what we call cognitive security: people’s ability to distinguish what’s real from what’s not in a world where it’s becoming increasingly difficult to tell the difference. Back to the example I shared earlier: Last year, it was somewhat easy to tell the difference between what was AI-generated and what wasn’t in those political videos.
But fast-forward eight months — Sam, there are some videos I watch where, at first glance, second glance, I could not tell that they were AI-generated. They were very, very realistic, right? As the technology evolves, I think we as a company — Google, YouTube — are trying our best to keep up with that and make sure we can find ways to help people distinguish what’s real from what’s not.
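The C2PA idea Angela describes boils down to media carrying a signed manifest about how it was made, which a platform can read to decide what disclosure to show. The sketch below is a conceptual illustration only; the `ProvenanceManifest` type and `disclosure_label` function are invented for this example and do not use the real C2PA libraries, manifest format, or YouTube’s actual disclosure logic.

```python
# Conceptual sketch of a C2PA-style provenance check: media carries a
# signed manifest describing how it was created, and a platform reads it
# to decide whether to surface an "AI-generated" disclosure. Hypothetical
# types and logic; not the real C2PA SDK or YouTube's implementation.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ProvenanceManifest:
    producer: str               # e.g., a camera vendor or a generative-AI tool
    captured_with_camera: bool
    generated_with_ai: bool
    signature_valid: bool       # in real C2PA, verified cryptographically


def disclosure_label(manifest: Optional[ProvenanceManifest]) -> str:
    """Return the label a platform might surface next to a piece of content."""
    if manifest is None or not manifest.signature_valid:
        return "No verified provenance information"
    if manifest.generated_with_ai:
        return "Disclosed as AI-generated or altered content"
    if manifest.captured_with_camera:
        return "Captured with a camera"
    return "Provenance information available"


if __name__ == "__main__":
    print(disclosure_label(ProvenanceManifest("gen-video-tool", False, True, True)))
    print(disclosure_label(None))
```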
Sam Ransbotham: I’m glad you mentioned provenance, though, because I think, in general, not just with video, provenance is a big deal for any sort of data we’re working with. Where data comes from, how it’s used, what its source was: I think that’s gotten much more difficult. It’s wonderful that we have all this data and video and content available, but at the same time, it has become much harder to know where it came from. If I collected data myself, I knew where it came from, but if I use data from someone else, I’m one step removed. And I think provenance in general is something everyone should start paying more attention to.
In some sense, with the market power that YouTube has, a lot of the onus then falls on you, and what you do ends up becoming, to some degree, a de facto standard in the marketplace. And, you know, not to put more pressure on you, but …
Angela Nakalembe: None taken.
Sam Ransbotham: The things you’re doing are pretty important there. I want to switch a little bit. … Your title is engineering program manager, and you’re actually an engineer. I always have to bring this out. I was a chemical engineer back in the day. You were a chemical engineer.
Angela Nakalembe: Yes.
Sam Ransbotham: Tell us a little bit about how you ended up at YouTube.
Angela Nakalembe: As you mentioned, we were both chemical engineering majors, and I had a passion for chemistry and always a curious mind. That’s what drew me to it in the beginning, but I quickly realized that (1) there was not a lot of chemistry involved in chemical engineering, and (2) as a chemical engineer, or just as an engineer in general, you get to work on all these exciting projects and initiatives and problem-solving, but you don’t have a whole lot of autonomy over the direction of the work that you’re doing. I became increasingly hungry for the ability to … sway business strategy and decisions on where we took the product.
So that’s where I made the switch from engineering into management consulting, of all places. But it was a fantastic time because it allowed me to create my own, what I would call, rotation program. I got to be client-facing with Fortune 50 companies across a wide range of industries, doing all sorts of digital transformations and helping them scale their technologies and maximize their impact globally.
I ended up learning that I enjoyed working in the tech space because the industry, consumer technology especially, is always evolving. I love working on tooling or products that people get to use in their everyday lives and having a direct impact on that — the joy of being able to turn your TV on and tell somebody, “Hey, part of the work I do every day is to keep this app safe, or I helped launch XYZ feature.”
It’s very rewarding and very nice when you can easily pinpoint that. That’s how I landed on working in technology, on work that I could have a direct impact on. So I started looking for roles at companies whose platforms I got to use on an everyday basis. YouTube was really high on my list because there are very few things that I watch or consume more than YouTube, outside of books and the like.
So I landed at YouTube, kind of fell into trust and safety after jumping around to different places, and really just fell in love. I enjoy the process of building tools that keep the platform safe for its users; we really are the beating heart of trust and safety.
It’s really fulfilling work. As AI becomes more of a presence in the work that we’re doing, which honestly started two and a half or three years ago, it’s been exciting to be at the forefront of that — having to think ahead of the curve to make sure the app is still delivering the level of quality experience for our users that it has been, even as this technology comes in and becomes enmeshed in our everyday lives.
Sam Ransbotham: As you’re talking about the engineering, I see some parallels. One of the plights of the engineer is that there’s never a headline in the newspaper that says, “Bridge doesn’t fall for one more day.” Right? It’s only when the bridge falls that it makes the headlines.
Angela Nakalembe: Yep.
Sam Ransbotham: This is what engineers around the world are constantly frustrated by: The fact that they do their job well means that nobody knows about it. And just as you’re saying, if people on YouTube don’t see inappropriate content, they don’t think about the fact that they didn’t see it. So it’s hidden there.
Angela Nakalembe: I’ve had to train myself to think of that as a sign that we’re doing a good job and be like, “You know what? If no one’s talking about us, that means we’re doing exactly what we need to do, and we should pat ourselves on the back.” So I’m very proud of the work that we’re doing and that we’re going to continue doing. I like that bridge analogy you shared.
Sam Ransbotham: What’s different about traditional engineering, like civil engineering and chemical engineering, is that it has established principles. … The physics isn’t changing on you every day. With building materials, you know, there are improvements; I’m not denigrating the fact that there are improvements in those industries. But it feels very different from how rapidly the tools have changed, as you said, over the last two and a half years. So maybe some of the engineering analogy breaks down, and you have to respond so quickly.
Angela Nakalembe: Yeah, 100%. I feel like there’s never a dull moment. I think that’s one of the reasons I’ve enjoyed being in this space. You have to constantly be learning, constantly evolving, and constantly looking at new ways to use the tools and the skills you have at your disposal. It used to be that you just had a set of languages you were really good at coding in, but now … vibe coding is coming into play. Now everyone’s learning how to build things in half the time using these LLMs, which has been an exciting opportunity for us.
I played around and built an app over the weekend with one of Google DeepMind’s tools. And it was really exciting. … These are things that I historically didn’t think I’d be able to build, or at least it would have taken me a really long time, but it’s been really cool to see all the things you can build with this technology when you really focus on using it as a learning tool, as a tool for good. It goes back to what I was saying earlier about adaptability.
Look at my own story: I started out in chemical engineering, and now I’m in trust and safety. As humans, we are very adaptable. I think that’s one of our superpowers, and we can harness it. As this technology becomes more prevalent in the workforce, I think we’re better served feeling the fear of the change that’s coming with this tool but then guiding that fear toward curiosity, using it to learn more about this technology and about how we can bulletproof our careers, or just our lives, and use this technology for good.
Sam Ransbotham: You’ve got some lofty ideas there, but I have to go back and push you.
Angela Nakalembe: Sure.
Sam Ransbotham: What was the app? Come on. Listeners want to know. What was the app that you made over the weekend? We’re not going to let that go so quickly.
Angela Nakalembe: I was trying to figure out what I wanted to do next with my hair. It took me an embarrassingly long time, but it was really fun, and I was able to generate a little app where I can just upload a photo and then scroll through a bunch of different hairstyles and hair colors and figure out what it is I want to do. It was a fun little exercise.
Sam Ransbotham: You responded well to me putting you on the spot. Thank you.
Angela Nakalembe: I might go off on a bit of a tangent here. Sam, do you know, as of 2025, what the No. 1 use case for AI was?
Sam Ransbotham: If it’s my class, it’s doing homework.
Angela Nakalembe: You know what? I think it’s close. Let me actually pull up the picture because someone sent this to me a couple weeks ago, and it was actually kind of mind-blowing. Homework is definitely up there.
In 2024, the No. 1 use case was generating ideas. I think homework is probably somewhere around there. Therapy and companionship was No. 2, and then specific research. But in 2025, the biggest use case for generative AI is therapy and companionship. And No. 2 is organizing life. And then No. 3 is finding purpose, which is so interesting.
When you look at those top three things … I mentioned earlier how our communities are becoming increasingly online. People are now also leaning on generative AI as a confidant, as some form of community. That presents such an interesting dynamic. With people using this tool for emotional support or emotional connection, how do we remind them, as they use it, that AI is not sentient?
It might appear to be, but it really isn’t. It’s very convincing. But we need to be very cognizant of the parasocial relationships that could be formed and the cognitive vulnerabilities that could come from becoming over-reliant on these tools. At Google and YouTube, we’re building guardrails to help prevent this type of thing from happening, people becoming overly reliant on this technology or developing some sort of … well, how do I put it? Basically, we’re putting safeguards against role-play that could simulate sentience and making sure people are still very aware that when they’re engaging with these tools, they’re engaging with a machine, not a real person.
Back to the concept of cognitive security, it’s crucially important for us as we build with AI tooling [to] make sure that AI remains helpful and human-centered, but we do not want it to pretend to be human. So how do we keep that at the forefront of all the work that we’re doing?
Sam Ransbotham: That’s — to tie back to what you started this by saying — learning more about the technology: learning how it works and what it’s good for and not good for, which is very related to understanding that it is in fact a machine and not an actual person to have a relationship with. What you describe, I think, speaks to a potential growing divide between people who do learn more and get better at using these tools and people who rely on the tools, maybe rely on them too much or use them in unhealthy ways. We’ve had a digital divide before, and now we have, perhaps, an even larger digital chasm between those groups and the ways they use these tools.
One of the things we have is a segment where I ask you some rapid-fire questions. We’ve already sort of touched on a lot of the interesting ones here.
Angela Nakalembe: OK.
Sam Ransbotham: What’s moved faster or slower with artificial intelligence than you’ve expected?
Angela Nakalembe: It’s text-to-video and text-to-image and the accuracy of it. I think we all remember a few of the images that went viral about a year ago, like the pope in a fancy winter coat and stuff like that. And it was still very easy to tell that wasn’t real. But fast-forward to 2025, and it’s becoming increasingly difficult to distinguish between what’s real and what’s not. And to me, that’s the thing that’s kind of taken me aback.
Sam Ransbotham: What do you wish that AI could do better?
Angela Nakalembe: Sometimes I feel like I have to put in a lot of effort in my prompt engineering, and I think it’s really just a matter of time till we get there, [when] I wouldn’t need to be as detailed or as specific with my prompts. And it would be a lot easier for it to retain a lot more context or just intuit based on … past conversations and such what it is I’m trying to accomplish.
Sam Ransbotham: What frustrates you about artificial intelligence?
Angela Nakalembe: What frustrates me? I don’t know if it’s artificial intelligence itself that frustrates me; I think it’s how it’s being used. Maybe it’s not frustration but more concern. I did mention people using artificial intelligence almost as an emotional support person. Building those very parasocial relationships does kind of scare me a little bit.
Sam Ransbotham: What’s the best way you like to use AI? Was it making your app? What are other ways you personally like to use it?
Angela Nakalembe: The app was a fun way. Another way is I recently signed up for a triathlon, and I had no idea what I was doing. I basically talked to my LLM and said, “Hey, I have x number of months to train for this sprint triathlon. Can you help me create a workout plan?”
I think life planning in general is a fantastic use for it, and so that’s one of my favorite ways I’ve been able to use AI. It gave me a fantastic week-by-week breakdown that I can follow based on my current goals and my current fitness level.
And then, from a work standpoint, when I’m ramping up on a new effort and really trying to understand what it is we’re trying to do, NotebookLM has been such a fantastic tool. It’s one of the Google tools powered by Gemini where you can just drop in a bunch of documents and have it synthesize a really fantastic summary for you. It can even create a podcast for you based on any questions you ask.
It’s been a really good way to ingest information in a short amount of time and get up to speed on things, really great. I think those are my two favorite use cases.
Sam Ransbotham: That’s great, actually. Too bad it won’t do the exercise for you, but maybe it will save you enough time by summarizing things that you have the time to exercise.
Angela Nakalembe: Yeah.
Sam Ransbotham: Thanks so much for joining us today. We’ve enjoyed talking with you and thanks for coming on the show.
Angela Nakalembe: Of course. It was great. Thanks for having me, Sam.
Sam Ransbotham: Thanks for listening. Next time, I’ll speak with Jeetu Patel, president and chief product officer at Cisco. Please join us.
Allison Ryder: Thanks for listening to Me, Myself, and AI. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.