For the fourth year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. In spring 2025, we also fielded a global executive survey yielding 1,221 responses to learn the degree to which organizations are addressing responsible AI. In our most recent article, we explored the relationship between explainability and human oversight in holding AI systems accountable. This time, we dive deeper into accountability for agentic AI. Although there is no agreed-upon definition, agentic AI generally refers to AI systems that are capable of pursuing goals autonomously by making decisions, taking actions, and adapting to dynamic environments without constant human oversight. According to MIT’s AI Agent Index, deployment of these systems is increasing across fields like software engineering and customer service despite limited transparency about their technical components, intended uses, and safety.
Given the apparent governance gap, we asked our panel to react to the following provocation: Holding agentic AI accountable for its decisions and actions requires new management approaches. A clear majority (69%) agree or strongly agree with the statement, arguing that agentic AI presents a paradigm shift due to its ability to perform complex tasks autonomously at scale and its potential to create a superhuman workforce. Many experts argue that management frameworks must be reimagined to match the new dynamic of humans increasingly collaborating with AI agents in the workplace. However, a solid minority (25%) push back on this view, warning that it reflects a kind of “AI exceptionalism” that could distract from holding people and organizations accountable. They believe, instead, that existing management frameworks can be adapted to maintain clear human accountability for the design, behavior, and outcomes of agentic AI systems.
Below, we share insights from our panelists and draw on our own RAI experience to recommend how organizations can introduce new management approaches, or reimagine existing ones, to improve accountability for agentic AI systems and oversight of an increasingly hybrid human-AI workforce.
Agentic AI systems challenge traditional management models. A majority of our experts believe that agentic AI requires new management approaches due to its greater autonomy and complexity compared with earlier technologies. As Shamina Singh, president of the Mastercard Center for Inclusive Growth, observes, “These systems bring unprecedented levels of autonomy, complexity, and risk, requiring organizations to rethink traditional management strategies.” Jai Ganesh, Harman International’s chief product officer, notes that “traditional management systems are designed for deterministic systems,” whereas “agentic AI systems operate independently, are goal-oriented, and have memory and reasoning capabilities, which make their decisions complex, autonomous, and opaque.” This, he argues, “necessitates the definition of agent roles, including permissible decisions, data usage guardrails, ethical boundaries, and escalation of confidence thresholds.” Such role definitions exist for human workers, too, but they are implicit and rely on the judgment of both the manager and the worker. Agentic AI systems, by contrast, require explicitly defined rules and threshold values, and getting those right is challenging and falls outside traditional management models. Automation Anywhere’s Yan Chow adds, “Proving causation and fault becomes incredibly difficult, especially with complex, autonomous, and opaque AI systems.”
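To make the shift from implicit judgment to explicit rules concrete, here is a minimal sketch, in Python, of what an explicitly defined agent role might look like. The field names, the example actions, and the 0.85 confidence threshold are illustrative assumptions, not an established standard or any vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRole:
    """An explicit role definition for an agentic AI system.

    Field names and default values are illustrative assumptions, not a standard.
    """
    name: str
    permissible_actions: set[str]  # decisions the agent may take on its own
    forbidden_data: set[str] = field(default_factory=set)  # data-usage guardrails
    escalation_confidence: float = 0.85  # below this, the agent must defer to a human

    def may_act(self, action: str, confidence: float) -> bool:
        """Allow only permitted actions taken with sufficient confidence."""
        return action in self.permissible_actions and confidence >= self.escalation_confidence


# Example: a customer service agent that may issue small refunds on its own but
# must escalate anything it is not confident about to its human manager.
refund_agent = AgentRole(
    name="refunds-tier-1",
    permissible_actions={"refund_under_100", "send_status_update"},
    forbidden_data={"payment_card_numbers"},
)

if not refund_agent.may_act("refund_under_100", confidence=0.72):
    print("Escalating to human reviewer")  # 0.72 falls below the 0.85 threshold
```

The specific values matter less than the principle: every boundary a human manager would apply implicitly has to be written down and enforced.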
Our experts also point to the superhuman speed and scale of agentic AI as especially disruptive. As Shelley McKinley, chief legal officer at GitHub, explains, “Today’s workflows were not built with the speed and scale of AI in mind, so addressing gaps will require new governance models, clearer decision pathways, and redesigned processes that make it possible to trace, audit, and intervene in AI-driven decisions.”
United Nations University’s Tshilidzi Marwala agrees that “because of its autonomous decision-making, adaptive learning, and high-speed operations, agentic AI is difficult for traditional management models — which were created for human agency — to handle.” David Hardoon, AI enablement head at Standard Chartered Bank, warns, “Old management models, built for human-paced systems, fall short in tracking AI’s dynamic behavior, risking unaddressed errors or harms.” This “can lead to significant consequences if unchecked,” he adds, “necessitating automated monitoring with ethical guardrails.”
Given these challenges, several panelists call for continuous, iterative oversight. As Franziska Weindauer, CEO of TÜV AI.Lab, explains, “These systems make decisions on their own, and those decisions can directly impact people, workflows, and critical decisions.” Governing AI, she says, means humans stay involved across the entire life cycle. “It’s not enough to run a checklist once and call it done,” Weindauer adds; “to stay accountable, organizations need tools and processes that follow AI systems throughout their use.” Douglas Hamilton, who heads AI research and engineering at Nasdaq, suggests turning “periodic and quick process reviews into a technical, ROI-driven learning and design process.” Managers will require new skills to provide effective oversight, and agentic AI system designers must build systems that enable it.
Agentic AI requires us to rethink the relationship between humans and machines. Beyond technical oversight, managing agentic AI also requires clarity about the relationship between humans and AI agents. Apollo Global Management AI lead Katia Walsh advocates for a collaborative model, saying, “If we proceed responsibly, humans will collaborate with, supervise, and ensure ‘AI workers’ achieve their intended goals with integrity.” But others question whether human oversight should always prevail. “The real challenge comes when humans override AI and are wrong,” posits Alyssa Lefaivre Škopac, director of AI trust and safety at Alberta Machine Intelligence Institute. She asks, “Should human decision-making always be prioritized, or do we need to defer to AI in certain cases?” Similarly, Hamilton cautions that “agentic AI systems require managers to think carefully about the costs of being wrong, what interventions are acceptable without human oversight, and which ones would require it.”
Our experts also emphasize that everyone needs to rethink accountability since AI agents are not people — meaning we cannot yet hold them accountable in the same way. “Since AI lacks legal personhood,” Chow notes, “it can’t be directly sued, imprisoned, or held liable in the way humans or corporations can.” Marwala adds, “New legal and ethical frameworks are also essential because existing legal systems do not acknowledge AI as a legal person, necessitating proactive and open management of the AI life cycle that goes beyond conventional performance metrics.” GitHub’s McKinley contends that “since AI isn’t a person or legal entity, accountability for decisions and actions demands a broad, shared responsibility from the start.” She elaborates, “Agentic AI creators must embed things like transparency and human oversight during development, while users must deploy them responsibly, and monitor and document impacts.”
Ultimately, it’s about holding people, not AI, accountable. Not everyone agrees that agentic AI demands new management models. Ben Dias, chief AI scientist at IAG, rejects the idea that “AI requires revolutionary changes to proven organizational practices,” arguing, “Managers routinely delegate to team members whose decision-making processes they cannot fully predict or control, yet maintain accountability through clear boundaries, outcome-focused oversight, and appropriate monitoring.” For Dias, “agentic AI simply represents a new type of team member within this established framework.” RAI Institute chairman Manoj Saxena similarly argues that “if your AI is acting like an employee (or worse, a team of freelancers), it’s time to start managing it like one.”
Others also see calls for new management models as misplaced. For RAIght.ai co-CEO Richard Benjamins, “stating that new management approaches are needed because of transferring accountability from people to agentic AI systems is, at the current state of play, a bridge too far.” Instead, these experts stress that accountability must remain firmly with the people and organizations behind the technology. As UNICEF’s Steven Vosloo says, “It is important to recognize up front that agentic AI (i.e., software) cannot be accountable for its decisions and actions,” and “people who make and deploy the software are accountable and responsible for its behavior.” Partnership on AI CEO Rebecca Finlay shares the view that “holding agentic AI accountable is about holding ourselves and others accountable for the choices we make about how and when to use this new technology.” As Mark Surman, president of Mozilla, sums it up, “Those who implement AI systems and provide them to end users are the ones that need to be accountable, not the AI itself.”
Recommendations
In sum, we offer the following recommendations for organizations seeking to improve accountability over agentic AI systems:
1. Adopt life-cycle-based management approaches. Agentic AI is fast, complex, and dynamic. Implement a continuous, iterative management process that tracks agentic AI systems from initial design through deployment and ongoing use. Instead of one-time reviews, introduce recurring assessments, technical audits, and performance monitoring to detect and address issues in real time. Management approaches should make oversight an embedded part of daily operations, not a periodic or isolated compliance task. (A minimal sketch of such a recurring review cycle appears after this list.)
2. Integrate human accountability into AI governance structures. Design management frameworks to explicitly assign specific roles and responsibilities to both the human manager and the agentic AI system at every stage of the AI life cycle. Decision-making protocols, escalation paths, and evaluation checkpoints must be part of every agentic AI system deployment to ensure that people remain answerable for outcomes. These structures should reinforce that agentic AI is a tool within human-led processes.
3. Enable AI-led decisions in defined circumstances. While human oversight is essential, the speed and scale of agentic AI stretch the limits of what humans can review. New management approaches should identify areas where AI can and should prevail based on its superior speed, accuracy, or consistency. In such cases, governance can focus instead on defining boundaries, monitoring performance, and ensuring that human intervention is reserved for higher-risk scenarios. These responsibilities should be agreed upon by senior corporate leadership and clearly communicated to managers so that they fully understand their accountability in these situations. (A sketch of such risk-based decision routing appears after this list.)
4. Prepare for agentic AI that creates other AI systems. Failure to account for AI systems developed or modified autonomously by other AI systems can result in a significant visibility gap in an organization. Recognizing and integrating these emergent systems will be critical to defining the scope of AI in the enterprise. Governance structures and management approaches that do not account for AI offspring will foster, not mitigate, AI-related risks. (A sketch of a lineage-tracking agent registry appears after this list.)
5. When it comes to agentic AI, make the implicit explicit. Since agentic AI systems require explicitly defined rules and threshold values, organizations should clarify the role and scope of agentic AI in their management structures. Just as human labor scales through hierarchical or structured management systems designed to ensure accountability, integrating agentic AI into the workforce requires a clear understanding of its scope and a deliberate articulation of its role within these organizational frameworks, including its relationship to the human members of an increasingly superhuman workforce.
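To illustrate recommendation 1, here is a minimal sketch, in Python, of a recurring review cycle that follows deployed agents throughout their use. The check names, the placeholder logic, and the agent IDs are hypothetical; a real implementation would wire these checks to an organization’s own logs and baselines.

```python
import datetime

# Illustrative recurring checks; each returns a list of findings (empty means pass).
# The check names and placeholder bodies are assumptions, not an existing toolkit.
def check_output_drift(agent_id: str) -> list[str]:
    return []  # placeholder: compare recent outputs against an approved baseline

def check_guardrail_violations(agent_id: str) -> list[str]:
    return []  # placeholder: scan logs for blocked actions or data-use breaches

def check_escalation_rate(agent_id: str) -> list[str]:
    return []  # placeholder: flag agents that stopped escalating low-confidence cases

LIFECYCLE_CHECKS = [check_output_drift, check_guardrail_violations, check_escalation_rate]

def run_review_cycle(agent_ids: list[str]) -> dict[str, list[str]]:
    """One recurring assessment pass: run every check against every deployed
    agent and return findings for human follow-up, stamped for the audit trail."""
    findings: dict[str, list[str]] = {}
    for agent_id in agent_ids:
        issues = [msg for check in LIFECYCLE_CHECKS for msg in check(agent_id)]
        if issues:
            findings[agent_id] = issues
    print(f"{datetime.datetime.now().isoformat()}: reviewed {len(agent_ids)} agents")
    return findings

# Intended to run on a schedule (e.g., hourly), not as a one-time review.
open_findings = run_review_cycle(["refunds-tier-1", "support-triage"])
```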
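For recommendation 3, the sketch below shows one way to encode where AI can and should prevail and where human intervention is reserved. The risk tiers and the 0.90 confidence cutoff are illustrative assumptions that, per the recommendation, senior leadership would need to set and communicate.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Illustrative routing policy: AI decides alone only in low-risk, high-confidence
# cases; human intervention is reserved for higher-risk scenarios.
def route_decision(risk: Risk, ai_confidence: float) -> str:
    if risk is Risk.LOW and ai_confidence >= 0.90:
        return "ai_decides"  # AI prevails: fast, consistent, low stakes
    if risk is Risk.HIGH:
        return "human_decides"  # human judgment required, regardless of confidence
    return "ai_recommends_human_approves"  # middle ground: AI drafts, human signs off

assert route_decision(Risk.LOW, 0.95) == "ai_decides"
assert route_decision(Risk.HIGH, 0.99) == "human_decides"
```

The design choice worth noting is that high-risk cases route to a human even when the AI is highly confident, reflecting the panel’s point that confidence is not a substitute for accountability.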
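And for recommendation 4, here is a minimal sketch of an agent registry that closes the visibility gap by recording lineage whenever one AI system creates another. The structure and naming are illustrative assumptions, not a reference to any existing tool.

```python
from dataclasses import dataclass

@dataclass
class RegisteredAgent:
    agent_id: str
    created_by: str | None  # None for human-deployed agents; otherwise the parent agent

# Minimal registry: every agent, including those spawned by other agents,
# must be recorded so governance keeps visibility over "AI offspring."
REGISTRY: dict[str, RegisteredAgent] = {}

def register(agent_id: str, created_by: str | None = None) -> None:
    if created_by is not None and created_by not in REGISTRY:
        raise ValueError(f"Unknown parent agent: {created_by}")  # no unregistered lineage
    REGISTRY[agent_id] = RegisteredAgent(agent_id, created_by)

def lineage(agent_id: str) -> list[str]:
    """Trace an agent back to its human-deployed ancestor for accountability."""
    chain: list[str] = []
    current: str | None = agent_id
    while current is not None:
        chain.append(current)
        current = REGISTRY[current].created_by
    return chain

register("support-triage")                            # deployed by a human team
register("faq-updater", created_by="support-triage")  # spawned autonomously by that agent
print(lineage("faq-updater"))  # ['faq-updater', 'support-triage']
```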