Anthropic Turns MCP Agents Into Code-First Systems With ‘Code Execution With MCP’ Approach
Understanding the Target Audience
The target audience for Anthropic’s new approach primarily includes AI developers, business managers, and technology decision-makers in enterprises looking to integrate AI into their operations. These individuals are typically concerned with the efficiency, scalability, and cost-effectiveness of AI systems. Their pain points often revolve around high token usage, latency issues, and the complexities of managing multiple tools within AI workflows.
Goals for this audience include optimizing AI performance, reducing operational costs, and ensuring seamless integration of AI tools into existing business processes. They are interested in practical applications of AI technology, particularly how new methodologies can enhance productivity and reduce overhead. Communication preferences lean towards technical documentation, case studies, and concise, actionable insights that can be quickly implemented in their environments.
The Problem: MCP Tools as Direct Model Calls
The Model Context Protocol (MCP) is an open standard that enables AI applications to connect to external systems via MCP servers, which expose various tools. These tools allow models to query databases, call APIs, or interact with files through a unified interface. However, the default pattern requires agents to load numerous tool definitions into the model context, which can lead to inefficiencies. Each tool definition includes schema information and metadata, and intermediate results must also be streamed back into the context, resulting in excessive token usage.
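To make that overhead concrete, the sketch below shows roughly what a single tool definition looks like once loaded into context. The tool name and schema fields are illustrative rather than taken from any specific MCP server.

```typescript
// Illustrative sketch of one MCP tool definition as the client surfaces it to
// the model. Every connected server contributes many entries like this, and
// all of them occupy context tokens before the agent has done any work.
const getDocumentToolDefinition = {
  name: "google_drive__get_document", // hypothetical tool name
  description: "Retrieve the full text content of a Google Drive document.",
  inputSchema: {
    type: "object",
    properties: {
      documentId: {
        type: "string",
        description: "ID of the document to fetch.",
      },
    },
    required: ["documentId"],
  },
};
```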
For instance, when an agent retrieves a long sales meeting transcript from Google Drive and then updates a Salesforce record with it, the entire transcript passes through the model's context twice: once as the Google Drive tool result, and again inside the arguments of the Salesforce call. For lengthy documents this consumes tens of thousands of unnecessary tokens, which complicates scaling and increases costs.
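A rough, hypothetical reconstruction of what accumulates in the model's context during that workflow is sketched below; the message shapes are simplified, but the duplication of the transcript is the point.

```typescript
// Simplified, hypothetical view of the context window in the direct pattern.
// The transcript text is paid for twice: once as a tool result and once
// echoed back inside the next tool call's arguments.
const contextAfterWorkflow = [
  {
    role: "assistant",
    toolCall: { name: "google_drive.get_document", args: { documentId: "abc123" } },
  },
  {
    role: "tool",
    name: "google_drive.get_document",
    content: "<full meeting transcript, potentially tens of thousands of tokens>",
  },
  {
    role: "assistant",
    toolCall: {
      name: "salesforce.update_record",
      args: {
        recordId: "006XX0000012345", // hypothetical record ID
        data: { Notes: "<the same transcript, repeated verbatim>" },
      },
    },
  },
];
```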
The Shift: Representing MCP Servers as Code APIs
Anthropic proposes a new methodology that integrates MCP into a code execution loop. Instead of allowing the model to call tools directly, the MCP client transforms each server into a set of code modules within a filesystem. The model is then tasked with writing TypeScript code that imports and composes these modules, which run in a sandboxed environment.
The process involves three key steps:
- The MCP client generates a directory that mirrors the available MCP servers and tools.
- For each MCP tool, a thin wrapper function is created in a source file, such as servers/google-drive/getDocument.ts, which internally calls the MCP tool with typed parameters (a minimal sketch of such a wrapper appears after this list).
- The model writes TypeScript code that imports these functions, executes them, and manages control flow and data movement within the execution environment.
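A minimal sketch of the generated layout and one wrapper is shown below. The callMCPTool helper, the directory names, and the type definitions are assumptions about what an MCP client might generate, not Anthropic's exact output.

```typescript
// Generated layout (illustrative):
//   servers/
//     google-drive/
//       getDocument.ts
//       index.ts
//     salesforce/
//       updateRecord.ts
//       index.ts

// servers/google-drive/getDocument.ts
// Thin wrapper that forwards typed parameters to the underlying MCP tool.
// `callMCPTool` is a hypothetical helper supplied by the MCP client harness.
import { callMCPTool } from "../../client";

export interface GetDocumentInput {
  documentId: string;
}

export interface GetDocumentOutput {
  content: string;
}

export async function getDocument(input: GetDocumentInput): Promise<GetDocumentOutput> {
  // Agent code only sees this typed function; the MCP plumbing stays hidden.
  return callMCPTool<GetDocumentOutput>("google_drive__get_document", input);
}
```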
This new approach allows for a more efficient workflow. For example, the previous Google Drive and Salesforce integration can be condensed into a short script that keeps the transcript inside the execution environment and returns only the data the model actually needs, significantly reducing token usage.
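Under the same assumptions as the wrapper sketch above, that script might look like the following; the updateRecord wrapper, the IDs, and the field names are illustrative.

```typescript
// Agent-authored script (illustrative). The transcript stays inside the
// sandboxed execution environment; only a short confirmation string is
// returned to the model's context.
import { getDocument } from "./servers/google-drive/getDocument";
import { updateRecord } from "./servers/salesforce/updateRecord";

export async function attachTranscriptToMeeting(): Promise<string> {
  const { content: transcript } = await getDocument({ documentId: "abc123" }); // hypothetical ID

  await updateRecord({
    objectType: "SalesMeeting",  // hypothetical Salesforce object type
    recordId: "006XX0000012345", // hypothetical record ID
    data: { Notes: transcript },
  });

  // Only this summary re-enters the model's context, not the transcript itself.
  return `Attached transcript (${transcript.length} characters) to the meeting record.`;
}
```

Because the model only ever sees the short return value rather than the transcript, the bulk of the token savings described in the next section comes from scripts of exactly this shape.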
Quantitative Impact: Token Usage Drops by 98.7 Percent
Anthropic reports a significant reduction in token consumption. A workflow that originally required approximately 150,000 tokens when using direct model calls was restructured using the new code execution approach, resulting in only about 2,000 tokens being used. This represents a 98.7 percent decrease in token usage, which translates to lower costs and reduced latency.
Design Benefits for Agent Builders
The ‘code execution with MCP’ approach introduces several advantages for engineers designing agents:
- Progressive Tool Discovery: Agents no longer need to load all tool definitions into context. They can explore the generated filesystem and access specific tool modules as needed.
- Context Efficient Data Handling: Large datasets remain within the execution environment, allowing operations such as filtering and aggregation without overwhelming the model with unnecessary information (a sketch of this pattern follows the list).
- Privacy Preserving Operations: Sensitive information can be tokenized within the execution environment, ensuring that raw identifiers are not exposed to the model while still allowing for necessary operations.
- State and Reusable Skills: The filesystem enables agents to store intermediate files and reusable scripts, facilitating the development of more complex capabilities over time.
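As an illustration of the context-efficient data handling point above, an agent script might aggregate a large result set inside the sandbox and surface only a compact summary. The listRows wrapper, its parameters, and the row shape below are hypothetical.

```typescript
// Illustrative sketch: the full row set never enters the model's context.
// `listRows` stands in for a generated wrapper around a spreadsheet-style
// MCP tool; its name and shape are assumptions.
import { listRows } from "./servers/google-sheets/listRows";

interface OrderRow {
  status: string;
  amount: number;
}

export async function summarizePendingOrders(): Promise<string> {
  const rows: OrderRow[] = await listRows({ sheetId: "orders-2024" }); // hypothetical sheet ID

  // Filtering and aggregation happen in the execution environment.
  const pending = rows.filter((row) => row.status === "pending");
  const total = pending.reduce((sum, row) => sum + row.amount, 0);

  // Only this short summary line is returned to the model.
  return `${pending.length} pending orders totalling ${total.toFixed(2)}.`;
}
```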
Conclusion
Anthropic’s ‘code execution with MCP’ approach directly addresses the inefficiencies of traditional MCP-powered agents. By converting MCP servers into executable APIs and offloading work to a TypeScript runtime, it improves agent efficiency while making the security of the sandboxed code execution environment a first-order concern. In doing so, it transforms MCP from a simple tool list into a robust, executable API surface.