Kong Releases Volcano: A TypeScript, MCP-native SDK for Building Production-Ready AI Agents with LLM Reasoning and Real-World Actions
Understanding the Target Audience
The target audience for the Volcano SDK consists primarily of software developers, AI engineers, and business managers involved in AI and machine learning projects. These individuals typically work in enterprise environments where efficiency, scalability, and integration with existing systems are crucial.
Pain Points
- Difficulty managing complex workflows involving multiple LLM providers.
- High maintenance overhead due to extensive custom code for tool management and error handling.
- Challenges in ensuring security and compliance with OAuth integrations.
- Need for observability and monitoring of AI agent performance.
Goals
- To streamline the development of AI agents with minimal code.
- To enhance productivity by reducing the complexity of integrating various tools and services.
- To achieve reliable and efficient execution of AI workflows.
Interests
- Latest developments in AI and machine learning technologies.
- Best practices for software development and workflow automation.
- Tools that improve collaboration and communication within teams.
Communication Preferences
The audience prefers clear, concise, and technical content that includes code examples and practical applications. They value documentation that is easily accessible and provides comprehensive guides for implementation.
Overview of Volcano SDK
Kong has open-sourced Volcano, a TypeScript SDK designed to compose multi-step agent workflows across multiple LLM providers with native Model Context Protocol (MCP) tool use. This release aligns with the broader MCP capabilities in Kong AI Gateway and Konnect, establishing Volcano as the developer SDK within an MCP-governed control plane.
Why Choose Volcano SDK?
With Volcano SDK, developers can express a working multi-step agent in roughly nine lines of code, compared with the 100+ lines typically needed to hand-roll tool schemas, context management, provider switching, error handling, and HTTP clients.
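A minimal sketch of what such an agent might look like follows. The identifiers agent, llmOpenAI, and mcp, the option shapes, and the URLs are illustrative assumptions rather than confirmed Volcano API; they only mirror the chainable .then(...).run() pattern and MCP auto-discovery described in this post.

```typescript
// Hypothetical sketch of a Volcano-style agent; names and option shapes are assumptions.
import { agent, llmOpenAI, mcp } from "volcano-sdk";

const llm = llmOpenAI({ apiKey: process.env.OPENAI_API_KEY!, model: "gpt-4o-mini" });
const tickets = mcp("https://mcp.example.com/tickets"); // MCP server; tools are auto-discovered

const result = await agent({ llm })
  .then({ prompt: "Summarize the newest open support tickets.", mcps: [tickets] })
  .then({ prompt: "Draft a status update for the on-call channel from that summary." })
  .run();

console.log(result);
```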
Key Features of Volcano SDK
- Chainable API: Build multi-step workflows with a concise .then(...).run() pattern; context flows between steps.
- MCP-native tool use: Pass MCP servers; the SDK auto-discovers and invokes the right tools in each step.
- Multi-provider LLM support: Mix models (e.g., planning with one, execution with another) inside one workflow.
- Streaming of intermediate and final results for responsive agent interactions.
- Retries & timeouts configurable per step for reliability under real-world failures (see the sketch after this list).
- Hooks (before/after step) to customize behavior and instrumentation.
- Typed error handling to surface actionable failures during agent execution.
- Parallel execution, branching, and loops to express complex control flow.
- Observability via OpenTelemetry for tracing and metrics across steps and tool calls.
- OAuth support & connection pooling for secure, efficient access to MCP servers.
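To make the reliability and instrumentation features above concrete, here is a hedged sketch of how per-step retries, timeouts, provider switching, and hooks might be wired together. The option and hook names (retries, timeoutMs, beforeStep, afterStep), the provider constructors, and the model identifiers are assumptions, not confirmed Volcano API.

```typescript
// Illustrative only: option and hook names are assumptions about the SDK surface.
import { agent, llmOpenAI, llmAnthropic, mcp } from "volcano-sdk";

const planner = llmOpenAI({ apiKey: process.env.OPENAI_API_KEY!, model: "gpt-4o" });
const executor = llmAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY!, model: "claude-sonnet" });
const crm = mcp("https://mcp.example.com/crm");

await agent({ llm: planner })
  .then({
    prompt: "Plan the renewal outreach for accounts expiring this month.",
    retries: 2,        // retry transient failures for this step only
    timeoutMs: 30_000, // fail fast instead of hanging on a slow provider
  })
  .then({
    llm: executor, // switch providers mid-workflow: plan with one model, execute with another
    prompt: "Execute the plan using the CRM tools.",
    mcps: [crm],
  })
  .run({
    hooks: {
      beforeStep: (step) => console.log(`starting step: ${step.name ?? "unnamed"}`),
      afterStep: (step, output) => console.log(`finished step, output length ${String(output).length}`),
    },
  });
```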
Integration with Kong’s MCP Architecture
Kong’s Konnect platform extends Volcano SDK with MCP governance and access layers. The AI Gateway adds MCP gateway features such as MCP server autogeneration from Kong-managed APIs, centralized OAuth 2.1 for MCP servers, and observability over tools, workflows, and prompts in Konnect dashboards. This integration provides uniform policy and analytics for MCP workflows.
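As a rough illustration of how an agent could consume a gateway-fronted, OAuth-protected MCP server, the sketch below passes OAuth client-credentials settings when connecting. The mcp() options shown and all URLs are assumptions for illustration, not documented Kong or Volcano configuration.

```typescript
// Assumed shape: pointing the SDK at an MCP endpoint exposed through an MCP gateway,
// authenticating with OAuth client credentials. All names here are illustrative.
import { agent, llmOpenAI, mcp } from "volcano-sdk";

const governedTools = mcp("https://gateway.example.com/mcp/orders", {
  oauth: {
    tokenUrl: "https://idp.example.com/oauth2/token",
    clientId: process.env.MCP_CLIENT_ID!,
    clientSecret: process.env.MCP_CLIENT_SECRET!,
  },
});

await agent({ llm: llmOpenAI({ apiKey: process.env.OPENAI_API_KEY!, model: "gpt-4o-mini" }) })
  .then({ prompt: "Check the status of order 1234 and summarize it.", mcps: [governedTools] })
  .run();
```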
Key Takeaways
- Volcano is an open-source TypeScript SDK for building multi-step AI agents with first-class MCP tool use.
- The SDK includes production features—retries, timeouts, connection pooling, OAuth, and OpenTelemetry tracing/metrics—for MCP workflows.
- Volcano composes planning and execution across multiple LLMs and auto-discovers and invokes MCP servers and tools, minimizing custom glue code.
- Kong’s AI Gateway/Konnect enhance Volcano with MCP server autogeneration, centralized OAuth 2.1, and observability.
Conclusion
Kong’s Volcano SDK is a practical addition to the MCP ecosystem, providing a TypeScript-first agent framework that aligns developer workflows with enterprise controls such as OAuth 2.1 and OpenTelemetry. This design prioritizes protocol-native MCP integration, reducing operational drift and closing auditing gaps as internal agents scale.
Further Resources
Explore the GitHub Repo for technical details, tutorials, and code examples.