- Deploying AI agents with the powerful RunAgent CLI and a simple configuration file.
- Spinning up a configurable REST API and WebSocket server for your agents with one CLI command.
- Language SDKs for most major languages to use the deployed agents.

- Focus on agent development and avoid the repetitive process of implementing REST and streaming APIs to expose your agents.
- Use their developed agent in any development environment (web app, mobile or desktop app, games).
- Build applications with cross-language response streaming, without even thinking about the complex underlying serialization & deserialization logic.
The Problem Every AI Developer Faces
Suppose you've built an incredible AI agent in Python. It uses LangGraph for complex reasoning, leverages powerful tools, and produces amazing results. Your whole team is excited to use it! Then reality hits: the frontend team needs to access it in JavaScript, your mobile app team wants it in Kotlin, your Unity team wants it in good old C#, and your systems team requires it in Rust. The traditional approach? Build separate implementations, REST APIs, WebSocket handlers… Sound exhausting? That's because it is!
What RunAgent Actually Does
RunAgent fundamentally changes how AI agent deployment works. Your Python function signatures automatically become API contracts (REST or streaming) for every supported language. Once the `runagent.config.json` of your project points to your Python functions, they are automatically converted to corresponding API endpoints, and all language SDKs automatically adapt. No API versioning, no breaking changes.
1. Install the RunAgent CLI
This installs the powerful RunAgent CLI, which is used to deploy and manage your agents.
2. Initialize Agent
Let's start with our minimal agent example.
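As an illustration of what such a minimal agent module might look like, here is a sketch of a `main.py` with the `mock_response` function this guide refers to. The function body is an assumption for demonstration; a real agent would call into your framework of choice here.

```python
# main.py -- a minimal, illustrative agent module.
# mock_response stands in for a real agent invocation (e.g. a LangGraph run);
# its body here is a placeholder so the deployment flow can be tested end to end.

def mock_response(message: str) -> str:
    """Return a canned reply for the given user message."""
    return f"Agent received: {message}"
```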
3. Set up the RunAgent config file
Somewhere in your agent codebase (in this case `main.py`), the `mock_response` function is one of the invocation functions for our agent, so we will add it to the `runagent.config.json` file:
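As a rough sketch, the entrypoint section of `runagent.config.json` might look like the following. The exact field names are assumptions for illustration, not a verified schema; consult the RunAgent configuration reference for the authoritative format.

```json
{
  "agent_name": "my-minimal-agent",
  "agent_architecture": {
    "entrypoints": [
      {
        "file": "main.py",
        "module": "mock_response",
        "tag": "minimal"
      }
    ]
  }
}
```

The key idea is that the config points at a file, a function within it, and a tag that client SDKs will later use to select this entrypoint.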
4. Run the Agent (Locally)
5. Use in your application
Using the RunAgent SDKs, you can use your agent in your application with only the agent ID and the entrypoint tag. Your agentic entrypoint (the `mock_response` function) now becomes accessible in every supported language.
Agent Framework Support
RunAgent works with any Python-based AI agent framework:
LangGraph
Deploy your LangGraph agents with built-in state management and workflow execution.
CrewAI
Deploy multi-agent CrewAI systems with coordinated execution and monitoring.
AutoGen
Deploy Microsoft AutoGen multi-agent conversations with step-by-step execution.
AG2
Deploy AG2 conversational agents with fact-checking and multi-turn interactions.
Agno
Deploy Agno AI agents with print response capabilities and streaming support.
Letta
Deploy Letta memory-enabled agents with persistent conversations and tool integration.
Custom Framework
Use any Python-based framework by defining simple entrypoint functions.
Multi-Language SDK Support
RunAgent provides native-like access to your deployed agents across multiple languages:
Python SDK
Python client with streaming capabilities. Access your agents like local functions with full type safety and Python idioms.
JavaScript SDK
Full TypeScript support with streaming and Promise-based APIs. Perfect for modern web applications with async/await patterns.
Rust SDK
High-performance async SDK with futures and streaming support. Zero-cost abstractions for systems programming and performance-critical applications.
Go SDK
Idiomatic Go client with context-aware operations and channel-based streaming. Built for concurrent, scalable applications.
All Language SDKs
We’re actively developing SDKs for additional languages including C#, Java, and PHP. Want to contribute or request a specific language? Join our Discord community.
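To give a feel for the calling pattern in the Python SDK, here is a hedged sketch. The `RunAgentClient` class and its parameters are assumptions based on this guide's description (agent ID plus entrypoint tag), not a verified API; check the SDK reference for the exact signatures.

```python
# Hypothetical usage sketch -- RunAgentClient, its constructor arguments, and
# the run() method are illustrative assumptions, not a confirmed API surface.

def invoke_agent(agent_id: str, message: str) -> str:
    """Call a deployed agent by its ID and entrypoint tag."""
    from runagent import RunAgentClient  # assumed import path

    client = RunAgentClient(
        agent_id=agent_id,
        entrypoint_tag="minimal",  # the tag declared in runagent.config.json
        local=True,                # assumed flag for a locally served agent
    )
    return client.run(message=message)
```

The point of the pattern is that the client code never touches HTTP or WebSocket details; it addresses the agent purely by ID and tag.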
Real-Time Streaming Across Languages
In addition to REST-like responses, you can also stream your agent's response through our SDKs. When your targeted entrypoint streams its response, RunAgent makes it feel native in every language SDK:
1. Set up the RunAgent config file
Somewhere in your agent codebase (in this case `main.py`), the `mock_response_stream` function returns an Iterator. To stream this response, we will add it to the `runagent.config.json` file as another entrypoint. The tag for a streaming entrypoint must end with a `_stream` suffix; that is how RunAgent identifies it for streaming.
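For reference, a streaming entrypoint can be an ordinary Python generator, since a generator is a simple way to return an Iterator. This `mock_response_stream` is an illustrative stand-in for a real streaming agent call:

```python
# main.py -- illustrative streaming entrypoint.
# Each yielded chunk can be streamed to the client SDKs as it is produced.
from typing import Iterator

def mock_response_stream(message: str) -> Iterator[str]:
    """Yield the reply word by word to simulate token streaming."""
    for word in f"Agent received: {message}".split():
        yield word
```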
2. Run the Agent (Locally)
Spin up the agent just like we did before. But now we have an additional streaming entrypoint.
3. Use streaming in your application
Using the RunAgent SDKs, you can use your agent in your application with only the agent ID and the entrypoint tag. Your agentic entrypoint (the `mock_response_stream` function) now becomes accessible in every supported language.
- Head to the quickstart guide to see the full magic
- Browse our framework templates for LangGraph, CrewAI, and more
- Check out core concepts to understand how it works