Key Benefits
- 🚀 Deploy agents in minutes (serverless, sandboxed microVMs)
- 🌐 Access from Python, JS, Go, Rust with native-feeling SDKs
- ⚡ Built-in autoscaling + token streaming for real-time applications
Quick Actions
👉 Deploy your first agent
Get started with a complete tutorial
👉 Use frameworks
Integrate with popular AI frameworks
👉 Explore SDKs
Access agents from any language
The Problem Every AI Developer Faces
Suppose you’ve built an incredible AI agent in Python. It uses LangGraph for complex reasoning, leverages powerful tools, and produces amazing results. Your whole team is excited to use it!
Then reality hits: the frontend team needs to access it from JavaScript, the mobile team wants it in Kotlin, the Unity team wants it in good old C#, and the systems team requires it in Rust.
The traditional approach? Build separate implementations, REST APIs, WebSocket handlers… Sound exhausting? That’s because it is!
What RunAgent Actually Does
RunAgent fundamentally changes how AI agent deployment works. Your Python function signatures automatically become API contracts (REST or streaming) for every supported language. Once the `runagent.config.json` of your project points to your Python functions, they are automatically converted to corresponding API endpoints, and all language SDKs adapt automatically. No API versioning, no breaking changes.
1. Install the RunAgent CLI
This installs the powerful RunAgent CLI, which is used to deploy and manage your agents.
2. Initialize Agent
Let’s start with our minimal Agent example.
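For reference, the minimal agent boils down to a plain Python function in `main.py`. This sketch assumes the function name used throughout this guide; the body is a stand-in for real agent logic:

```python
# main.py -- a minimal, framework-free agent entrypoint.
# Any plain Python callable with serializable inputs/outputs works.

def mock_response(message: str) -> str:
    """Stand-in for a real agent invocation; returns a canned reply."""
    return f"Agent received: {message}"
```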
3. Setting up the RunAgent config file
Somewhere in your agent codebase (in this case `main.py`), the `mock_response` function is one of the invocation functions for our agent, so we add it to the `runagent.config.json` file:
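As a sketch of what that entry can look like (the field names below are assumptions based on this guide; the configuration reference has the exact schema):

```json
{
  "agent_name": "my-agent",
  "framework": "default",
  "agent_architecture": {
    "entrypoints": [
      {
        "file": "main.py",
        "module": "mock_response",
        "tag": "mock_response"
      }
    ]
  }
}
```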
4. Run the Agent (Locally)
5. Use in your application
Using the RunAgent SDKs, you can call your agent from your application using only the agent ID and the entrypoint tag. Your agentic entrypoint (the `mock_response` function) now becomes accessible in every supported language.
Still have a question?
- Join our Discord Community
- Email us: [email protected]
- Follow us on X
- New here? Sign up
Agent Framework Support
RunAgent works with any Python-based AI agent framework:
LangGraph
Deploy your LangGraph agents with built-in state management and workflow execution.
CrewAI
Deploy multi-agent CrewAI systems with coordinated execution and monitoring.
AutoGen
Deploy Microsoft AutoGen multi-agent conversations with step-by-step execution.
AG2
Deploy AG2 conversational agents with fact-checking and multi-turn interactions.
Agno
Deploy Agno AI agents with print response capabilities and streaming support.
Letta
Deploy Letta memory-enabled agents with persistent conversations and tool integration.
Custom Framework
Use any Python-based framework by defining simple entrypoint functions.
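To illustrate, a custom-framework entrypoint is just a function that wraps your own objects. `MyPipeline` here is a made-up stand-in for any home-grown framework:

```python
class MyPipeline:
    """Stand-in for an arbitrary in-house agent framework."""

    def answer(self, prompt: str) -> str:
        return prompt.upper()  # pretend this is real agent logic


_pipeline = MyPipeline()


def custom_entrypoint(prompt: str) -> str:
    # RunAgent only needs a callable with serializable inputs and outputs;
    # how the result is produced is entirely up to your framework.
    return _pipeline.answer(prompt)
```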
Multi-Language SDK Support
RunAgent provides native-like access to your deployed agents across multiple languages:
Python SDK
Python client with streaming capabilities. Access your agents like local functions with full type safety and Python idioms.
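A sketch of what that can look like. The client class name, constructor arguments, and `run` method are assumptions based on this guide, and the agent ID is a placeholder:

```
from runagent import RunAgentClient  # import path assumed

client = RunAgentClient(
    agent_id="your-agent-id",        # placeholder from your deployment
    entrypoint_tag="mock_response",  # tag declared in runagent.config.json
    local=True,                      # talk to a locally served agent
)

result = client.run(message="Hello from Python!")
print(result)
```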
JavaScript SDK
Full TypeScript support with streaming and Promise-based APIs. Perfect for modern web applications with async/await patterns.
Rust SDK
High-performance async SDK with futures and streaming support. Zero-cost abstractions for systems programming and performance-critical applications.
Go SDK
Idiomatic Go client with context-aware operations and channel-based streaming. Built for concurrent, scalable applications.
All Language SDKs
We’re actively developing SDKs for additional languages including C#, Java, and PHP. Want to contribute or request a specific language? Join our Discord community.
Real-Time Streaming Across Languages
In addition to REST-style request/response calls, you can also stream your agent's response through our SDKs. When your targeted entrypoint streams its response, RunAgent makes it feel native in every language SDK:
1. Setting up the RunAgent config file
Somewhere in your agent codebase (in this case `main.py`), the `mock_response_stream` function returns an iterator. To stream this response, we add it to the `runagent.config.json` file as another entrypoint. The tag for a streaming entrypoint must end with the `_stream` suffix; that is how RunAgent identifies it for streaming.
2. Run the Agent (Locally)
Spin up the agent just like we did before. But now we have an additional streaming entrypoint.
3. Use streaming in your application
Using the RunAgent SDKs, you can stream from your agent in your application using only the agent ID and the entrypoint tag. Your streaming entrypoint (the `mock_response_stream` function) now becomes accessible in every supported language.
- Head to the tutorials guide to see the full magic
- Browse our framework guides for LangGraph, CrewAI, and more
- Check out core concepts to understand how it works
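Consuming the stream from Python might look like this sketch (client API assumed, mirroring the non-streaming call; the agent ID is a placeholder):

```
from runagent import RunAgentClient  # import path assumed

client = RunAgentClient(
    agent_id="your-agent-id",                # placeholder from your deployment
    entrypoint_tag="mock_response_stream",   # _stream suffix marks it as streaming
    local=True,
)

# Iterating over the call yields chunks as the agent produces them.
for chunk in client.run(message="Hello, streaming!"):
    print(chunk, end="", flush=True)
```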