The RunAgent Rust SDK provides a high-performance interface for interacting with your deployed agents. It supports both synchronous and asynchronous operations with built-in streaming capabilities.
Installation
Add the SDK to your Cargo.toml:
[dependencies]
runagent = "0.1.3"
tokio = { version = "1.0", features = ["full"] }
serde_json = "1.0"
Quick Start
Basic Usage
use runagent::client::RunAgentClient;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize the client
    let client = RunAgentClient::new(
        "your-agent-id",
        "generic",
        true // local = true
    ).await?;

    // Simple invocation
    let response = client.run(&[
        ("message", json!("What's the capital of France?")),
        ("temperature", json!(0.7))
    ]).await?;

    println!("Response: {}", response);
    Ok(())
}
With Specific Address
use runagent::client::RunAgentClient;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to specific host and port
    let client = RunAgentClient::with_address(
        "your-agent-id",
        "generic",
        true,
        Some("localhost"),
        Some(8450)
    ).await?;

    let response = client.run(&[
        ("query", json!("Hello from Rust SDK"))
    ]).await?;

    println!("{}", serde_json::to_string_pretty(&response)?);
    Ok(())
}
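If you switch between a locally served agent and the hosted API, a small helper can keep that decision in one place. The sketch below uses only the two constructors shown above; the RUNAGENT_LOCAL environment variable and the connect helper are hypothetical names chosen for illustration.

use runagent::client::RunAgentClient;

// Hypothetical helper: pick a local or hosted client based on an env flag.
async fn connect(agent_id: &str, entrypoint: &str) -> Result<RunAgentClient, Box<dyn std::error::Error>> {
    let client = if std::env::var("RUNAGENT_LOCAL").is_ok() {
        // Local deployment served on a known host and port
        RunAgentClient::with_address(agent_id, entrypoint, true, Some("localhost"), Some(8450)).await?
    } else {
        // Hosted deployment, resolved via RUNAGENT_API_KEY / RUNAGENT_BASE_URL
        RunAgentClient::new(agent_id, entrypoint, false).await?
    };
    Ok(client)
}

Call it as let client = connect("your-agent-id", "generic").await?; and reuse the returned client for subsequent requests.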
Authentication
The SDK automatically picks up configuration from environment variables or the local database:
export RUNAGENT_API_KEY="your-api-key"
export RUNAGENT_BASE_URL="https://api.run-agent.ai"
Response Handling
Standard Response
let response = client.run(&[
    ("query", json!("Explain quantum computing")),
    ("max_length", json!(200))
]).await?;

// Response is a serde_json::Value
if let Some(answer) = response.get("answer") {
    println!("Answer: {}", answer);
}
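Because the response is a plain serde_json::Value, you can also deserialize it into your own type once you know the shape your agent returns. This is a sketch: the AgentAnswer struct and its answer field are assumptions to adapt to your agent's output, and it requires serde with the derive feature alongside serde_json.

use serde::Deserialize;

// Hypothetical response shape; adjust the fields to match your agent.
#[derive(Debug, Deserialize)]
struct AgentAnswer {
    answer: String,
}

let parsed: AgentAnswer = serde_json::from_value(response.clone())?;
println!("Answer: {}", parsed.answer);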
Streaming Response
use futures::StreamExt;

let mut stream = client.run_stream(&[
    ("query", json!("Write a story about AI"))
]).await?;

while let Some(chunk_result) = stream.next().await {
    match chunk_result {
        Ok(chunk) => print!("{}", chunk),
        Err(e) => {
            println!("Error: {}", e);
            break;
        }
    }
}
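Note that print! does not flush stdout on its own, so streamed chunks may appear in bursts. A minimal variant of the loop above that flushes after each chunk and propagates stream errors with ? (assuming the same async main context):

use std::io::Write;

while let Some(chunk_result) = stream.next().await {
    let chunk = chunk_result?;      // propagate stream errors to the caller
    print!("{}", chunk);
    std::io::stdout().flush()?;     // make each chunk visible immediately
}
println!();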
Error Handling
The SDK provides comprehensive error handling:
use runagent::types::{RunAgentError, RunAgentResult};
match client.run(&[("query", json!("Hello"))]).await {
    Ok(response) => println!("Success: {}", response),
    Err(RunAgentError::Authentication { message }) => {
        println!("Auth error: {}", message);
    }
    Err(RunAgentError::Connection { message }) => {
        println!("Connection error: {}", message);
    }
    Err(e) => println!("Other error: {}", e),
}
Common Error Types
| Error | Description |
|---|---|
| Authentication | Invalid or missing API key |
| Validation | Invalid input data |
| Connection | Network-related errors |
| Server | Server-side errors |
| Database | Local database errors |
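Connection errors are often transient, so a retry with backoff can make production calls more resilient. The helper below is a sketch, not part of the SDK; it assumes run accepts a slice of (&str, Value) pairs and returns Result<Value, RunAgentError>, as the examples above suggest, and it only retries the Connection variant.

use std::time::Duration;
use runagent::client::RunAgentClient;
use runagent::types::RunAgentError;
use serde_json::Value;

// Hypothetical retry wrapper: retry only connection errors, with exponential backoff.
async fn run_with_retry(
    client: &RunAgentClient,
    input: &[(&str, Value)],
    max_attempts: u32,
) -> Result<Value, RunAgentError> {
    let mut attempt = 0;
    loop {
        match client.run(input).await {
            Ok(response) => return Ok(response),
            Err(RunAgentError::Connection { message }) if attempt + 1 < max_attempts => {
                attempt += 1;
                eprintln!("Connection error (attempt {}): {}", attempt, message);
                tokio::time::sleep(Duration::from_secs(2u64.pow(attempt))).await;
            }
            Err(e) => return Err(e),
        }
    }
}

For example: let response = run_with_retry(&client, &[("query", json!("Hello"))], 3).await?;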
Configuration Options
use runagent::RunAgentConfig;
let config = RunAgentConfig::new()
    .with_api_key("your-api-key")
    .with_base_url("https://api.run-agent.ai")
    .with_logging()
    .build();
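To avoid hardcoding credentials, the key can be read from the same RUNAGENT_API_KEY variable shown in the Authentication section and passed to the builder. A minimal sketch using only the builder methods above:

use runagent::RunAgentConfig;

// Read the API key from the environment instead of hardcoding it.
let api_key = std::env::var("RUNAGENT_API_KEY")
    .expect("RUNAGENT_API_KEY is not set");

let config = RunAgentConfig::new()
    .with_api_key(api_key.as_str())
    .with_base_url("https://api.run-agent.ai")
    .with_logging()
    .build();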
Framework Examples
LangChain Integration
let client = RunAgentClient::new("langchain-agent", "generic", true).await?;
let response = client.run(&[
("messages", json!([
{"role": "user", "content": "What is the weather like?"}
]))
]).await?;
AutoGen Integration
let client = RunAgentClient::new("autogen-agent", "autogen_invoke", true).await?;
let response = client.run(&[
("task", json!("What is AutoGen?"))
]).await?;
CrewAI Integration
let client = RunAgentClient::new("crewai-agent", "research_crew", true).await?;
let response = client.run(&[
("topic", json!("AI Agent Deployment"))
]).await?;
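Because the client is async, independent agents can be called concurrently with tokio::join!. The sketch below reuses the agent IDs and entrypoint tags from the examples above and assumes it runs inside the same async main as the Quick Start.

let autogen = RunAgentClient::new("autogen-agent", "autogen_invoke", true).await?;
let crewai = RunAgentClient::new("crewai-agent", "research_crew", true).await?;

// Issue both requests concurrently and wait for both results.
let (autogen_res, crewai_res) = tokio::join!(
    autogen.run(&[("task", json!("What is AutoGen?"))]),
    crewai.run(&[("topic", json!("AI Agent Deployment"))]),
);

println!("AutoGen: {}", autogen_res?);
println!("CrewAI: {}", crewai_res?);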
Best Practices
- Reuse client instances for multiple requests
- Handle errors appropriately for production use
- Use streaming for long responses
- Configure timeouts based on agent complexity (see the timeout sketch after this list)
- Use structured logging for debugging
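One way to enforce a client-side timeout is to wrap a call in tokio::time::timeout; the SDK call itself is unchanged. This sketch assumes a client built as in the Quick Start, and the 60-second budget is an arbitrary example value.

use std::time::Duration;
use tokio::time::timeout;

// Give the agent a fixed time budget; treat expiry as a distinct failure.
let result = timeout(
    Duration::from_secs(60),
    client.run(&[("query", json!("Summarize this report"))]),
).await;

match result {
    Ok(Ok(response)) => println!("Response: {}", response),
    Ok(Err(e)) => println!("Agent error: {}", e),
    Err(_) => println!("Timed out after 60 seconds"),
}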