r9s SDK: One SDK for Every AI Model
Building AI applications often means wrestling with multiple SDKs, each with its own quirks. OpenAI has one API. Anthropic has another. Google’s is different again. Your code becomes a patchwork of provider-specific implementations.
r9s SDK changes this. One SDK, one API, every major AI provider.
What is r9s SDK?
The r9s SDK is the official Python client for the r9s API gateway. It provides:
- Unified Python SDK — Consistent interface across all providers
- Powerful CLI — Interactive chat, audio, images, and more from your terminal
- Agent System — Versioned, auditable prompts with template variables
- Full API Coverage — Chat, completions, audio, images, embeddings, and beyond
pip install r9s
That’s it. You’re ready to use any AI model through a single interface.
Quick Start
Set Up Your Environment
Create a .env file in your project:
R9S_API_KEY=your-api-key
R9S_BASE_URL=https://api.r9s.ai/v1
R9S_MODEL=gpt-4o-mini
The SDK automatically loads these when you run commands or import the library.
Your First Chat (CLI)
r9s chat
You> What's the capital of France?
The capital of France is Paris.
tokens: 12 in / 8 out
That’s a multi-turn conversation with streaming, history saving, and token counting — all built in.
Your First Chat (Python)
from r9s import R9S
with R9S.from_env() as client:
    response = client.chat.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What's the capital of France?"}]
    )
    print(response.choices[0].message.content)
Switching to Claude? Swap in a Claude model, shown here through the Anthropic-style messages endpoint:
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What's the capital of France?"}]
)
Same SDK. Same patterns. Different providers.
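Because every provider sits behind the same client, comparing models is just a loop. A minimal sketch, assuming the gateway also routes Claude models through the OpenAI-style chat endpoint used above:

# Compare two providers through the same interface
from r9s import R9S

question = [{"role": "user", "content": "What's the capital of France?"}]

with R9S.from_env() as client:
    # Assumption: both models are reachable via the unified chat endpoint
    for model in ["gpt-4o-mini", "claude-sonnet-4-20250514"]:
        response = client.chat.create(model=model, messages=question)
        print(f"{model}: {response.choices[0].message.content}")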
Core Capabilities
Chat & Completions
The bread and butter of LLM interaction. Supports streaming, function calling, and all the parameters you’d expect:
# Streaming chat
for chunk in client.chat.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    stream=True
):
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
CLI equivalent:
r9s chat --model gpt-4o-mini --system "You are a physics professor"
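Function calling is supported as well. The sketch below assumes the gateway accepts OpenAI-style tools definitions and surfaces tool_calls on the returned message; the get_weather tool is purely hypothetical:

# Function calling sketch; tools/tool_calls follow the OpenAI schema (assumption)
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the call arrives as structured arguments
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)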
Audio
Text-to-speech and speech-to-text in one SDK:
# Generate speech
r9s audio speech "Hello, world!" -o hello.mp3 --voice alloy
# Transcribe audio
r9s audio transcribe meeting.mp3
# Translate audio to English
r9s audio translate foreign-speech.mp3
Python:
# Text to speech
audio = client.audio.speech(
    input="Hello, world!",
    model="tts-1",
    voice="alloy"
)

# Speech to text
transcript = client.audio.transcriptions.create(
    file=open("audio.mp3", "rb"),
    model="whisper-1"
)
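What you do with the results depends on the return types. Assuming the speech call returns raw audio bytes and the transcription mirrors the OpenAI object with a text field (neither is confirmed here), handling them looks roughly like this:

from pathlib import Path

# Assumption: `audio` is raw audio bytes rather than a response wrapper
Path("hello.mp3").write_bytes(audio)

# Assumption: OpenAI-style transcription object with a .text attribute
print(transcript.text)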
Image Generation
Create images from text prompts:
r9s image generate "A sunset over mountains, oil painting style" -o sunset.png
response = client.images.generate(
    prompt="A sunset over mountains, oil painting style",
    model="dall-e-3",
    size="1024x1024"
)
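The snippet above leaves the response unused. Assuming the response mirrors the OpenAI images API and returns a hosted URL (not confirmed here), saving the result is one more call:

import urllib.request

# Assumption: OpenAI-style response with response.data[0].url
urllib.request.urlretrieve(response.data[0].url, "sunset.png")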
Embeddings
Vector representations for semantic search, clustering, and RAG:
embeddings = client.embeddings.create(
    input=["Hello world", "Goodbye world"],
    model="text-embedding-3-small"
)
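To see why this matters for semantic search, compare the two vectors with cosine similarity. A sketch assuming the response follows the OpenAI embeddings shape (data[i].embedding):

import math

# Assumption: OpenAI-style response objects, i.e. embeddings.data[i].embedding
a = embeddings.data[0].embedding
b = embeddings.data[1].embedding

dot = sum(x * y for x, y in zip(a, b))
norm_a = math.sqrt(sum(x * x for x in a))
norm_b = math.sqrt(sum(y * y for y in b))
print(f"cosine similarity: {dot / (norm_a * norm_b):.3f}")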
The Agent System
Beyond simple chat, r9s provides a full agent management system. Agents are versioned prompts with variables, audit trails, and approval workflows.
Create an Agent
r9s agent create code-reviewer \
--instructions "You are a code reviewer for {{language}}. Focus on {{focus_areas}}." \
--model gpt-4o
Use the Agent
r9s chat --agent code-reviewer \
--var language=Python \
--var focus_areas="security and performance"
Version Control for Prompts
# Update with a new version
r9s agent update code-reviewer \
--instructions "..." \
--bump minor \
--reason "Added security focus"
# View history
r9s agent history code-reviewer
# Compare versions
r9s agent diff code-reviewer 1.0.0 1.1.0
# Roll back if needed
r9s agent rollback code-reviewer --version 1.0.0
Production Governance
# Approve for production
r9s agent approve code-reviewer --version 1.1.0
# View audit trail
r9s agent audit code-reviewer --last 50
Your prompts deserve the same rigor as your code.
CLI Features at a Glance
| Command | Description |
|---|---|
| r9s chat | Interactive multi-turn chat with history |
| r9s chat --agent <name> | Chat using a saved agent |
| r9s audio speech <text> | Text-to-speech generation |
| r9s audio transcribe <file> | Speech-to-text transcription |
| r9s image generate <prompt> | Image generation |
| r9s models | List available models |
| r9s agent create/update/list | Manage agents |
| r9s command create/list | Save reusable prompt templates |
Pipe-Friendly
The CLI plays well with Unix pipelines:
# Summarize a file
cat README.md | r9s chat -m "Summarize this"
# Process command output
git diff | r9s chat -m "Explain these changes"
# Chain with other tools
r9s chat -m "Generate 5 test cases" | tee tests.txt
Resume Conversations
Chat history is automatically saved. Resume any previous conversation:
r9s chat --resume
# Interactive selection of past conversations
Configuration
Environment Variables
| Variable | Description |
|---|---|
| R9S_API_KEY | Your API key |
| R9S_BASE_URL | API endpoint (default: https://api.r9s.ai/v1) |
| R9S_MODEL | Default model for chat |
| R9S_SYSTEM_PROMPT | Default system prompt |
| R9S_LANG | CLI language (en, zh, etc.) |
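Prefer not to keep a .env file? Export the same variables in your shell before running the CLI:

export R9S_API_KEY=your-api-key
export R9S_BASE_URL=https://api.r9s.ai/v1
export R9S_MODEL=gpt-4o-mini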
Local Storage
The CLI stores data in ~/.r9s/:
~/.r9s/
├── chat/ # Chat history
├── agents/ # Agent definitions and versions
├── commands/ # Saved prompt templates
└── config.toml # Configuration
Why r9s SDK?
One API, Many Providers. Stop rewriting code when you switch models. OpenAI, Anthropic, Google, Qwen, DeepSeek — all through the same interface.
CLI-First Design. Not everything needs a script. Quick experiments, audio transcription, image generation — do it all from your terminal.
Production-Ready Agents. Prompts are code. Version them. Audit them. Approve them for production.
Batteries Included. Chat, audio, images, embeddings, moderation — the full AI toolkit in one package.
Get Started
pip install r9s
Set up your .env:
R9S_API_KEY=your-api-key
R9S_BASE_URL=https://api.r9s.ai/v1
Start chatting:
r9s chat