LLM Providers
Configure which Large Language Model providers your agents use. Moxxy supports built-in providers with first-class integration and any OpenAI-compatible provider via custom registration.
Supported Providers
| Provider | Type | Models | Auth |
|---|---|---|---|
| Anthropic | Built-in (first-class) | Claude Opus, Claude Sonnet, Claude Haiku | API key or OAuth session |
| OpenAI | Built-in | GPT-4.1, o3, o4-mini | API key |
| Ollama | Built-in (local) | Any locally hosted model | None required |
| OpenAI-compatible | Custom registration | xAI/Grok, Google Gemini, DeepSeek, etc. | Base URL + API key |
Anthropic (First-Class)
Anthropic is the primary, first-class provider in Moxxy with full streaming support.
Available Models
| Model | Best For |
|---|---|
| Claude Opus | Complex reasoning, long-form analysis |
| Claude Sonnet | Balanced performance, general use |
| Claude Haiku | Fast responses, lightweight tasks |
Authentication
Anthropic supports two authentication methods:
API Key:
Set the ANTHROPIC_API_KEY environment variable:
```bash
export ANTHROPIC_API_KEY="sk-ant-xxx"
```
OAuth Session:
Moxxy supports Anthropic OAuth session tokens with automatic console token refresh. This is useful for development and testing without a static API key.
OpenAI
Available Models
| Model | Best For |
|---|---|
| GPT-4.1 | General use, complex tasks |
| o3 | Deep reasoning |
| o4-mini | Balanced reasoning, cost-effective |
Authentication
Set the OPENAI_API_KEY environment variable:
```bash
export OPENAI_API_KEY="sk-xxx"
```
Ollama (Local)
Run models locally with no API key required. Moxxy auto-discovers a running Ollama instance on the local machine.
Setup
- Install Ollama from ollama.com
- Pull a model:
```bash
ollama pull llama3
```
- Start Ollama (if not running as a service):
```bash
ollama serve
```
Moxxy will automatically detect and connect to the local Ollama instance. No API key or additional configuration is needed.
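If you want to confirm auto-discovery will succeed, you can probe the endpoint by hand. A minimal sketch, assuming Ollama's documented default port 11434 (`ollama_reachable` is an illustrative helper, not part of Moxxy):

```shell
# Probe whether something is listening on Ollama's default port (11434).
# Uses bash's built-in /dev/tcp, so no extra tools are required.
ollama_reachable() {
  local host="${1:-127.0.0.1}" port="${2:-11434}"
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

if ollama_reachable; then
  echo "Ollama detected on port 11434"
else
  echo "Ollama not reachable; start it with 'ollama serve'"
fi
```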
OpenAI-Compatible Providers
Any provider that exposes an OpenAI-compatible API can be registered as a custom provider, including xAI/Grok, Google Gemini, DeepSeek, and others.
Register a Custom Provider
```bash
curl -X POST http://localhost:3000/v1/providers \
  -H "Authorization: Bearer mox_YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "xai",
    "base_url": "https://api.x.ai/v1",
    "api_key": "xai-xxx",
    "models": ["grok-3"]
  }'
```
Examples
xAI / Grok:
```bash
curl -X POST http://localhost:3000/v1/providers \
  -H "Authorization: Bearer mox_YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "xai",
    "base_url": "https://api.x.ai/v1",
    "api_key": "xai-xxx",
    "models": ["grok-3"]
  }'
```
Google Gemini:
```bash
curl -X POST http://localhost:3000/v1/providers \
  -H "Authorization: Bearer mox_YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "gemini",
    "base_url": "https://generativelanguage.googleapis.com/v1beta/openai",
    "api_key": "xxx",
    "models": ["gemini-2.5-pro", "gemini-2.0-flash"]
  }'
```
DeepSeek:
```bash
curl -X POST http://localhost:3000/v1/providers \
  -H "Authorization: Bearer mox_YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "deepseek",
    "base_url": "https://api.deepseek.com/v1",
    "api_key": "xxx",
    "models": ["deepseek-chat", "deepseek-reasoner"]
  }'
```
Provider Management API
All provider operations are exposed through the REST API on port 3000 and require Bearer token authentication (tokens use the mox_ prefix).
List Installed Providers
```bash
curl http://localhost:3000/v1/providers \
  -H "Authorization: Bearer mox_YOUR_TOKEN"
```
Install / Register a Provider
```bash
curl -X POST http://localhost:3000/v1/providers \
  -H "Authorization: Bearer mox_YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "provider_name",
    "base_url": "https://api.provider.com/v1",
    "api_key": "xxx",
    "models": ["model-a", "model-b"]
  }'
```
List Available Models for a Provider
```bash
curl http://localhost:3000/v1/providers/{id}/models \
  -H "Authorization: Bearer mox_YOUR_TOKEN"
```
Environment Variables
| Variable | Description |
|---|---|
| ANTHROPIC_API_KEY | Anthropic API key |
| OPENAI_API_KEY | OpenAI API key |
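A startup script can fail fast when a required key is missing. A minimal sketch using the variable names from the table above (`require_env` is an illustrative helper, not a Moxxy command):

```shell
# Warn if a required environment variable is unset or empty.
require_env() {
  # ${!1} is bash indirect expansion: the value of the variable named by $1.
  if [ -z "${!1}" ]; then
    echo "missing required environment variable: $1" >&2
    return 1
  fi
}

require_env ANTHROPIC_API_KEY || echo "set ANTHROPIC_API_KEY before starting the gateway" >&2
```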
Security Best Practices
API Key Storage
- Store provider API keys using environment variables or the vault system
- Never hardcode keys in agent personas or configuration files
- Never log or print API keys
Key Rotation
- Generate a new key in the provider dashboard
- Update the environment variable or vault entry
- Restart the gateway
- Revoke the old key
Troubleshooting
Invalid API Key
```
Error: Authentication failed - check your API key
```
- Verify the key is correct and not expired
- Ensure the key has proper permissions
- Check that the correct environment variable is set
Rate Limiting
```
Error: Rate limit exceeded
```
- Wait and retry (Moxxy handles backoff automatically)
- Consider upgrading your provider plan
- Use a lighter model for high-volume tasks
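The automatic backoff mentioned above follows the standard exponential pattern. A hedged sketch of that pattern for your own scripts (the retry count and delays here are illustrative, not Moxxy's actual values):

```shell
# Retry a command up to 5 times, doubling the delay after each failure.
with_backoff() {
  local attempt=1 delay=1
  while [ "$attempt" -le 5 ]; do
    "$@" && return 0    # success: stop retrying
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
  return 1              # all attempts failed
}
```

For example, `with_backoff curl -sf http://localhost:3000/v1/providers -H "Authorization: Bearer mox_YOUR_TOKEN"` rides out a transient failure before giving up.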
Model Not Available
```
Error: Model 'xxx' not found
```
- Check the model name spelling
- Verify model is available for your account and provider
- For OpenAI-compatible providers, ensure the model is listed in the registration
Ollama Connection Failed
- Verify Ollama is running:
```bash
ollama list
```
- Check that Ollama is listening on its default port (11434)
- Ensure the desired model is pulled locally