# LLM Providers
Configure which Large Language Model providers your agents use.
## Supported Providers
| Provider | Models | API Key Source |
|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini, o1, o3-mini | platform.openai.com |
| Google Gemini | Gemini 2.0 Flash, Gemini 2.5 Pro | aistudio.google.com |
| Z.Ai | Grok | x.ai |
## Configuration

### During Setup

Run the onboarding wizard:
```bash
moxxy init
```

You'll be prompted to:
- Select your provider
- Enter your API key
- Choose a default model
### Via Web Dashboard
- Open the web dashboard
- Go to Config → LLM Settings
- Select provider and enter API key
- Click Save
### Via Vault
Store credentials directly in the vault:
```bash
# OpenAI
moxxy run --agent default --prompt "Store 'sk-xxx' in vault as openai_api_key"
# Google
moxxy run --agent default --prompt "Store 'xxx' in vault as google_api_key"
# Z.Ai
moxxy run --agent default --prompt "Store 'xxx' in vault as xai_api_key"
```

### Via API
```bash
curl -X POST http://localhost:17890/api/agents/default/vault \
-H "Content-Type: application/json" \
  -d '{"key": "openai_api_key", "value": "sk-xxx"}'
```

## OpenAI
### Available Models
| Model | Best For | Context | Speed |
|---|---|---|---|
| gpt-4o | General use, complex tasks | 128K | Fast |
| gpt-4o-mini | Quick tasks, cost-effective | 128K | Very Fast |
| o1 | Deep reasoning | 200K | Slow |
| o3-mini | Balanced reasoning | 200K | Medium |
### Configuration

```bash
# Set API key
moxxy run --agent default --prompt "Store 'sk-xxx' in vault as openai_api_key"
# Set model
moxxy run --agent default --prompt "Store 'gpt-4o' in vault as llm_model"
# Optional: Set organization
moxxy run --agent default --prompt "Store 'org-xxx' in vault as openai_org_id"
```

### Environment Variables
```bash
export OPENAI_API_KEY="sk-xxx"
export OPENAI_ORG_ID="org-xxx" # optional
```

## Google Gemini
### Available Models
| Model | Best For | Context |
|---|---|---|
| gemini-2.0-flash | Fast responses | 1M |
| gemini-2.5-pro | Complex reasoning | 2M |
### Configuration

```bash
# Set API key
moxxy run --agent default --prompt "Store 'xxx' in vault as google_api_key"
# Set model
moxxy run --agent default --prompt "Store 'gemini-2.0-flash' in vault as llm_model"
```

### Environment Variables
```bash
export GOOGLE_API_KEY="xxx"
```

## Z.Ai (Grok)
### Available Models
| Model | Best For |
|---|---|
| grok-beta | General use |
### Configuration

```bash
# Set API key
moxxy run --agent default --prompt "Store 'xxx' in vault as xai_api_key"
# Set model
moxxy run --agent default --prompt "Store 'grok-beta' in vault as llm_model"
```

### Environment Variables
```bash
export XAI_API_KEY="xxx"
```

## Model Selection
### Default Model
Set the default model for an agent:
```bash
moxxy run --agent default --prompt "Store 'gpt-4o' in vault as llm_model"
```

### Per-Task Model
For specific tasks, you can suggest:
```
Use a faster model to quickly summarize this text
```

The agent may switch to a lighter model for the task.
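The routing idea behind per-task switching can be sketched as a simple heuristic. The keyword list, model names, and function below are illustrative assumptions, not part of moxxy's actual implementation:

```python
# Sketch: route a prompt to a lighter or heavier model.
# The hint keywords and model names are assumptions for this example.
LIGHT_MODEL = "gpt-4o-mini"
HEAVY_MODEL = "gpt-4o"

LIGHT_HINTS = ("summarize", "quickly", "short", "simple")

def pick_model(prompt: str) -> str:
    """Return a lighter model when the prompt hints at a quick task."""
    text = prompt.lower()
    if any(hint in text for hint in LIGHT_HINTS):
        return LIGHT_MODEL
    return HEAVY_MODEL
```

In practice the agent makes this call itself; the sketch just shows why phrasing like "quickly summarize" can nudge it toward a cheaper model.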
## LLM Parameters

### Temperature
Controls response creativity (0.0 - 2.0):
```bash
# More deterministic
moxxy run --agent default --prompt "Store '0.3' in vault as llm_temperature"
# More creative
moxxy run --agent default --prompt "Store '0.9' in vault as llm_temperature"
```

### Max Tokens
Limit response length:
```bash
moxxy run --agent default --prompt "Store '4096' in vault as llm_max_tokens"
```

### Top P
Nucleus sampling parameter:
```bash
moxxy run --agent default --prompt "Store '0.9' in vault as llm_top_p"
```

## Multiple Providers
### Different Providers per Agent
Each agent can use a different provider:
```bash
# Agent 1 uses OpenAI
moxxy run --agent assistant --prompt "Store 'openai' in vault as llm_provider"
moxxy run --agent assistant --prompt "Store 'sk-xxx' in vault as openai_api_key"
# Agent 2 uses Google
moxxy run --agent researcher --prompt "Store 'google' in vault as llm_provider"
moxxy run --agent researcher --prompt "Store 'xxx' in vault as google_api_key"
```

### Fallback Providers
Configure fallback for reliability:
```bash
moxxy run --agent default --prompt "Store 'google' in vault as llm_fallback_provider"
```

## Cost Management
### Track Usage
Monitor API usage in each provider's dashboard (OpenAI: platform.openai.com; Google: aistudio.google.com; Z.Ai: x.ai).
### Cost-Saving Tips
- Use gpt-4o-mini for simple tasks
- Set max_tokens to avoid long responses
- Use caching for repeated queries
- Monitor usage regularly
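The caching tip above can be sketched as a simple in-memory memo keyed by model and prompt. This is a generic pattern, not a description of moxxy's internals; `call_llm` is a placeholder for whatever function actually performs the API request:

```python
from functools import lru_cache

def call_llm(model: str, prompt: str) -> str:
    """Placeholder for the real (billable) API call."""
    return f"response from {model} for: {prompt}"

# Memoize on (model, prompt) so identical repeated queries are free.
# Note: only safe for deterministic settings (e.g. temperature 0).
@lru_cache(maxsize=1024)
def cached_call(model: str, prompt: str) -> str:
    return call_llm(model, prompt)
```

A real deployment would usually add an expiry time and persist the cache across restarts, but the cost-saving principle is the same: never pay twice for the exact same query.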
### Budget Alerts
Set up alerts in provider dashboards to avoid surprise bills.
## Troubleshooting

### Invalid API Key
```
Error: Authentication failed - check your API key
```

Solution:
- Verify the key is correct
- Check if the key has expired
- Ensure the key has proper permissions
### Rate Limiting
```
Error: Rate limit exceeded
```

Solution:
- Wait and retry
- Upgrade your plan
- Implement exponential backoff
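Exponential backoff can be sketched as a small retry wrapper: wait 1s, 2s, 4s, ... between attempts, with jitter so concurrent clients don't retry in lockstep. `RateLimitError` and the wrapped call are placeholders for whatever your client library raises and does:

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for the client library's rate-limit exception."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponentially growing, jittered delays."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # Delay doubles each attempt; jitter avoids thundering herds.
            time.sleep(base_delay * (2 ** attempt + random.random()))
```

If the provider returns a `Retry-After` header, honoring it directly is usually better than a computed delay.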
### Model Not Available
```
Error: Model 'xxx' not found
```

Solution:
- Check model name spelling
- Verify model is available for your account
- Try a different model
### Timeout
```
Error: Request timeout
```

Solution:
- Check network connection
- Increase timeout setting
- Try a faster model
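Where a client exposes no timeout setting of its own, a generic client-side deadline can be approximated with a worker thread. This is a sketch of the pattern, not a moxxy feature; the wrapped call is a placeholder:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as DeadlineExceeded

def call_with_deadline(fn, timeout_s):
    """Run a blocking call, but stop waiting after `timeout_s` seconds.

    Note: the worker thread is not killed on timeout; this only keeps
    the caller from blocking indefinitely on a slow request.
    """
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(fn).result(timeout=timeout_s)
```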
## Security Best Practices

### API Key Storage
- ✅ Store in vault (encrypted)
- ✅ Use environment variables
- ❌ Don't hardcode in persona
- ❌ Don't log or print
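The "don't hardcode" rule in practice: read the key from the environment at startup and fail fast if it's missing. A minimal sketch, using the variable name from the Environment Variables sections above:

```python
import os

def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read the key from the environment; never embed it in source."""
    key = os.environ.get(var)
    if not key:
        # Failing fast beats sending unauthenticated requests later.
        raise RuntimeError(f"{var} is not set; store it in the vault or export it")
    return key
```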
### Key Rotation
Regularly rotate API keys:
- Generate new key in provider dashboard
- Update vault:

  ```bash
  moxxy run --agent default --prompt "Update openai_api_key to 'new-key'"
  ```

- Restart gateway:

  ```bash
  moxxy gateway restart
  ```

- Revoke old key
### Access Control
Limit API key permissions:
- Only grant needed scopes
- Set usage limits
- Use organization accounts for teams