Model and Provider Configuration
Set up AI providers and models to match your needs - from free local models to premium APIs
Quick provider setup
SystemSculpt (easiest)
Have a license? You're done!
All models included, no API keys needed
Your own API key (2 minutes)
1. Settings → Add Provider
2. Select type (OpenAI, Anthropic, etc.)
3. Paste API key → Test → Save
4. Models appear in model selector
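Under the hood, most of the providers below speak the same OpenAI-compatible chat API, which is why adding one is just an endpoint plus a key. A minimal sketch of the request that gets sent (the endpoint, model name, and key here are placeholders, not SystemSculpt's actual internals):

```python
import json
import urllib.request

def build_chat_request(endpoint: str, api_key: str, model: str, prompt: str):
    """Build (but don't send) an OpenAI-compatible chat completion request."""
    url = endpoint.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # most providers use Bearer auth
            "Content-Type": "application/json",
        },
    )

# Placeholder key and model name for illustration only.
req = build_chat_request("https://api.openai.com/v1", "sk-placeholder", "some-model", "Hello")
```

Because the request shape is shared, swapping providers mostly means swapping the endpoint and key.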
Provider comparison
| Provider | Best for | Setup time | Cost |
| --- | --- | --- | --- |
| SystemSculpt | Everything, no hassle | 0 seconds | License |
| OpenAI | GPT-4, general use | 2 minutes | Pay per use |
| Anthropic | Claude, long context | 2 minutes | Pay per use |
| Ollama | Privacy, free, offline | 5 minutes | Free |
| OpenRouter | Access to many models | 2 minutes | Pay per use |
Setting up providers
SystemSculpt API
Already have a license?
- Works automatically
- No configuration needed
- All models available
- Automatic updates
Benefits:
- No rate limits
- Built-in failover
- New models added automatically
- Single billing
OpenAI
Get started:
1. Get API key: platform.openai.com
2. Add Provider → OpenAI
3. Paste key (starts with sk-)
4. Models: Various GPT models available
When to use:
- You have existing OpenAI credits
- Need specific GPT-4 variant
- Want direct billing control
Anthropic (Claude)
Get started:
1. Get API key: console.anthropic.com
2. Add Provider → Anthropic
3. Paste key (starts with sk-ant-)
4. Models: Various Claude models available
When to use:
- Need 200K context window
- Prefer Claude's style
- Research/analysis tasks
Ollama (Local AI)
Get started:
1. Install: ollama.ai → Download
2. Terminal: ollama pull [model-name]
3. Add Provider → Ollama
4. Endpoint: http://localhost:11434/v1
5. API Key: leave blank or "dummy"
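Before adding the provider, you can confirm the endpoint from step 4 is actually reachable. A small sketch (the /models path is part of the OpenAI-compatible API that Ollama exposes):

```python
import urllib.request
import urllib.error

def ollama_is_up(endpoint: str = "http://localhost:11434/v1", timeout: float = 2.0) -> bool:
    """Return True if an OpenAI-compatible server answers at the endpoint."""
    try:
        # Ollama serves a model list at <endpoint>/models; any 200 means it's up.
        with urllib.request.urlopen(endpoint.rstrip("/") + "/models", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, check that Ollama is running and that nothing else occupies port 11434.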
Popular models:
| Model type | Size | Good for |
| --- | --- | --- |
| Small models | 2-3GB | Quick tasks, general chat |
| Medium models | 4-7GB | Coding, complex tasks |
| Large models | 13GB+ | Advanced tasks, best quality |
When to use:
- Complete privacy needed
- No internet connection
- Unlimited free usage
- Experimenting
OpenRouter
Get started:
1. Get API key: openrouter.ai
2. Add Provider → OpenRouter
3. Paste key → Save
4. Many models available!
Benefits:
- One API, many providers
- Pay only for what you use
- Compare models easily
- Automatic routing
Advanced configuration
Multiple providers
Why use multiple?
- Redundancy (failover)
- Cost optimization
- Model variety
- Task specialization
Example setup:
1. SystemSculpt - Primary (included models)
2. Ollama - Privacy-sensitive tasks
3. OpenAI - Specific GPT-4 needs
Custom endpoints
For corporate/special setups:
Add Provider → Custom
- Name: "Company AI"
- Endpoint: https://ai.company.com/v1
- API Key: [your-key]
- Headers: {"X-Auth": "token"}
Use cases:
- Corporate proxies
- Self-hosted models
- Modified endpoints
- Special authentication
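The custom-provider fields above map directly onto the outgoing HTTP request. A sketch using the hypothetical values from the example (the name, endpoint, key, and X-Auth header are all illustrative):

```python
import json
import urllib.request

# Hypothetical values copied from the Custom provider example above.
provider = {
    "name": "Company AI",
    "endpoint": "https://ai.company.com/v1",
    "api_key": "your-key",
    "headers": {"X-Auth": "token"},
}

def build_request(provider: dict, payload: dict) -> urllib.request.Request:
    """Merge the provider's extra headers into a standard chat request."""
    headers = {
        "Authorization": f"Bearer {provider['api_key']}",
        "Content-Type": "application/json",
        **provider.get("headers", {}),  # custom headers win on conflict
    }
    return urllib.request.Request(
        provider["endpoint"].rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
    )

req = build_request(provider, {"model": "company-model", "messages": []})
```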
Provider management
Organize providers:
Name clearly:
- "Personal OpenAI"
- "Work Anthropic"
- "Local Ollama"
- "Research Claude"
Enable/disable:
- Toggle providers on/off
- Disabled = hidden from model list
- Keeps configuration saved
Model selection
Default model
Set for new chats:
Chat Settings → Change Default Presets
→ Select model → Save
Choosing defaults:
- Daily use: Smaller, faster models
- Complex tasks: Reserve larger models for these
- Cost-conscious: Set a cheaper default
Favorite models
Star your top models:
Model selector → ⭐ next to model
→ Always appears at top
Good favorites:
- Your default workhorse
- Best model per provider
- Specialized models
- Quick local option
Model comparison
| Task | Recommended model | Why |
| --- | --- | --- |
| Quick questions | Fast models | Quick responses, cost-effective |
| Coding | Balanced or advanced models | Better code understanding |
| Research | Advanced models with large context | Handle extensive documents |
| Creative | Advanced models | Higher quality output |
| Private | Local models | Complete privacy |
Cost optimization
Estimate usage
Rough costs (per 1,000 words) vary by provider and change often:
- Fast models: Lowest cost; check provider pricing
- Balanced models: Moderate cost; check provider pricing
- Advanced models: Highest cost; check provider pricing
- Local models: Free
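Once you have a provider's current per-token rate, the arithmetic is simple: words → approximate tokens → cost. A sketch (the tokens-per-word ratio is a rule of thumb for English text, and the rate in the example is a placeholder, not a real price):

```python
TOKENS_PER_WORD = 1.3  # rough rule of thumb; varies by model and language

def estimate_cost(words: int, usd_per_million_tokens: float) -> float:
    """Approximate cost in USD for a given word count at a given rate."""
    tokens = words * TOKENS_PER_WORD
    return tokens / 1_000_000 * usd_per_million_tokens

# 1,000 words at a hypothetical $0.50 per 1M tokens:
print(round(estimate_cost(1000, 0.50), 6))  # → 0.00065
```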
Save money
Strategies:
- Use smaller models for simple tasks
- Set cheaper default model
- Use Ollama for experiments
- Batch similar questions
- Clear old conversations
Monitor usage
Check costs:
- OpenAI: platform.openai.com/usage
- Anthropic: console.anthropic.com
- OpenRouter: openrouter.ai/usage
- SystemSculpt: Included in license
Troubleshooting
Connection issues
"Connection failed":
- Check API key is correct
- Verify endpoint URL
- Test network connection
- Check provider status page
"Invalid API key":
- Remove extra spaces
- Check key hasn't expired
- Verify correct provider selected
- Try regenerating key
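The "extra spaces" and "correct provider" checks can be partially automated: strip pasted whitespace, then match the documented prefix (sk- for OpenAI, sk-ant- for Anthropic). Note that every sk-ant- key also starts with sk-, so a prefix check alone can't catch an Anthropic key pasted into an OpenAI provider:

```python
# Documented public key prefixes for the two providers covered above.
KEY_PREFIXES = {"openai": "sk-", "anthropic": "sk-ant-"}

def looks_like_key_for(provider: str, raw_key: str) -> bool:
    """Cheap sanity check before saving a key: strip pasted whitespace,
    then match the provider's documented key prefix."""
    key = raw_key.strip()
    prefix = KEY_PREFIXES.get(provider.lower())
    return prefix is not None and key.startswith(prefix)
```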
Model issues
"Model not found":
- Refresh model list
- Check provider is enabled
- Verify API key permissions
- Some models are region-locked
"Rate limited":
- Slow down requests
- Check provider limits
- Upgrade API tier
- Use different provider
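"Slow down requests" usually means retrying with an increasing delay instead of hammering the API. A generic backoff sketch (not SystemSculpt's internal retry logic):

```python
import random
import time

def with_backoff(call, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry a callable on failure, doubling the wait each time."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the original error
            # Exponential backoff with a little jitter to spread retries out.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```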
Performance
Slow responses:
- Try different provider
- Use smaller model
- Check network speed
- Consider local models
Timeouts:
- Increase timeout in settings
- Use faster models
- Check provider status
- Try again later
Best practices
Setup
✅ DO:
- Test each provider after adding
- Name providers descriptively
- Keep backup provider ready
- Document which key is which
❌ DON'T:
- Share API keys in chat
- Use same key everywhere
- Ignore rate limits
- Skip connection tests
Security
Protect API keys:
- Rotate keys regularly
- Use environment variables for shared vaults
- Monitor usage for anomalies
- Revoke unused keys
Ollama safety:
- Only run trusted models
- Check model sources
- Monitor resource usage
- Update regularly
Organization
Provider naming:
Good:
- "Personal GPT-4"
- "Work Claude"
- "Local Mistral"
Bad:
- "Provider 1"
- "Test"
- "New"
Quick reference
Provider URLs
- OpenAI: platform.openai.com
- Anthropic: console.anthropic.com
- OpenRouter: openrouter.ai
- Ollama: ollama.ai
- SystemSculpt: systemsculpt.com
Endpoint reference
```yaml
OpenAI: https://api.openai.com/v1
Anthropic: https://api.anthropic.com/v1
OpenRouter: https://openrouter.ai/api/v1
Ollama: http://localhost:11434/v1
```
Model reference
Fast models:
- Smaller, efficient models
- Quick response times
- Lower cost per token
Balanced models:
- Mid-tier capabilities
- Good performance-to-cost ratio
- Suitable for most tasks
Advanced models:
- Largest, most capable models
- Best performance
- Higher cost per token
🚀 Ready? Add your first provider and start chatting!