
Model and Provider Configuration


Set up AI providers and models to match your needs - from free local models to premium APIs

Quick provider setup

SystemSculpt (easiest)

Have a license? You're done!
All models included, no API keys needed

Your own API key (2 minutes)

1. Settings → Add Provider
2. Select type (OpenAI, Anthropic, etc.)
3. Paste API key → Test → Save
4. Models appear in model selector

Provider comparison

Provider     | Best for               | Setup time | Cost
SystemSculpt | Everything, no hassle  | 0 seconds  | License
OpenAI       | GPT-4, general use     | 2 minutes  | Pay per use
Anthropic    | Claude, long context   | 2 minutes  | Pay per use
Ollama       | Privacy, free, offline | 5 minutes  | Free
OpenRouter   | Access to many models  | 2 minutes  | Pay per use

Setting up providers

SystemSculpt API

Already have a license?

  • Works automatically
  • No configuration needed
  • All models available
  • Automatic updates

Benefits:

  • No rate limits
  • Built-in failover
  • New models added automatically
  • Single billing

OpenAI

Get started:

1. Get API key: platform.openai.com
2. Add Provider → OpenAI
3. Paste key (starts with sk-)
4. Models: Various GPT models available
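
If the in-app connection test fails, it can help to verify the key outside Obsidian. Below is a minimal sketch in Python (standard library only), assuming the key is exported as the OPENAI_API_KEY environment variable; a successful response simply lists the models the key can access:

```python
import json
import os
import urllib.request

# Assumes the key is exported as OPENAI_API_KEY.
key = os.environ["OPENAI_API_KEY"]

req = urllib.request.Request(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {key}"},
)
with urllib.request.urlopen(req) as resp:
    models = json.load(resp)["data"]

# A 200 response with a model list means the key and endpoint are working.
print(f"{len(models)} models available, e.g. {models[0]['id']}")
```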

When to use:

  • You have existing OpenAI credits
  • Need specific GPT-4 variant
  • Want direct billing control

Anthropic (Claude)

Get started:

1. Get API key: console.anthropic.com
2. Add Provider → Anthropic  
3. Paste key (starts with sk-ant-)
4. Models: Various Claude models available
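
Anthropic authenticates with an x-api-key header rather than a Bearer token, which is a common reason a key pasted under the wrong provider type gets rejected. Below is a minimal sketch, assuming the key is exported as ANTHROPIC_API_KEY and that your key can call the models-list endpoint:

```python
import json
import os
import urllib.request

# Anthropic expects the key in x-api-key (not Authorization: Bearer)
# plus an anthropic-version header.
req = urllib.request.Request(
    "https://api.anthropic.com/v1/models",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
    },
)
with urllib.request.urlopen(req) as resp:
    models = json.load(resp)["data"]

print([m["id"] for m in models])
```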

When to use:

  • Need 200K context window
  • Prefer Claude's style
  • Research/analysis tasks

Ollama (Local AI)

Get started:

1. Install: ollama.ai → Download
2. Terminal: ollama pull [model-name]
3. Add Provider → Ollama
4. Endpoint: http://localhost:11434/v1
5. API Key: leave blank or "dummy"
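
Because Ollama exposes an OpenAI-compatible API under /v1, you can confirm the endpoint above is reachable and see which models are pulled before adding the provider. Below is a minimal sketch, assuming Ollama is running locally on its default port:

```python
import json
import urllib.request

# Ollama's OpenAI-compatible endpoint; no real API key is required locally.
url = "http://localhost:11434/v1/models"

with urllib.request.urlopen(url) as resp:
    models = json.load(resp)["data"]

if models:
    print("Pulled models:", [m["id"] for m in models])
else:
    print("Ollama is running, but no models are pulled yet (run `ollama pull ...`).")
```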

Popular models:

Model type    | Size   | Good for
Small models  | 2-3 GB | Quick tasks, general chat
Medium models | 4-7 GB | Coding, complex tasks
Large models  | 13 GB+ | Advanced tasks, best quality

When to use:

  • Complete privacy needed
  • No internet connection
  • Unlimited free usage
  • Experimenting

OpenRouter

Get started:

1. Get API key: openrouter.ai
2. Add Provider → OpenRouter
3. Paste key → Save
4. Many models available!
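
OpenRouter is OpenAI-compatible as well, so the same style of check works; filtering its large catalog is often useful. Below is a minimal sketch, assuming the key is exported as OPENROUTER_API_KEY:

```python
import json
import os
import urllib.request

req = urllib.request.Request(
    "https://openrouter.ai/api/v1/models",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
)
with urllib.request.urlopen(req) as resp:
    models = [m["id"] for m in json.load(resp)["data"]]

# Example: show only Anthropic-hosted models from the catalog.
print([m for m in models if m.startswith("anthropic/")])
print(f"{len(models)} models total")
```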

Benefits:

  • One API, many providers
  • Pay only for what you use
  • Compare models easily
  • Automatic routing

Advanced configuration

Multiple providers

Why use multiple?

  • Redundancy (failover)
  • Cost optimization
  • Model variety
  • Task specialization

Example setup:

1. SystemSculpt - Primary (included models)
2. Ollama - Privacy-sensitive tasks
3. OpenAI - Specific GPT-4 needs

Custom endpoints

For corporate/special setups:

Add Provider → Custom
- Name: "Company AI"
- Endpoint: https://ai.company.com/v1
- API Key: [your-key]
- Headers: {"X-Auth": "token"}
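
To confirm a gateway like this works before wiring it into the plugin, you can send one request by hand. Below is a minimal sketch, assuming the endpoint speaks the OpenAI-compatible chat/completions format (which most corporate proxies and self-hosted gateways expose); the endpoint, key, header, and model name are placeholders from the example above:

```python
import json
import urllib.request

# Placeholder values from the example above; substitute your real endpoint,
# key, and any extra headers your gateway requires.
ENDPOINT = "https://ai.company.com/v1"
API_KEY = "your-key"
EXTRA_HEADERS = {"X-Auth": "token"}

payload = {
    "model": "your-model-name",  # whatever the gateway exposes
    "messages": [{"role": "user", "content": "Reply with OK if you can hear me."}],
}
req = urllib.request.Request(
    f"{ENDPOINT}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
        **EXTRA_HEADERS,
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```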

Use cases:

  • Corporate proxies
  • Self-hosted models
  • Modified endpoints
  • Special authentication

Provider management

Organize providers:

Name clearly:
- "Personal OpenAI"
- "Work Anthropic"
- "Local Ollama"
- "Research Claude"

Enable/disable:

  • Toggle providers on/off
  • Disabled = hidden from model list
  • Keeps configuration saved

Model selection

Default model

Set for new chats:

Chat Settings → Change Default Presets
→ Select model → Save

Choosing defaults:

  • Daily use: Smaller, faster models
  • Complex tasks: Reserve larger models for these
  • Cost-conscious: Set a cheaper model as the default

Favorite models

Star your top models:

Model selector → ⭐ next to model
→ Always appears at top

Good favorites:

  • Your default workhorse
  • Best model per provider
  • Specialized models
  • Quick local option

Model comparison

Task            | Recommended model                  | Why
Quick questions | Fast models                        | Quick responses, cost-effective
Coding          | Balanced or advanced models        | Better code understanding
Research        | Advanced models with large context | Handle extensive documents
Creative        | Advanced models                    | Higher quality output
Private         | Local models                       | Complete privacy

Cost optimization

Estimate usage

Rough costs (per 1,000 words):

  • Fast models: Cheapest per token
  • Balanced models: Moderate cost
  • Advanced models: Most expensive per token
  • Local models (Ollama): Free

Exact rates change frequently, so check each provider's pricing page.
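
For a rough feel of what a model tier will cost, a back-of-the-envelope calculation is usually enough. Below is a minimal sketch, assuming roughly 1.3 tokens per English word (a common rule of thumb) and placeholder per-million-token prices; substitute the real rates from your provider's pricing page:

```python
# Back-of-the-envelope cost estimate. The 1.3 tokens-per-word ratio is a rough
# rule of thumb for English text, and the prices below are placeholders:
# substitute the per-million-token rates from your provider's pricing page.
TOKENS_PER_WORD = 1.3
PRICE_PER_MTOK = {          # USD per 1M tokens (placeholder numbers)
    "fast": 0.50,
    "balanced": 3.00,
    "advanced": 15.00,
    "local": 0.00,
}

def estimate_cost(words: int, tier: str) -> float:
    """Rough cost in USD for `words` worth of text at a given tier."""
    tokens = words * TOKENS_PER_WORD
    return tokens / 1_000_000 * PRICE_PER_MTOK[tier]

for tier in PRICE_PER_MTOK:
    print(f"{tier:>8}: ${estimate_cost(1000, tier):.4f} per 1,000 words")
```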

Save money

Strategies:

  1. Use smaller models for simple tasks
  2. Set cheaper default model
  3. Use Ollama for experiments
  4. Batch similar questions
  5. Clear old conversations

Monitor usage

Check costs:

  • OpenAI: platform.openai.com/usage
  • Anthropic: console.anthropic.com
  • OpenRouter: openrouter.ai/usage
  • SystemSculpt: Included in license

Troubleshooting

Connection issues

"Connection failed":

  1. Check API key is correct
  2. Verify endpoint URL
  3. Test network connection
  4. Check provider status page

"Invalid API key":

  • Remove extra spaces
  • Check key hasn't expired
  • Verify correct provider selected
  • Try regenerating key

Model issues

"Model not found":

  • Refresh model list
  • Check provider is enabled
  • Verify API key permissions
  • Some models are region-locked

"Rate limited":

  • Slow down requests
  • Check provider limits
  • Upgrade API tier
  • Use different provider

Performance

Slow responses:

  • Try different provider
  • Use smaller model
  • Check network speed
  • Consider local models

Timeouts:

  • Increase timeout in settings
  • Use faster models
  • Check provider status
  • Try again later

Best practices

Setup

DO:

  • Test each provider after adding
  • Name providers descriptively
  • Keep backup provider ready
  • Document which key is which

DON'T:

  • Share API keys in chat
  • Use same key everywhere
  • Ignore rate limits
  • Skip connection tests

Security

Protect API keys:

  • Rotate keys regularly
  • Use environment variables for shared vaults
  • Monitor usage for anomalies
  • Revoke unused keys

Ollama safety:

  • Only run trusted models
  • Check model sources
  • Monitor resource usage
  • Update regularly

Organization

Provider naming:

Good:
- "Personal GPT-4"
- "Work Claude"
- "Local Mistral"

Bad:
- "Provider 1"
- "Test"
- "New"

Quick reference

Provider URLs

  • OpenAI: platform.openai.com
  • Anthropic: console.anthropic.com
  • OpenRouter: openrouter.ai
  • Ollama: ollama.ai
  • SystemSculpt: systemsculpt.com

Endpoint reference

```yaml
OpenAI: https://api.openai.com/v1
Anthropic: https://api.anthropic.com/v1
OpenRouter: https://openrouter.ai/api/v1
Ollama: http://localhost:11434/v1
```

Model reference

Fast models:

  • Smaller, efficient models
  • Quick response times
  • Lower cost per token

Balanced models:

  • Mid-tier capabilities
  • Good performance-to-cost ratio
  • Suitable for most tasks

Advanced models:

  • Largest, most capable models
  • Best performance
  • Higher cost per token

🚀 Ready? Add your first provider and start chatting!