Brain Module - AI Models and Selection
The Brain Module supports various AI models and provides flexible options for model selection and management.
Supported Model Types
- OpenAI models
- Groq models
- OpenRouter models
- Local models through Ollama or LM Studio
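For orientation, a model entry might be represented roughly as follows. This is a minimal sketch; the type and field names are illustrative assumptions, not the plugin's actual API:

```typescript
// Hypothetical shape for a model entry; names are illustrative only.
type ModelProvider = "openai" | "groq" | "openrouter" | "local";

interface AIModel {
  id: string;               // e.g. "gpt-4o" or "llama3:8b"
  name: string;             // display name shown in the Model Selection Modal
  provider: ModelProvider;  // which of the supported backends serves this model
  maxOutputTokens?: number; // per-model output cap, if the provider reports one
}
```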
Model Selection Modal
The Model Selection Modal offers a quick and efficient way to switch between different AI models.
How to Use
- Open the modal using the assigned hotkey (recommended: CMD+M)
- Use the search bar to filter models in real-time
- Navigate through models using keyboard arrows or Tab/Shift+Tab
- Select a model by clicking, pressing Enter, or using keyboard shortcuts
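In Obsidian, hotkeys bind to commands, so the modal is presumably exposed through a command along these lines. The sketch below shows how such a command could be registered; the command id, class name, and modal body are hypothetical, though `addCommand` and `Modal` are standard Obsidian APIs:

```typescript
import { App, Modal, Plugin } from "obsidian";

// Hypothetical stand-in for the plugin's Model Selection Modal.
class ModelSelectionModal extends Modal {
  constructor(app: App) {
    super(app);
  }
  onOpen() {
    this.contentEl.setText("Model list goes here");
  }
}

export default class BrainPlugin extends Plugin {
  async onload() {
    // Register a command; the user can then bind it to CMD+M
    // under Settings → Hotkeys.
    this.addCommand({
      id: "open-model-selection-modal",
      name: "Open Model Selection Modal",
      callback: () => new ModelSelectionModal(this.app).open(),
    });
  }
}
```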
Features
- Fuzzy multi-term search (see the sketch after this list)
- Real-time filtering
- Keyboard navigation
- Visual highlighting of search terms
- Grouped display by provider
- Automatic selection of the first model after search
- Keyboard shortcuts for quick selection
- Refresh functionality with visual feedback
- Error handling with clear user messages
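As a rough illustration of the multi-term matching, the sketch below filters models by requiring every whitespace-separated search term to appear in the display name. The plugin's actual search may use fuzzy scoring and term highlighting on top of this; the function and field names are illustrative:

```typescript
// Illustrative multi-term filter: every term must appear somewhere in the
// model's display name (case-insensitive). Not the plugin's actual code.
function filterModels(models: { name: string }[], query: string): { name: string }[] {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return models.filter((model) => {
    const haystack = model.name.toLowerCase();
    return terms.every((term) => haystack.includes(term));
  });
}

// Example: "open 4o" matches "OpenAI: gpt-4o" but not "Groq: llama3-70b".
const models = [{ name: "OpenAI: gpt-4o" }, { name: "Groq: llama3-70b" }];
console.log(filterModels(models, "open 4o")); // [{ name: "OpenAI: gpt-4o" }]
```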
Model Provider Filtering
The Model Selection Modal remembers your provider filter preferences:
- Filter settings are saved between sessions
- Toggle visibility for:
  - Favorited models
  - Local models
  - OpenAI models
  - Groq models
  - OpenRouter models
- Changes take effect immediately and persist after closing
Your filter preferences will be restored each time you open the Model Selection Modal.
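A persistence scheme along these lines would produce the behavior above. The settings shape and method names are assumptions, though `loadData` and `saveData` are standard Obsidian plugin APIs:

```typescript
import { Plugin } from "obsidian";

// Hypothetical settings shape for the provider filters; field names are illustrative.
interface ModelFilterSettings {
  showFavorited: boolean;
  showLocal: boolean;
  showOpenAI: boolean;
  showGroq: boolean;
  showOpenRouter: boolean;
}

const DEFAULT_FILTERS: ModelFilterSettings = {
  showFavorited: true,
  showLocal: true,
  showOpenAI: true,
  showGroq: true,
  showOpenRouter: true,
};

export default class BrainPlugin extends Plugin {
  filters: ModelFilterSettings = DEFAULT_FILTERS;

  async onload() {
    // Restore the last-used filter toggles when the plugin loads.
    this.filters = Object.assign({}, DEFAULT_FILTERS, await this.loadData());
  }

  async toggleProvider(key: keyof ModelFilterSettings) {
    // Flip a toggle and persist immediately so it survives closing the modal.
    this.filters[key] = !this.filters[key];
    await this.saveData(this.filters);
  }
}
```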
Model Fallback Behavior
When selecting a model, the plugin follows this order:
1. Default model set in settings
2. First available local model
3. First available online model (OpenAI, Groq, or OpenRouter)
If no models are available, "No Models Detected" will be displayed.
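The fallback order can be summarized as a small selection function. This is a sketch of the described behavior with illustrative type names, not the plugin's actual code:

```typescript
// Sketch of the described fallback order.
interface AIModel {
  id: string;
  provider: "openai" | "groq" | "openrouter" | "local";
}

function resolveModel(models: AIModel[], defaultModelId?: string): AIModel | undefined {
  // 1. Default model set in settings, if it is still available.
  const byDefault = models.find((m) => m.id === defaultModelId);
  if (byDefault) return byDefault;

  // 2. First available local model.
  const local = models.find((m) => m.provider === "local");
  if (local) return local;

  // 3. First available online model (OpenAI, Groq, or OpenRouter).
  return models.find((m) => m.provider !== "local");
  // undefined here corresponds to the "No Models Detected" state.
}
```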
Performance Considerations
- Local models may require significant computational resources
- Cloud-based models depend on internet connection speed
- Adjust the max output tokens setting to balance generation speed and output length
Best Practices
- Experiment with different models to find the best fit for your tasks
- Use hotkeys for quick model switching
- Regularly update the plugin to access new models and improvements
Model Response Types
The Brain Module supports both streaming and non-streaming models:
Streaming Models
- Provide real-time response updates
- Better for interactive conversations
- Support token-by-token output
Non-Streaming Models
- OpenAI's o1 models (o1-preview and o1-mini)
- Better suited to reasoning tasks that benefit from more time to work through a question
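To make the distinction concrete, the sketch below shows how a request might branch between the two modes against an OpenAI-compatible chat completions endpoint. It is a simplified, assumption-laden example (error handling and SSE parsing are reduced to the essentials) rather than the plugin's actual request code:

```typescript
// Simplified sketch of streaming vs. non-streaming requests to an
// OpenAI-compatible chat completions endpoint.
async function requestCompletion(
  apiUrl: string,
  apiKey: string,
  body: object,
  stream: boolean,
  onToken?: (text: string) => void
): Promise<string> {
  const response = await fetch(apiUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ ...body, stream }),
  });

  if (!stream) {
    // Non-streaming (e.g. o1-preview, o1-mini): wait for the complete answer.
    const data = await response.json();
    return data.choices[0].message.content;
  }

  // Streaming: read the body incrementally and surface partial text as it arrives.
  // A full implementation would parse the "data: {...}" SSE lines into deltas.
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value, { stream: true });
    full += chunk;
    onToken?.(chunk);
  }
  return full;
}
```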
Token Management
- Automatic token limit adjustment based on model capabilities
- Smart handling of max output tokens
- Optimized performance for both streaming and non-streaming responses
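The token-limit adjustment can be thought of as clamping the configured max output tokens to whatever the selected model supports. A minimal sketch, with illustrative field names and an assumed fallback limit:

```typescript
// Clamp the user's max-output-tokens setting to a model's own limit.
// The 4096 fallback is an illustrative assumption for models with no known cap.
interface AIModel {
  id: string;
  maxOutputTokens?: number; // the model's advertised output limit, if known
}

function effectiveMaxTokens(model: AIModel, userSetting: number): number {
  const modelLimit = model.maxOutputTokens ?? 4096;
  return Math.min(userSetting, modelLimit);
}

// Example: a 1,000,000-token request against a model capped at 8,192 is reduced to 8,192.
console.log(effectiveMaxTokens({ id: "example-model", maxOutputTokens: 8192 }, 1_000_000));
```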