Chat Module - Features
AI-Powered Conversations
The Chat module leverages advanced AI models to provide intelligent responses to user queries. Key aspects include:
- Integration with multiple AI models (OpenAI, Groq, OpenRouter, and local models)
- Support for both streaming and non-streaming responses, with streamed output rendered in real time as it arrives
- Enhanced support for reasoning models (o1-preview, o1-mini)
- Enhanced code block handling with syntax highlighting and one-click copying
- Token count tracking to manage conversation length and model limitations
- Display of the AI model used for each response
To use this feature:
- Open a new chat by clicking the ribbon icon or using the command palette.
- Type your message in the input field.
- Press Enter or click the Send button to receive an AI-generated response.
- For code snippets, click directly on the code block to copy it to your clipboard.
- The AI model used for the response will be displayed below each AI message.
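Under the hood, a streamed response is typically consumed chunk by chunk and the partial message re-rendered as each chunk arrives. A minimal sketch of that pattern (the `fakeStream` generator and `onPartial` callback are illustrative stand-ins, not the plugin's actual API):

```typescript
// Consume a streamed AI response, accumulating chunks and
// notifying the UI with the partial text after each one.
async function collectStream(
  chunks: AsyncIterable<string>,
  onPartial: (text: string) => void,
): Promise<string> {
  let full = "";
  for await (const chunk of chunks) {
    full += chunk;   // accumulate the response so far
    onPartial(full); // re-render the partial message in the UI
  }
  return full;       // final message once the stream ends
}

// Simulated provider stream, for demonstration only.
async function* fakeStream() {
  for (const part of ["Hel", "lo, ", "world!"]) yield part;
}
```

A non-streaming request follows the same path but delivers the whole response as a single chunk, so the UI updates once.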
Message Management
Users can manage individual messages within a conversation:
- Copy a message by clicking the "📋" button next to it
- Delete a message by clicking the "🗑️" button (requires confirmation)
- Deleted messages are also removed from the chat file
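Keeping the chat file in sync with a deletion amounts to removing one message's slice of the saved transcript and rewriting the file. A sketch of that idea, assuming a simple format in which messages are separated by a `---` divider line (the real file format may differ):

```typescript
// Remove the message at `index` from a saved chat transcript.
// Assumes messages are separated by a "---" divider line; this
// format is illustrative, not the plugin's actual schema.
function deleteMessage(fileText: string, index: number): string {
  const messages = fileText.split("\n---\n");
  if (index < 0 || index >= messages.length) return fileText;
  messages.splice(index, 1); // drop the selected message
  return messages.join("\n---\n");
}
```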
Context-Aware Chat
The Chat module can incorporate context from your Obsidian vault, allowing for more relevant and personalized conversations.
To add context to your chat:
- Click the "Context Files ➕" button in the chat interface.
- Select one or more files from your vault to add as context.
- The AI will consider the content of these files when generating responses.
- Context files are displayed below the chat interface and can be removed individually.
- Context files are saved with the chat and will be loaded when reopening the chat.
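Conceptually, context files are folded into the prompt sent to the model alongside the conversation itself. The sketch below shows one plausible way to assemble that context block; the `ContextFile` shape and the prompt wording are assumptions, not the plugin's exact prompt format:

```typescript
// A selected vault file contributing context to the chat.
interface ContextFile {
  path: string;    // vault-relative path, e.g. "notes/project.md"
  content: string; // file contents
}

// Concatenate context files into a single prompt preamble.
// The wrapper text is illustrative.
function buildContextPrompt(files: ContextFile[]): string {
  if (files.length === 0) return "";
  const sections = files.map((f) => `## ${f.path}\n${f.content}`);
  return `Use the following notes as context:\n\n${sections.join("\n\n")}`;
}
```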
Conversation History
All chat conversations are automatically saved and can be easily accessed later.
To manage your chat history:
- Click the "⚙️" (Actions) button in the chat header to open the Chat Actions modal.
- Choose the "Open Chat History" option.
- Select a previous conversation to continue or review.
- Use the "Open Chat History File" option to open the current chat file in Obsidian for editing.
- Chat history excludes archived chats.
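The history list above can be thought of as a filtered, sorted view over saved chats. A small sketch of that filtering, where the `archived` flag and `mtime` field are illustrative (the plugin may instead use an archive folder or frontmatter property):

```typescript
// Minimal metadata for a saved chat; fields are illustrative.
interface ChatMeta {
  title: string;
  archived: boolean;
  mtime: number; // last-modified timestamp, for ordering
}

// History view: hide archived chats, newest first.
function visibleHistory(chats: ChatMeta[]): ChatMeta[] {
  return chats
    .filter((c) => !c.archived)
    .sort((a, b) => b.mtime - a.mtime);
}
```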
Token Count Tracking
The module keeps track of the token count for each conversation, helping you stay within the limits of the AI model.
- The token count is displayed at the top of the chat interface.
- Use the "Estimate Cost" option in the Chat Actions modal to estimate the cost of the current conversation.
- Token count is updated in real-time as you type or receive responses.
- The token count includes context files, message history, and the current input.
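A rough way to approximate this count is the common heuristic of about four characters per token for English text, summed over all three components. The sketch below uses that heuristic; real tokenizers (e.g. tiktoken) give different numbers, and the per-1K price is a placeholder, not any provider's actual rate:

```typescript
// Heuristic token estimate: ~4 characters per token for English.
// Real tokenizers will differ; this is only an approximation.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Total tokens across context files, message history, and the
// text currently in the input box.
function conversationTokens(
  contextFiles: string[],
  history: string[],
  currentInput: string,
): number {
  const all = [...contextFiles, ...history, currentInput];
  return all.reduce((sum, t) => sum + estimateTokens(t), 0);
}

// Cost estimate; the rate is a placeholder, not real pricing.
function estimateCost(tokens: number, usdPer1kTokens: number): number {
  return (tokens / 1000) * usdPer1kTokens;
}
```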
Input Area Behavior
The chat input area provides visual feedback during AI responses:
- Maintains a consistent height to prevent layout shifts
- Shows a loading indicator when the AI is processing
- Automatically disables input during AI responses
- Re-enables input when the AI response is complete
This ensures a smooth chat experience while preventing accidental message sending during AI processing.
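The enable/disable cycle above is essentially a small piece of state that gates sending. A sketch of that gate (the `InputState` class is illustrative; the plugin actually toggles the textarea and a spinner element):

```typescript
// Gate the input while an AI response is in flight.
// Illustrative only; the real UI manipulates DOM elements.
class InputState {
  private busy = false;

  beginResponse(): void {
    this.busy = true; // disable input, show loading indicator
  }

  endResponse(): void {
    this.busy = false; // re-enable input, hide indicator
  }

  canSend(): boolean {
    return !this.busy; // block sends while a response streams in
  }
}
```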
For information on advanced features, please refer to the Advanced Features document.