Settings Reference
This document provides a comprehensive reference for all Obsidian Gemini Scribe settings.
Table of Contents
- Basic Settings
- Model Configuration
- Custom Prompts
- UI Settings
- Context Management
- Developer Settings
- Session-Level Settings
Basic Settings
API Key
- Type: String
- Required: Yes
- Storage: Stored securely using Obsidian's SecretStorage API (not saved in `data.json`)
- Description: Your Google AI API key for accessing Gemini models
- How to obtain: Visit Google AI Studio
- Migration: If upgrading from a previous version, your API key is automatically migrated from `data.json` to secure storage on first load
Your Name
- Setting: `userName`
- Type: String
- Default: `"User"`
- Description: Name used by the AI when addressing you in responses
Plugin State Folder
- Setting: `historyFolder`
- Type: String
- Default: `gemini-scribe`
- Description: Folder where the plugin stores history, prompts, and sessions
- Structure:

```
gemini-scribe/
├── History/          # Legacy chat history files (v3.x)
├── Prompts/          # Custom prompt templates
├── Skills/           # Custom agent skills (<skill-name>/SKILL.md)
├── Agent-Sessions/   # Agent mode sessions with conversation history
├── debug.log         # Current log file (when file logging is enabled)
└── debug.log.old     # Previous rotated log file
```
Enable Chat History
- Setting: `chatHistory`
- Type: Boolean
- Default: `false`
- Description: Save chat conversations to markdown files
- Note: As of v4.0.0, chat history is stored in the Agent-Sessions folder
Summary Frontmatter Key
- Setting: `summaryFrontmatterKey`
- Type: String
- Default: `"summary"`
- Description: Frontmatter key used when storing document summaries
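To illustrate the effect of this setting, here is a minimal sketch of how a generated summary might be written under the configured key. The helper function and its behavior are illustrative assumptions, not the plugin's actual API:

```typescript
// Hypothetical sketch: writing a summary under the configured frontmatter key.
// Real notes with existing frontmatter would need merging; this sketch only
// handles the simple case of prepending a new frontmatter block.
function applySummary(noteBody: string, key: string, summary: string): string {
  const frontmatter = `---\n${key}: "${summary}"\n---\n`;
  return frontmatter + noteBody;
}

// With the default key "summary", the note gains a `summary:` frontmatter field.
const note = applySummary("# My Note\nContent here.", "summary", "A short note about X.");
```

Changing the setting to another key (e.g. `"abstract"`) would store the summary under that frontmatter field instead.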
Model Configuration
All models are selected from available Gemini models. The plugin supports dynamic model discovery to automatically fetch the latest models from Google's API.
Chat Model
- Setting: `chatModelName`
- Type: String
- Default: `gemini-flash-latest`
- Description: Model used for agent chat conversations
- Available Models:
  - `gemini-flash-latest` - Gemini Flash Latest (fast and efficient, default for chat)
  - `gemini-2.5-pro` - Gemini 2.5 Pro (most capable, requires billing)
  - `gemini-flash-lite-latest` - Gemini Flash Lite Latest (lightweight)
  - `gemini-3-pro-preview` - Gemini 3 Pro Preview (experimental)
- Note: Model discovery automatically fetches the latest available models from Google's API
Summary Model
- Setting: `summaryModelName`
- Type: String
- Default: `gemini-flash-latest`
- Description: Model used for document summarization and selection-based text rewriting
- Used by: Summarize Active File command, Rewrite text with AI command
Completions Model
- Setting: `completionsModelName`
- Type: String
- Default: `gemini-flash-lite-latest`
- Description: Model used for IDE-style auto-completions
- Note: Completions must be enabled via the command palette
Custom Prompts
Custom prompts allow you to create reusable AI instruction templates that modify how the AI behaves for specific sessions.
Allow System Prompt Override
- Setting: `allowSystemPromptOverride`
- Type: Boolean
- Default: `false`
- Description: Allow custom prompts to completely replace the default system prompt
- Warning: Enabling this may break expected functionality if custom prompts omit essential instructions
Creating Custom Prompts
1. Create a markdown file in `[Plugin State Folder]/Prompts/`
2. Write your custom instructions in the file
3. Select it in the session settings modal (gear icon in the agent panel)
See the Custom Prompts Guide for detailed instructions.
UI Settings
Enable Streaming
- Setting: `streamingEnabled`
- Type: Boolean
- Default: `true`
- Description: Enable streaming responses in the chat interface for a more interactive experience
- Note: When disabled, full responses are displayed at once
Context Management
Context management automatically monitors and controls conversation size to prevent exceeding model token limits.
Context Compaction Threshold
- Setting: `contextCompactionThreshold`
- Type: Number (percentage, 5-50)
- Default: `20`
- Description: Percentage of the model's input context window at which automatic compaction occurs
- How it works: When conversation tokens exceed this percentage, older turns are summarized and replaced with a compact summary while preserving recent messages
- Hard ceiling: Aggressive compaction triggers at 80% of the input limit to prevent API errors
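The trigger logic above can be sketched as follows. Only the 20% default and the 80% hard ceiling come from this document; the function name and return values are illustrative:

```typescript
// Illustrative sketch of the compaction trigger described above.
const HARD_CEILING_PCT = 80; // aggressive compaction to prevent API errors

function compactionLevel(
  promptTokens: number,
  inputLimit: number,
  thresholdPct: number // contextCompactionThreshold, default 20
): "none" | "compact" | "aggressive" {
  const usedPct = (promptTokens / inputLimit) * 100;
  if (usedPct >= HARD_CEILING_PCT) return "aggressive";
  if (usedPct >= thresholdPct) return "compact"; // summarize older turns
  return "none";
}

// With a 100k-token input limit and the default 20% threshold,
// compaction starts once the conversation passes ~20k tokens.
```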
Show Token Usage
- Setting: `showTokenUsage`
- Type: Boolean
- Default: `false`
- Description: Display estimated token count in the agent input area
- Display format: `Tokens: ~N (Y new) / M (X%)` showing total prompt tokens, uncached (new) tokens, model limit, and percentage used
- How it works: Token counts update live after each API response, including during tool call chains. Gemini's implicit caching means repeated content (system prompt, tool definitions) is served from cache; the "new" count shows tokens that aren't cached
- Visual indicators:
- Normal (muted text) — well under threshold
- Yellow — approaching compaction threshold (≥80% of threshold)
- Orange/red — at or above compaction threshold
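A sketch of how the documented display string could be assembled; the function itself is hypothetical, only the format is from this document:

```typescript
// Builds the documented "Tokens: ~N (Y new) / M (X%)" display string.
function formatTokenUsage(total: number, uncached: number, limit: number): string {
  const pct = Math.round((total / limit) * 100);
  return `Tokens: ~${total} (${uncached} new) / ${limit} (${pct}%)`;
}

formatTokenUsage(12400, 300, 1048576); // "Tokens: ~12400 (300 new) / 1048576 (1%)"
```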
Log Tool Execution to Session History
- Setting: `logToolExecution`
- Type: Boolean
- Default: `true`
- Description: Append a summary of each tool execution to the session history file for auditing
- Format: Collapsible callout blocks showing tool name, key parameters, status, and duration
- Note: Requires plugin reload to take effect when toggled
Always Show Diff View for File Writes
- Setting: `alwaysShowDiffView`
- Type: Boolean
- Default: `false`
- Description: Automatically open a diff view when the agent proposes file changes, instead of requiring a button click
- When off: The confirmation card shows a summary and a "View Changes" button; click it to open the diff view
- When on: The diff view opens automatically alongside the confirmation card
- Note: The diff view lets you edit the proposed content before approving. If you modify content, the tool result reports `userEdited: true` so the agent knows
Developer Settings
Advanced settings for developers and power users. Access by clicking "Show Advanced Settings" in the plugin settings.
Debug Mode
- Setting: `debugMode`
- Type: Boolean
- Default: `false`
- Description: Enable detailed console logging for troubleshooting
- Use case: Debugging API issues, tool execution problems, or unexpected behavior
Log to File
- Setting: `fileLogging`
- Type: Boolean
- Default: `false`
- Description: Write log entries to a file (`debug.log`) in the plugin state folder
- Behavior:
  - Errors and warnings are always written to the log file when enabled
  - Debug-level entries (`log()`, `debug()`) are only written when Debug Mode is also enabled
  - Log files are automatically rotated at 1 MB (previous log kept as `debug.log.old`)
  - Writes are batched and debounced to minimize I/O impact
- Use case: Sharing diagnostic information in bug reports, or enabling the agent to read logs for self-diagnosis via vault tools
- Note: Log files are stored in the plugin state folder and are automatically excluded from RAG indexing
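The rotation rule above can be summarized in a small sketch. Only the 1 MB limit and the `debug.log` / `debug.log.old` names come from this document; the function is illustrative:

```typescript
// Sketch of the documented rotation rule: at 1 MB, the current log is
// rotated to debug.log.old (overwriting it), so one previous generation
// is kept at most.
const MAX_LOG_BYTES = 1024 * 1024; // 1 MB

function rotationPlan(currentSizeBytes: number): string[] {
  if (currentSizeBytes < MAX_LOG_BYTES) return []; // no rotation needed
  return ["rename debug.log -> debug.log.old", "create empty debug.log"];
}
```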
API Configuration
Maximum Retries
- Setting: `maxRetries`
- Type: Number
- Default: `3`
- Description: Maximum number of retry attempts when a model request fails
- Note: Uses exponential backoff between retries
Initial Backoff Delay
- Setting: `initialBackoffDelay`
- Type: Number (milliseconds)
- Default: `1000`
- Description: Initial delay before the first retry attempt
- Note: Subsequent retries use exponential backoff (2x, 4x, 8x, etc.)
Model Parameters
Temperature
- Setting: `temperature`
- Type: Number (0.0-2.0)
- Default: `0.7`
- Description: Controls response creativity and randomness
  - Lower (0.0-0.5): More focused, deterministic, consistent
  - Medium (0.5-1.0): Balanced creativity and coherence
  - Higher (1.0-2.0): More creative, varied, unpredictable
- Note: Ranges are automatically adjusted based on the selected model's capabilities
Top-P
- Setting: `topP`
- Type: Number (0.0-1.0)
- Default: `1.0`
- Description: Controls response diversity via nucleus sampling
  - Lower values (0.1-0.5): More focused on likely tokens
  - Higher values (0.5-1.0): More diverse vocabulary
- Note: Works in conjunction with temperature
Model Discovery
Dynamic model discovery automatically fetches the latest available Gemini models and their capabilities from Google's API.
Enable Model Discovery
- Setting: `modelDiscovery.enabled`
- Type: Boolean
- Default: `true`
- Description: Automatically discover and update available Gemini models
Auto-Update Interval
- Setting: `modelDiscovery.autoUpdateInterval`
- Type: Number (hours)
- Default: `24`
- Description: How often to check for new models (0 to disable)
- Range: 0-168 hours (0-7 days)
Fallback to Static Models
- Setting: `modelDiscovery.fallbackToStatic`
- Type: Boolean
- Default: `true`
- Description: Use the built-in model list when API discovery fails
- Recommendation: Keep enabled for reliability
Last Update
- Setting: `modelDiscovery.lastUpdate`
- Type: Number (timestamp)
- Description: Timestamp of last successful model discovery
- Note: Read-only, automatically updated
Tool Execution
Stop on Tool Error
- Setting: `stopOnToolError`
- Type: Boolean
- Default: `true`
- Description: Stop agent execution when a tool call fails
- When enabled: Agent stops immediately if any tool fails
- When disabled: Agent continues executing subsequent tools despite failures
Tool Loop Detection
Prevents the AI agent from executing identical tools repeatedly, which can cause infinite loops.
Enable Loop Detection
- Setting: `loopDetectionEnabled`
- Type: Boolean
- Default: `true`
- Description: Detect and prevent infinite tool execution loops
Loop Threshold
- Setting: `loopDetectionThreshold`
- Type: Number
- Default: `3`
- Range: 2-10
- Description: Number of identical tool calls before a loop is detected
Time Window
- Setting: `loopDetectionTimeWindowSeconds`
- Type: Number (seconds)
- Default: `30`
- Range: 10-120
- Description: Time window for detecting repeated calls
- Example: If threshold is 3 and window is 30s, calling the same tool 3+ times within 30 seconds triggers detection
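The example above can be sketched as a sliding-window check. The data structure and function are illustrative assumptions, not the plugin's internals:

```typescript
// Sketch of sliding-window loop detection: the same tool call repeated
// `threshold` times within the time window trips the detector.
function detectLoop(
  callTimesMs: number[], // timestamps of identical tool calls, ascending
  threshold: number,     // loopDetectionThreshold, default 3
  windowMs: number       // loopDetectionTimeWindowSeconds * 1000
): boolean {
  if (callTimesMs.length < threshold) return false;
  // Look at the most recent `threshold` identical calls only.
  const recent = callTimesMs.slice(-threshold);
  return recent[recent.length - 1] - recent[0] <= windowMs;
}

detectLoop([0, 10000, 20000], 3, 30000); // true: 3 calls within 30 s
detectLoop([0, 40000, 80000], 3, 30000); // false: calls spread over 80 s
```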
MCP Servers
MCP (Model Context Protocol) server support allows the agent to use tools from external MCP servers. Supports both local (stdio) and remote (HTTP) servers.
Enable MCP Servers
- Setting: `mcpEnabled`
- Type: Boolean
- Default: `false`
- Description: Enable connections to MCP servers for external tool integration
Server List
- Setting: `mcpServers`
- Type: Array of server configurations
- Default: `[]`
- Description: List of MCP server configurations
Each server configuration includes:
| Field | Type | Description |
|---|---|---|
| `name` | String | Unique server name |
| `transport` | String | Transport type: `"stdio"` (local) or `"http"` (remote). Default: `"stdio"` |
| `command` | String | Command to spawn the server (stdio only) |
| `args` | String[] | Command arguments (stdio only) |
| `url` | String | Server URL (http only, e.g., `http://localhost:3000/mcp`) |
| `env` | Object | Optional environment variables |
| `enabled` | Boolean | Connect on plugin load |
| `trustedTools` | String[] | Tools that skip confirmation |
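For illustration, entries using these fields might look like the following. The server names, command, and URL are placeholders, not recommendations:

```typescript
// Hypothetical mcpServers entries: one local (stdio) and one remote (http).
const mcpServers = [
  {
    name: "local-tools",            // unique server name
    transport: "stdio",             // spawned as a local process
    command: "npx",
    args: ["-y", "some-mcp-server", "/path/to/dir"],
    env: { LOG_LEVEL: "info" },     // optional environment variables
    enabled: true,                  // connect on plugin load
    trustedTools: ["read_file"],    // these tools skip the confirmation dialog
  },
  {
    name: "remote-tools",
    transport: "http",              // connects over HTTP instead of spawning
    url: "http://localhost:3000/mcp",
    enabled: false,
    trustedTools: [],
  },
];
```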
See the MCP Servers Guide for setup instructions.
Session-Level Settings
Session settings override global defaults for specific agent sessions. Access via the settings icon in the session header.
Model Configuration
- Model: Override the default chat model for this session
- Temperature: Session-specific temperature setting
- Top-P: Session-specific top-p setting
- Custom Prompt: Select a custom prompt template for this session
Context Files
- Add specific notes as persistent context for the session
- Context files are automatically included with every message
- Use @ mentions in chat to add files
- Active note is automatically included by default
Permissions
Session-level permissions allow bypassing confirmation dialogs for specific operations during the current session only.
Available permission bypasses:
- File creation
- File modification
- File deletion
- File moving/renaming
Note: Permissions reset when you create a new session or load a different session.
Performance Considerations
- Model Selection: Flash models (8B, standard) are faster but less capable than Pro models
- Temperature: Higher values may require more processing time
- Model Discovery: Minimal performance impact; runs in background
- Loop Detection: Negligible overhead; recommended to keep enabled
Security Best Practices
- API Key: Your API key is stored securely via Obsidian's SecretStorage and is not written to `data.json`. Never share your API key or commit it to version control
- System Folders: The plugin automatically protects `.obsidian` and plugin state folders from tool operations
- Tool Permissions: Review tool operations before approving (when confirmations are enabled)
- System Prompt Override: Use with caution; it can break expected functionality
Troubleshooting
Models not appearing
- Check API key is valid
- Enable Model Discovery in Developer Settings
- Click "Refresh models" button
- Check console for errors (with Debug Mode enabled)
Tool execution issues
- Enable Debug Mode and Log to File
- Check Loop Detection settings
- Review Stop on Tool Error setting
- Examine console logs or `debug.log` in the plugin state folder for specific errors
Chat history not saving
- Verify "Enable Chat History" is toggled on
- Check Plugin State Folder path is valid
- Ensure you have write permissions to vault
For more help, see the Getting Started Guide or open an issue.