
Feature Settings

Feature Settings allow you to customize advanced functionality and experimental features in PostQode. These settings control specific behaviors and capabilities that enhance your development workflow.

Accessing Feature Settings

  1. Open VS Code with PostQode extension installed
  2. Click on the PostQode icon in the sidebar to open the PostQode panel
  3. Click on the Settings icon (gear icon) in the PostQode panel
  4. Navigate to the Feature Settings section

Workspace Management

Enable Checkpoints

Default: Disabled

When enabled, PostQode saves checkpoints of your workspace throughout task execution, using Git under the hood.

How Checkpoints Work

  • Automatic Snapshots: Creates Git commits at key points during task execution
  • Version History: Maintains a history of changes made by PostQode
  • Rollback Capability: Allows you to revert to previous states if needed
  • Branch Management: Uses Git branches to isolate checkpoint history
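
The snapshot-and-rollback idea can be sketched as a minimal in-memory model. This is illustrative only: PostQode's actual checkpoints are Git commits and branches as described above, and none of the class or method names below come from the extension.

```python
class CheckpointManager:
    """Conceptual sketch of workspace checkpoints.

    The real mechanism is Git (each checkpoint is a commit); this
    in-memory version only illustrates snapshot-and-rollback.
    """

    def __init__(self):
        self._history = []  # list of {path: contents} snapshots

    def save_checkpoint(self, workspace: dict) -> int:
        # Record a full copy of the workspace state (a Git commit, in reality)
        self._history.append(dict(workspace))
        return len(self._history) - 1  # checkpoint id

    def rollback(self, checkpoint_id: int) -> dict:
        # Restore the workspace to a previously saved state
        return dict(self._history[checkpoint_id])
```

With Git doing the work, "rollback" corresponds to checking out or resetting to the checkpoint commit, which is why a clean working directory and a Git repository are listed as requirements below.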

Benefits

  • Safety Net: Easily revert changes if something goes wrong
  • Change Tracking: See exactly what PostQode modified in your workspace
  • Collaboration: Share checkpoint history with team members
  • Debugging: Identify when and where issues were introduced

Requirements

  • Git Repository: Your workspace must be a Git repository
  • Git Installed: Git must be installed and accessible from the command line
  • Write Permissions: PostQode needs write access to the workspace
  • Clean Working Directory: Works best with committed changes

Limitations

  • Large Workspaces: May not work well with very large repositories
  • Performance Impact: Can slow down operations in large projects
  • Storage Usage: Creates additional Git history and objects
  • Merge Conflicts: May complicate Git workflows in active repositories

Best Practices

  • Small to Medium Projects: Works best with reasonably sized codebases
  • Dedicated Branches: Consider using separate branches for PostQode work
  • Regular Cleanup: Periodically clean up checkpoint branches
  • Backup Important Work: Always backup critical work before enabling

Configuration Steps

  1. Ensure your workspace is a Git repository
  2. Navigate to Feature Settings
  3. Toggle "Enable Checkpoints" on
  4. PostQode will create checkpoints automatically during task execution
  5. View checkpoint history using Git commands or your Git client

MCP (Model Context Protocol) Features

Enable MCP Marketplace

Default: Enabled

Enables the MCP Marketplace tab for discovering and installing MCP servers that extend PostQode's capabilities.

What is the MCP Marketplace

  • Server Discovery: Browse available MCP servers from the community
  • Easy Installation: One-click installation of MCP servers
  • Server Management: Manage installed servers and their configurations
  • Community Contributions: Access servers created by the PostQode community

Benefits

  • Extended Functionality: Add new tools and capabilities to PostQode
  • Specialized Tools: Access domain-specific tools and integrations
  • Community Innovation: Leverage community-developed enhancements
  • Modular Architecture: Add only the features you need

How to Use

  1. Turn on "Enable MCP Marketplace" in Feature Settings
  2. Navigate to the MCP Marketplace tab in the PostQode panel
  3. Browse available servers by category or search
  4. Click "Install" on servers you want to add
  5. Configure server settings as needed

MCP Display Mode

Options: Plain Text | Rich Display | Markdown

Default: Rich Display

Controls how MCP server responses are displayed in the PostQode interface.

Display Mode Options

Plain Text
  • Simple Format: Displays responses as plain text
  • Fast Rendering: Minimal processing for quick display
  • Low Resource: Uses minimal system resources
  • Best For: Simple text responses and low-powered systems
Rich Display
  • Enhanced Formatting: Supports links, images, and rich formatting
  • Interactive Elements: Clickable links and interactive components
  • Visual Appeal: Better visual presentation of complex data
  • Best For: Complex responses with multimedia content
Markdown
  • Markdown Rendering: Full markdown support with syntax highlighting
  • Code Blocks: Proper formatting for code snippets
  • Tables and Lists: Enhanced display of structured data
  • Best For: Technical documentation and code-heavy responses
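
As a rough illustration, the three modes amount to choosing a renderer for the raw response text. The function name, mode identifiers, and HTML wrapper below are hypothetical, not PostQode's API:

```python
def render_mcp_response(text: str, mode: str) -> str:
    """Render an MCP server response according to the display mode.

    Conceptual sketch only; real rendering happens in the PostQode UI.
    """
    if mode == "plain":
        # Strip common markdown markers for a plain-text view
        return text.replace("**", "").replace("`", "")
    if mode == "markdown":
        # Hand the raw markdown to a markdown renderer unchanged
        return text
    if mode == "rich":
        # Wrap for an HTML-capable view (hypothetical wrapper)
        return f"<div class='mcp-response'>{text}</div>"
    raise ValueError(f"unknown display mode: {mode}")
```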

Collapse MCP Responses

Default: Enabled

When enabled, MCP response panels start in a collapsed state by default.

Benefits

  • Clean Interface: Keeps the interface uncluttered
  • Focus Management: Helps maintain focus on current tasks
  • Performance: Reduces rendering overhead for large responses
  • User Control: Users can expand responses when needed

Configuration

  • Enabled: MCP responses start collapsed by default
  • Disabled: MCP responses are fully expanded by default
  • Per-Response: Users can still expand/collapse individual responses
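
The toggle reduces to a single boolean mapping; the helper name here is illustrative only:

```python
def initial_expanded(collapse_mcp_responses: bool) -> bool:
    """Initial expansion state of an MCP response panel.

    Users can still expand or collapse each panel afterward.
    """
    return not collapse_mcp_responses
```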

AI Model Features

OpenAI Reasoning Effort

Options: Low | Medium | High

Default: Medium

Controls the reasoning effort level for OpenAI-family models, across all providers that serve them.

Reasoning Effort Levels

Low Effort
  • Fast Responses: Prioritizes speed over deep reasoning
  • Lower Cost: Reduced token usage and API costs
  • Simple Tasks: Best for straightforward questions and tasks
  • Quick Iterations: Ideal for rapid prototyping and testing
Medium Effort
  • Balanced Performance: Good balance of speed and reasoning quality
  • Moderate Cost: Reasonable token usage for most tasks
  • General Purpose: Suitable for most development tasks
  • Default Choice: Recommended for typical usage patterns
High Effort
  • Deep Reasoning: Maximum reasoning capability and accuracy
  • Higher Cost: Increased token usage and API costs
  • Complex Tasks: Best for complex problem-solving and analysis
  • Quality Focus: When accuracy is more important than speed

When to Adjust

  • Increase for: Complex debugging, architectural decisions, code reviews
  • Decrease for: Simple questions, quick fixes, repetitive tasks
  • Consider Cost: Higher effort levels consume more tokens
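
This setting maps onto the `reasoning_effort` parameter of OpenAI-style chat requests. A sketch of how a client might build such a payload; the helper function and the model name are illustrative, not PostQode's code:

```python
def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build an OpenAI-style chat request with a reasoning effort level.

    "reasoning_effort" matches the public OpenAI API parameter; the
    rest of this helper is a hypothetical sketch.
    """
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"invalid reasoning effort: {effort}")
    return {
        "model": "o3-mini",          # any reasoning-capable model
        "reasoning_effort": effort,  # low | medium | high
        "messages": [{"role": "user", "content": prompt}],
    }
```

Higher effort generally means more reasoning tokens per request, which is why cost rises with the setting.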

Chat and Interaction Features

Enable Strict Chat Mode

Default: Disabled

Enforces strict tool use while in Chat mode, preventing file edits and system modifications.

Strict Mode Behavior

  • Read-Only Operations: Only allows reading files and viewing information
  • No File Edits: Prevents writing, modifying, or deleting files
  • No System Changes: Blocks system commands and configuration changes
  • Safe Exploration: Allows safe exploration of codebases without modifications
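
Conceptually, strict Chat mode is an allowlist over tools. A minimal sketch with invented tool names (PostQode's internal tool identifiers may differ):

```python
# Hypothetical tool names; PostQode's actual identifiers may differ.
READ_ONLY_TOOLS = {"read_file", "list_files", "search_files"}

def is_tool_allowed(tool: str, strict_chat_mode: bool) -> bool:
    """In strict Chat mode only read-only tools pass; otherwise all do."""
    if not strict_chat_mode:
        return True
    return tool in READ_ONLY_TOOLS
```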

Benefits

  • Safety: Prevents accidental modifications during exploration
  • Code Review: Safe for reviewing code without risk of changes
  • Learning: Ideal for learning and understanding existing codebases
  • Collaboration: Safe for shared environments and pair programming

When to Enable

  • Code Reviews: When reviewing code without intending to modify
  • Learning: When exploring unfamiliar codebases
  • Demonstrations: When showing code to others
  • Shared Environments: When working in shared or production environments

Limitations

  • No File Creation: Cannot create new files or directories
  • No Modifications: Cannot fix bugs or implement features
  • Limited Assistance: Reduces PostQode's ability to help with implementation
  • Mode Switching: Must be disabled again before resuming actual development work

Context Management

Enable Auto Compact

Default: Enabled

Enables the advanced context-management system, which uses LLM-based condensing for next-generation models.

How Auto Compact Works

  • Intelligent Summarization: Uses AI to summarize old conversation context
  • Context Preservation: Maintains important information while reducing token usage
  • Automatic Triggering: Activates when context approaches token limits
  • Seamless Operation: Works transparently without user intervention
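
The steps above can be sketched as a simple condensing pass, assuming a token counter and a summarizer. In PostQode the model itself produces the summary; the thresholds, parameter names, and placeholder summarizer here are all illustrative:

```python
def auto_compact(messages, token_limit=8000, keep_recent=4,
                 count_tokens=lambda m: len(m) // 4,
                 summarize=lambda msgs: "[summary of earlier conversation]"):
    """Condense older messages once the context nears the token limit.

    Conceptual sketch: the real system uses the LLM as the summarizer
    and its own thresholds; this only illustrates the control flow.
    """
    total = sum(count_tokens(m) for m in messages)
    if total <= token_limit:
        return messages  # under the limit: leave the context untouched
    # Keep the most recent messages verbatim; condense everything older
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(old)] + recent
```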

Benefits

  • Extended Conversations: Allows longer conversations without hitting limits
  • Cost Efficiency: Reduces token usage by compacting old context
  • Performance: Maintains response speed by managing context size
  • Continuity: Preserves conversation flow and important context

Technical Details

  • LLM Processing: Uses the same model to intelligently summarize context
  • Selective Compression: Preserves recent and important information
  • Threshold Management: Automatically triggers at optimal context levels
  • Quality Preservation: Maintains conversation quality and coherence

Considerations

  • Processing Time: Brief delay when compacting occurs
  • Information Loss: Some minor details may be lost in summarization
  • Model Dependency: Requires compatible models for optimal performance
  • Token Usage: Uses additional tokens for the compacting process