Documentation
Complete reference for all ContextDigger features
📦 Installation
Get started quickly
⚡ Features
All 9 feature areas
💻 Commands
40+ commands
🔗 GitHub
Source & issues
What is ContextDigger?
ContextDigger is a codebase navigation tool that helps developers explore large projects by automatically discovering logical areas, tracking exploration history, and maintaining context across sessions.
⚡ Requires Claude Code
Commands such as /init-dig, /dig, and /mark-spot run inside Claude Code chat sessions. Support for Cursor and other AI tools is coming soon.
Key Capabilities:
- ✓ Auto-Discovery: Automatically finds and organizes code areas
- ✓ Context Preservation: Never lose your place while coding
- ✓ Team Collaboration: Share knowledge and track activity
- ✓ Analytics: Understand your codebase deeply
Installation
⚡ Quick Install (Recommended)
Install or update ContextDigger with a single command:
This automatically installs the Python package and Claude Code skills. Works for both fresh installs and upgrades.
📋 Manual Installation (Alternative)
Step 1: Install Python Package
Step 2: Install Claude Skills
Step 3: Verify Installation
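As a rough sketch, steps 1 and 3 might look like the following (the package name is inferred from the `ModuleNotFoundError` message in Troubleshooting; the `--version` flag is an assumption, and the skills-install command for step 2 is not specified here):

```shell
# Step 1: install the Python package (name inferred from the module name)
pip install contextdigger

# Step 3: verify the CLI is reachable ("--version" is an assumed flag)
contextdigger --version
```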
Feature Areas
1. Discovery & Navigation
Auto-discover code areas, navigate with precision, bookmark locations
2. History & Context
Browser-like navigation, snapshots, breadcrumbs, suggestions
3. Team Collaboration
Team presence, knowledge sharing, wiki generation
4. Code Intelligence
Dependencies, impact analysis, hotspots, coverage
5. Analytics & Insights
Work tracking, productivity metrics, reporting
Supported Languages & Frameworks
Auto-Detected Languages:
Python
pytest, unittest frameworks
JavaScript / TypeScript
Jest, Vitest, Mocha test frameworks
Salesforce
Apex, Lightning Web Components
React Ecosystem
Next.js, Astro, React
Markdown & Scripts
Shell scripts, documentation
Any Language!
Works universally with pattern detection
Works with any language! Auto-detection provides enhanced features (better area names, framework-specific metadata), but ContextDigger functions with any codebase through pattern-based analysis.
Project Structure
After running /init-dig, ContextDigger creates a .cdg/ directory:
your-project/
└── .cdg/ # ContextDigger data directory
├── config.json # Project configuration
├── .gitignore # Excludes user-specific data
├── areas/ # Discovered code areas (commit these!)
│ ├── backend-api.json
│ ├── frontend-components.json
│ └── ...
├── sessions/ # Exploration history (gitignored)
│ └── .json
└── bookmarks/ # Marked locations (gitignored)
    └── bookmarks.json

✅ Commit to Git
- ✓ .cdg/areas/ - Share discoveries with team
- ✓ .cdg/config.json - Project settings

❌ Don't Commit (Personal)
- ✗ .cdg/sessions/ - Personal history
- ✗ .cdg/bookmarks/ - Personal bookmarks
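Under these conventions, the generated .cdg/.gitignore mentioned above would simply exclude the personal directories; a sketch (the exact generated contents may differ):

```
# .cdg/.gitignore (sketch)
sessions/
bookmarks/
```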
Real-World Examples
Example 1: Large Monorepo Navigation
Working on a large monorepo with multiple services and shared utilities:
Example 2: Bug Investigation Workflow
Tracking down a bug that spans frontend and backend:
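A session for this workflow might look like the following inside Claude Code. Only the command names come from this document; the area names and /mark-spot note text are hypothetical:

```
/dig backend-api                 # jump to the suspect backend area
/mark-spot "500 error originates in refund handler"
/dig frontend-components         # trace the failing call from the UI side
/mark-spot "UI swallows the error response"
```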
Example 3: Onboarding New Team Member
Help new developers understand the codebase quickly:
Troubleshooting
"ContextDigger not initialized" error
Solution: Run /init-dig in your project directory first.
This creates the .cdg/ directory and discovers your code areas.
"ModuleNotFoundError: No module named 'contextdigger'"
Solution: Install the Python package:
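Assuming the package is published under the same name as the missing module, the fix would be:

```shell
pip install --upgrade contextdigger
```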
"/init-dig command not found"
Solution: Install Claude Code skills:
Or use the one-line installer:
"Discovery finds 0 areas"
Possible causes:
- Very small project (< 5 files)
- All files in gitignore
- Unusual project structure
Solution: Manually create areas or adjust exclude patterns in .cdg/config.json
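For reference, an exclude-pattern tweak might look like this. The field name and structure are assumptions; check your generated .cdg/config.json for the actual schema:

```json
{
  "exclude": [
    "node_modules/**",
    ".mypy_cache/**",
    "generated/**"
  ]
}
```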
"contextdigger: command not found" (CLI)
Solution: Add Python user bin to PATH:
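On most Linux/macOS setups, pip's user-install bin directory is ~/.local/bin, so adding this line to your shell profile usually suffices:

```shell
# Add pip's user-install bin directory to PATH
# (the path may differ; check `python -m site --user-base`)
export PATH="$HOME/.local/bin:$PATH"
```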
Quick Start
1. Install package & skills
2. Run /init-dig in project
3. Use /dig to navigate
4. Mark spots with /mark-spot
Product Roadmap
From context management to context-free code intelligence
Bug Fixes & Enhanced Discovery
Released December 2025 - Critical improvements to code area discovery
- ✓ Fixed hardcoded package detection - Now discovers all Python packages, not just specific ones
- ✓ Eliminated cache pollution - Properly excludes .mypy_cache, __pycache__, node_modules
- ✓ Extended discovery depth - Finds nested projects up to 3 levels deep (e.g., flowmason/studio/frontend)
- ✓ Impact: Discovered areas increased from 36 to 139 in the FlowMason project (a 286% increase)
AST Symbol Indexing
Goal: Eliminate context window limits with offline local indexing
🔍 Symbol Indexing
- • Tree-sitter AST parsing
- • Index classes, functions, methods
- • Build call graphs automatically
- • Track imports & dependencies
- • SQLite local storage
⚡ Instant Queries
- • /find-symbol - Go to definition
- • /find-references - Find all usages
- • /show-callers - Call hierarchy
- • /show-dependencies - Imports graph
- • All queries <50ms, zero tokens
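The query side of such an index can be sketched with the standard-library sqlite3 module. The schema, sample rows, and function names below are purely illustrative (the real tool would populate the tables from tree-sitter ASTs rather than by hand):

```python
import sqlite3

def build_index(conn):
    # Hypothetical schema: one table for definitions, one for usages.
    conn.execute(
        "CREATE TABLE symbols (name TEXT, kind TEXT, file TEXT, line INTEGER)"
    )
    conn.execute("CREATE TABLE refs (name TEXT, file TEXT, line INTEGER)")
    # Sample rows standing in for parsed AST data.
    conn.executemany(
        "INSERT INTO symbols VALUES (?, ?, ?, ?)",
        [("login", "function", "auth.py", 12),
         ("Session", "class", "session.py", 3)],
    )
    conn.executemany(
        "INSERT INTO refs VALUES (?, ?, ?)",
        [("login", "views.py", 40), ("login", "tests/test_auth.py", 8)],
    )

def find_symbol(conn, name):
    # /find-symbol: go to definition
    return conn.execute(
        "SELECT file, line FROM symbols WHERE name = ?", (name,)
    ).fetchone()

def find_references(conn, name):
    # /find-references: list all usages
    return conn.execute(
        "SELECT file, line FROM refs WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
build_index(conn)
print(find_symbol(conn, "login"))        # ('auth.py', 12)
print(find_references(conn, "login"))
```

Because lookups are indexed local SQL rather than LLM calls, they cost zero tokens and stay well under the 50ms target even on large codebases.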
🎯 Impact: Context-Window Independence
Team Collaboration Features
Goal: Launch paid Team tier with cross-project intelligence
🔗 Cross-Project Search
Search symbols across all team repos
👥 Shared Indexes
Team members see same indexed code
📊 Team Analytics
Productivity metrics & insights
Vector DB Semantic Search
Goal: AI-powered semantic code search
🧠 Semantic Queries
- • "Find code that handles JWT authentication"
- • "Show me similar authentication patterns"
- • Code similarity detection (find duplicates)
- • Smart documentation (usage examples)
⚙️ Technical Approach
- • Local embedding generation (CodeBERT)
- • FAISS vector database
- • Runs offline (privacy-first)
- • Incremental updates
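The query flow can be illustrated with a toy brute-force search. The roadmap names CodeBERT and FAISS; this sketch substitutes hand-written 3-dimensional vectors and plain cosine similarity, and every snippet name and embedding below is made up:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, index, top_k=2):
    # Rank every indexed snippet by similarity to the query embedding.
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Fake embedding index: (snippet location, embedding vector).
index = [
    ("auth/jwt.py:verify_token", [0.9, 0.1, 0.0]),
    ("billing/invoice.py:total", [0.0, 0.2, 0.9]),
    ("auth/session.py:login",    [0.8, 0.3, 0.1]),
]

# Fake embedding for "find code that handles JWT authentication".
print(search([1.0, 0.2, 0.0], index))
# ['auth/jwt.py:verify_token', 'auth/session.py:login']
```

A production version would swap the linear scan for a FAISS index so queries stay fast as the number of embedded snippets grows, while still running fully offline.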
Enterprise & Platform Expansion
🏢 Enterprise Features (v3.5)
- • Security pattern detection
- • Compliance scanning (GDPR, HIPAA)
- • Self-hosted deployment
- • SOC 2 Type II certification
- • Audit logs & governance
🚀 Platform Expansion (v4.0)
- • Cursor support (10x market size)
- • VS Code extension
- • Public API
- • Plugin marketplace
Technical Architecture
Local Storage (No Cloud)
- • SQLite for symbol index
- • FAISS for vector embeddings
- • All data stays on your machine
- • Optional team sync via PostgreSQL
Performance Targets
- • Index 100k lines in <30 seconds
- • Query response <50ms
- • Index size <50MB for 1M lines
- • Incremental updates <5 seconds