Overview
OpenCode is a community-driven, fully open-source AI coding agent with a beautiful terminal UI (TUI) and support for 75+ AI model providers. Trusted by 200,000+ developers monthly, it is the Swiss Army knife of CLI AI tools, giving you complete flexibility to experiment with any model without vendor lock-in.
Key Benefits for Research:
- 75+ LLM providers: Claude, GPT, Gemini, DeepSeek, Kimi, local models, and more
- Model experimentation: Compare reasoning quality across providers
- Cost optimization: Switch to cheaper models for routine tasks
- Beautiful TUI: Visual, intuitive terminal interface
- Multi-session: Work on multiple research projects simultaneously
- LSP support: Advanced code intelligence and navigation
- Open source: MIT license, community-driven (27K+ GitHub stars)
Why OpenCode? Model independence is crucial for research. Compare GPT-5 vs Claude vs Gemini vs DeepSeek on the SAME task to understand which models excel at different analysis types. OpenCode makes this trivial!
- Official Website
- GitHub Repository (27K stars)
- Documentation
Step 1: Installation
1.1 Prerequisites
System Requirements:
- Node.js 18 or newer
- Terminal access
- API keys for providers you want to use
1.2 Install OpenCode
Via npm: `npm install -g opencode`
TUI Experience: Unlike plain CLI tools, OpenCode shows a visual interface with panels, menus, and interactive elements. It’s still in the terminal, but much more user-friendly!
Step 2: Configure AI Providers
2.1 Multiple Provider Support
OpenCode can connect to 75+ providers! Configure the ones you need.
Common providers for research:
- Anthropic (Claude) - Best reasoning
- OpenAI (GPT) - Broad capability
- Google (Gemini) - Large context
- DeepSeek - Cost-effective
- Kimi K2 - Strong analysis
- Local models (via Ollama) - Privacy
2.2 Add Your First Provider
Launch OpenCode, then:
- Navigate to Settings (usually the `s` key)
- Select Providers
- Click Add Provider
- Choose from list (e.g., “Anthropic”)
- Enter API key
- Select default model (e.g., “claude-sonnet-4.5”)
- Save and test
2.3 Configure Multiple Providers
Add multiple for comparison:

Google (Gemini): Free tier, massive context
- Provider: Google AI
- API Key: From Google AI Studio
- Model: `gemini-2.5-pro`

Anthropic (Claude): Premium reasoning
- Provider: Anthropic
- API Key: From Anthropic Console
- Model: `claude-sonnet-4.5`

DeepSeek: Budget-friendly
- Provider: DeepSeek
- API Key: From DeepSeek Platform
- Model: `deepseek-chat`
Multi-Provider Strategy: Use DeepSeek for initial screening (cheap), Gemini for comprehensive synthesis (free + 1M context), Claude for final theoretical arguments (quality). OpenCode makes switching seamless!
Step 3: Using the TUI Interface
3.1 Navigation
Main panels:
- Chat Area: Conversation with AI
- File Explorer: Browse project files
- Provider Selector: Switch models on-the-fly
- Settings: Configure preferences
Keyboard shortcuts:
- `Tab`: Switch between panels
- `Enter`: Select/activate
- `Esc`: Go back
- `?`: Help menu
- `q`: Quit
3.2 Model Switching
Compare models in real-time:
- Ask same question to different models
- Switch provider mid-conversation
- Compare reasoning approaches
- Document which models excel at what
Step 4: Project Setup for Research
4.1 Create Research Project
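One way to scaffold a project from the shell; the directory names below are illustrative, not an OpenCode requirement.

```shell
# Create an example research project skeleton (names are illustrative).
mkdir -p lit-review/papers      # converted PDFs / markdown papers
mkdir -p lit-review/notes       # your own annotations
mkdir -p lit-review/analysis    # AI-generated outputs
ls lit-review
```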
Folder structure: keep papers, notes, and analysis outputs in separate subfolders so `@` file references stay tidy.

4.2 Project Configuration
Create `.opencode/config.json`:
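A minimal sketch of a per-project config; the key names (`provider`, `model`) are assumptions on my part, so check the OpenCode documentation for the exact schema your version expects.

```shell
# Write a hypothetical per-project config; the key names are assumptions,
# not a documented OpenCode schema.
mkdir -p .opencode
cat > .opencode/config.json <<'EOF'
{
  "provider": "google",
  "model": "gemini-2.5-pro"
}
EOF
cat .opencode/config.json
```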
Step 5: File Management
5.1 File Explorer Panel
Navigate visually:
- Browse folders in TUI
- Click files to add to context
- Multi-select for batch operations
- See file previews
5.2 Reference Files with @
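Illustrative prompts using the `@` file-reference syntax (the file names are hypothetical):

```
Summarize the methodology in @papers/smith2023.md
Compare the findings of @papers/smith2023.md and @papers/lee2024.md
```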
Command syntax: prefix any file path with `@` inside a prompt to pull that file into context.

5.3 Multi-File Operations
Batch analysis: reference several files with `@` in one prompt and ask a single question across all of them.

Step 6: Multi-Session Workflows
6.1 Work on Multiple Projects
OpenCode’s unique feature: parallel sessions, one per project (e.g., Session 1: Literature Analysis).
Benefits:
- Context separated by project
- No crosstalk between tasks
- Resume any session anytime
6.2 Shareable Session Links
Collaborate with advisors:
- Complete analysis in OpenCode
- Generate shareable link
- Send to advisor
- They can review your exact prompts and outputs
Session links are useful for:
- Reproducibility
- Feedback from supervisors
- Teaching others your workflow
Step 7: Model Comparison Workflows
7.1 The Research Question Test
Test which model reasons best: pose the same question in successive rounds (e.g., Round 1: GPT-5; Round 2: Claude; Round 3: Gemini), then compare:
- Which model identified the deepest gaps?
- Which had the most creative insights?
- Which was most thorough?
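An example of a fixed prompt to reuse across rounds (the wording is mine; adapt it to your domain):

```
Here are 10 paper summaries from my corpus. Identify the three most
significant research gaps and justify each in two sentences.
```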
7.2 Cost-Quality Optimization
Workflow strategy:
- DeepSeek (cheap): Initial screening, data extraction
- Gemini (free + powerful): Comprehensive synthesis
- Claude (premium): Final theoretical arguments
Approximate prices:
- DeepSeek: $0.14 per 1M tokens
- Gemini: FREE (1K req/day)
- Claude: $3 per 1M tokens
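The savings are easy to estimate; the sketch below assumes a 500K-token screening pass (the token count is an assumption) and uses the per-1M-token prices listed above.

```shell
# Estimate the cost of one 500K-token pass at each provider's list price.
awk 'BEGIN {
  tokens_m = 0.5                        # 500K tokens = 0.5M tokens
  printf "DeepSeek: $%.2f\n", tokens_m * 0.14
  printf "Claude:   $%.2f\n", tokens_m * 3.00
}'
# prints:
# DeepSeek: $0.07
# Claude:   $1.50
```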
Step 8: LSP Features for Research
8.1 Code Intelligence for Data Analysis
If you’re analyzing R/Python code:
- LSP provides autocomplete
- Function signatures
- Documentation lookup
- Error detection
8.2 Navigate Large Codebases
For computational research:
- Jump to definitions
- Find all references
- Understand dependencies
- Refactor safely
Step 9: Integration with Research Memex
9.1 Zotero → OpenCode Pipeline
Workflow:
- Export papers from Zotero
- Convert via OCR Guide
- Load into OpenCode
- Choose model based on task:
- Gemini: Massive corpus analysis
- Claude: Deep theoretical work
- DeepSeek: High-volume screening
9.2 OpenCode → Obsidian
Export insights:
- Complete analysis in OpenCode
- Save markdown outputs
- Import to Obsidian
- Link with literature notes
9.3 OpenCode → Zettlr
Paper writing:
- Generate synthesis with best-fit model
- Export to Zettlr
- Add @citekeys
- Export to Word/PDF
Step 10: Best Practices
10.1 Model Selection Strategy
Choose models by task type:

| Task | Recommended Model | Why |
|---|---|---|
| Screening abstracts | DeepSeek | Cheap, fast, sufficient |
| Thematic analysis | Gemini 2.5 Pro | Free, large context |
| Theory building | Claude Sonnet | Best reasoning |
| Data extraction | Kimi K2 | Structured output |
| Gap identification | Claude or GPT-5 | Creative insights |
10.2 Session Management
Organize by research phase:
- Session: “screening” → Use DeepSeek
- Session: “synthesis” → Use Gemini
- Session: “writing” → Use Claude
The result:
- Clear separation of work
- Easy to resume
- Context doesn’t bleed between phases
10.3 Cost Tracking
Monitor spending:
- Check provider dashboards regularly
- Log which models you use for each task
- Calculate cost per paper analyzed
- Optimize based on results
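One low-tech way to do that logging is a plain CSV appended after each task; the format below is my own suggestion, not an OpenCode feature.

```shell
# Append one usage record per task to a running cost log.
log=cost-log.csv
[ -f "$log" ] || echo "date,task,model,tokens,cost_usd" > "$log"
echo "2025-01-15,screening,deepseek-chat,500000,0.07" >> "$log"
cat "$log"
```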
Step 11: Advanced Features
11.1 Custom Provider Configuration
Add niche providers:
- Together AI
- Perplexity
- Groq (ultra-fast)
- Replicate (specialized models)
- Ollama (local privacy)
11.2 Batch Processing
Process multiple papers in one pass instead of pasting them in one at a time.

11.3 Workflow Automation
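A reusable batch script might look like the sketch below. It dry-runs by printing each command; `opencode run` as a non-interactive subcommand is an assumption, so confirm it against your installed version's help output before removing the leading `echo`.

```shell
# Dry-run batch analysis: print one command per paper in ./papers.
# "opencode run" is an ASSUMED subcommand; confirm it exists in your
# install before removing the leading "echo".
mkdir -p papers analysis
printf '# Paper A\n' > papers/a.md      # scratch inputs for illustration
printf '# Paper B\n' > papers/b.md
for paper in papers/*.md; do
  out="analysis/$(basename "$paper" .md)-summary.md"
  echo "opencode run \"Summarize key findings in @$paper\" > $out"
done
```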
Create reusable scripts for analyses you repeat across projects.

Step 12: Troubleshooting
Installation Issues
“opencode: command not found”:
- Reinstall: `npm install -g opencode`
- Check npm global path: `npm config get prefix`
- Restart terminal
- Clear npm cache: `npm cache clean --force`
- Try with sudo: `sudo npm install -g opencode`
Provider Connection Errors
“Unauthorized” or “API error”:
- Verify API key is correct
- Check provider has credits
- Test with provider’s web interface first
- Ensure no firewall blocking
TUI Display Issues
“Interface looks broken”:
- Use a modern terminal (iTerm2, Windows Terminal, Alacritty)
- Avoid old terminals (cmd.exe)
- Check terminal size (minimum 80x24)
- Enable UTF-8 support
Model Not Responding
“Request timeout” or “No response”:
- Check internet connection
- Verify provider API status
- Try different model from same provider
- Switch to alternative provider
Step 13: OpenCode vs Other CLIs
Comparison Matrix
| Feature | OpenCode | Claude Code | Gemini CLI |
|---|---|---|---|
| Providers | 75+ | Claude only | Gemini only |
| Cost | Pay per provider | $20/mo | FREE |
| Interface | Beautiful TUI | Standard CLI | Standard CLI |
| Stars | 27K | 8.6K (org) | 8.6K |
| Open Source | ✅ Yes (MIT) | ❌ No | ✅ Yes (Apache 2.0) |
| Context | Varies | 200K | 1M |
| Best For | Flexibility | Quality | Free power |
When to Use OpenCode
Choose OpenCode when:
- ✅ You want to experiment with multiple models
- ✅ Need to optimize costs (use cheap models for routine tasks)
- ✅ Prefer visual TUI over plain text
- ✅ Want open-source flexibility
- ✅ Need to compare model reasoning quality
Choose Claude Code when:
- Premium reasoning quality is paramount
- Budget allows for subscription
- Want enterprise support
Choose Gemini CLI when:
- Need massive 1M context window
- Want completely free solution
- Comfortable with Google ecosystem
Use more than one CLI when:
- Teaching/learning different tools
- Optimizing workflow (right tool for right task)
- Cost-quality tradeoff matters
Step 14: Research Workflows with OpenCode
14.1 Model Comparison for Synthesis
The research question: take one substantive question from your own project and put it, unchanged, to every configured model.

14.2 Cost-Optimized Literature Processing
50-paper systematic review, phased by cost: Phase 1: Screening (DeepSeek, about $0.50 total); Phase 2: Synthesis (Gemini free tier); Phase 3: Final arguments (Claude).

14.3 Privacy-Sensitive Research
Use local models via Ollama. Install Ollama (on Linux: `curl -fsSL https://ollama.com/install.sh | sh`; macOS and Windows installers are on ollama.com), pull a model, and select it as a provider. Benefits:
- 100% offline processing
- No data sent to cloud
- Perfect for sensitive research (medical, proprietary, etc.)
- Unlimited free usage
Step 15: Advanced Features
15.1 Multi-Session Workflows
Run parallel research projects by launching OpenCode in a separate terminal for each project (e.g., Terminal 1: Literature Review).

15.2 Session Sharing
Export for reproducibility:
- Share with advisor for feedback
- Document methodology for papers
- Teach workflow to lab members
- Reproduce analysis later
15.3 LSP-Powered Code Analysis
For computational research, the LSP features from Step 8 apply directly: jump to definitions, trace references, and refactor analysis scripts safely.

Step 16: Integration Strategies
16.1 The Three-CLI Strategy
Use each CLI for its strength:

Morning (Gemini CLI):
- Load 50 papers (1M context)
- Comprehensive first-pass synthesis
- FREE tier lasts all day
Afternoon (OpenCode):
- Compare synthesis across models
- Test DeepSeek vs Claude vs GPT
- Optimize for cost-quality
Evening (Claude Code):
- Final theoretical refinement
- Premium reasoning for arguments
- Prepare for publication
16.2 From GUI to CLI
Cherry Studio → OpenCode:
- Develop workflow in Cherry Studio (visual, easy)
- Extract prompts that work well
- Automate in OpenCode (scriptable, faster)
- Use Cherry Studio for exploration, OpenCode for production
Step 17: Learning Resources
17.1 Official Documentation
17.2 Community
- GitHub Discussions: Ask questions, share workflows
- Issues: Report bugs, request features
- Contributors: 188+ active contributors
- Monthly users: 200,000+ developers
17.3 Tutorials & Guides
Checklist
By the end of this guide, you should have:
- Installed OpenCode via npm
- Configured at least 2 AI providers (e.g., Google + Claude)
- Launched the TUI interface
- Created a research project folder
- Configured project settings (.opencode/config.json)
- Tested file references with @ syntax
- Compared same task across 2 different models
- Created a multi-session workflow
- Understood cost optimization strategies
- Chosen which models to use for different research tasks
Power Move: Use OpenCode to test which AI model best understands YOUR specific research domain. Run the same analysis task through 5 different models, compare outputs, then standardize on the winner. This is research methodology applied to AI tools!
Resources
Provider Setup Guides
- API Keys Setup - Get provider keys
- Google AI Studio - Free Gemini access
- Anthropic Console - Claude keys
- DeepSeek Platform - Budget option
Other CLI Tools
- Claude Code - Premium single-provider
- Gemini CLI - Free Google power
- CLI Comparison - Which tool for what?
Research Memex Workflow
- Cherry Studio - GUI alternative
- Session 4: Agentic Workflows - Concepts
- SLR Workflow - Complete example
- Failure Museum - Quality control