Agentic AI Tools
OpenCode Setup Guide
Set up OpenCode, a community-driven open-source CLI with support for 75+ AI models, beautiful TUI, and multi-session workflows for research flexibility.
Overview
OpenCode is a community-driven, fully open-source AI coding agent with a beautiful terminal UI (TUI) and support for 75+ AI model providers. Trusted by 200,000+ developers monthly, it's the Swiss Army knife of CLI AI tools—giving you complete flexibility to experiment with any model without vendor lock-in.
Key Benefits for Research:
- 75+ LLM providers: Claude, GPT, Gemini, DeepSeek, Kimi, local models, and more
- Model experimentation: Compare reasoning quality across providers
- Cost optimization: Switch to cheaper models for routine tasks
- Beautiful TUI: Visual, intuitive terminal interface
- Multi-session: Work on multiple research projects simultaneously
- LSP support: Advanced code intelligence and navigation
- Open source: MIT license, community-driven (95K+ GitHub stars)
Info
Why OpenCode? Model independence is crucial for research. Compare GPT-5.4 vs Claude vs Gemini vs DeepSeek on the SAME task to understand which models excel at different analysis types. OpenCode makes this trivial!
Official Resources:
- Official Website
- GitHub Repository (95K+ stars)
- Documentation
Step 1: Installation
1.1 Prerequisites
System Requirements:
- Node.js 18 or newer
- Terminal access
- API keys for providers you want to use
Check Node.js:
```bash
node --version
```
1.2 Install OpenCode
Via npm:
```bash
npm install -g opencode
```
Verify installation:
```bash
opencode --version
```
First launch:
```bash
cd ~/your-research-project
opencode
```
You'll see the beautiful TUI interface!
Tip
TUI Experience: Unlike plain CLI tools, OpenCode shows a visual interface with panels, menus, and interactive elements. It's still in the terminal, but much more user-friendly!
Step 2: Configure AI Providers
2.1 Multiple Provider Support
OpenCode can connect to 75+ providers! Configure the ones you need:
Common providers for research:
- Anthropic (Claude) - Best reasoning
- OpenAI (GPT) - Broad capability
- Google (Gemini) - Large context
- DeepSeek - Cost-effective
- Kimi K2.5 - Strong analysis
- Local models (via Ollama) - Privacy
2.2 Add Your First Provider
Launch OpenCode:
```bash
opencode
```
In the TUI:
- Navigate to Settings (usually the `s` key)
- Select Providers
- Select Add Provider
- Choose a provider from the list (e.g., "Anthropic")
- Enter your API key
- Select a default model (e.g., "claude-sonnet-4-6")
- Save and test
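Provider keys can also be supplied through environment variables before launching the TUI. The variable names below are assumptions (confirm the exact names in each provider's documentation); this sketch just reports which keys your shell currently exposes:

```shell
# Report whether common provider key variables are set in this shell.
# The variable names are assumptions -- confirm them in your provider docs.
check_key() {
  if printenv "$1" >/dev/null; then
    echo "$1: set"
  else
    echo "$1: missing"
  fi
}

check_key ANTHROPIC_API_KEY
check_key GOOGLE_API_KEY
check_key DEEPSEEK_API_KEY
```

Running this before `opencode` makes it obvious which providers will fail to authenticate.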
2.3 Configure Multiple Providers
Add multiple for comparison:
1. Google (Gemini): free tier, massive context
   - Provider: Google AI
   - API Key: from Google AI Studio
   - Model: `gemini-3.1-pro`
2. Anthropic (Claude): premium reasoning
   - Provider: Anthropic
   - API Key: from Anthropic Console
   - Model: `claude-sonnet-4-6`
3. DeepSeek: budget-friendly
   - Provider: DeepSeek
   - API Key: from DeepSeek Platform
   - Model: `deepseek-chat`
Info
Multi-Provider Strategy: Use DeepSeek for initial screening (cheap), Gemini for comprehensive synthesis (free + 1M context), Claude for final theoretical arguments (quality). OpenCode makes switching seamless!
Step 3: Using the TUI Interface
3.1 Navigation
Main panels:
- Chat Area: Conversation with AI
- File Explorer: Browse project files
- Provider Selector: Switch models on-the-fly
- Settings: Configure preferences
Keyboard shortcuts:
- Tab: switch between panels
- Enter: select/activate
- Esc: go back
- ?: help menu
- q: quit
3.2 Model Switching
Compare models in real-time:
- Ask same question to different models
- Switch provider mid-conversation
- Compare reasoning approaches
- Document which models excel at what
Example workflow:
Question to GPT-5.4: "Identify themes in these 10 papers"
→ Switch to Claude
Same question to Claude: "Identify themes in these 10 papers"
→ Compare quality of thematic analysis

Step 4: Project Setup for Research
4.1 Create Research Project
Folder structure:
```text
/Research-OpenCode/
├── .opencode/
│   └── config.json    # Project-specific settings
├── literature/        # Papers (markdown)
├── analysis/          # Data, results
├── notes/             # Research notes
└── drafts/            # Paper sections
```
4.2 Project Configuration
Create .opencode/config.json:
```json
{
  "project": "Literature Review AI Education",
  "defaultProvider": "google",
  "defaultModel": "gemini-3.1-pro",
  "context": {
    "alwaysInclude": ["@README.md"],
    "folders": ["literature", "analysis"]
  },
  "preferences": {
    "citationStyle": "APA 7th",
    "outputFormat": "markdown"
  }
}
```
This configures OpenCode defaults for your research project!
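A malformed config file is a common reason a tool silently falls back to defaults, so it is worth confirming the file parses before launching. A minimal check using Python's standard-library JSON tool (any JSON validator works):

```shell
# Validate .opencode/config.json; json.tool exits non-zero on a syntax error.
mkdir -p .opencode
[ -f .opencode/config.json ] || echo '{}' > .opencode/config.json  # placeholder if absent
if python3 -m json.tool .opencode/config.json >/dev/null 2>&1; then
  echo "config.json: valid JSON"
else
  echo "config.json: invalid JSON"
fi
```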
Step 5: File Management
5.1 File Explorer Panel
Navigate visually:
- Browse folders in TUI
- Click files to add to context
- Multi-select for batch operations
- See file previews
5.2 Reference Files with @
Command syntax:
```text
"Summarize @/literature/smith2024.md"
"Compare @/literature/smith2024.md and @/literature/jones2024.md"
"Analyze all files in @/literature/ for common methodologies"
```
5.3 Multi-File Operations
Batch analysis:
```text
"For each file in @/literature/:
1. Extract research question
2. Extract methodology
3. Extract key findings
4. Save to @/analysis/extraction_table.csv"
```
Step 6: Multi-Session Workflows
6.1 Work on Multiple Projects
OpenCode's unique feature: parallel sessions
Session 1: Literature Analysis
```bash
# Terminal 1
cd ~/Literature-Review
opencode --session "lit-analysis"
```
Session 2: Data Analysis
```bash
# Terminal 2
cd ~/Data-Analysis
opencode --session "data-work"
```
Benefits:
- Context separated by project
- No crosstalk between tasks
- Resume any session anytime
6.2 Shareable Session Links
Collaborate with advisors:
- Complete analysis in OpenCode
- Generate shareable link
- Send to advisor
- They can review your exact prompts and outputs
Great for:
- Reproducibility
- Feedback from supervisors
- Teaching others your workflow
Step 7: Model Comparison Workflows
7.1 The Research Question Test
Test which model reasons best:
Round 1 (GPT-5.4):
```text
Provider: OpenAI
"Based on these 10 papers in @/literature/, what are the
three most significant research gaps?"
```
Round 2 (Claude Sonnet):
```text
Provider: Anthropic
[Same question]
```
Round 3 (Gemini 3.1 Pro):
```text
Provider: Google
[Same question]
```
Document findings:
- Which model identified the deepest gaps?
- Which had the most creative insights?
- Which was most thorough?
7.2 Cost-Quality Optimization
Workflow strategy:
- DeepSeek (cheap): Initial screening, data extraction
- Gemini (free + powerful): Comprehensive synthesis
- Claude (premium): Final theoretical arguments
Track costs:
- DeepSeek: $0.14 per 1M tokens
- Gemini: FREE (1K req/day)
- Claude: $3 per 1M tokens
OpenCode lets you optimize spending while maintaining quality!
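A quick way to sanity-check these tradeoffs is to turn the per-million-token rates above into per-task estimates. A small awk helper (the token counts are illustrative):

```shell
# cost TOKENS RATE_PER_MILLION -> estimated dollars for that call volume
cost() {
  awk -v tok="$1" -v rate="$2" 'BEGIN { printf "$%.2f\n", tok / 1000000 * rate }'
}

cost 2000000 0.14   # 2M tokens via DeepSeek -> $0.28
cost 2000000 3.00   # same volume via Claude -> $6.00
```

Running the same volume through both rates makes the screening-versus-theory split above concrete.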
Step 8: LSP Features for Research
8.1 Code Intelligence for Data Analysis
If you're analyzing R/Python code:
- LSP provides autocomplete
- Function signatures
- Documentation lookup
- Error detection
Example:
```text
"Review my R script @/analysis/meta_analysis.R:
- Check statistical methodology
- Suggest improvements
- Verify effect size calculations"
```
8.2 Navigate Large Codebases
For computational research:
- Jump to definitions
- Find all references
- Understand dependencies
- Refactor safely
Step 9: Integration with Research Memex
9.1 Zotero → OpenCode Pipeline
Workflow:
- Export papers from Zotero
- Convert via OCR Guide
- Load into OpenCode
- Choose model based on task:
- Gemini: Massive corpus analysis
- Claude: Deep theoretical work
- DeepSeek: High-volume screening
9.2 OpenCode → Obsidian
Export insights:
- Complete analysis in OpenCode
- Save markdown outputs
- Import to Obsidian
- Link with literature notes
9.3 OpenCode → Zettlr
Paper writing:
- Generate synthesis with best-fit model
- Export to Zettlr
- Add @citekeys
- Export to Word/PDF
Step 10: Best Practices
10.1 Model Selection Strategy
Choose models by task type:
| Task | Recommended Model | Why |
|---|---|---|
| Screening abstracts | DeepSeek | Cheap, fast, sufficient |
| Thematic analysis | Gemini 3.1 Pro | Free, large context |
| Theory building | Claude Sonnet | Best reasoning |
| Data extraction | Kimi K2.5 | Structured output |
| Gap identification | Claude or GPT-5.4 | Creative insights |
10.2 Session Management
Organize by research phase:
- Session: "screening" → Use DeepSeek
- Session: "synthesis" → Use Gemini
- Session: "writing" → Use Claude
Benefits:
- Clear separation of work
- Easy to resume
- Context doesn't bleed between phases
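If you script your session launches, the phase-to-model mapping can live in a tiny helper so each session starts with the intended provider. The mapping below simply encodes this guide's suggested strategy:

```shell
# Return the provider this guide suggests for a given research phase.
provider_for() {
  case "$1" in
    screening) echo "deepseek" ;;
    synthesis) echo "google" ;;
    writing)   echo "anthropic" ;;
    *)         echo "unknown" ;;
  esac
}

# e.g. opencode --session "screening" --provider "$(provider_for screening)"
provider_for screening   # -> deepseek
```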
10.3 Cost Tracking
Monitor spending:
- Check provider dashboards regularly
- Log which models you use for each task
- Calculate cost per paper analyzed
- Optimize based on results
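A plain CSV file is enough for this kind of logging. For example (the file name and field layout are arbitrary choices):

```shell
# Append one usage record per task: date, provider, task, approximate tokens.
log_usage() {
  echo "$(date +%F),$1,$2,$3" >> cost_log.csv
}

log_usage deepseek screening 120000
log_usage anthropic theory 45000
```

The resulting file can be opened in any spreadsheet to compute cost per paper analyzed.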
Step 11: Advanced Features
11.1 Custom Provider Configuration
Add niche providers:
- Together AI
- Perplexity
- Groq (ultra-fast)
- Replicate (specialized models)
- Ollama (local privacy)
Config example:
```json
{
  "providers": {
    "custom": {
      "baseURL": "https://api.example.com/v1",
      "apiKey": "$CUSTOM_API_KEY",
      "models": ["custom-research-model"]
    }
  }
}
```
11.2 Batch Processing
Process multiple papers:
```text
"For each paper in @/literature/ (47 total):
1. Run /summarize command
2. Save output to @/summaries/[filename]_summary.md
3. Track progress with todo list
4. Report when complete"
```
11.3 Workflow Automation
Create reusable scripts:
```bash
#!/bin/bash
# analyze-batch.sh -- summarize every paper with a cheap provider
for file in literature/*.md; do
  opencode --provider deepseek \
    --prompt "/summarize @/$file" \
    --output "summaries/$(basename "$file")"
done
```
Step 12: Troubleshooting
Installation Issues
"opencode: command not found"
- Reinstall: `npm install -g opencode`
- Check the npm global path: `npm config get prefix` (ensure its `bin` directory is on your PATH)
- Restart your terminal

"Failed to install"
- Clear the npm cache: `npm cache clean --force`
- Try with sudo: `sudo npm install -g opencode`
Provider Connection Errors
"Unauthorized" or "API error"
- Verify API key is correct
- Check provider has credits
- Test with provider's web interface first
- Ensure no firewall blocking
TUI Display Issues
"Interface looks broken"
- Use modern terminal (iTerm2, Windows Terminal, Alacritty)
- Avoid old terminals (cmd.exe)
- Check terminal size (minimum 80x24)
- Enable UTF-8 support
Model Not Responding
"Request timeout" or "No response"
- Check internet connection
- Verify provider API status
- Try different model from same provider
- Switch to alternative provider
Step 13: OpenCode vs Other CLIs
Comparison Matrix
| Feature | OpenCode | Claude Code | Gemini CLI |
|---|---|---|---|
| Providers | 75+ | Claude only | Gemini only |
| Cost | Pay per provider | $20/mo | FREE |
| Interface | Beautiful TUI | Standard CLI | Standard CLI |
| Stars | 95K+ | 8.6K (org) | 8.6K |
| Open Source | ✅ Yes (MIT) | ❌ No | ✅ Yes (Apache 2.0) |
| Context | Varies | 200K | 1M |
| Best For | Flexibility | Quality | Free power |
When to Use OpenCode
Choose OpenCode when:
- ✅ You want to experiment with multiple models
- ✅ Need to optimize costs (use cheap models for routine tasks)
- ✅ Prefer visual TUI over plain text
- ✅ Want open-source flexibility
- ✅ Need to compare model reasoning quality
Use Claude Code when:
- Premium reasoning quality is paramount
- Budget allows for subscription
- Want enterprise support
Use Gemini CLI when:
- Need massive 1M context window
- Want completely free solution
- Comfortable with Google ecosystem
Use ALL THREE when:
- Teaching/learning different tools
- Optimizing workflow (right tool for right task)
- Cost-quality tradeoff matters
Step 14: Research Workflows with OpenCode
14.1 Model Comparison for Synthesis
The research question:
```text
"What theoretical frameworks dominate the organizational
scaling literature?"
```
Test with 3 models:
```text
# Switch to DeepSeek (cheap)
[Ask question]
→ Note: Fast, decent, misses some nuances

# Switch to Gemini 3.1 Pro (free)
[Ask question]
→ Note: Comprehensive, good coverage, solid reasoning

# Switch to Claude Sonnet (premium)
[Ask question]
→ Note: Deepest insights, best theoretical connections
```
Document which model is best for your specific research domain!
14.2 Cost-Optimized Literature Processing
50-paper systematic review:
Phase 1: Screening (DeepSeek - $0.50 total)
```text
Provider: DeepSeek
"Screen all 200 abstracts in @/screening/abstracts.txt
against inclusion criteria. Save to screening_results.csv"
```
Phase 2: Extraction (Kimi K2.5 - $2.00 total)
```text
Provider: Kimi K2.5
"Extract data from 50 included papers using /extract command.
Compile to master extraction table."
```
Phase 3: Synthesis (Gemini - FREE!)
```text
Provider: Google Gemini
"Analyze all 50 extracted papers. Identify themes, frameworks, gaps."
```
Phase 4: Theory Building (Claude - $5.00)
```text
Provider: Anthropic Claude
"Develop novel theoretical framework integrating findings.
Propose testable propositions."
```
Total cost: ~$7.50 vs $50+ with a single premium provider!
14.3 Privacy-Sensitive Research
Use local models via Ollama:
Install Ollama:
```bash
brew install ollama   # macOS
# or download from ollama.com
```
Pull a model:
```bash
ollama pull llama3.3:70b
```
Configure in OpenCode:
```text
Provider: Ollama
Model: llama3.3:70b
Base URL: http://localhost:11434
```
Benefits:
- 100% offline processing
- No data sent to cloud
- Perfect for sensitive research (medical, proprietary, etc.)
- Unlimited free usage
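Before pointing OpenCode at the base URL above, you can confirm the local server is actually listening; `/api/tags` is Ollama's endpoint for listing pulled models, so it doubles as a cheap health check:

```shell
# Probe the default Ollama port; curl -sf fails fast if nothing is listening.
ollama_up() {
  curl -sf http://localhost:11434/api/tags >/dev/null 2>&1
}

if ollama_up; then
  echo "Ollama: running"
else
  echo "Ollama: not reachable (start it with 'ollama serve')"
fi
```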
Step 15: Advanced Features
15.1 Multi-Session Workflows
Run parallel research projects:
Terminal 1: Literature Review
```bash
opencode --session "lit-review" --provider google
```
Terminal 2: Data Analysis
```bash
opencode --session "data-analysis" --provider claude
```
Terminal 3: Theory Building
```bash
opencode --session "theory" --provider claude
```
Each maintains separate context and history!
15.2 Session Sharing
Export for reproducibility:
```text
In OpenCode → Settings → Export Session
→ Generates shareable link or JSON
→ Colleagues can import and see exact workflow
```
Use cases:
- Share with advisor for feedback
- Document methodology for papers
- Teach workflow to lab members
- Reproduce analysis later
15.3 LSP-Powered Code Analysis
For computational research:
```text
"Review my Python script @/analysis/regression.py:
- Check statistical correctness
- Suggest improvements
- Explain complex sections
- Add documentation"
```
LSP provides context-aware suggestions!
Step 16: Integration Strategies
16.1 The Three-CLI Strategy
Use each CLI for its strength:
Morning (Gemini CLI):
- Load 50 papers (1M context)
- Comprehensive first-pass synthesis
- FREE tier lasts all day
Afternoon (OpenCode):
- Compare synthesis across models
- Test DeepSeek vs Claude vs GPT
- Optimize for cost-quality
Evening (Claude Code):
- Final theoretical refinement
- Premium reasoning for arguments
- Prepare for publication
16.2 From GUI to CLI
Cherry Studio → OpenCode:
- Develop workflow in Cherry Studio (visual, easy)
- Extract prompts that work well
- Automate in OpenCode (scriptable, faster)
- Use Cherry Studio for exploration, OpenCode for production
Step 17: Learning Resources
17.2 Community
- GitHub Discussions: Ask questions, share workflows
- Issues: Report bugs, request features
- Contributors: 188+ active contributors
- Monthly users: 200,000+ developers
Checklist
By the end of this guide, you should have:
- Installed OpenCode via npm
- Configured at least 2 AI providers (e.g., Google + Claude)
- Launched the TUI interface
- Created a research project folder
- Configured project settings (.opencode/config.json)
- Tested file references with @ syntax
- Compared same task across 2 different models
- Created a multi-session workflow
- Understood cost optimization strategies
- Chosen which models to use for different research tasks
Tip
Power Move: Use OpenCode to test which AI model best understands YOUR specific research domain. Run the same analysis task through 5 different models, compare outputs, then standardize on the winner. This is research methodology applied to AI tools!
What's Next
OpenCode's strength is the provider-comparison story. The natural next step is the CLI Setup Guide, which compares OpenCode against Claude Code and Gemini CLI so you can choose deliberately.