Overview

OpenCode is a community-driven, fully open-source AI coding agent with a beautiful terminal UI (TUI) and support for 75+ AI model providers. Trusted by 200,000+ developers monthly, it's the Swiss Army knife of CLI AI tools, giving you complete flexibility to experiment with any model without vendor lock-in.
Key Benefits for Research:
  • 75+ LLM providers: Claude, GPT, Gemini, DeepSeek, Kimi, local models, and more
  • Model experimentation: Compare reasoning quality across providers
  • Cost optimization: Switch to cheaper models for routine tasks
  • Beautiful TUI: Visual, intuitive terminal interface
  • Multi-session: Work on multiple research projects simultaneously
  • LSP support: Advanced code intelligence and navigation
  • Open source: MIT license, community-driven (27K+ GitHub stars)
Why OpenCode? Model independence is crucial for research. Compare GPT-5 vs Claude vs Gemini vs DeepSeek on the SAME task to understand which models excel at different analysis types. OpenCode makes this trivial!

Step 1: Installation

1.1 Prerequisites

System Requirements:
  • Node.js 18 or newer
  • Terminal access
  • API keys for providers you want to use
Check Node.js:
node --version
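If you want more than a visual check, a small guard script (an illustrative sketch, not part of OpenCode) can compare the major version against the minimum:

```shell
# Extract the major version from `node --version` (e.g. v20.11.1 -> 20)
# and compare it against the Node.js 18 minimum.
major=$(node --version 2>/dev/null | sed 's/^v//; s/\..*//')
if [ "${major:-0}" -ge 18 ] 2>/dev/null; then
  echo "Node.js OK (v$major)"
else
  echo "Node.js 18+ required"
fi
```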

1.2 Install OpenCode

Via npm:
npm install -g opencode-ai
Verify installation:
opencode --version
First launch:
cd ~/your-research-project
opencode
You’ll see the beautiful TUI interface!
TUI Experience: Unlike plain CLI tools, OpenCode shows a visual interface with panels, menus, and interactive elements. It’s still in the terminal, but much more user-friendly!

Step 2: Configure AI Providers

2.1 Multiple Provider Support

OpenCode can connect to 75+ providers; configure only the ones you need.
Common providers for research:
  • Anthropic (Claude) - Best reasoning
  • OpenAI (GPT) - Broad capability
  • Google (Gemini) - Large context
  • DeepSeek - Cost-effective
  • Kimi K2 - Strong analysis
  • Local models (via Ollama) - Privacy

2.2 Add Your First Provider

Launch OpenCode:
opencode
In the TUI:
  1. Navigate to Settings (usually s key)
  2. Select Providers
  3. Select Add Provider
  4. Choose from list (e.g., “Anthropic”)
  5. Enter API key
  6. Select default model (e.g., “claude-sonnet-4.5”)
  7. Save and test
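Many CLI agents, OpenCode included, can also pick up credentials from environment variables instead of the settings screen. The variable names below follow each vendor's common convention; verify the exact names OpenCode expects against its provider docs:

```shell
# Provider API keys as environment variables (names follow each vendor's
# convention; the key values here are placeholders, not real keys).
export ANTHROPIC_API_KEY="sk-ant-..."   # Anthropic (Claude)
export OPENAI_API_KEY="sk-..."          # OpenAI (GPT)
export GEMINI_API_KEY="..."             # Google (Gemini)
export DEEPSEEK_API_KEY="..."           # DeepSeek
```

Exported variables survive across sessions if placed in your shell profile (e.g. ~/.bashrc).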

2.3 Configure Multiple Providers

Add multiple for comparison:
  1. Google (Gemini): Free tier, massive context
  2. Anthropic (Claude): Premium reasoning
  3. DeepSeek: Budget-friendly
Multi-Provider Strategy: Use DeepSeek for initial screening (cheap), Gemini for comprehensive synthesis (free + 1M context), Claude for final theoretical arguments (quality). OpenCode makes switching seamless!

Step 3: Using the TUI Interface

3.1 Navigation

Main panels:
  • Chat Area: Conversation with AI
  • File Explorer: Browse project files
  • Provider Selector: Switch models on-the-fly
  • Settings: Configure preferences
Keyboard shortcuts:
  • Tab: Switch between panels
  • Enter: Select/activate
  • Esc: Go back
  • ?: Help menu
  • q: Quit

3.2 Model Switching

Compare models in real-time:
  1. Ask same question to different models
  2. Switch provider mid-conversation
  3. Compare reasoning approaches
  4. Document which models excel at what
Example workflow:
Question to GPT-5: "Identify themes in these 10 papers"
→ Switch to Claude
Same question to Claude: "Identify themes in these 10 papers"
→ Compare quality of thematic analysis

Step 4: Project Setup for Research

4.1 Create Research Project

Folder structure:
/Research-OpenCode/
├── .opencode/
│   └── config.json       # Project-specific settings
├── literature/            # Papers (markdown)
├── analysis/              # Data, results
├── notes/                 # Research notes
└── drafts/                # Paper sections
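The layout above can be scaffolded in one command (a sketch; rename the folders to fit your project):

```shell
# Create the research project skeleton shown above.
mkdir -p Research-OpenCode/.opencode \
         Research-OpenCode/literature \
         Research-OpenCode/analysis \
         Research-OpenCode/notes \
         Research-OpenCode/drafts
# Placeholder for the project config written in the next step.
touch Research-OpenCode/.opencode/config.json
```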

4.2 Project Configuration

Create .opencode/config.json:
{
  "project": "Literature Review AI Education",
  "defaultProvider": "google",
  "defaultModel": "gemini-2.5-pro",
  "context": {
    "alwaysInclude": ["@README.md"],
    "folders": ["literature", "analysis"]
  },
  "preferences": {
    "citationStyle": "APA 7th",
    "outputFormat": "markdown"
  }
}
This configures OpenCode defaults for your research project!

Step 5: File Management

5.1 File Explorer Panel

Navigate visually:
  • Browse folders in TUI
  • Click files to add to context
  • Multi-select for batch operations
  • See file previews

5.2 Reference Files with @

Command syntax:
"Summarize @/literature/smith2024.md"

"Compare @/literature/smith2024.md and @/literature/jones2024.md"

"Analyze all files in @/literature/ for common methodologies"

5.3 Multi-File Operations

Batch analysis:
"For each file in @/literature/:
1. Extract research question
2. Extract methodology
3. Extract key findings
4. Save to @/analysis/extraction_table.csv"

Step 6: Multi-Session Workflows

6.1 Work on Multiple Projects

OpenCode's unique feature: parallel sessions.
Session 1: Literature Analysis
# Terminal 1
cd ~/Literature-Review
opencode --session "lit-analysis"
Session 2: Data Analysis
# Terminal 2
cd ~/Data-Analysis
opencode --session "data-work"
Benefits:
  • Context separated by project
  • No crosstalk between tasks
  • Resume any session anytime
Collaborate with advisors:
  1. Complete analysis in OpenCode
  2. Generate shareable link
  3. Send to advisor
  4. They can review your exact prompts and outputs
Great for:
  • Reproducibility
  • Feedback from supervisors
  • Teaching others your workflow

Step 7: Model Comparison Workflows

7.1 The Research Question Test

Test which model reasons best.
Round 1 (GPT-5):
Provider: OpenAI
"Based on these 10 papers in @/literature/, what are the
three most significant research gaps?"
Round 2 (Claude Sonnet):
Provider: Anthropic
[Same question]
Round 3 (Gemini 2.5 Pro):
Provider: Google
[Same question]
Document findings:
  • Which model identified the deepest gaps?
  • Which had the most creative insights?
  • Which was most thorough?

7.2 Cost-Quality Optimization

Workflow strategy:
  1. DeepSeek (cheap): Initial screening, data extraction
  2. Gemini (free + powerful): Comprehensive synthesis
  3. Claude (premium): Final theoretical arguments
Track costs:
  • DeepSeek: $0.14 per 1M tokens
  • Gemini: FREE (1K req/day)
  • Claude: $3 per 1M tokens
OpenCode lets you optimize spending while maintaining quality!
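The rates above translate into concrete numbers. A quick awk sketch, using the per-1M-token prices quoted in the list (treating them as input-token rates):

```shell
# Estimate the cost of a 2M-token screening pass at each rate above.
tokens_millions=2
awk -v m="$tokens_millions" 'BEGIN {
  printf "DeepSeek: $%.2f\n", m * 0.14   # -> DeepSeek: $0.28
  printf "Claude:   $%.2f\n", m * 3.00   # -> Claude:   $6.00
}'
```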

Step 8: LSP Features for Research

8.1 Code Intelligence for Data Analysis

If you’re analyzing R/Python code:
  • LSP provides autocomplete
  • Function signatures
  • Documentation lookup
  • Error detection
Example:
"Review my R script @/analysis/meta_analysis.R:
- Check statistical methodology
- Suggest improvements
- Verify effect size calculations"

8.2 Navigate Large Codebases

For computational research:
  • Jump to definitions
  • Find all references
  • Understand dependencies
  • Refactor safely

Step 9: Integration with Research Memex

9.1 Zotero → OpenCode Pipeline

Workflow:
  1. Export papers from Zotero
  2. Convert via OCR Guide
  3. Load into OpenCode
  4. Choose model based on task:
    • Gemini: Massive corpus analysis
    • Claude: Deep theoretical work
    • DeepSeek: High-volume screening

9.2 OpenCode → Obsidian

Export insights:
  1. Complete analysis in OpenCode
  2. Save markdown outputs
  3. Import to Obsidian
  4. Link with literature notes

9.3 OpenCode → Zettlr

Paper writing:
  1. Generate synthesis with best-fit model
  2. Export to Zettlr
  3. Add @citekeys
  4. Export to Word/PDF

Step 10: Best Practices

10.1 Model Selection Strategy

Choose models by task type:
| Task | Recommended Model | Why |
| --- | --- | --- |
| Screening abstracts | DeepSeek | Cheap, fast, sufficient |
| Thematic analysis | Gemini 2.5 Pro | Free, large context |
| Theory building | Claude Sonnet | Best reasoning |
| Data extraction | Kimi K2 | Structured output |
| Gap identification | Claude or GPT-5 | Creative insights |

10.2 Session Management

Organize by research phase:
  • Session: “screening” → Use DeepSeek
  • Session: “synthesis” → Use Gemini
  • Session: “writing” → Use Claude
Benefits:
  • Clear separation of work
  • Easy to resume
  • Context doesn’t bleed between phases

10.3 Cost Tracking

Monitor spending:
  • Check provider dashboards regularly
  • Log which models you use for each task
  • Calculate cost per paper analyzed
  • Optimize based on results
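A minimal usage log makes "cost per paper" a one-liner later. The CSV format and the sample rows below are illustrative, not an OpenCode feature:

```shell
# Append one line per task to a usage log, then total spend per model.
log=usage_log.csv
echo "date,model,task,papers,est_cost_usd" > "$log"
echo "2025-01-15,deepseek,screening,200,0.50" >> "$log"   # sample row
echo "2025-01-16,claude,theory,5,5.00" >> "$log"          # sample row
awk -F, 'NR>1 {cost[$2]+=$5} END {for (m in cost) printf "%s: $%.2f\n", m, cost[m]}' "$log"
```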

Step 11: Advanced Features

11.1 Custom Provider Configuration

Add niche providers:
  • Together AI
  • Perplexity
  • Groq (ultra-fast)
  • Replicate (specialized models)
  • Ollama (local privacy)
Config example:
{
  "providers": {
    "custom": {
      "baseURL": "https://api.example.com/v1",
      "apiKey": "$CUSTOM_API_KEY",
      "models": ["custom-research-model"]
    }
  }
}

11.2 Batch Processing

Process multiple papers:
"For each paper in @/literature/ (47 total):
1. Run /summarize command
2. Save output to @/summaries/[filename]_summary.md
3. Track progress with todo list
4. Report when complete"

11.3 Workflow Automation

Create reusable scripts:
#!/bin/bash
# analyze-batch.sh: summarize every paper in literature/ with a cheap model
for file in literature/*.md; do
  opencode --provider deepseek \
    --prompt "/summarize @/$file" \
    --output "summaries/$(basename "$file")"
done

Step 12: Troubleshooting

Installation Issues

“opencode: command not found”
  • Reinstall: npm install -g opencode-ai
  • Check npm global path: npm config get prefix
  • Restart terminal
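If the package installed but the command still isn't found, npm's global bin directory is usually missing from PATH. This sketch assumes npm's default prefix layout:

```shell
# Locate npm's global bin directory and check whether it is on PATH.
npm_bin="$(npm config get prefix 2>/dev/null || echo /usr/local)/bin"
case ":$PATH:" in
  *":$npm_bin:"*) echo "$npm_bin is already on PATH" ;;
  *)              echo "add $npm_bin to your PATH (e.g. in ~/.bashrc)" ;;
esac
```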
“Failed to install”
  • Clear npm cache: npm cache clean --force
  • As a last resort, try with sudo: sudo npm install -g opencode-ai

Provider Connection Errors

“Unauthorized” or “API error”
  • Verify API key is correct
  • Check provider has credits
  • Test with provider’s web interface first
  • Ensure no firewall blocking

TUI Display Issues

“Interface looks broken”
  • Use modern terminal (iTerm2, Windows Terminal, Alacritty)
  • Avoid old terminals (cmd.exe)
  • Check terminal size (minimum 80x24)
  • Enable UTF-8 support
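You can check the current terminal size from the shell (a sketch; `stty size` needs a real terminal, hence the fallback):

```shell
# Report terminal dimensions; fall back to 24x80 when not attached to a tty.
size=$(stty size 2>/dev/null || echo "24 80")
rows=${size% *}
cols=${size#* }
echo "rows=$rows cols=$cols (the TUI needs at least 24 rows x 80 cols)"
```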

Model Not Responding

“Request timeout” or “No response”
  • Check internet connection
  • Verify provider API status
  • Try different model from same provider
  • Switch to alternative provider

Step 13: OpenCode vs Other CLIs

Comparison Matrix

| Feature | OpenCode | Claude Code | Gemini CLI |
| --- | --- | --- | --- |
| Providers | 75+ | Claude only | Gemini only |
| Cost | Pay per provider | $20/mo | FREE |
| Interface | Beautiful TUI | Standard CLI | Standard CLI |
| Stars | 27K | 8.6K (org) | 8.6K |
| Open Source | ✅ Yes (MIT) | ❌ No | ✅ Yes (Apache 2.0) |
| Context | Varies | 200K | 1M |
| Best For | Flexibility | Quality | Free power |

When to Use OpenCode

Choose OpenCode when:
  • ✅ You want to experiment with multiple models
  • ✅ Need to optimize costs (use cheap models for routine tasks)
  • ✅ Prefer visual TUI over plain text
  • ✅ Want open-source flexibility
  • ✅ Need to compare model reasoning quality
Use Claude Code when:
  • Premium reasoning quality is paramount
  • Budget allows for subscription
  • Want enterprise support
Use Gemini CLI when:
  • Need massive 1M context window
  • Want completely free solution
  • Comfortable with Google ecosystem
Use ALL THREE when:
  • Teaching/learning different tools
  • Optimizing workflow (right tool for right task)
  • Cost-quality tradeoff matters

Step 14: Research Workflows with OpenCode

14.1 Model Comparison for Synthesis

The research question:
"What theoretical frameworks dominate the organizational
scaling literature?"
Test with 3 models:
# Switch to DeepSeek (cheap)
[Ask question]
→ Note: Fast, decent, misses some nuances

# Switch to Gemini 2.5 Pro (free)
[Ask question]
→ Note: Comprehensive, good coverage, solid reasoning

# Switch to Claude Sonnet (premium)
[Ask question]
→ Note: Deepest insights, best theoretical connections
Document which model is best for your specific research domain!

14.2 Cost-Optimized Literature Processing

50-paper systematic review:
Phase 1: Screening (DeepSeek - $0.50 total)
Provider: DeepSeek
"Screen all 200 abstracts in @/screening/abstracts.txt
against inclusion criteria. Save to screening_results.csv"
Phase 2: Extraction (Kimi K2 - $2.00 total)
Provider: Kimi K2
"Extract data from 50 included papers using /extract command.
Compile to master extraction table."
Phase 3: Synthesis (Gemini - FREE!)
Provider: Google Gemini
"Analyze all 50 extracted papers. Identify themes, frameworks, gaps."
Phase 4: Theory Building (Claude - $5.00)
Provider: Anthropic Claude
"Develop novel theoretical framework integrating findings.
Propose testable propositions."
Total cost: ~$7.50, vs. $50+ with a single premium provider!
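The total is just the sum of the per-phase estimates above (Gemini's phase contributing $0):

```shell
# Sum the per-phase cost estimates from the workflow above (USD).
printf '%s\n' 0.50 2.00 0.00 5.00 |
awk '{s += $1} END {printf "total: $%.2f\n", s}'   # -> total: $7.50
```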

14.3 Privacy-Sensitive Research

Use local models via Ollama.
Install Ollama:
brew install ollama  # macOS
# or download from ollama.com
Pull a model:
ollama pull llama3.3:70b
Configure in OpenCode:
Provider: Ollama
Model: llama3.3:70b
Base URL: http://localhost:11434
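Before pointing OpenCode at that base URL, it's worth confirming the server is actually up; `/api/tags` is Ollama's model-listing endpoint, so a successful probe also means at least the API is responding:

```shell
# Probe the local Ollama server before configuring it as a provider.
if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama is running on localhost:11434"
else
  echo "Ollama is not reachable on localhost:11434 (start it with: ollama serve)"
fi
```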
Benefits:
  • 100% offline processing
  • No data sent to cloud
  • Perfect for sensitive research (medical, proprietary, etc.)
  • Unlimited free usage

Step 15: Advanced Features

15.1 Multi-Session Workflows

Run parallel research projects, one session per terminal.
Terminal 1: Literature Review
opencode --session "lit-review" --provider google
Terminal 2: Data Analysis
opencode --session "data-analysis" --provider claude
Terminal 3: Theory Building
opencode --session "theory" --provider claude
Each maintains separate context and history!

15.2 Session Sharing

Export for reproducibility:
In OpenCode → Settings → Export Session
→ Generates shareable link or JSON
→ Colleagues can import and see exact workflow
Use cases:
  • Share with advisor for feedback
  • Document methodology for papers
  • Teach workflow to lab members
  • Reproduce analysis later

15.3 LSP-Powered Code Analysis

For computational research:
"Review my Python script @/analysis/regression.py:
- Check statistical correctness
- Suggest improvements
- Explain complex sections
- Add documentation"
LSP provides context-aware suggestions!

Step 16: Integration Strategies

16.1 The Three-CLI Strategy

Use each CLI for its strength.
Morning (Gemini CLI):
  • Load 50 papers (1M context)
  • Comprehensive first-pass synthesis
  • FREE tier lasts all day
Afternoon (OpenCode):
  • Compare synthesis across models
  • Test DeepSeek vs Claude vs GPT
  • Optimize for cost-quality
Evening (Claude Code):
  • Final theoretical refinement
  • Premium reasoning for arguments
  • Prepare for publication

16.2 From GUI to CLI

Cherry Studio → OpenCode:
  1. Develop workflow in Cherry Studio (visual, easy)
  2. Extract prompts that work well
  3. Automate in OpenCode (scriptable, faster)
  4. Use Cherry Studio for exploration, OpenCode for production

Step 17: Learning Resources

17.2 Community

  • GitHub Discussions: Ask questions, share workflows
  • Issues: Report bugs, request features
  • Contributors: 188+ active contributors
  • Monthly users: 200,000+ developers

Checklist

By the end of this guide, you should have:
  • Installed OpenCode via npm
  • Configured at least 2 AI providers (e.g., Google + Claude)
  • Launched the TUI interface
  • Created a research project folder
  • Configured project settings (.opencode/config.json)
  • Tested file references with @ syntax
  • Compared same task across 2 different models
  • Created a multi-session workflow
  • Understood cost optimization strategies
  • Chosen which models to use for different research tasks
Power Move: Use OpenCode to test which AI model best understands YOUR specific research domain. Run the same analysis task through 5 different models, compare outputs, then standardize on the winner. This is research methodology applied to AI tools!
