Difficulty: 🟢 Beginner | Time: 20-30 minutes | Prerequisites: API key from any provider

Overview

Cherry Studio is your all-in-one research workspace: a powerful GUI that integrates chat, knowledge bases, MCP servers, AND CLI coding agents in a single interface. It’s the central hub of the Research Memex approach.
Key Features:
  • Multiple AI model access through one interface (100+ models)
  • Knowledge Base for loading your literature corpus
  • MCP servers for enhanced capabilities (Zotero, web search, etc.)
  • Code Tools - Launch CLI agents (Claude Code, Gemini CLI, etc.) from within Cherry Studio
  • Conversation forking for exploring different analytical paths
  • Export to markdown for Obsidian integration
Official Documentation: For complete Cherry Studio features, see Cherry AI Docs | Installation Guide

1. Download and Install Cherry Studio

Visit Cherry Studio GitHub Releases and download the version for your operating system (macOS, Windows, or Linux).
On macOS:
  1. Open the downloaded .dmg file
  2. Drag Cherry Studio to your Applications folder
  3. First launch: Right-click → Open (to bypass the security warning)

2. Initial Configuration

When you first open Cherry Studio, you’ll see:
  • Welcome screen with model selection
  • API configuration section
  • Settings panel
Navigate to settings by clicking the Settings icon (gear icon) in the sidebar, then select API Keys or Model Configuration.
API Keys Explained: Before configuring providers, you may want to understand API keys, free tiers, and provider options. See the API Keys Setup Guide for comprehensive information on getting API access from Google, Anthropic, DeepSeek, and other providers.

3. Configure API Provider

You’ll need an API key from a provider to access AI models. See the API Keys Setup Guide for detailed instructions on getting keys from Google AI Studio (free), OpenRouter, or other providers.
Following the Systematic Review course? Your instructor will provide a shared OpenRouter API key for the class. Skip the API setup guide and use the provided key instead.

3.1 Navigate to API Settings

  1. In Cherry Studio, click the Settings icon (gear icon)
  2. In the settings menu, select API Keys

3.2 Add Your Provider

  1. In the API Keys panel, click the Add Provider button
  2. Select your provider (Google AI Studio, OpenRouter, etc.)

3.3 Enter and Test API Key

  1. A configuration window will appear
  2. Paste your API key into the field (starts with sk- or similar)
  3. Click the Test Connection button - you should see a green “Success” message
  4. Click Save
You’re now ready to use AI models in Cherry Studio!
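
If you ever want to verify a key outside Cherry Studio (for example, when Test Connection fails and you are not sure whether the key or the app is at fault), a few lines of Python are enough. This is a minimal sketch, assuming an OpenRouter key and OpenRouter's OpenAI-compatible chat endpoint; the key placeholder and model id are examples, so substitute your own:

```python
# Minimal sketch: verify an API key outside Cherry Studio.
# Assumes an OpenRouter key and the OpenAI-compatible /chat/completions
# endpoint; the model id is an example -- use one you have access to.
import requests

API_KEY = "sk-or-..."  # placeholder: your OpenRouter key

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "deepseek/deepseek-chat",  # example model id
        "messages": [{"role": "user", "content": "Reply with the single word: ok"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```
If this prints a reply, the key works and any remaining issue is in the Cherry Studio configuration.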

4. Optional: Additional API Providers

Optional Providers for Specific Models

While OpenRouter provides access to most models you’ll need, you may want to configure additional providers for specific models or embedding services:

4.1 DeepSeek Provider

Optional - If you want direct access to DeepSeek models:
  1. Click Add Provider → DeepSeek
  2. Create account at platform.deepseek.com
  3. Recommended Model: DeepSeek V3.1
  4. Note: All DeepSeek models are also available via OpenRouter

4.2 Moonshot AI (Kimi)

Optional - For direct access to Kimi models:
  1. Click Add Provider → Custom
  2. Name: “Moonshot AI (Kimi)”
  3. Create account at platform.moonshot.ai
  4. Base URL: https://api.moonshot.ai/v1
  5. Recommended Model: Kimi K2 Turbo Preview
  6. Note: Kimi models are also available via OpenRouter
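
For reference, the custom-provider settings above map directly onto the standard OpenAI-compatible client pattern, which is also how you can sanity-check the key in a script. A minimal sketch, assuming the openai Python package and the base URL from step 4; the model id kimi-k2-turbo-preview is an assumption, so check the model list in your Moonshot account. The same base_url approach works for other OpenAI-compatible providers such as DeepSeek or OpenRouter:

```python
# Minimal sketch: call an OpenAI-compatible provider (here Moonshot/Kimi)
# by pointing the openai client at its base URL. The model id is an assumption.
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                       # placeholder: your Moonshot key
    base_url="https://api.moonshot.ai/v1",  # base URL from step 4 above
)

reply = client.chat.completions.create(
    model="kimi-k2-turbo-preview",          # assumed id for Kimi K2 Turbo Preview
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(reply.choices[0].message.content)
```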

4.3 Google AI Studio Provider

Recommended Free Backup - For generous daily limits:
  1. Create account at aistudio.google.com
  2. Get API key at aistudio.google.com/app/apikey
  3. In Cherry Studio: Add Provider → Google Gemini
  4. Paste your API key (starts with AIza…)
Available Models:
  • gemini-1.5-flash (1,500 requests/day - high-volume work)
  • gemini-2.5-pro (100 requests/day - complex analysis)
  • gemini-embedding-experimental-0307 (document similarity)
Use Cases:
  • Processing large literature collections (1M token context)
  • Backup when course API credits are low
  • Cost-free experimentation
  • Document embeddings and semantic search
Daily Limits Reset: Midnight Pacific Time
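
If you want to reuse the same free key in your own scripts, the google-generativeai package mirrors what Cherry Studio does behind the scenes. A minimal sketch, assuming pip install google-generativeai and the gemini-1.5-flash model from the list above:

```python
# Minimal sketch: call Gemini directly with the google-generativeai package.
# Uses the free-tier key from aistudio.google.com/app/apikey.
import google.generativeai as genai

genai.configure(api_key="AIza...")  # placeholder: your Google AI Studio key

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Summarize the purpose of a PRISMA flow diagram in two sentences."
)
print(response.text)
```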

5. Which Models Should I Add?

Now that you have configured your API providers, you might be wondering which models to add to your Cherry Studio interface. For detailed recommendations on which models are best suited for different research tasks, their costs, and how to configure them, refer to the AI Model Reference Guide, which provides a comprehensive overview to help you make informed choices.

6. MCP Servers - Give Your AI Superpowers

What Are MCPs?

MCP (Model Context Protocol) is like giving your AI access to external tools and data. Without MCPs, the AI can only chat. With MCPs, it can do far more. Real examples of what MCPs enable:
  • 📚 Search your Zotero library: “Find all papers about organizational learning from 2020-2024”
  • 📁 Read/write files: “Analyze the methodology section in Chapter3.docx”
  • 🌐 Search the web in real-time: “What’s the latest research on AI in education published this month?”
  • 🧠 Step-by-step reasoning: “Break down this complex theory comparison in 5 structured steps”
  • 🔗 Multi-AI orchestration: “Ask Gemini CLI for its perspective on this analysis” (via Zen MCP)
The magic: MCPs turn chat into CAPABILITY. Your AI becomes a research partner, not just a text generator!
Essential MCPs:
  • @cherry/filesystem - AI can read your files, analyze documents
  • @cherry/sequentialthinking - Step-by-step structured reasoning
Powerful Additions:
  • Zotero MCP - Direct library access and search
  • Web Search MCP - Real-time information
  • Research Memex MCP - Access these docs from your AI!
Advanced MCPs: See the MCP Explorer Guide for the complete catalog.
New to MCPs? The MCP Explorer Guide also includes detailed installation instructions. Following the course? Session 2 covers MCP setup in depth.

How to Install MCPs

In Cherry Studio:
  1. Settings → MCP Servers
  2. Click “Add Server”
  3. Choose from library or paste MCP URL
  4. Configure and test
Want to explore ALL available MCPs? See MCP Explorer Guide for:
  • Complete MCP catalog
  • Installation instructions for each MCP
  • Cherry Studio, Claude Code, and other client configs
  • Use cases and examples
Official MCP docs: Cherry Studio MCP Guide | MCP Protocol
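
For the curious, this is roughly what a client such as Cherry Studio does when you add an MCP server: it launches the server as a subprocess and asks it which tools it exposes. A minimal sketch, assuming the official MCP Python SDK (pip install mcp) and the reference filesystem server started via npx; the folder path is a placeholder:

```python
# Minimal sketch: start an MCP server and list the tools it offers.
# Assumes the official MCP Python SDK and Node's npx on the PATH.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/papers"],  # placeholder path
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # ask the server what it can do
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```
Cherry Studio handles all of this for you; the point is simply that an MCP server is an ordinary process advertising a set of tools your AI can call.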

7. Code Tools - Launch CLI Agents (Advanced)

Access CLI Power from the GUI

Cherry Studio v1.5.7+ includes Code Tools - a feature that lets you launch command-line AI agents (Claude Code, Gemini CLI, Qwen Code, OpenAI Codex) directly from the Cherry Studio interface!
Why use Code Tools?
  • Access CLI agent capabilities without leaving Cherry Studio
  • No separate terminal setup needed
  • Integrated with your API keys and models
  • Perfect for file-based research workflows

7.1 Enable Code Tools

  1. Ensure you’re running Cherry Studio v1.5.7 or higher
  2. Settings → Navigation → Set navigation bar to Top position
  3. Create a new tab or conversation
  4. Click the Code (</>) icon in the toolbar

7.2 Select a CLI Agent

Choose from available CLI tools:
  • Claude Code: Premium, excellent for research workflows
  • Gemini CLI: Free Google power, 1M context window
  • Qwen Code: Alibaba’s open-source alternative
  • OpenAI Codex: GPT-based coding agent
For Research Memex, we recommend:
  • Beginners → Gemini CLI (free, powerful)
  • Advanced → Claude Code (best quality)
  • Experimenters → Qwen Code (open source)

7.3 Configure and Launch

  1. Select a compatible AI model from your configured providers
  2. Set working directory (your research project folder)
  3. Configure environment variables if needed
  4. Click Launch Agent
  5. The CLI agent runs in an embedded terminal within Cherry Studio!
Token Usage: Code Tools consume significant API tokens! Monitor your usage carefully, especially with complex file operations.
Official Guide: For detailed Code Tools tutorial, see Cherry Studio Code Tools Documentation
When to use Code Tools vs standalone CLI:
  • Use Code Tools: When you want GUI convenience and integrated workflow
  • Use standalone CLI: When you prefer terminal-native experience and want full control
For standalone CLI setup, see the CLI Setup Guide.

8. Test Your Setup

8.1 Create Test Conversation

  1. Click New Chat in the sidebar
  2. Select a model (start with GPT-5-mini or DeepSeek-chat)
  3. Type this test prompt:
    Please summarize the key components of a systematic review
    according to PRISMA guidelines in 3 bullet points.
    
  4. Press Enter or click Send
Expected Response: You should receive a concise summary within 5-10 seconds.

8.2 Verify Model Access

Test each configured model:
  1. Create new conversation
  2. Select different model from dropdown
  3. Send same test prompt
  4. Compare responses
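
If you prefer to keep a written record of this comparison, you can script it with the same OpenRouter key. A minimal sketch; the model ids are examples, so replace them with the models you actually configured:

```python
# Minimal sketch: send one test prompt to several models and print each
# answer with its response time. Model ids are examples only.
import time

import requests

API_KEY = "sk-or-..."  # placeholder: your OpenRouter key
PROMPT = ("Please summarize the key components of a systematic review "
          "according to PRISMA guidelines in 3 bullet points.")
MODELS = ["deepseek/deepseek-chat", "google/gemini-flash-1.5"]  # example ids

for model in MODELS:
    start = time.time()
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": [{"role": "user", "content": PROMPT}]},
        timeout=120,
    )
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"]
    print(f"--- {model} ({time.time() - start:.1f}s) ---\n{answer}\n")
```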

8.3 Test MCP Tools

Verify MCP servers are working:
  1. Test Zotero: “Search my Zotero for systematic review papers”
  2. Test Sequential Thinking: “Help me plan a literature review in 5 steps using sequential thinking”
  3. Test Web Search: “Find recent papers on AI in management”
Expected: Each command should return relevant results

9. Set Up Knowledge Base

9.1 Prepare Your Documents

  1. Export your curated papers from Zotero as PDFs
  2. Create a folder: systematic-review-papers
  3. Place all PDFs in this folder
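
Before importing, it can save time to confirm that each PDF actually contains extractable text (scanned PDFs need OCR, covered under Advanced Options below) and is not oversized. A minimal sketch, assuming pip install pypdf and the systematic-review-papers folder from step 2; the ~10 MB threshold follows the recommendation in the Troubleshooting section:

```python
# Minimal sketch: flag PDFs with no extractable text (likely scans) and
# files over ~10 MB before adding them to the knowledge base.
from pathlib import Path

from pypdf import PdfReader

folder = Path("systematic-review-papers")  # folder from step 2

for pdf in sorted(folder.glob("*.pdf")):
    size_mb = pdf.stat().st_size / 1_000_000
    first_page_text = (PdfReader(pdf).pages[0].extract_text() or "").strip()

    flags = []
    if not first_page_text:
        flags.append("no extractable text -- scanned? enable OCR")
    if size_mb > 10:
        flags.append(f"{size_mb:.1f} MB -- consider splitting")

    print(pdf.name, "->", "; ".join(flags) if flags else "OK")
```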

9.2 Create Knowledge Base in Cherry Studio

  1. Click Knowledge Base in sidebar
  2. Click Create New Collection
  3. Name it: “My Systematic Review”
  4. Click Add Documents
  5. Select your PDF folder
  6. Wait for processing (1-2 min per 10 papers)
Advanced Options:
  • OCR Processing: Enable for scanned PDFs (requires v1.4.8+)
  • Intent Recognition: Better search accuracy with powerful models
  • Multiple formats: Supports PDF, TXT, Markdown, Word, etc.
See: Knowledge Base Documentation | Document Preprocessing

9.3 Enable Knowledge Base in Conversations

  1. Start new conversation
  2. Click Knowledge icon in chat toolbar
  3. Select your collection
  4. The AI now has access to your papers!
Pro Tip: Enable “Intent Recognition” in knowledge base settings for more accurate search results when asking complex research questions.

10. Knowledge Management with Obsidian

10.1 Why Obsidian?

Obsidian is a powerful markdown editor that creates a “second brain” for your research:
  • Local storage: Your notes stay on your computer
  • Bidirectional linking: Connect ideas across papers
  • Zotero integration: Seamless citation management
  • Graph view: Visualize connections in your research
  • MCP accessibility: AI can read your knowledge base

10.2 Quick Setup Overview

For detailed instructions, see the Obsidian Setup Guide

Essential Steps:

  1. Install Obsidian from obsidian.md
  2. Create vault: “Systematic-Review-Research”
  3. Install plugins:
    • Zotero Integration (multiple options available)
    • Dataview (for literature tables)
    • Local REST API (for MCP access)
  4. Configure integration with Zotero (requires Better BibTeX)
  5. Set up MCP for Cherry Studio access

10.3 Cherry Studio → Obsidian Workflow

Set Up Folder Structure

Create a folder structure in your Obsidian vault that includes at least a /02-AI-Conversations/ folder for exported chats and a /Templates/ folder for note templates (both are used in the steps below).

Export from Cherry Studio to Obsidian

  1. In Cherry Studio conversation:
    • Click Export button (or Cmd/Ctrl + E)
    • Choose Markdown format
    • Select Save to Folder
    • Navigate to /02-AI-Conversations/
    • Name format: YYYY-MM-DD-Topic.md
  2. The exported file includes:
    • Complete conversation history
    • Model used and timestamps
    • Any code blocks or tables
    • Referenced papers (if using Zotero MCP)
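
If you later automate part of this workflow (for example, moving exports into the vault with a script), the naming convention translates directly into a file path. A tiny sketch; the vault location and topic string are placeholders:

```python
# Minimal sketch: build the suggested YYYY-MM-DD-Topic.md path in the vault.
from datetime import date
from pathlib import Path

vault = Path.home() / "Obsidian" / "Systematic-Review-Research"  # placeholder location
topic = "prisma-screening-criteria"                              # placeholder topic

export_path = vault / "02-AI-Conversations" / f"{date.today():%Y-%m-%d}-{topic}.md"
print(export_path)
```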

Create Literature Note Template

Create a literature note template for your paper summaries and save it as /Templates/literature-note.md.

10.4 Bidirectional Workflow Benefits

  • From Zotero to Obsidian: Import papers with annotations
  • From Cherry Studio to Obsidian: Export AI analysis
  • Within Obsidian: Link papers, find patterns, build arguments
  • Back to Cherry Studio: Copy synthesis for further AI analysis
Advanced Integration: Cherry Studio can also connect directly to Obsidian via MCP or data settings. See: Cherry Studio Obsidian Integration

11. Troubleshooting

11.1 'API Key Invalid' Error

  • Double-check key is copied completely (no spaces)
  • Ensure you have credits in your account
  • Try regenerating the API key

11.2 'Rate Limit Exceeded'

  • Wait 60 seconds and try again
  • Switch to a different model temporarily
  • Check your API provider’s rate limits
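
The same advice applies if you are scripting against your key (as in the sketches earlier in this guide): most OpenAI-compatible providers signal a rate limit with HTTP 429, so a simple wait-and-retry loop usually resolves it. A minimal sketch with a placeholder endpoint, key, and payload:

```python
# Minimal sketch: retry on HTTP 429 with a growing wait, mirroring the
# "wait and try again" advice above. Endpoint, key, and payload are placeholders.
import time

import requests

def ask_with_retry(payload: dict, api_key: str, retries: int = 3) -> str:
    for attempt in range(retries):
        resp = requests.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {api_key}"},
            json=payload,
            timeout=120,
        )
        if resp.status_code == 429:          # rate limited: back off, then retry
            time.sleep(60 * (attempt + 1))
            continue
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
    raise RuntimeError("Still rate-limited after several retries")
```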

11.3 'Connection Failed'

  • Check internet connection
  • Verify firewall isn’t blocking Cherry Studio
  • Try using a different API provider

11.4 Knowledge Base Not Working

  • Ensure PDFs are text-based (not scanned images)
  • Check file size (max 10MB per file recommended)
  • Try re-importing documents

11.5 Model Not Responding

  • Check API key configuration
  • Verify you have credits remaining
  • Try a different model to isolate issue

12. Quick Reference Card

12.1 Keyboard Shortcuts

  • New Chat: Cmd/Ctrl + N
  • Fork Conversation: Cmd/Ctrl + Shift + F
  • Search Conversations: Cmd/Ctrl + F
  • Export Chat: Cmd/Ctrl + E
  • Settings: Cmd/Ctrl + ,

12.2 Document Processing Options Comparison

| Feature | MinerU (Free) | Mistral API | Direct Text |
|---|---|---|---|
| Daily Limit | 500 documents | Unlimited | Unlimited |
| Cost | Free | $0.10-0.20/doc | Free |
| Quality | Good | Excellent | Basic |
| Math Formulas | ✓ LaTeX | ✓ LaTeX | |
| Tables | ✓ Preserved | ✓ Enhanced | Partial |
| Images | ✓ Extracted | ✓ OCR | |
| Multi-column | | | |
| Speed | Fast | Moderate | Instant |

13. Best Practices

  1. Choose the right model for the task (see the AI Model Reference Guide): if you are not sure, experiment with cheaper or free models for initial exploration and switch to more expensive models once you are more confident in your approach
  2. Fork conversations before trying different approaches (available in Cherry Studio, ChatWise, ChatGPT, Claude.ai)
  3. Export important conversations immediately
  4. Start a new conversation if you notice performance degradation

Support Resources

Official Cherry Studio Documentation

Research Memex Integration

External Resources

Fallback Options

If Cherry Studio fails to work:

Checklist

By the end of this guide, you should have:
  • Downloaded and installed Cherry Studio
  • Created at least one API account (OpenRouter recommended)
  • Added $5-10 in API credits
  • Successfully sent a test message to any added model
  • Enabled and tested at least one MCP server (Zotero, Sequential Thinking, Web Search)
  • (Optional) Tested Code Tools by launching a CLI agent
  • Installed Obsidian and created a vault (a folder on your computer)
  • Exported a conversation to markdown format and imported it to Obsidian
  • Created and tested a knowledge base in Cherry Studio with your seed papers
  • (Optional) Configured OCR and intent recognition for knowledge base

For more information

If you plan to experiment with command‑line tools or provider‑specific keys later, see the CLI Setup Guide (optional). For API key setup and provider configuration, see the API Keys Setup Guide. For model selection and recommended settings (temperature, reasoning effort), refer to the AI Model Reference Guide.