This guide offers a hands-on workshop for architecting and executing a systematic review using a modern, AI-powered toolkit. The approach is cognitive-first: instead of just learning to operate tools, we learn to think with them. We use AI as a mirror to make our own research processes visible, helping us deconstruct complex tasks like “finding a gap” or “building a theory” into explicit, repeatable steps. By the end of this session, you’ll move beyond just “using” AI to orchestrating it with purpose.

Learning Outcomes

By the end of this session, you’ll be able to:
  • Construct a high-quality, curated literature set using discovery (Research Rabbit) and management (Zotero) tools
  • Deconstruct complex research tasks (e.g., “synthesizing a framework”) into explicit, step-by-step cognitive operations
  • Design and implement structured, multi-step prompts that guide AI through sophisticated analytical work
  • Critically evaluate AI-generated outputs, identifying common failure modes (like “botshit” and paradigm blindness)
  • Architect a research workflow that strategically combines your domain expertise with AI capabilities

The Complete Research Pipeline

THE RESEARCH PIPELINE (Example Implementation)

📄 SEED PAPERS (3-7 papers)
    |
    v
🕸️ CITATION DISCOVERY
    (Research Rabbit, Connected Papers, etc.)
    |
    v
📚 EXPANDED SET (50-100 papers)
    |
    v
👤 HUMAN CURATION ⚠️ CRITICAL GATE
    (Quality Filter)
    |
    v
📖 REFERENCE MANAGER
    (Zotero, EndNote, etc.)
    |
    v
🤖 AI INTERFACE
    (Cherry Studio, Claude Code, Gemini CLI, etc.)
    |
    v
🧠 AI ANALYSIS (Pattern Detection)
    |
    +-- If issues found --> 📝 PROMPT REFINEMENT --> loop back to AI Analysis
    |
    v
👤 QUALITY CONTROL ⚠️ CRITICAL GATE
    (Failure Check)
    |
    v
✨ RESEARCH SYNTHESIS (Final Output)

⚠️ = Human Judgment Gate
Note: Specific tools are examples. Choose what works for your context.
Using ASCII Diagrams: Copy this pipeline directly into your AI chat sessions to explain your workflow! Perfect for prompting AI agents about your research process.
Key Insight: Notice how human judgment (the ⚠️ gates) provides quality control throughout the pipeline. AI excels at scale and pattern detection, while humans provide the critical curation and evaluation.

Core Readings: The AI Toolkit

Mindset & Mental Models

Interpretive Orchestration

Why this matters: Establishes the core professional mindset for this approach. It frames the researcher’s role as the essential human conductor of a powerful orchestra, preventing them from becoming passive operators of a tool.

"the void" by nostalgebraist

Why this matters: Provides the essential mental model for a researcher. It demolishes the idea that you are “talking to an AI” and replaces it with a more accurate and powerful one: you are co-writing a story with a character-predictor. This is the foundational insight for all effective context-setting.

Techniques & Best Practices

The Prompt Report: A Systematic Survey of Prompt Engineering Techniques

Why this matters: Provides a shared vocabulary and technical map, turning a chaotic collection of “tips and tricks” into a structured field of practice.

Gemini 2.5 Pro Capable of Winning Gold at IMO 2025

Why this matters: Serves as the “gold standard” case study. It proves that superior outcomes are not magic, but the result of superior process. It teaches the crucial concept of scaffolding: building a thinking process for the AI to follow.
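
To make scaffolding concrete, here is an illustrative multi-stage prompt skeleton for a literature-synthesis task. The stages are an example for this workshop, not the prompt used in the paper:

```
You are assisting with a systematic review of [your topic].
Work through the stages below in order and label each one.

Stage 1 - Extract: For each paper provided, state its research question, method, and key claim.
Stage 2 - Compare: Note where the papers agree, disagree, or talk past each other.
Stage 3 - Verify: Flag any statement you cannot ground in the provided text.
Stage 4 - Synthesize: Only now propose a candidate framework that accounts for Stages 1-3.

Do not begin Stage 4 until Stages 1-3 are complete.
```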

Risks & Responsibility

Beware of Botshit: How to Manage the Epistemic Risks of Generative Chatbots

Why this matters: This is the intellectual safety manual for AI-powered research. It equips researchers with the critical framework needed to produce defensible, high-quality academic work.

Supplementary Readings

Quick Reference Guides

DAIR.AI — Prompt Engineering Guide

Why this matters: The go-to field manual. While the required “Prompt Report” provides the academic map, this guide offers the practical, browsable definitions and examples you’ll return to again and again.

Anthropic — Prompt engineering best practices

Why this matters: Learn directly from the model’s creators. This moves from general theory to applied practice, offering canonical, model-specific advice that you can implement immediately.

Foundational Papers

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

Why this matters: Go to the primary source. This paper provides the scientific underpinning for one of the most important prompting techniques. It helps demystify why breaking down problems works.
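
For a quick feel of why this works, compare a direct question with a chain-of-thought version of the same request (illustrative wording, not an example from the paper):

```
Direct:
"Does Paper A's evidence support Paper B's framework?"

Chain-of-thought:
"First summarize Paper A's evidence. Then restate Paper B's framework.
Next, check each element of the framework against that evidence, one at a time.
Only after that, give your overall judgment."
```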

ReAct: Synergizing Reasoning and Acting in Language Models

Why this matters: The conceptual blueprint for AI agents. This paper provides the fundamental logic loop (Reason → Act → Observe) that powers most modern agentic systems. Reading this gives you a head start on understanding the architecture of the tools you’ll use in Session 4.
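
If you prefer to see the loop as code, here is a toy Python sketch of the Reason → Act → Observe cycle. The llm and search_library functions are stand-ins for a real model call and a real tool (e.g., a Zotero search), so the example runs but does nothing useful:

```python
# Toy sketch of the ReAct loop: the model reasons and proposes an action,
# the harness runs the action, and the observation is fed back in.
# `llm` and `search_library` are stand-ins, not real APIs.

def llm(transcript: str) -> str:
    """Stand-in for a language model call; always proposes the same tool action."""
    return "Thought: I should check the library first.\nAction: search_library('scaling')"

def search_library(query: str) -> str:
    """Stand-in tool; pretend this queries your reference manager."""
    return f"Found 3 papers matching '{query}'."

def react_loop(task: str, max_steps: int = 3) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        response = llm(transcript)                 # Reason: model thinks and picks an action
        transcript += "\n" + response
        if "Action: search_library" in response:   # Act: run the requested tool
            query = response.split("search_library('")[1].rstrip("')")
            transcript += f"\nObservation: {search_library(query)}"   # Observe: feed result back
        else:
            break                                   # Model gave a final answer instead of an action
    return transcript

print(react_loop("Which seed papers discuss scaling?"))
```

Real agent frameworks add tool schemas, error handling, and stopping rules, but the core cycle is this small.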

Advanced Context

Andrej Karpathy — Context engineering thread

Why this matters: A crucial mental upgrade for advanced users. This shifts your thinking from “how do I ask the question?” to “what information does the AI need to already have to answer well?” This context-first approach is key to unlocking complex, multi-step tasks.
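
In practice, context engineering often means front-loading the prompt with the material the model needs before stating the task. A minimal illustration, where the headings and details are placeholders for your own project:

```
CONTEXT
Research question: How do organizations manage scaling transitions?
Inclusion criteria: empirical studies, 2015 or later, organizational level of analysis.
Already screened: 12 papers (list attached).

TASK
Using only the context above and the attached abstracts, flag which new papers
meet the inclusion criteria and note any borderline cases for human review.
```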

Simon Willison — "In defense of prompt engineering"

Why this matters: Justifies the importance of this skill for your career. This piece provides a robust intellectual defense for investing time in prompt engineering, framing it as a lasting competency for anyone working with AI systems.

Session Structure

Our hands-on session will follow this structure:
  1. Pipeline Overview - Understanding the complete workflow
  2. Discovery & Curation - Mastering Research Rabbit and Zotero
  3. AI Integration - Setting up Cherry Studio and MCP servers
  4. Pipeline Practice - Working with a sample literature set
The scaling and scalability literature is used as a shared case study throughout these guides. A comprehensive seed library of 10 foundational papers is provided, which you can expand using the discovery methods we learn today.

🔧 MCP Server Setup (Live Demo)

During the AI Integration phase, we’ll add the first MCP servers:

Follow Along

  1. Open Cherry Studio → Settings → MCP Configuration
  2. Enable @cherry/filesystem to access your research files
  3. Add @cherry/sequentialthinking for structured analysis
  4. Test both servers with your sample papers (see the example prompt below)
  5. See the power of AI with direct file access!
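
One easy way to run the test in step 4 is a single prompt that exercises both servers. The folder name below is just a placeholder for wherever your sample papers live:

```
Using the filesystem server, list the PDF files in my "seed-papers" folder.
Then, using sequential thinking, lay out a three-step plan for comparing any
two of those papers, and wait for my confirmation before carrying it out.
```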
For more MCP exploration, see the MCP Explorer Guide. By the end, you’ll have a complete, tested workflow for AI-enhanced systematic reviews.

Pre-Class Setup for Session 2

Before our hands-on session, please complete the following setup to ensure you’re ready to dive in.

1. Complete Initial Setup

Follow the Cherry Studio Setup Guide to complete Steps 1-6. This includes installation, API setup, and basic MCP configuration.

2. Test Your Environment

  • Test at least one AI model to ensure it’s responding.
  • Test the Zotero MCP integration to confirm it can access your library.

3. Prepare Your Research Vault

  • Set up your Obsidian vault with the recommended folder structure from the setup guide.
  • Bring 3-5 of your core “seed papers” as PDFs, ready to be added to your knowledge base.
During class, we will:
  • Set up knowledge bases together
  • Practice Zotero MCP searches
  • Export conversations to Obsidian
  • Create literature note templates
  • Practice conversation forking for different analyses

Recommended Exercises

  1. Read the foundational papers on different review types (e.g., Llewellyn 2021, Yuki 2024).
  2. Develop a synthesis prompt: Using the provided sample papers, combine the IMO approach with a chosen paper’s method.
  3. Prepare a presentation: Draft notes for a 3-5 minute presentation on your synthesis.
  4. Continue expanding your personal literature library in Zotero using Research Rabbit.
  5. MCP Explorer Challenge: Find 2-3 MCP servers relevant to your research on smithery.ai, install one, and test it.
Note: The IMO paper provides a powerful template for structuring AI thinking processes. Apply this to your own synthesis tasks!


Navigation: Return to Case Study Overview | Next: Session 3