Systematic Reviews
Building an SLR with Claude Code
A hands-on guide to conducting a systematic literature review using Claude Code's agentic capabilities for screening, extraction, and synthesis.
Introduction: From Cognitive Overload to Architectural Oversight
As researchers conducting a Systematic Literature Review (SLR), we often face cognitive overload. We identify, screen, and synthesize hundreds of articles, manage complex text datasets, and forge novel arguments from existing literature. This process is time-consuming and carries risks of inconsistency and error.
Powerful AI assistants like Claude Code present an opportunity. Used haphazardly (as glorified search engines or text generators), these tools can lead to superficial work. Used strategically, they extend the reach of a single researcher across the whole literature.
This guide demonstrates one approach: transforming your SLR project into a dynamic, AI-navigable "codebase" where you act as the architect, strategically orchestrating the work.
Info
Prerequisites: This guide assumes you've completed Session 4: Agentic Workflows and installed Claude Code. This is the practical implementation of those concepts.
Section 1: Bootstrapping Your SLR Project
Step 1.1: Launch and Create Project
Create your SLR project folder:
mkdir Systematic-Review-2025
cd Systematic-Review-2025
claude
Once Claude Code launches, run /init to generate a CLAUDE.md file. This is your SLR's "brain."
Step 1.2: The One-Prompt Project Setup
Copy this entire prompt into Claude Code to bootstrap your SLR structure:
This is a systematic literature review project. Create the following directory structure:
- 01_search_and_screening/
- 02_data_extraction/
- 03_synthesis_and_writing/
- 04_manuscript_drafts/
- 00_literature_files/
Then populate CLAUDE.md with this SLR protocol:
# Systematic Literature Review Protocol
## Core Task
Systematic review to understand [YOUR RESEARCH QUESTION]
## File Structure
- /01_search_and_screening/ - Search strings, screening logs
- /02_data_extraction/ - Extraction tables (CSV/Markdown)
- /03_synthesis_and_writing/ - Theme analysis, drafts
- /04_manuscript_drafts/ - Final paper sections
- /00_literature_files/ - Full-text papers (markdown)
## Inclusion Criteria
- [YOUR CRITERIA - e.g., Peer-reviewed, 2020-2025, empirical]
- [Geographic/industry scope]
- [Methodology requirements]
## Exclusion Criteria
- [YOUR CRITERIA - e.g., Non-English, opinion pieces, duplicates]
## Citation Style
APA 7th Edition
## Custom Commands
- /screen: Evaluate abstract against inclusion criteria
- /extract: Extract data from full-text paper
- /critique: Review writing for quality and style
Claude will create the folders and set up your CLAUDE.md!
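If you prefer to create the skeleton yourself (or want a reproducible fallback that doesn't depend on the AI), the same structure can be built with plain shell. This is a sketch using the folder names from the prompt above:

```shell
# Create the five SLR folders without AI assistance.
mkdir -p 01_search_and_screening 02_data_extraction \
         03_synthesis_and_writing 04_manuscript_drafts 00_literature_files

# Start the protocol file only if it doesn't exist yet.
[ -f CLAUDE.md ] || printf '# Systematic Literature Review Protocol\n' > CLAUDE.md
```

Either way, fill in the bracketed placeholders (research question, criteria) in CLAUDE.md before screening begins.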
Step 1.3: Organize Literature Files
Your curated papers from Zotero should go into /00_literature_files/:
Naming convention:
smith2024a.md
jones2024b.md
lee2023.md
Use Better BibTeX citation keys (same as Zotero!) - see Zotero Setup Guide
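A quick sanity check that filenames follow the citekey convention can be scripted. The regex below (lowercase author, four-digit year, optional disambiguating letter) is an assumption — adjust it to match your Better BibTeX key format:

```python
import re

# Assumed pattern: lowercase author name + 4-digit year + optional letter + ".md".
CITEKEY_PATTERN = re.compile(r"^[a-z]+\d{4}[a-z]?\.md$")

def check_filenames(filenames):
    """Return the filenames that do NOT match the citekey convention."""
    return [name for name in filenames if not CITEKEY_PATTERN.match(name)]

# Example: flag files that would break @-referencing consistency.
print(check_filenames(["smith2024a.md", "jones2024b.md", "Lee_2023.pdf"]))
# → ['Lee_2023.pdf']
```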
Reference papers in Claude:
"Analyze @/00_literature_files/smith2024a.md"
"Compare @/00_literature_files/smith2024a.md and @/00_literature_files/jones2024b.md"
Section 2: The Three-Tier Agentic Workflow
This framework helps you strategically delegate SLR tasks to maximize your cognitive energy for high-value work.
Tier 1: Automated Assistant (Mechanical Tasks)
Goal: Eliminate tedious, high-volume work
Use Cases:
Initial Screening:
"Apply the /screen command to every abstract in
@/01_search_and_screening/database_export_abstracts.txt
and output results to screening_results.csv"
Bibliography Management:
"Scan @/04_manuscript_drafts/full_draft.docx and generate
a complete, alphabetized bibliography in APA 7th edition"
Keyword Counting:
"Search all files in @/02_data_extraction/ and count
occurrences of: 'scalability', 'scaling', 'growth'"
Tier 2: Collaborative Partner (Structured Tasks)
Goal: Accelerate structured content creation
Use Cases:
Data Extraction:
"Run /extract on @/00_literature_files/smith2024a.md
and append results to @/02_data_extraction/extraction_table.md"
Drafting Thematic Sections:
"Using data in @/02_data_extraction/extraction_table.md,
draft an 'Emerging Themes' section. Synthesize findings across
studies and identify patterns."
PRISMA Flowchart:
"Based on logs in @/01_search_and_screening/, create a PRISMA
flowchart showing: records identified, screened, included, excluded."
Task Tracking:
"Create a todo list for synthesizing these 5 papers on organizational learning:
- Identify common themes
- Extract methodological approaches
- Map theoretical frameworks
- Identify research gaps
- Draft synthesis section"
Tier 3: Socratic Sparring Partner (Deep Work)
Goal: Sharpen your theoretical contribution
Use Cases:
Identifying Research Gaps:
"Based on all papers in @/02_data_extraction/extraction_table.md,
what are the three most significant theoretical gaps? For each gap,
explain why it's important."
Note: See Coherence Fallacy for common failure modes
Strengthening Arguments:
"My main argument is [YOUR ARGUMENT]. Act as a skeptical reviewer
from [TARGET JOURNAL]. Formulate three powerful critiques based on
the evidence I've synthesized."
Future Research Avenues:
"Based on limitations across all papers, propose a research agenda
with three novel questions that address these gaps. Justify why each
represents a meaningful contribution."
Section 3: Creating SLR-Specific Slash Commands
3.1 Screening Command
Create .claude/commands/screen.md:
Evaluate this abstract against the inclusion criteria in @CLAUDE.md.
Abstract: $ARGUMENTS
## Screening Decision
**Decision:** [INCLUDE | EXCLUDE]
**Reason:** [Specific criterion met/failed]
**Confidence:** [High | Medium | Low]
**Notes:** [Any boundary cases or uncertainties]
Format as CSV row: decision, reason, confidence
Usage:
/screen This paper examines organizational scaling in tech startups
using survey data from 500 companies across Europe and Asia between
2020-2023...
3.2 Data Extraction Command
Create .claude/commands/extract.md:
Extract structured data from this paper:
Paper: $ARGUMENTS
## Extraction Template
| Field | Content |
|-------|---------|
| Authors & Year | |
| Research Question | |
| Methodology | [Qual/Quant/Mixed] |
| Sample | [n=?, context] |
| Data Collection | |
| Analysis Approach | |
| Key Findings | [3-5 bullets] |
| Theoretical Contribution | |
| Limitations | |
| Future Research | |
Append this table to @/02_data_extraction/master_table.md
Usage:
/extract @/00_literature_files/smith2024a.md
3.3 Critique Command
Create .claude/commands/critique.md:
Review this writing section for quality:
Section: $ARGUMENTS
## Quality Checklist
1. **Logical Flow:** Are arguments well-structured?
2. **Evidence Strength:** Are claims supported by citations?
3. **APA 7th Compliance:** Citations formatted correctly?
4. **Clarity:** Is language precise and academic?
5. **Coherence:** Do paragraphs connect logically?
Provide specific feedback with line references.
Section 4: The Complete SLR Workflow
Phase 1: Search & Screening (Weeks 1-2)
Step 1: Document search strategy
"Create a search log in @/01_search_and_screening/search_log.md
documenting:
- Databases used: Web of Science, Scopus, etc.
- Search strings and boolean operators
- Date ranges
- Total hits per database"
Step 2: Screen abstracts
"Apply /screen to each abstract in
@/01_search_and_screening/abstracts.txt
Output to screening_results.csv with columns: ID, Decision, Reason"
Step 3: Generate PRISMA counts
"Analyze screening_results.csv and generate PRISMA flowchart numbers:
- Records identified
- Records after deduplication
- Records screened
- Records excluded (with reasons)
- Full-text articles assessed
- Studies included in synthesis"
Phase 2: Data Extraction (Weeks 3-4)
Step 1: Extract from each included paper
For each file in @/00_literature_files/:
/extract @/00_literature_files/[filename]
Step 2: Compile extraction table
"Combine all extraction outputs into a single master table
at @/02_data_extraction/master_extraction_table.md"
Step 3: Quality check
"Review @/02_data_extraction/master_extraction_table.md
for missing data, inconsistencies, or errors. Flag any papers
needing manual review."
Phase 3: Synthesis & Analysis (Weeks 5-8)
Step 1: Thematic analysis
"Analyze @/02_data_extraction/master_extraction_table.md
and identify 5-7 major themes. For each theme, list:
- Papers that discuss it
- Key arguments/findings
- Points of agreement/disagreement"
Step 2: Methodological synthesis
"Create a methods summary table showing distribution of:
- Qualitative vs quantitative approaches
- Sample sizes and contexts
- Data collection methods
- Analysis techniques"
Step 3: Gap analysis
"Based on all papers, identify:
- Methodological gaps
- Theoretical gaps
- Empirical/contextual gaps
- Temporal gaps
For each gap, explain significance and future research implications."
Phase 4: Writing & Refinement (Weeks 9-12)
Step 1: Draft sections
"Using themes from @/03_synthesis_and_writing/themes.md,
draft a literature review section. Structure:
- Introduction to theme
- Discussion of each paper's contribution
- Synthesis of consensus and contradictions
- Implications for research"
Step 2: Quality control
/critique @/04_manuscript_drafts/literature_section_v1.md
Step 3: Bibliography generation
"Extract all citations from @/04_manuscript_drafts/full_draft.md
and generate an APA 7th formatted bibliography"
Section 5: Quality Control & Verification
5.1 Common Failure Modes in SLR
Reference the Failure Museum for:
- Citation confusion: AI inventing plausible citations
- Context stripping: Missing nuances from papers
- Coherence fallacy: Creating false consensus
5.2 Verification Checklist
After each AI-generated output:
- Verify all citations exist in your Zotero library
- Cross-check extracted data against original papers
- Review for logical consistency
- Check for hallucinated claims
- Validate statistical information
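Part of this checklist can be scripted. The sketch below scans a markdown extraction table for empty cells, so papers with missing fields are flagged for manual review (the table layout is assumed to follow the /extract template; adapt the parsing if your table differs):

```python
def flag_incomplete_rows(markdown_table: str):
    """Return (line_number, row) pairs where a table cell is empty."""
    flagged = []
    for lineno, line in enumerate(markdown_table.splitlines(), start=1):
        stripped = line.strip()
        # Skip non-table lines and the |---|---| separator row.
        if not stripped.startswith("|") or set(stripped) <= {"|", "-", " ", ":"}:
            continue
        cells = [c.strip() for c in stripped.strip("|").split("|")]
        if any(cell == "" for cell in cells):
            flagged.append((lineno, stripped))
    return flagged

table = """| Field | Content |
|-------|---------|
| Authors & Year | Smith (2024) |
| Key Findings | |"""
print(flag_incomplete_rows(table))
# → [(4, '| Key Findings | |')]
```

Run it over the master extraction table before synthesis so every flagged row is rechecked against the original paper.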
5.3 Human Judgment Gates
Critical checkpoints requiring human review:
- Inclusion decisions (borderline cases)
- Theme identification (theoretical framing)
- Gap analysis (novelty assessment)
- Theoretical contribution (intellectual merit)
- Final argument (coherence and originality)
Section 6: Advanced SLR Techniques
6.1 Iterative Screening
First pass - conservative:
/screen [abstract]
Decision: UNCERTAIN
→ Add to manual review queue
Second pass - human refinement: Review uncertain cases personally, then update screening criteria in CLAUDE.md
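Building the manual review queue is itself mechanical. A minimal sketch, assuming screening_results.csv has a Decision column as in the prompts above:

```python
import csv

def uncertain_cases(csv_path: str):
    """Collect rows whose screening decision is UNCERTAIN, for manual review."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return [row for row in csv.DictReader(f)
                if row.get("Decision", "").strip().upper() == "UNCERTAIN"]

# Usage (path as used elsewhere in this guide):
# queue = uncertain_cases("01_search_and_screening/screening_results.csv")
```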
6.2 Forward/Backward Citation Tracking
Using Research Rabbit exports:
"Compare papers in @/00_literature_files/ with the citation network
exported from Research Rabbit. Identify:
- Highly cited papers we might have missed
- Citation clusters suggesting themes
- Outlier papers worth investigating"
6.3 Meta-Analysis Preparation
Extract quantitative data:
"From all papers in @/00_literature_files/ with quantitative results,
extract effect sizes, sample sizes, and statistical significance.
Format as CSV for meta-analysis."
Section 7: Parallel Agent Workflows
7.1 Multi-Agent Paper Analysis
Analyze from multiple perspectives:
"Use 3 parallel agents to evaluate @/00_literature_files/smith2024a.md:
- Agent 1: Assess methodology rigor
- Agent 2: Evaluate theoretical contribution
- Agent 3: Identify practical implications
Synthesize their assessments into a comprehensive review."
7.2 Parallel Screening
Screen batches simultaneously:
"Split the abstracts in @/01_search_and_screening/batch1.txt
into 3 groups. Use parallel agents to screen each group
with the /screen command. Compile results when complete."
Section 8: Export & Deliverables
8.1 PRISMA Flowchart
"Generate a complete PRISMA 2020 flowchart based on:
- Search log: @/01_search_and_screening/search_log.md
- Screening results: @/01_search_and_screening/screening_results.csv
- Inclusion log: @/01_search_and_screening/included_papers.md
Output as markdown table and mermaid diagram."
8.2 Extraction Table Export
"Convert @/02_data_extraction/master_extraction_table.md
to CSV format for:
1. Import into Excel/R for analysis
2. Appendix for manuscript submission
3. Supplementary materials"
8.3 Manuscript Draft Compilation
"Compile these sections into a complete draft:
- @/04_manuscript_drafts/01_introduction.md
- @/04_manuscript_drafts/02_methods.md
- @/04_manuscript_drafts/03_results.md
- @/04_manuscript_drafts/04_discussion.md
Add section numbers, format headers, generate bibliography."
Export to Word for final editing:
Use Zettlr for final formatting and citation management with @citekeys!
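If you prefer a command-line route, pandoc can do the same conversion. This is a sketch only — it assumes pandoc is installed and that refs.bib and apa.csl stand in for your own bibliography export and APA citation style file:

```shell
# Convert the compiled draft to .docx, resolving @citekeys via citeproc.
# refs.bib and apa.csl are placeholders for your own exports.
pandoc 04_manuscript_drafts/full_draft.md \
  --citeproc --bibliography refs.bib --csl apa.csl \
  -o full_draft.docx
```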
Section 9: Ethical Guidelines & Academic Integrity
9.1 Transparency Requirements
Document in your methods section:
### AI-Assisted Analysis
Screening and data extraction were assisted by Claude Code (Anthropic, 2025)
using standardized prompts defined in the review protocol. All decisions were
verified by [AUTHOR NAME]. AI outputs were subject to human review and
quality control checkpoints. The complete protocol and prompts are available
in the supplementary materials.
9.2 What to Automate vs Control
Automate (Tier 1):
- Initial abstract screening (with human verification)
- Basic data extraction
- Bibliography formatting
- Keyword counting
Collaborate (Tier 2):
- Thematic coding
- Methodology comparison
- Drafting synthesis sections
Human Only (Tier 3):
- Final inclusion decisions (borderline cases)
- Theoretical framing
- Novel gap identification
- Argument construction
- Quality assessment of contributions
9.3 Verification Protocol
For every AI-assisted task:
- Document the prompt used
- Review outputs for hallucinations
- Verify citations against Zotero library
- Check logical consistency
- Maintain audit trail
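The audit trail can be as simple as an append-only JSON-lines file. A minimal sketch — the file name and record fields are my own convention, not part of the protocol above:

```python
import datetime
import json

def log_prompt(prompt: str, output_file: str, log_path: str = "audit_log.jsonl"):
    """Append one audit record per AI-assisted task (prompt + where output went)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "output_file": output_file,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Usage:
# log_prompt("/extract @/00_literature_files/smith2024a.md",
#            "02_data_extraction/master_table.md")
```

One line per task keeps the log greppable and easy to attach as supplementary material.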
Section 10: Troubleshooting SLR Workflows
Screening Inconsistencies
Problem: Different screening decisions for similar abstracts
Solution:
- Refine inclusion criteria in CLAUDE.md
- Add edge case examples
- Create a /screen_uncertain command for borderline cases
- Maintain manual review log
Extraction Errors
Problem: Missing or incorrect data in extraction table
Solution:
- Verify against original papers
- Update /extract template with more specific instructions
- Add validation checks: "Review extraction for [paper] against original"
Citation Hallucinations
Problem: AI invents citations that don't exist
Solution:
- Always verify with Zotero library
- Use Zotero MCP for citation checks
- See: Failure Museum - Citation Confusion
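The Zotero check can be partly automated against a Better BibTeX .bib export. This sketch compares @citekey mentions in a draft against keys in the .bib file; the regexes assume Pandoc-style citations and standard BibTeX entries, so adjust them if your formats differ:

```python
import re

def missing_citations(draft_text: str, bibtex_text: str):
    """Return citekeys cited in the draft but absent from the .bib export."""
    # Naive scan: any @word token in the draft counts as a citation.
    cited = set(re.findall(r"@([A-Za-z][\w:-]*)", draft_text))
    known = set(re.findall(r"@\w+\{([^,\s]+),", bibtex_text))
    return sorted(cited - known)

draft = "Scaling differs from scalability [@smith2024a; @ghost2022]."
bib = "@article{smith2024a,\n  title={...}\n}"
print(missing_citations(draft, bib))
# → ['ghost2022']
```

Any key this returns is either a typo or a hallucinated reference — check it in Zotero before it reaches the bibliography.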
Section 11: Example: Complete Screening Workflow
Step 1: Prepare abstracts file
# Export from database to text file
# Format: One abstract per line, with ID prefix
Step 2: Batch screening
"Process @/01_search_and_screening/web_of_science_export.txt:
For each abstract:
1. Apply /screen command
2. Log decision with ID
3. Track exclusion reasons
4. Flag uncertain cases for manual review
Output to screening_results.csv with columns:
ID, Title, Authors, Year, Decision, Reason, Confidence"
Step 3: Review uncertain cases
"Show me all abstracts marked 'UNCERTAIN' in screening_results.csv
for manual review"
Step 4: Generate screening stats
"Analyze screening_results.csv:
- Total abstracts screened
- Inclusion rate (%)
- Top 3 exclusion reasons
- Uncertain cases requiring manual review
Format as summary table."
Section 12: Integration with Research Memex Tools
12.1 Zotero → Claude Code Pipeline
Workflow:
- Collect papers in Zotero (see Zotero Guide)
- Export to markdown via OCR (see OCR Guide)
- Move to /00_literature_files/
- Process with Claude Code
Tip
Batch PDF Processing: For large literature corpora (30+ papers), use MinerU MCP to batch-convert PDFs to markdown. MinerU's VLM mode handles complex academic layouts with 90%+ accuracy, processing up to 200 documents in parallel.
12.2 Claude Code → Obsidian Pipeline
Export synthesis for knowledge base:
- Complete analysis in Claude Code
- Export synthesis notes to markdown
- Import to Obsidian vault
- Link with literature notes
See: Obsidian Setup Guide
12.3 Claude Code → Zettlr Pipeline
Final manuscript preparation:
- Draft sections in Claude Code
- Export to /drafts/ folder
- Open in Zettlr for citation formatting
- Use @citekeys from Better BibTeX
- Export to Word/PDF for submission
See: Zettlr Setup Guide
Section 13: Case Study: Scaling Literature SLR
Example prompts used in the course:
Initial setup:
"This SLR examines organizational scaling and scalability.
Inclusion: Peer-reviewed, 2020-2025, empirical studies on
business scaling. Exclusion: Non-English, opinion pieces,
case studies <3 companies."
Screening sample:
/screen @Coviello et al. 2024. This paper distinguishes
organizational scaling (growth in size) from scalability
(capacity to grow) using multi-method approach with
50 high-growth firms...
→ INCLUDE (meets all criteria, novel conceptual distinction)
Extraction sample:
/extract @/00_literature_files/coviello2024.md
→ Generated structured table with RQ, methods, findings, contribution
Synthesis:
"Analyze all papers with 'scalability' construct.
Compare definitions across studies. Identify conceptual confusion
vs. conceptual clarity. Draft 1-page synthesis."Checklist: SLR with Claude Code
By the end of this workflow, you should have:
- Created SLR project structure (5 folders)
- Written CLAUDE.md with SLR protocol
- Created /screen, /extract, /critique slash commands
- Screened abstracts with quality verification
- Extracted data into master table
- Conducted thematic synthesis
- Generated PRISMA flowchart
- Drafted literature review sections
- Verified all AI outputs against original papers
- Documented AI use for methods section
- Maintained audit trail of prompts
Resources
SLR Methodology
Research Memex Integration
- Session 4: Agentic Workflows - Conceptual foundation
- Claude Code Setup - General installation
- Failure Museum - Common AI errors
- Cognitive Blueprints - Prompt templates
Tool Integration
- Zotero - Reference management
- OCR Guide - PDF to markdown
- MinerU MCP - Batch PDF processing (30+ papers)
- Obsidian - Synthesis notes
- Zettlr - Final manuscript