Introduction: From Cognitive Overload to Architectural Oversight
As researchers conducting a Systematic Literature Review (SLR), we often face cognitive overload. We identify, screen, and synthesize hundreds of articles, manage complex text datasets, and forge novel arguments from existing literature. This process is time-consuming and carries risks of inconsistency and error. Powerful AI assistants like Claude Code present an opportunity. Used haphazardly (as glorified search engines or text generators), these tools can lead to superficial work. Used strategically, they can amplify our capacity significantly. This guide demonstrates one approach: transforming your SLR project into a dynamic, AI-navigable “codebase” where you act as the architect, strategically orchestrating the work.
Prerequisites: This guide assumes you’ve completed Session 4: Agentic Workflows and installed Claude Code. This is the practical implementation of those concepts.
Section 1: Bootstrapping Your SLR Project
Step 1.1: Launch and Create Project
Create your SLR project folder and launch Claude Code inside it. Claude Code picks up the project’s CLAUDE.md file automatically at the start of every session. This file is your SLR’s “brain.”
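A minimal sketch of that setup from a terminal; the folder name slr-project is just a placeholder:

```bash
mkdir slr-project && cd slr-project   # create and enter the project folder
claude                                # launch Claude Code inside it
# Inside the session, /init generates a starter CLAUDE.md that you can
# then edit to hold your research question and screening protocol.
```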
Step 1.2: The One-Prompt Project Setup
Copy this entire prompt into Claude Code to bootstrap your SLR structure:
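A sketch of the kind of bootstrap prompt intended here; only /00_literature_files/ and /drafts/ appear elsewhere in this guide, so the other folder and file names below are illustrative:

```text
Set up this repository for a systematic literature review.
1. Create five folders: 00_literature_files, 01_screening, 02_extraction,
   03_synthesis, drafts.
2. Create a CLAUDE.md recording my research question, inclusion and
   exclusion criteria, the screening protocol, and the rule that every
   citation must already exist in my Zotero library.
3. Create empty templates: 01_screening/screening_log.md and
   02_extraction/extraction_table.md.
```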
Step 1.3: Organize Literature Files
Your curated papers from Zotero should go into /00_literature_files/.
Naming convention: use a consistent, citation-friendly pattern (for example, author_year_shorttitle.md) so each paper can be referenced unambiguously in prompts and in the extraction table.
Section 2: The Three-Tier Agentic Workflow
This framework helps you strategically delegate SLR tasks so you can reserve your cognitive energy for high-value work. Sample prompts for all three tiers follow the tier descriptions below.
Tier 1: Automated Assistant (Mechanical Tasks)
Goal: Eliminate tedious, high-volume work.
Use cases: initial screening of abstracts against your inclusion and exclusion criteria.
Tier 2: Collaborative Partner (Structured Tasks)
Goal: Accelerate structured content creation.
Use cases: data extraction into a standardized table.
Tier 3: Socratic Sparring Partner (Deep Work)
Goal: Sharpen your theoretical contribution.
Use cases: identifying research gaps and stress-testing your emerging argument.
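Illustrative prompts for each tier; file names such as 01_screening/abstracts_batch_01.md and smith_2021.md are placeholders, not fixed course conventions:

```text
# Tier 1 - mechanical, high volume
Screen the abstracts in 01_screening/abstracts_batch_01.md against the
inclusion criteria in CLAUDE.md and return a table:
ID | decision (INCLUDE / EXCLUDE / UNCERTAIN) | one-sentence reason.

# Tier 2 - structured collaboration
Extract the research question, method, sample, and key findings from
00_literature_files/smith_2021.md into the master extraction table.

# Tier 3 - Socratic sparring
Here is my draft argument about the research gap: [paste]. Challenge it:
what alternative explanations, counter-examples, or missing literatures
would a sceptical reviewer raise?
```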
Section 3: Creating SLR-Specific Slash Commands
3.1 Screening Command
Create .claude/commands/screen.md:
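A minimal sketch of what screen.md could contain. $ARGUMENTS is Claude Code’s placeholder for whatever you type after the command; the output format below is an assumption to adapt to your own protocol.

```markdown
Screen the abstracts in the file given as $ARGUMENTS against the
inclusion and exclusion criteria recorded in CLAUDE.md.

For each abstract, output one table row:
ID | decision (INCLUDE / EXCLUDE / UNCERTAIN) | criterion applied | one-sentence justification

Do not guess: if information needed for a criterion is missing,
mark the abstract UNCERTAIN and state what is missing.
```

Invoked as, for example, /screen 01_screening/abstracts_batch_01.md (file name illustrative).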
3.2 Data Extraction Command
Create .claude/commands/extract.md:
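A sketch of extract.md; the column set is an assumption, so align it with your own extraction protocol:

```markdown
Read the paper given as $ARGUMENTS and add one row to the master
extraction table with these columns:
citation | research question | theory/framework | method | sample |
key findings | limitations | relevance to this review

Quote the page number or section heading for every extracted claim so
the row can be verified against the original paper.
```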
3.3 Critique Command
Create .claude/commands/critique.md:
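A sketch of critique.md in the Tier 3, Socratic spirit:

```markdown
Act as a sceptical peer reviewer. Read the draft section given as
$ARGUMENTS and:
1. List claims that are not supported by papers in the extraction table.
2. Point out where disagreement in the literature is smoothed into a
   false consensus.
3. State the single strongest counter-argument to the section's thesis.
Do not rewrite the text; critique it only.
```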
Section 4: The Complete SLR Workflow
Phase 1: Search & Screening (Weeks 1-2)
Step 1: Document your search strategy (databases, exact search strings, filters, and record counts).
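A sketch of how that documentation step can be delegated; the folder and database names are placeholders:

```text
Create 01_screening/search_log.md recording, for each database searched
(e.g., Scopus, Web of Science): the exact search string, the date and
filters applied (years, language, document type), the number of records
returned, and the number remaining after de-duplication. These counts
feed the PRISMA flowchart in Section 8.
```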
Phase 2: Data Extraction (Weeks 3-4)
Step 1: Extract data from each included paper using the /extract command.
Phase 3: Synthesis & Analysis (Weeks 5-8)
Step 1: Thematic analysis.
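One way to phrase the synthesis step, assuming the Phase 2 extraction table exists (file name illustrative):

```text
Read 02_extraction/extraction_table.md. Propose four to six candidate
themes that group the findings, list the papers supporting each theme,
and flag papers that contradict or complicate it. Use only rows from
the table; do not introduce papers that are not in it.
```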
Phase 4: Writing & Refinement (Weeks 9-12)
Step 1: Draft sections.
Section 5: Quality Control & Verification
5.1 Common Failure Modes in SLR
Reference the Failure Museum for:
- Citation confusion: AI inventing plausible citations
- Context stripping: Missing nuances from papers
- Coherence fallacy: Creating false consensus
5.2 Verification Checklist
After each AI-generated output:
- Verify all citations exist in your Zotero library
- Cross-check extracted data against original papers
- Review for logical consistency
- Check for hallucinated claims
- Validate statistical information
5.3 Human Judgment Gates
Critical checkpoints requiring human review:
- Inclusion decisions (borderline cases)
- Theme identification (theoretical framing)
- Gap analysis (novelty assessment)
- Theoretical contribution (intellectual merit)
- Final argument (coherence and originality)
Section 6: Advanced SLR Techniques
6.1 Iterative Screening
First pass - conservative: err on the side of inclusion, excluding only clear mismatches; a stricter second pass then resolves the uncertain cases.
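A sketch of the two passes (the batch file name is a placeholder):

```text
Pass 1 (conservative): Screen 01_screening/abstracts_batch_01.md.
Exclude only abstracts that clearly violate a criterion; mark everything
else INCLUDE or UNCERTAIN.

Pass 2 (strict): Re-screen only the UNCERTAIN abstracts against the
refined criteria now in CLAUDE.md and give a final decision for each.
```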
6.2 Forward/Backward Citation Tracking
Using Research Rabbit exports:
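A sketch assuming the Research Rabbit results were exported to a CSV or markdown list (file name illustrative):

```text
Compare research_rabbit_export.csv with the papers already in
/00_literature_files/. List cited or citing papers that are not yet in
the review and note which inclusion criterion each appears to meet, so
I can decide whether to retrieve the full text.
```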
6.3 Meta-Analysis Preparation
Extract quantitative data:
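A sketch of the quantitative pass; the column set is an assumption:

```text
From each included quantitative paper, extract into a CSV:
study | n | effect size type | effect size | SE or 95% CI | outcome measure.
Leave a cell blank rather than estimating a value, and note where in the
paper each number is reported.
```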
Section 7: Parallel Agent Workflows
7.1 Multi-Agent Paper Analysis
Analyze from multiple perspectives:
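One way to phrase the multi-perspective request; Claude Code can delegate pieces of a task to subagents, and the three roles below are illustrative:

```text
For 00_literature_files/smith_2021.md, run three analyses in parallel
subagents and then merge them into one note:
1. Methodologist: assess design, sample, and threats to validity.
2. Theorist: identify the framework used and how it is operationalized.
3. Synthesist: relate the findings to the themes in our synthesis notes.
Label each perspective clearly in the combined note.
```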
7.2 Parallel Screening
Screen batches simultaneously:
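A sketch of the parallel screening request (batch file name is a placeholder):

```text
Split 01_screening/abstracts_batch_02.md into three roughly equal parts
and screen them in parallel subagents, each applying the inclusion
criteria in CLAUDE.md. Merge the results into a single decision table
and list every abstract marked UNCERTAIN for my manual review.
```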
Section 8: Export & Deliverables
8.1 PRISMA Flowchart
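One hedged way to produce the flowchart, assuming your screening log holds the counts; Mermaid is a convenient text-based output format, not necessarily the one used in the course:

```text
Using the counts in 01_screening/search_log.md and the screening
decisions, draw a PRISMA 2020 flow diagram as a Mermaid flowchart:
records identified → after de-duplication → screened → full texts
assessed → included, with the exclusion reasons and counts at each stage.
```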
8.2 Extraction Table Export
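A sketch of the export step (file names illustrative):

```text
Convert 02_extraction/extraction_table.md to CSV, one row per paper and
one column per extraction field, and save it as
02_extraction/extraction_table.csv so it can be opened in Excel or R.
```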
8.3 Manuscript Draft Compilation
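A sketch of the compilation step; the section order is an assumption:

```text
Concatenate the drafted sections in /drafts/ into a single manuscript
file in this order: introduction, methods (including the AI-use
disclosure), findings by theme, discussion, limitations. Leave citation
keys untouched so Zettlr and Better BibTeX can resolve them later.
```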
Section 9: Ethical Guidelines & Academic Integrity
9.1 Transparency Requirements
Document in your methods section:
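Typical items are the tool and model used, the tasks it assisted with, and how outputs were verified. A hedged example of such a disclosure, to be adapted to your journal’s and institution’s AI policies:

```text
Abstract screening and first-pass data extraction were assisted by an
LLM-based coding assistant (Claude Code). All prompts were logged, all
AI-assisted screening decisions and extracted data were verified by the
authors against the original papers, and all inclusion decisions,
thematic framing, and interpretations are the authors' own.
```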
9.2 What to Automate vs Control
Automate (Tier 1):
- Initial abstract screening (with human verification)
- Basic data extraction
- Bibliography formatting
- Keyword counting
Collaborate (Tier 2):
- Thematic coding
- Methodology comparison
- Drafting synthesis sections
Keep under human control (Tier 3):
- Final inclusion decisions (borderline cases)
- Theoretical framing
- Novel gap identification
- Argument construction
- Quality assessment of contributions
9.3 Verification Protocol
For every AI-assisted task:
- Document the prompt used
- Review outputs for hallucinations
- Verify citations against Zotero library
- Check logical consistency
- Maintain audit trail
Section 10: Troubleshooting SLR Workflows
Screening Inconsistencies
Problem: Different screening decisions for similar abstracts.
Solution:
- Refine inclusion criteria in CLAUDE.md
- Add edge case examples
- Create a /screen_uncertain command for borderline cases
- Maintain a manual review log
Extraction Errors
Problem: Missing or incorrect data in the extraction table.
Solution:
- Verify against the original papers
- Update the /extract template with more specific instructions
- Add validation checks: “Review the extraction for [paper] against the original”
Citation Hallucinations
Problem: AI invents citations that don’t exist.
Solution:
- Always verify against your Zotero library
- Use the Zotero MCP for citation checks
- See: Failure Museum - Citation Confusion
Section 11: Example: Complete Screening Workflow
Step 1: Prepare the abstracts file.
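A sketch of what the abstracts file might look like before screening (IDs and fields are illustrative); once prepared, it can be passed to the /screen command:

```text
# abstracts_batch_01.md
## ID 001
Title: ...
Source: database export
Abstract: ...

## ID 002
Title: ...
Abstract: ...
```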
Section 12: Integration with Research Memex Tools
12.1 Zotero → Claude Code Pipeline
Workflow:
- Collect papers in Zotero (see Zotero Guide)
- Export to markdown via OCR (see OCR Guide)
- Move the files to /00_literature_files/
- Process with Claude Code
12.2 Claude Code → Obsidian Pipeline
Export synthesis for the knowledge base:
- Complete analysis in Claude Code
- Export synthesis notes to markdown
- Import to Obsidian vault
- Link with literature notes
12.3 Claude Code → Zettlr Pipeline
Final manuscript preparation:
- Draft sections in Claude Code
- Export to the /drafts/ folder
- Open in Zettlr for citation formatting
- Use @citekeys from Better BibTeX
- Export to Word/PDF for submission
Section 13: Case Study: Scaling Literature SLR
Example prompts used in the course: Initial setup:
Checklist: SLR with Claude Code
By the end of this workflow, you should have:
- Created SLR project structure (5 folders)
- Written CLAUDE.md with SLR protocol
- Created /screen, /extract, /critique slash commands
- Screened abstracts with quality verification
- Extracted data into master table
- Conducted thematic synthesis
- Generated PRISMA flowchart
- Drafted literature review sections
- Verified all AI outputs against original papers
- Documented AI use for methods section
- Maintained audit trail of prompts
Resources
SLR Methodology
Research Memex Integration
- Session 4: Agentic Workflows - Conceptual foundation
- Claude Code Setup - General installation
- Failure Museum - Common AI errors
- Cognitive Blueprints - Prompt templates