In the previous session, we built an AI-powered toolkit. Now, we put that toolkit to the test: synthesis. Can AI replicate the deep, nuanced, and creative work of an expert human researcher in creating new theory from a body of literature? This guide outlines a hands-on “Replication Experiment” designed to explore this question. The goal isn’t to find a “winner,” but to learn from the gap between human and machine. By critically analyzing where AI succeeds and where it falls short, we develop the judgment needed for effective interpretive orchestration.

Learning Outcomes

By the end of this session, you’ll be able to:
  • Analyze the cognitive processes and judgments that expert researchers use to create novel theoretical syntheses from literature
  • Apply advanced, multi-step prompting techniques to guide AI through complex, end-to-end synthesis tasks
  • Critically compare AI-driven synthesis with human-authored work, identifying strengths and weaknesses of each approach
  • Recognize which synthesis tasks work well with AI assistance (e.g., pattern identification, thematic clustering) and which benefit most from direct human engagement (e.g., theoretical innovation, critical judgment)
  • Adapt your research workflow based on practical understanding of AI’s current capabilities and limitations

The Replication Experiment Framework

PARALLEL COMPARISON: HUMAN vs AI SYNTHESIS

📚 SAME LITERATURE SET (Shared Input)
    |
    +-- LEFT PATH: 👤 EXPERT HUMAN        +-- RIGHT PATH: 🤖 AI REPLICATION
    |                                     |
    +-- 📖 Deep Reading                   +-- 💾 Batch Processing
    |   (Close engagement)                |   (All papers loaded)
    |                                     |
    +-- 🎨 Creative Pattern Recognition   +-- 🔍 Pattern Detection
    |   (Intuitive connections)           |   (Statistical clustering)
    |                                     |
    +-- 🧠 Theoretical Innovation         +-- 📊 Framework Generation
    |   (Novel frameworks)                |   (Template application)
    |                                     |
    +-- ⚖️  Critical Judgment             +-- ✅ Consistency Check
    |   (Nuanced evaluation)              |   (Logic validation)
    |                                     |
    v                                     v
    ✍️  Expert Synthesis                  📝 AI Synthesis
    (Published paper)                     (Generated output)
    |                                     |
    +-------------------------------------+
                    |
                    v
            🔬 CRITICAL COMPARISON
                (Gap Analysis)
                    |
                    v
            💡 LEARNING INSIGHTS

                💪 Human Strengths:
                   - Theoretical creativity
                   - Critical judgment
                   - Contextual nuance

                🎯 AI Strengths:
                   - Comprehensive coverage
                   - Pattern detection
                   - Rapid processing

                🎓 Delegation Strategy:
                   - What to automate
                   - What to control
Using ASCII Diagrams: Copy this comparison framework into your AI chats when planning your replication experiment! It helps structure your prompts for systematic analysis.
Key Insight: The gap between human and AI synthesis is not a failure; it is a learning opportunity. Understanding what AI misses reveals what makes human expertise irreplaceable and where strategic delegation adds the most value.

The Human Blueprints: Case Study Papers

The foundation for the Replication Experiment is the analysis of expert human work. The two papers below serve as blueprints for two distinct and powerful approaches to research synthesis. The goal is to dissect each paper's methodology and then design an AI workflow that attempts to replicate it.

Generativity: A systematic review and conceptual framework

A Blueprint for Inductive Synthesis: This paper is a prime example of a generative systematic review. It doesn’t just summarize a field; it uses an inductive, grounded theory approach to analyze the existing literature and construct a novel conceptual framework from it.

Entrepreneurial Resilience

A Blueprint for Deductive Synthesis: This paper provides a model for a critical, conceptual review. It focuses on engaging with, critiquing, and extending existing theories within a specific domain. The approach is more deductive, using the literature to refine and challenge established conceptual boundaries.

The Replication Experiment: Session Structure

Part A: Analyze the Human Blueprint

  • Deconstruct the Methodology: Carefully read the methodology sections of the case study papers. What were the exact cognitive steps the authors took?
  • Identify Key Decisions: Where did the authors make crucial judgments about inclusion, exclusion, theme naming, or theoretical connections?

Part B: Design the AI Protocol

  • Translate Steps into Prompts: Convert the cognitive steps you identified into a multi-step “cognitive blueprint” for an AI to follow (see the sketch after this list).
  • Use a Large Context Model: Use an AI model with a large context window (like Gemini 2.5 Pro or Claude 3.5 Sonnet) to process the entire collection of papers from the case study.
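
One way to operationalize the cognitive blueprint is as an ordered list of prompts sent to a large-context model, where each step's output feeds the next. The sketch below is a minimal illustration in Python, not a definitive protocol: `call_model` is a hypothetical placeholder you would wire to your own provider's API, and the step wording is only an example of how the authors' cognitive steps might be translated into prompts.

```python
# Minimal sketch of a multi-step "cognitive blueprint" for AI-assisted synthesis.
# `call_model` is a hypothetical stub: replace it with a real call to a
# large-context model via your provider's SDK. The step texts below are
# illustrative, not the actual steps from the case study papers.

from pathlib import Path


def call_model(prompt: str) -> str:
    """Send one prompt to a large-context model and return its reply (stub)."""
    raise NotImplementedError("Wire this to your model provider of choice.")


def load_corpus(folder: str) -> str:
    """Concatenate the full text of every paper in the literature set."""
    texts = [p.read_text(encoding="utf-8") for p in sorted(Path(folder).glob("*.txt"))]
    return "\n\n---\n\n".join(texts)


# Cognitive steps identified in Part A, rewritten as prompts (illustrative only).
BLUEPRINT = [
    "Step 1 - Familiarisation: summarise the aims, methods, and findings of each paper.",
    "Step 2 - Open coding: extract recurring concepts and label them with short codes.",
    "Step 3 - Thematic clustering: group the codes into candidate themes and name each theme.",
    "Step 4 - Framework construction: propose a conceptual framework linking the themes.",
    "Step 5 - Critical check: identify papers that contradict or complicate the framework.",
]


def run_blueprint(corpus: str) -> list[str]:
    """Run the blueprint sequentially, feeding each step's output into the next."""
    outputs, context = [], corpus
    for step in BLUEPRINT:
        reply = call_model(f"{step}\n\nMaterial to work from:\n{context}")
        outputs.append(reply)
        context = reply  # later steps build on earlier outputs, not the raw corpus
    return outputs


if __name__ == "__main__":
    results = run_blueprint(load_corpus("papers/"))
    for step, result in zip(BLUEPRINT, results):
        print(step, "\n", result[:500], "\n")
```

Keeping each step's output separate (rather than asking for the synthesis in one shot) makes the Part C comparison easier: you can see exactly where the AI's codes and themes begin to diverge from the human authors'.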

Part C: Compare and Reflect

  • Analyze the Gap: Compare the AI’s output with the published human synthesis.
  • Identify Strengths and Weaknesses: What did the AI capture well? What crucial nuances or creative leaps did it miss?
  • Document Failures: Use the Failure Museum template to document the specific ways the AI fell short (one possible entry structure is sketched after this list).
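
If you want your gap analysis to stay consistent across experiments, it can help to record each failure in a small, machine-readable structure alongside the Failure Museum template. The sketch below is one possible shape; the field names and the example entry are assumptions for illustration, not the official template, so adapt them to whatever the template actually asks for.

```python
# One possible shape for a Failure Museum entry (field names are assumptions,
# not the official template). Useful for keeping gap-analysis notes consistent.

from dataclasses import dataclass, asdict
import json


@dataclass
class FailureEntry:
    step: str          # which blueprint step the failure occurred in
    expected: str      # what the human synthesis did at this point
    observed: str      # what the AI actually produced
    failure_type: str  # e.g. "missed nuance", "template-like framework"
    severity: str      # e.g. "minor", "substantive", "invalidating"
    lesson: str        # what this suggests about your delegation strategy


# Illustrative example entry, not a real experimental result.
entry = FailureEntry(
    step="Thematic clustering",
    expected="Themes named around underlying mechanisms identified by the authors",
    observed="Themes named after surface-level keywords shared across abstracts",
    failure_type="missed nuance",
    severity="substantive",
    lesson="Keep theme naming and interpretation under direct human control",
)

print(json.dumps(asdict(entry), indent=2))
```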
This experiment provides a visceral, hands-on understanding of where human expertise remains irreplaceable and how to best position AI as a powerful assistant rather than a replacement.

Recommended Exercises

  1. Document Your Failures: After running your own replication experiment, use the Failure Museum template to document 3-5 failures you observed.
  2. Prepare Project Documentation: Outline your research question, the key papers you’re working with, and the progress of your own synthesis.
  3. Reflection: What were the clearest gaps you observed between the AI’s synthesis and the human expert’s? Add these reflections to your failure documentation.
Note: The next session on Agentic Workflows will use these documented failures to build better, more robust AI systems!

