Case Study Overview

This page outlines the structure of a course designed to teach the Research Memex approach by applying it to a common academic task: the systematic literature review. This case study demonstrates one way to strategically integrate AI into the research workflow, from literature discovery through to final synthesis, while maintaining high standards of academic rigor. The materials are based on a module originally developed for MRes students. We’re sharing this approach to show how researchers can develop the complete pipeline, learning when and how to work with AI as a cognitive partner. Your implementation might look different based on your field and research questions.

Learning Schedule & Key Topics

This case study is structured around four key sessions, each building on the last:
Session | Topic | Core Competency
--------|-------|----------------
1 | Foundations of Systematic Reviews | Understanding the “Why”
2 | Building the Human-AI Research Pipeline | Interpretive Orchestration
3 | Human vs. AI Synthesis: Learning from Practice | Critical Evaluation
4 | Advanced Agentic Workflows | Research Architecture

Getting Started with the Case Study

To get the most out of this case study, we suggest following the Quick Start Checklist first. Each session also has its own detailed guide with associated readings and exercises. Feel free to adapt the pace and focus to match your learning goals.

Learning Assessment

The learning process is assessed through two main components:

Learning Through Practice (50%)
  • Session 2 Exercise: Master prompt development and cognitive scaffolding using a sample literature set.
  • Session 3 Exercise: Develop critical evaluation skills by documenting AI failure modes and the limitations of automated synthesis.
  • In-class work: Build presentation and peer feedback abilities.
Capstone Learning Project (50%)
Participants choose a final project that best serves their research goals:
  • Option A: Validation Skills - Critically compare AI vs. human synthesis approaches
  • Option B: Workflow Design - Develop reproducible, AI-enhanced research pipelines
  • Option C: Quality Control - Build expertise in identifying and preventing common AI failure modes
Each option develops different competencies for AI-enhanced research. Choose what matters most for your work.

Tools & Budget

  • Essential Tools: The workflows in this case study use Research Rabbit (free), Zotero (free), and Cherry Studio (an open-source tool for multi-model AI interaction). See the API Keys Setup Guide for more.
  • API Budget: For course participants, a budget is typically provided for API access. Independent learners can leverage free tiers from providers like Google AI Studio.
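Whichever provider you choose, it is good practice to keep API keys out of scripts and notebooks. A minimal sketch of the usual pattern, assuming a hypothetical variable name (`GEMINI_API_KEY`) for a key obtained from Google AI Studio; most multi-model tools, including Cherry Studio, can instead read keys pasted into their settings:

```shell
# Export the key as an environment variable (e.g. in ~/.bashrc or ~/.zshrc)
# rather than hard-coding it anywhere it might be shared or committed.
export GEMINI_API_KEY="paste-your-key-here"

# Sanity check: confirm child processes can see the key without printing it.
python3 -c 'import os; print("key set:", bool(os.environ.get("GEMINI_API_KEY")))'
```

Consult your tool's own setup guide for the exact variable name it expects.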

Support

  • Technical: See the setup guides for each tool.
  • Content: The principles of systematic reviews are widely documented in academic literature.

Next Steps: Resources