OpenInterviewer is an open-source platform for conducting qualitative research interviews at scale. Instead of scheduling dozens of hour-long interviews, you design your study and share a link — participants engage with an AI interviewer that adapts based on their responses.
Created by: Xule Lin
GitHub: linxule/openinterviewer
AI Providers: Gemini (default) or Claude — with per-study model selection
Deploy: One-click Vercel deployment
This is not a replacement for human interviews. AI interviews generate different data than human-conducted interviews. They’re a complementary method — useful for exploratory research, pilot studies, scaling data collection, and reaching participants across time zones. The platform is best understood as extending your research reach, not substituting your interpretive presence.

When to Use OpenInterviewer

  • Pilot studies — Test interview protocols before committing to full human-conducted studies
  • Scale — Collect interview data from 50+ participants without scheduling constraints
  • Exploratory research — Rapidly gather perspectives on emerging topics
  • Cross-timezone studies — Participants engage on their own schedule
  • Complementary data — Pair with human interviews for methodological triangulation

How It Works

For Researchers

  1. Create a study — Define research questions, participant profiles, and interview mode
  2. Configure the interviewer — Choose structured, standard, or exploratory mode; select your AI model
  3. Share the link — Participants access via a simple URL (with optional expiration)
  4. Monitor and analyze — Real-time synthesis of themes, contradictions, and patterns
  5. Generate follow-ups — Create new studies based on synthesis findings to dig deeper

For Participants

  1. Open the link — No account or app required
  2. Consent — Standard consent flow
  3. Conversation — Natural dialogue with an AI interviewer that adapts to responses
  4. Demographics — Collected conversationally, not as a separate form
New to the platform? After deploying, click the “Load Demo” button on the My Studies page to explore with pre-built sample data — three complete interviews with full synthesis, so you can see the entire workflow before creating your own study.

Key Features

Interview Modes

Mode        | Best For              | AI Behavior
Structured  | Confirmatory research | Follows predefined questions closely
Standard    | Balanced exploration  | Follows guide with adaptive follow-ups
Exploratory | Discovery research    | Free-flowing conversation guided by participant responses
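The three modes differ mainly in how tightly the interviewer follows the guide. A hypothetical sketch of that mapping (the type and function names are illustrative, not the repo's actual API):

```typescript
// Illustrative only: how an interview mode might translate into a
// system-prompt directive for the AI interviewer.
type InterviewMode = "structured" | "standard" | "exploratory";

function modeDirective(mode: InterviewMode): string {
  switch (mode) {
    case "structured":
      // Confirmatory research: stick to the predefined question list.
      return "Ask the predefined questions in order; do not improvise follow-ups.";
    case "standard":
      // Balanced: follow the guide, but probe interesting answers.
      return "Follow the interview guide, adding adaptive follow-up questions.";
    case "exploratory":
      // Discovery: let participant responses steer the conversation.
      return "Hold a free-flowing conversation guided by the participant's responses.";
  }
}
```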

Model Selection

Each study can use a different AI model, selected from a dropdown in the study setup page. This lets you balance cost, speed, and quality per study.

Gemini Models

  • gemini-2.5-flash — Fast, cost-effective (default)
  • gemini-2.5-pro — Higher quality responses
  • gemini-3-pro-preview — Most intelligent (may require allowlisting)

Claude Models

  • claude-haiku-4-5 — Fastest (1/1/5 per MTok)
  • claude-sonnet-4-5 — Balanced (default, 3/3/15 per MTok)
  • claude-opus-4-5 — Most capable (15/15/75 per MTok)
Model priority: per-study UI selection takes precedence over environment variable defaults.
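That precedence order can be sketched in a few lines. This is a simplified illustration of the stated rule, not the actual resolution code; the names are assumptions:

```typescript
// Sketch: per-study UI selection > environment-variable override > built-in default.
interface StudyConfig {
  model?: string; // set from the dropdown on the study setup page
}

function resolveModel(
  study: StudyConfig,
  env: Record<string, string | undefined>
): string {
  return (
    study.model ??          // 1. per-study selection wins
    env.GEMINI_MODEL ??     // 2. environment-variable default
    "gemini-2.5-flash"      // 3. platform default
  );
}
```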

AI Reasoning Mode

For analytical operations like synthesis, the platform can auto-upgrade to premium models with extended thinking enabled — producing richer thematic analysis without slowing down the interview itself.
Operation                  | Reasoning | Model Used
Interview responses        | OFF       | Your selected model
Greeting generation        | OFF       | Your selected model
Per-interview synthesis    | ON        | Auto-upgraded (Gemini 3 Pro / Claude Opus)
Aggregate synthesis        | ON        | Auto-upgraded
Follow-up study generation | ON        | Auto-upgraded
Each study can override this behavior: Automatic (recommended default), Always enabled (slower interviews but deeper responses), or Always disabled (faster, uses your selected model throughout). Keep in mind that auto-upgraded synthesis uses premium models — monitor costs if running many interviews.
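The override logic described above reduces to a small decision function. A hedged sketch, with illustrative names (the actual implementation may differ):

```typescript
// Sketch: whether extended reasoning is enabled for a given operation,
// given the study's reasoning setting.
type ReasoningSetting = "automatic" | "always" | "never";
type Operation = "interview" | "greeting" | "synthesis" | "followup";

function useReasoning(setting: ReasoningSetting, op: Operation): boolean {
  if (setting === "always") return true;   // deeper responses, slower interviews
  if (setting === "never") return false;   // faster, uses your selected model throughout
  // "automatic": reasoning only for analytical operations, never mid-interview
  return op === "synthesis" || op === "followup";
}
```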

Built-in Analysis

  • Per-interview synthesis — Automatic extraction of stated vs revealed preferences, themes, and contradictions
  • Cross-interview analysis — Pattern identification across all participants
  • Aggregate reporting — Themes, outliers, and convergence points
  • Follow-up studies — Generate new research questions from synthesis findings to iteratively deepen your inquiry
When generating participant links, you can set expiration windows (7 days, 30 days, 90 days, or never) and toggle link access on or off from the study detail page. This is useful for closing data collection on a schedule, pausing a study, or revoking links if they’ve been shared beyond your intended sample.
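A link is therefore live only when both controls allow it: the toggle is on and the expiration window (if any) has not passed. A minimal sketch, assuming hypothetical field names:

```typescript
// Illustrative check combining the manual toggle and the expiration window.
interface ParticipantLink {
  createdAt: number;               // epoch milliseconds
  expiresAfterDays: number | null; // null = never expires
  enabled: boolean;                // toggled from the study detail page
}

function linkIsActive(link: ParticipantLink, now: number): boolean {
  if (!link.enabled) return false; // manually paused or revoked
  if (link.expiresAfterDays === null) return true;
  const expiry = link.createdAt + link.expiresAfterDays * 24 * 60 * 60 * 1000;
  return now < expiry;
}
```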

Security

  • API keys stay server-side, never exposed to participants
  • Researcher dashboard is password-protected
  • Participant tokens are JWT-signed
  • Data stored in encrypted Vercel KV (Redis)
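To illustrate why server-side signing matters: the secret never reaches the participant, who only ever sees an opaque signed token. A minimal HMAC sketch in that spirit (a real deployment would use a proper JWT library; this is not the repo's code):

```typescript
import { createHmac } from "node:crypto";

// Sign a participant payload server-side; the secret stays on the server.
function signToken(payload: object, secret: string): string {
  const body = Buffer.from(JSON.stringify(payload)).toString("base64url");
  const sig = createHmac("sha256", secret).update(body).digest("base64url");
  return `${body}.${sig}`;
}

// Verify a token by recomputing the signature; reject on mismatch.
function verifyToken(token: string, secret: string): object | null {
  const [body, sig] = token.split(".");
  const expected = createHmac("sha256", secret).update(body).digest("base64url");
  if (sig !== expected) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}
```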

Architecture

Built on Next.js with a clean separation between researcher and participant flows:
Researcher Dashboard (password-protected)
  ├── Study creation and management
  ├── Per-study model and reasoning configuration
  ├── Real-time monitoring
  ├── Cross-interview analysis
  └── Follow-up study generation

Participant Interface (public link)
  ├── Consent flow
  ├── AI-conducted interview
  └── Demographic collection

Backend
  ├── AI provider abstraction (Gemini / Claude)
  ├── Vercel KV for data persistence
  └── JWT-based participant authentication
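The provider abstraction in the diagram can be pictured as one interface with two implementations, selected by configuration. A sketch under assumed names (the repo's actual interface will differ):

```typescript
// Illustrative provider abstraction: one interface, two backends.
interface AIProvider {
  name: string;
  complete(prompt: string, model: string): Promise<string>;
}

const geminiProvider: AIProvider = {
  name: "gemini",
  async complete(prompt, model) {
    // A real implementation would call the Gemini API here.
    return `[gemini:${model}] stubbed response`;
  },
};

const claudeProvider: AIProvider = {
  name: "claude",
  async complete(prompt, model) {
    // A real implementation would call the Anthropic API here.
    return `[claude:${model}] stubbed response`;
  },
};

// Pick a backend from configuration (e.g. the AI_PROVIDER variable).
function getProvider(configured: string | undefined): AIProvider {
  return configured === "claude" ? claudeProvider : geminiProvider;
}
```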

Deployment

The fastest path is one-click Vercel deployment: click the Deploy with Vercel button, set two environment variables, and you're live in about two minutes.

For local development:
git clone https://github.com/linxule/openinterviewer.git
cd openinterviewer

bun install
cp .env.example .env.local
# Edit .env.local with your API keys
bun run dev

Environment Variables

Variable          | Required | Description
GEMINI_API_KEY    | Yes      | Google Gemini API key (get one free)
ADMIN_PASSWORD    | Yes      | Password to protect the researcher dashboard
ANTHROPIC_API_KEY | No       | Use Claude instead of/alongside Gemini
AI_PROVIDER       | No       | gemini (default) or claude
GEMINI_MODEL      | No       | Override default Gemini model (gemini-2.5-flash)
CLAUDE_MODEL      | No       | Override default Claude model (claude-sonnet-4-5)
Vercel KV credentials (KV_REST_API_URL, KV_REST_API_TOKEN, etc.) are configured automatically when you connect an Upstash Redis store through the Vercel dashboard.
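Putting the required and optional variables together, a `.env.local` might look like this (values are placeholders, not real keys):

```shell
# Required
GEMINI_API_KEY=your-gemini-api-key
ADMIN_PASSWORD=choose-a-strong-password

# Optional
# ANTHROPIC_API_KEY=your-anthropic-api-key
# AI_PROVIDER=gemini
# GEMINI_MODEL=gemini-2.5-flash
# CLAUDE_MODEL=claude-sonnet-4-5
```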

Methodological Considerations

AI-conducted interviews are a young method. Consider these when designing your study:
  • Disclosure — Participants should know they’re talking to AI
  • Data quality — AI interviews tend to be shorter and more structured than human ones
  • Depth vs breadth — AI excels at consistent coverage; humans excel at unexpected depth
  • IRB/Ethics — Check your institution’s requirements for AI-mediated data collection
  • Complementarity — Strongest when paired with human interviews, not replacing them