n8n Interview Prep Workflow

A research-stage n8n workflow that assembles company context and produces targeted interview-prep drills with manual source review.

What it is: A research workflow for turning scattered role and company context into reviewable interview-prep drills.

What I built: An exploration of the workflow structure, review checkpoints, and context-gathering logic for a narrower interview-prep system.

Current state: Research-stage work; the concept and architecture are further along than the implementation.

Why it matters: It collects role context from notes and selected public references into a single prep flow.

Category: Research / Experiment

Status: Research

Visibility: Public

This is a research-stage portfolio entry, not a claim of fully shipped production implementation.

What this project is

A workflow concept for gathering company context and turning it into targeted interview drills instead of scattered prep notes.

Why I explored it

Interview prep is often a messy mix of notes, role pages, company research, and last-minute prompt writing. I wanted a narrower system that could collect the useful context and turn it into practice material without pretending the workflow should run fully unattended.

Constraints

  • Human oversight needs to stay in the loop for high-stakes or ambiguous interpretation.
  • The workflow should not hide where the underlying context came from.
  • Output quality depends on keeping references reviewable and scoping the drills to the role.

Architecture direction

  • Structured intake of role notes and selected references.
  • n8n orchestration for context assembly and drill generation.
  • Manual review gate before anything becomes final prep material.
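The review-gate idea above can be sketched as a small piece of workflow logic. This is a minimal illustration, not the actual n8n implementation; the function names (`assembleContext`, `reviewGate`) and the drill/reference field shapes are assumptions made for the sketch.

```javascript
// Assemble role notes and selected references into one context object.
// Each reference gets an id so later steps can point back at it.
function assembleContext(roleNotes, references) {
  return {
    notes: roleNotes,
    references: references.map((ref, i) => ({ id: `ref-${i + 1}`, ...ref })),
  };
}

// Manual review gate: nothing becomes final prep material until a human
// approves it explicitly, so unreviewed drafts stay in "pending".
function reviewGate(draftDrills, approvedIds) {
  return {
    approved: draftDrills.filter((d) => approvedIds.includes(d.id)),
    pending: draftDrills.filter((d) => !approvedIds.includes(d.id)),
  };
}
```

In an n8n workflow this kind of logic would live in a Code node sitting between drill generation and final output, with the approval list supplied by the human reviewer rather than computed automatically.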

Current state

This is still research work. The useful output so far is the workflow framing and review model, not a finished production tool.

Why it matters

The point is not to automate thinking away. It is to compress repetitive prep overhead into a cleaner drill-building workflow while keeping the reasoning chain inspectable.

Key decisions

  • Keep source review manual so generated prompts stay anchored to real context.
  • Model confidence labels explicitly instead of pretending all prompts are equally strong.
  • Use orchestration only where it reduces prep friction without hiding important reasoning steps.
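The explicit confidence labels from the second decision could look something like this. The thresholds, label names, and the `sourceRefs` field are all illustrative assumptions; the point is only that each drill carries a label instead of implying uniform strength.

```javascript
// Label a drill's confidence from how many reviewed sources back it.
// Thresholds and label names are assumptions for this sketch.
function labelConfidence(drill) {
  const n = drill.sourceRefs.length;
  const confidence = n >= 3 ? "strong" : n >= 1 ? "tentative" : "unsourced";
  return { ...drill, confidence };
}
```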

What I'd improve next

The next improvement would be a lightweight source-trace view so each drill can be tied back to the notes or references that produced it.
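A source-trace view of that kind could be as simple as a lookup from each drill back to its originating references. All field names here (`prompt`, `sourceRefs`, `id`) are hypothetical; this sketches the data shape, not a finished feature.

```javascript
// Minimal source-trace sketch: each drill keeps the ids of the notes or
// references that produced it, so a trace can be rebuilt on demand.
function buildSourceTrace(drills, references) {
  const byId = new Map(references.map((r) => [r.id, r]));
  return drills.map((d) => ({
    drill: d.prompt,
    // Unresolvable ids are flagged rather than silently dropped,
    // so gaps in the trace stay visible to the reviewer.
    sources: d.sourceRefs.map((id) => byId.get(id) ?? { id, missing: true }),
  }));
}
```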
