User Research Pack
End-to-end user research workflow covering planning, recruitment, data collection, analysis, and insight synthesis. Ideal for UX researchers.
We built the User Research Pack because we're tired of watching engineering teams waste weeks on ad-hoc studies that produce nothing but noise. When you prompt an agent to "run a user study," you get hallucinated recruitment criteria, missing consent forms, and interview guides that drift into leading questions. You don't need another template library; you need an executable workflow that enforces structure from planning to synthesis.
Install this skill
npx quanta-skills install user-research-pack
Requires a Pro subscription. See pricing.
We've audited dozens of research workflows where the "plan" was a single prompt. The result? Agents drift. An interview guide starts with "What do you think of our product?" instead of probing for underlying needs. A research plan lacks a validation schema, so you launch a study with undefined objectives. You end up with data you can't trust. If you're already using a UX Research Pack for personas and journey maps, you know the value of structure—but that pack doesn't cover the full lifecycle of planning, recruitment, and synthesis. You need a dedicated workflow for the research execution itself.
Why Prompt-Only Research Fails
The core problem is that user research requires discipline that raw prompting can't enforce. We've seen agents hallucinate "recruitment criteria" as "everyone with a laptop." We've seen interview guides skip the consent script, exposing your team to legal risk. We've seen synthesis matrices that are just pasted quote dumps, with no linkage between themes and supporting evidence.
When you treat research as a coding task, you miss the nuance. A 2023 NN/g article on user interviews emphasizes that interviews help you learn who your users are, what they experience, and what they value—but only if the questions are designed to uncover that depth [5]. Without a structured guide, you get surface-level answers. You miss the "why." You miss the edge cases. You miss the contradictions that actually drive product decisions.
We built the User Research Pack to fix this. It's not a collection of pretty templates. It's a pipeline. It includes a validator that blocks you from launching a study with missing objectives. It includes a method mapping that forces you to choose the right tool for the job. It includes a synthesis matrix that requires evidence for every theme. You get structure, validation, and reproducibility.
The Hidden Costs of Unstructured Studies
Ignore this at your peril. Every hour you spend writing a research plan from scratch is an hour you're not shipping. We're talking 6–10 hours per study just on scaffolding templates. Then there's the risk. A missing consent form isn't just a compliance gap; it's a liability. You're also burning budget on bad recruitment. Without a structured screener, you attract the wrong participants.
A 2023 NN/g article on participant databases discusses considerations for creating a research panel, along with tips and lessons learned [4]. Without a structured panel strategy, you rely on snowball sampling, which introduces bias. You miss diverse users. You build features for your "power users" that alienate the mass market. The cost? Lost revenue, support tickets, and customer churn.
And when you try to analyze the data, you're left with a "top 5" list that ignores contradictory evidence. A 2019 NN/g article on interpreting research findings notes that the ideal way to conduct UX research is to use multiple methodologies, mixing both quantitative and qualitative research [2]. Without a workflow, teams default to the easiest method, skewing results. You lose the triangulation that gives you confidence.
If you're also looking at survey design capabilities, you'll see how ad-hoc surveys compound these errors. Without a structured survey lifecycle, you get biased sampling and poorly phrased questions. You end up with quantitative data that reinforces your biases rather than challenging them. The hidden cost isn't just time; it's decision quality. Every flawed study is a technical debt item waiting to break your product roadmap.
A Fintech Team's Diary Study Disaster
Picture a team building a recurring payment dashboard. They need to understand how users manage subscriptions. They decide on a diary study. Without a pack, the researcher drafts a vague prompt: "Have users log their payments." The agent generates entries that are just text fields. Users get fatigue. Responses drop. The data is unusable.
A 2025 NN/g article on diary study entries warns that you must balance closed-ended, open-ended, and multimedia questions to get high-quality responses [3]. The team wasted three weeks waiting for entries that turned out to be garbage. They had to redo the study. The cost was $15k in dev time and lost momentum.
Contrast this with a team that uses a method mapping reference. They consult a canonical mapping of 20 UX methods and realize a diary study isn't the best fit for their early-stage hypothesis. They pivot to a usability test. They save time. They get actionable insights.
Or consider a team building an internal tool. They need to assess productivity. The CASTLE framework offers a complementary assessment for internal product teams [7]. Without a method mapping, the team picks the wrong method and wastes resources. They run a workshop without grounding it in discovery work. A 2020 NN/g article lists 7 foundational activities that underpin every UX workshop exercise [6]. Without these, the workshop becomes a solutioning session, not a discovery session. The team leaves with a list of features, not user needs.
The story isn't about the tool; it's about the discipline of choosing the right method and executing it with structure. A 2017 NN/g cheat sheet helps you choose appropriate UX methods and activities for your projects [1]. Another 2017 article compares UX mapping methods, helping you understand similarities and differences among empathy maps, customer journey maps, experience maps, and service blueprints [8]. These references are useless if they're buried in a wiki. They need to be embedded in your workflow.
What Changes When Your Workflow Is Locked
Once the User Research Pack is installed, your process becomes deterministic. You run scripts/scaffold.sh and get a directory structure: planning/, recruitment/, collection/, analysis/, reports/. The research-plan.yaml is validated against validators/schema.json before you even start. You can't submit a plan with missing objectives or undefined recruitment criteria. The validate_plan.py script exits non-zero on validation failure, ensuring plan completeness.
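To make that gate concrete, here is a minimal sketch of what a schema-backed plan validator can look like, assuming PyYAML and the jsonschema package are available; the paths, error format, and function name are illustrative, not the pack's actual validate_plan.py.

```python
#!/usr/bin/env python3
# Minimal sketch of a schema-backed plan validator (assumes PyYAML and jsonschema).
# Paths and messages are illustrative; the pack's validate_plan.py may differ.
import json
import sys

import yaml
from jsonschema import Draft7Validator


def validate(plan_path: str, schema_path: str = "validators/schema.json") -> int:
    with open(schema_path) as f:
        schema = json.load(f)
    with open(plan_path) as f:
        plan = yaml.safe_load(f)

    # Report every violation at once so the whole plan can be fixed in one pass.
    errors = list(Draft7Validator(schema).iter_errors(plan))
    for err in errors:
        location = ".".join(str(p) for p in err.path) or "<root>"
        print(f"INVALID {location}: {err.message}", file=sys.stderr)

    return 1 if errors else 0  # non-zero exit signals an incomplete plan


if __name__ == "__main__":
    sys.exit(validate(sys.argv[1]))  # usage: validate_plan.py <plan.yaml>
```

Invoked in CI as something like python validators/validate_plan.py planning/research-plan.yaml, the non-zero exit code is what lets a pipeline step block the study launch before any recruitment budget is spent.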
The interview-guide.md template forces you to include consent, warm-up, core questions, and probes. The method-mapping.md reference keeps you aligned with NN/g standards. You get examples/filled-plan.yaml as a reference for best practices. You get examples/synthesis-matrix.md to link themes to evidence, not just quotes.
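As a hypothetical illustration of that evidence rule (the dataclass fields and the build_matrix helper are invented names, not the pack's API), the check boils down to refusing any theme that has no traceable quote behind it:

```python
# Illustrative sketch of the theme-to-evidence rule; not the pack's code.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Evidence:
    theme: str
    quote: str        # verbatim participant quote
    participant: str  # anonymized ID, e.g. "P03"
    session: str      # interview or diary entry the quote came from


def build_matrix(themes: list[str], evidence: list[Evidence]) -> dict[str, list[Evidence]]:
    """Group evidence under each theme and fail loudly on unsupported themes."""
    matrix: dict[str, list[Evidence]] = defaultdict(list)
    for item in evidence:
        matrix[item.theme].append(item)
    unsupported = [t for t in themes if not matrix.get(t)]
    if unsupported:
        raise ValueError(f"Themes without evidence: {unsupported}")
    return dict(matrix)
```

In the matrix document itself, the same idea becomes one row per evidence item, with columns such as theme, quote, participant, and session, so every reported finding traces back to a transcript.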
You can pair this workflow with market research analysis to contextualize user insights within TAM and SAM frameworks. You get a holistic view of the problem space. The result? Studies that are reproducible, compliant, and actionable. You stop guessing and start shipping insights.
Errors are caught early. Recruitment is structured. Synthesis is reproducible. You can block a study launch in CI/CD if the plan fails validation. You can ensure every interview has a consent script. You can force your team to use evidence-based themes. This is what happens when you lock your workflow.
What's in the User Research Pack
- `skill.md`: Orchestrator skill defining the User Research workflow, referencing all templates, validators, references, and scripts. Guides the agent through the planning, recruitment, collection, analysis, and synthesis phases.
- `references/method-mapping.md`: Canonical knowledge base embedding the NN/g method mapping. Details 20 UX methods across dimensions (Attitudinal/Behavioral, Qualitative/Quantitative, Generative/Evaluative) and product lifecycle stages.
- `templates/research-plan.yaml`: Production-grade YAML template for structuring a research plan. Includes metadata, objectives, methodology selection, recruitment criteria, schedule, and deliverables.
- `templates/interview-guide.md`: Structured template for conducting user interviews. Includes consent script, warm-up, core questions, probes, and wrap-up sections with timing guidance.
- `templates/consent-form.md`: Legal and ethical consent form template for research participants. Covers data usage, recording, anonymity, right to withdraw, and contact info.
- `scripts/scaffold.sh`: Executable bash script to scaffold a new research project. Creates the directory structure (planning, recruitment, collection, analysis, reports) and copies templates; see the layout sketch after this list.
- `validators/schema.json`: JSON Schema definition for validating research plans. Enforces required fields such as objectives, methodology, recruitment criteria, and schedule.
- `validators/validate_plan.py`: Executable Python script that validates a research plan YAML against the JSON Schema. Exits non-zero on validation failure, ensuring plan completeness.
- `examples/filled-plan.yaml`: Worked example of a completed research plan in YAML format. Demonstrates best practices for objectives, mixed methods, and recruitment.
- `examples/synthesis-matrix.md`: Worked example of a synthesis matrix for qualitative analysis. Shows how to structure findings, themes, and evidence from interview transcripts.
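For orientation, here is a hedged sketch of what a freshly scaffolded project might look like; which template lands in which directory is an assumption for illustration, not a guarantee of the script's exact behavior.

```
my-study/
├── planning/
│   ├── research-plan.yaml
│   └── interview-guide.md
├── recruitment/
│   └── consent-form.md
├── collection/
├── analysis/
└── reports/
```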
Install and Ship
Stop wasting hours on ad-hoc research. Upgrade to Pro to install the User Research Pack. Run the scaffold, validate your plans, and ship insights with confidence.
References
- [1] UX Research Cheat Sheet — nngroup.com
- [2] Interpreting Contradictory UX Research Findings — nngroup.com
- [3] Get the Responses You Want: Designing Diary Study Entries — nngroup.com
- [4] Best Practices for Building and Maintaining Your Own Research Panel — nngroup.com
- [5] User Interviews 101 — nngroup.com
- [6] Foundational UX Workshop Activities — nngroup.com
- [7] CASTLE Framework for Productivity/Workplace Applications — nngroup.com
- [8] UX Mapping Methods Compared: A Cheat Sheet — nngroup.com
Frequently Asked Questions
How do I install User Research Pack?
Run `npx quanta-skills install user-research-pack` in your terminal. The skill will be installed to ~/.claude/skills/user-research-pack/ and automatically available in Claude Code, Cursor, Copilot, and other AI coding agents.
Is User Research Pack free?
User Research Pack is a Pro skill — $29/mo Pro plan. You need a Pro subscription to access this skill. Browse 37,000+ free skills at quantaintelligence.ai/skills.
What AI coding agents work with User Research Pack?
User Research Pack works with Claude Code, Cursor, GitHub Copilot, Gemini CLI, Windsurf, Warp, and any AI coding agent that reads skill files. Once installed, the agent automatically gains the expertise defined in the skill.