Market Research Pack
Market research with TAM/SAM/SOM analysis, surveys, focus groups, and trend analysis. Install with one command: npx quanta-skills install market-research-pack
Why Hand-Wavy TAM Calculations Break Your Roadmap
We’ve all inherited the market research doc that looks like a war room exploded. Spreadsheets with broken cell references, TAM/SAM/SOM numbers pulled from a two-year-old Gartner report, and survey questions that lead respondents exactly where the founder wants them to go. As engineers and product builders, we’re trained to validate assumptions with data, but market research often defaults to guesswork. You don’t need another Notion template or a generic strategy playbook. You need a reproducible workflow that enforces mathematical constraints, standardizes competitive scoring, and catches survey bias before it hits production.
Install this skill
npx quanta-skills install market-research-pack
Requires a Pro subscription. See pricing.
When research is treated as a creative exercise rather than a constrained system, the output is fragile. Excel formulas drift across versions. PDFs get outdated the moment they’re published. Survey platforms export CSVs with inconsistent date formats that break downstream pipelines. Engineers end up spending more time cleaning research data than shipping features. We built this pack so you don’t have to manually reconcile market sizing assumptions every time the product roadmap shifts.
The Real Cost of Unvalidated Market Sizing
When market sizing is hand-wavy, the cost compounds fast. Investors reject decks because the SAM exceeds the TAM or the conversion rates ignore channel friction [1]. A misaligned TAM calculation can burn $40k–$80k in premature CAC experiments before the pivot hits. Downstream, your product team builds features for a market that doesn’t exist, and your sales cycle stretches because the positioning matrix is built on outdated competitor pricing. If you’re running dispersed product development across time zones, inconsistent research methodology turns every sprint into a guessing game [3].
You lose credibility with stakeholders, you waste engineering cycles on wrong priorities, and you leave money on the table because your research pack wasn’t validated before it left your desk. A single unvalidated survey can skew pricing assumptions by 15–25%, which directly inflates your revenue forecast and derails your financial modeling kit. When your competitive analysis relies on vague descriptors like “strong” and “weak” instead of scored matrices, your go-to-market strategy becomes reactive. You end up patching positioning instead of engineering it. That’s why we enforce quality gates at the file level, not at the presentation level.
How a Fintech Team Caught SAM > TAM Before Launch
Picture a mid-stage fintech preparing for international expansion. They need to assess market potential across three new regions, so they apply analytical frameworks like OLI, PESTLE, SWOT, Porter’s Five Forces, and TAM-SAM-SOM to map the landscape [2]. On paper, it looks solid. In practice, the lead researcher manually calculates SAM and SOM in a spreadsheet, forgets to cap SOM against realistic penetration rates, and accidentally outputs a SAM that’s 140% of the TAM. The competitive analysis uses unstructured notes instead of a scored matrix, and the survey design includes leading questions that inflate willingness-to-pay by 22%.
When the VP of Strategy reviews the pack, three red flags appear: invalid market sizing constraints, unvalidated sampling frames, and a positioning matrix that doesn’t align with actual feature parity. That’s the exact friction we engineered this pack to eliminate. Instead of waiting for a slide deck review, the team runs the Bash validator against the YAML templates. The validator catches the SAM > TAM violation immediately, flags missing consent/ethics fields in the survey template, and forces the competitive analysis to use standardized Porter’s Five Forces scoring. The Python CLI recalculates bottom-up and top-down projections, outputs structured JSON, and exits non-zero on the first logical failure. No manual cross-checking. No spreadsheet drift. Just constrained, auditable research.
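The validator's internals aren't reproduced here, but the core sizing gate is simple enough to sketch. Below is a minimal, hypothetical version of the constraint check (function names and figures are illustrative assumptions, not code from market-sizer.py): it collects violations and returns a non-zero exit code on the first logical failure, exactly the behavior the quality gate relies on.

```python
import sys

def validate_sizing(tam: float, sam: float, som: float) -> list[str]:
    """Return constraint violations for a TAM/SAM/SOM triple (empty list = valid)."""
    errors = []
    if min(tam, sam, som) < 0:
        errors.append("market sizes must be non-negative")
    if sam > tam:
        errors.append(f"SAM ({sam:,.0f}) exceeds TAM ({tam:,.0f})")
    if som > sam:
        errors.append(f"SOM ({som:,.0f}) exceeds SAM ({sam:,.0f})")
    return errors

def run_gate(tam: float, sam: float, som: float) -> int:
    """Print violations and return a process exit code (non-zero fails the gate)."""
    violations = validate_sizing(tam, sam, som)
    for v in violations:
        print(f"VALIDATION ERROR: {v}", file=sys.stderr)
    return 1 if violations else 0

# The fintech scenario above: SAM accidentally sized at 140% of TAM.
exit_code = run_gate(tam=500_000_000, sam=700_000_000, som=50_000_000)
```

In a CI pipeline, that exit code is what blocks the research pack from merging until the sizing assumptions are fixed.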
What Changes When Research Is Constrained by Default
Once the Market Research Pack is installed, your workflow shifts from manual guesswork to validated, reproducible research. The YAML templates enforce typed fields for population, pricing, and conversion rates, while the embedded validation constraints guarantee SAM stays within TAM and SOM stays within SAM. You get a Python CLI that runs bottom-up and top-down calculations, flags logical failures with non-zero exit codes, and outputs structured JSON you can pipe directly into dashboards or CI pipelines.
Survey design shifts from opinion to methodology: sampling frames, Likert scaling, bias mitigation controls, and consent/ethics fields are baked into the template. Competitive analysis stops being a narrative and becomes a scored matrix using Porter’s Five Forces, SWOT, and pricing tiers. Every piece of research gets a quality gate before it reaches stakeholders. If you also need structured competitive intelligence, this pack integrates cleanly with the Competitive Research Pack for feature parity tracking. For teams that need to visualize sizing outputs, the Data Visualization Pack consumes the structured JSON directly. When you’re building out a broader strategy stack, the Strategy Playbook and Financial Modeling Kit pull from the same validated source files, eliminating cross-document drift. Product managers can align sprint priorities using the Product Roadmap Builder, while customer success teams run validated discovery calls using the Customer Interviews Sampler. The entire chain stays anchored to the same constrained research pack.
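Parts of bias mitigation can be automated before a survey ships. As an illustration only (these patterns are assumptions, not the pack's actual rules), a naive leading-question flagger might look like this:

```python
import re

# Naive heuristic phrases that often signal a leading question; real survey QA
# would go much further, but this shows the shape of an automated check.
LEADING_PATTERNS = [
    r"\bdon'?t you (agree|think|feel)\b",
    r"\bwouldn'?t you\b",
    r"\bhow (great|amazing|useful)\b",
    r"\bobviously\b",
]

def flag_leading_questions(questions: list[str]) -> list[str]:
    """Return the subset of questions that match a leading-phrase pattern."""
    flagged = []
    for q in questions:
        if any(re.search(p, q, re.IGNORECASE) for p in LEADING_PATTERNS):
            flagged.append(q)
    return flagged

questions = [
    "How much would you expect to pay per month for this service?",
    "Don't you agree this feature would save your team hours every week?",
]
flagged = flag_leading_questions(questions)
```

The second question gets flagged before it can inflate willingness-to-pay numbers, while the neutral pricing question passes through.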
What's in the Market Research Pack
- skill.md — Orchestrator skill that defines the end-to-end market research workflow, maps user intents to specific templates/scripts, and explicitly references all relative paths for templates, references, validators, and examples.
- templates/tam-sam-som.yaml — Production-grade YAML template for TAM/SAM/SOM market sizing with typed fields for population, pricing, conversion rates, methodology tags (top-down/bottom-up), and embedded validation constraints.
- templates/survey-design.yaml — Structured template for designing surveys and focus groups, including sampling strategy, question taxonomy, bias mitigation controls, consent/ethics fields, and analysis plan hooks.
- templates/competitive-analysis.yaml — Real-world competitive landscape template covering Porter's Five Forces scoring, SWOT matrix, pricing tiers, feature parity, and positioning matrix inputs.
- references/market-sizing-methodology.md — Embedded canonical knowledge on TAM/SAM/SOM calculations, bottom-up vs top-down approaches, source citation standards, common sizing pitfalls, and worked mathematical examples.
- references/research-frameworks.md — Curated reference for Porter's Five Forces, PESTLE, SWOT, and BCG Matrix with scoring rubrics, synthesis rules, and strategic application guidelines grounded in consulting practice.
- references/survey-design-principles.md — Canonical guide to survey and focus group methodology: sampling frames, question design (avoiding leading/biased phrasing), Likert scaling, moderation techniques, and data triangulation.
- scripts/market-sizer.py — Executable Python CLI that reads TAM/SAM/SOM YAML/JSON, performs bottom-up/top-down calculations, validates the SAM ≤ TAM and SOM ≤ SAM constraints, outputs structured results, and exits non-zero on invalid input.
- validators/research-qa.sh — Bash validator that checks research pack structure, runs market-sizer.py against templates, verifies required fields, and exits 1 on schema or logical failures to enforce quality gates.
- examples/full-market-pack.yaml — Worked example demonstrating a complete market research pack for a SaaS product, showing filled TAM/SAM/SOM, competitive analysis, survey plan, and trend notes using production-grade syntax.
Ship Validated Research, Not Guesswork
Stop guessing your market size. Start validating it with constrained calculations, standardized rubrics, and automated quality gates. Upgrade to Pro to install the Market Research Pack and ship research that actually holds up to investor scrutiny and engineering timelines.
References
- [1] High Potential (HiPo) — innovationlabs.harvard.edu
- [2] Ocean Sole: Planning an International Expansion Strategy — hbsp.harvard.edu
- [3] Managing a Dispersed Product Development Process — mitsloan.mit.edu
Frequently Asked Questions
How do I install Market Research Pack?
Run `npx quanta-skills install market-research-pack` in your terminal. The skill will be installed to ~/.claude/skills/market-research-pack/ and automatically available in Claude Code, Cursor, Copilot, and other AI coding agents.
Is Market Research Pack free?
Market Research Pack is a Pro skill — $29/mo Pro plan. You need a Pro subscription to access this skill. Browse 37,000+ free skills at quantaintelligence.ai/skills.
What AI coding agents work with Market Research Pack?
Market Research Pack works with Claude Code, Cursor, GitHub Copilot, Gemini CLI, Windsurf, Warp, and any AI coding agent that reads skill files. Once installed, the agent automatically gains the expertise defined in the skill.