Market Sizing Pack
End-to-end market sizing workflow for business analysts, built on TAM/SAM/SOM frameworks with bottom-up and top-down modeling and validation techniques.
Stop Guessing Your TAM. Build a Reconciled Market Sizing Engine.
We've audited hundreds of market sizing decks and internal models. The pattern is always the same: a top-down TAM pulled from a third-party research report [2] clashes with a bottom-up SAM derived from internal unit economics, and nobody checks the variance. Investors don't just want a number; they want to know the math holds up. When you rely on static spreadsheets, you're stuck with a single point estimate that collapses under scrutiny. We built the Market Sizing Pack to force reconciliation between methodologies, expose your assumptions, and give you a pipeline that catches errors before they reach the boardroom. If you're still doing manual surveys to fill gaps, check the Market Research Pack for the data collection layer, but this Pack handles the calculation and validation logic.
Install this skill
npx quanta-skills install market-sizing-pack
Requires a Pro subscription. See pricing.
The Excel Hell of Market Sizing
Most market sizing workflows are broken by design. We've seen engineers try to version control .xlsx files in Git. It's a nightmare. You get binary diffs, merge conflicts that destroy formulas, and no audit trail for why an assumption changed. When the VP of Sales asks why the SAM dropped by 10%, you can't point to a commit. You're left digging through Slack threads. We built the Market Sizing Pack because market sizing should be code. It should be reproducible, testable, and auditable. The pain isn't just the math; it's the lack of infrastructure. You're using a calculator when you need a pipeline. Without a structured schema, variable naming drifts across teams. One analyst uses ACV, another uses ASP, and the reconciliation step becomes a manual translation exercise. We enforce consistent variable naming and units in the YAML schema, so your team speaks the same language. This aligns with the Business Model Canvas Pack by ensuring your value proposition maps to a quantifiable addressable segment with clear definitions.
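To illustrate what schema-level naming enforcement can look like, here is a minimal Python sketch. The canonical key names, the alias table, and the required set are hypothetical, not the Pack's actual schema:

```python
# Hypothetical sketch of naming enforcement for sizing assumptions.
# Canonical keys and the alias table are illustrative, not the Pack's
# actual YAML schema.
REQUIRED_KEYS = {"acv_usd", "segment_accounts", "penetration_rate"}
ALIASES = {"asp": "acv_usd", "avg_contract_value": "acv_usd"}  # drift we reject

def validate_keys(assumptions: dict) -> list[str]:
    """Return a list of naming errors; an empty list means the schema is clean."""
    errors = []
    for key in assumptions:
        if key in ALIASES:
            errors.append(f"use canonical name '{ALIASES[key]}' instead of '{key}'")
    missing = REQUIRED_KEYS - assumptions.keys()
    if missing:
        errors.append(f"missing required keys: {sorted(missing)}")
    return errors
```

Run as a pre-parse step: if the check returns errors, the engine refuses to compute, so "ACV vs ASP" drift is caught before any math happens.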
What a Mismatched Model Costs You
The cost isn't just the time spent fixing Excel formulas. It's the strategic drift. If your TAM is inflated by 20% because you ignored geographic constraints [3], your sales targets become impossible. Your engineering team builds for a market that doesn't exist. We've seen teams burn six-figure CAC budgets chasing segments that are actually subsets of their SAM. Without a validation layer, you're flying blind. A mismatched model kills credibility [8]. When an LP asks, "How did you get from $50B TAM to $2M SOM?", and you can't show the step-down logic, the deal is dead. You need a pipeline that catches structural failures before they reach the pitch deck.

Consider the downstream impact on hiring. If your TAM is wrong, your revenue projections are wrong. Your hiring plan is wrong. You bring on 50 SDRs for a market that's actually 20% smaller. That's a $2M burn mistake. Or worse, you under-size and run out of runway before you hit product-market fit. We've seen startups pivot because their sizing model couldn't support their burn rate. The cost of a bad model is existential. It's not just a slide deck error; it's a capital allocation error. And when you try to fix it post-mortem, you've already lost the trust of your stakeholders.

Without data from the Competitive Intelligence Pack, your bottom-up assumptions are just guesses. And if your Financial Modeling Pack doesn't consume your TAM/SAM/SOM correctly, your DCF is garbage.
A Hypothetical AI Compliance SaaS Workflow
Imagine an AI compliance SaaS targeting mid-market enterprises. The founder pulls a TAM of $12B from a top-down industry report. Then the product lead builds a bottom-up model: 4,000 mid-market companies × $15k ACV = $60M SAM. The gap is massive. Without a reconciliation matrix, the founder presents the $12B number to investors, who immediately flag the lack of penetration logic. The team spends three weeks manually adjusting assumptions, only to realize the bottom-up model was wrong because they missed a key buyer persona. With the Market Sizing Pack, this team would have run the run-sizing.py engine against the calculation template. The validator check-assumptions.sh would have flagged the variance. The reconciliation matrix reconciliation-matrix.csv would have forced alignment. The result? A defensible SAM of $480M with clear step-down logic, ready for the investor brief. This mirrors the rigorous bottom-up approach recommended by product management frameworks [6]. Real-world examples from companies like Uber and Toast show how bottom-up sizing often reveals a different reality than top-down reports [5].

In our hypothetical scenario, the agent loads skill.md and sees the workflow. It reads examples/worked-example.yaml, a dataset that simulates an AI compliance tool for mid-market enterprises. The agent calls scripts/run-sizing.py with --method bottom-up. The script parses the YAML, computes the TAM/SAM/SOM, and outputs JSON. Then it runs validators/check-assumptions.sh. The validator checks SOM ≤ SAM ≤ TAM and compares penetration rates against benchmarks from references/validation-metrics.md. If the penetration rate is 50% for a new entrant, the validator exits non-zero. The agent sees the failure, adjusts the assumption in the template, and re-runs. This loop happens in seconds. The output includes a reconciliation-matrix.csv showing the variance between top-down and bottom-up; if the variance exceeds 15%, the matrix flags it.
The agent then generates templates/investor-brief.md, populating the sections with the validated numbers. The result is a brief that can withstand LP questions. No manual copy-paste. No formula errors. Just a reproducible analysis.
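The structural checks that loop enforces can be sketched in a few lines of Python. This is a hypothetical sketch: the field names, the 20% new-entrant penetration bound, and the 15% variance threshold are assumptions for illustration, not the Pack's actual rules.

```python
# Illustrative structural checks; thresholds and field names are assumed.

def check_sizing(result: dict, max_variance: float = 0.15,
                 max_penetration: float = 0.20) -> list[str]:
    """Return human-readable violations; an empty list means the model passes."""
    errors = []
    tam, sam, som = result["tam"], result["sam"], result["som"]
    if not (som <= sam <= tam):
        errors.append("step-down violated: SOM <= SAM <= TAM must hold")
    if result["penetration_rate"] > max_penetration:
        errors.append(
            f"penetration {result['penetration_rate']:.0%} is unrealistic "
            f"for a new entrant (cap {max_penetration:.0%})")
    top_down, bottom_up = result["top_down_sam"], result["bottom_up_sam"]
    variance = abs(top_down - bottom_up) / max(top_down, bottom_up)
    if variance > max_variance:
        errors.append(f"top-down vs bottom-up variance {variance:.0%} "
                      f"exceeds {max_variance:.0%}; reconcile before presenting")
    return errors
```

A shell wrapper can exit non-zero whenever the returned list is non-empty, which is exactly what lets the agent's adjust-and-re-run loop work.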
Deterministic Sizing with Validation Pipelines
Once the Pack is installed, your market sizing workflow becomes deterministic. You define inputs in calculation-template.yaml, and the Python engine computes TAM/SAM/SOM using your selected methodology. The validator enforces business logic: SOM cannot exceed SAM, growth rates must align with CAGR benchmarks, and penetration rates stay within realistic bounds. You get structured JSON output that feeds directly into your Financial Modeling Pack. No CSV exports. No manual entry. The investor-brief.md template translates the math into a narrative investors trust. No more manual reconciliation. No more "trust me" numbers. Just a pipeline that catches errors at the script level.

You can run tests/validate-sizing.test.sh in your pre-commit hook. If an analyst commits a change that breaks the SOM constraint, the commit fails. You enforce structural integrity at the source control level. The calculation-template.yaml enforces consistent variable naming, so your team speaks the same language.

When you integrate with the Financial Modeling Pack, you can pipe the JSON output directly into your DCF model. The Product Launch Pack can consume your SOM to set realistic launch targets. Your Growth Strategy Pack uses the validated TAM to prioritize segments. The entire stack becomes interoperable. You stop building siloed models and start building a unified market intelligence system.

We've embedded canonical knowledge on TAM/SAM/SOM definitions, top-down vs bottom-up methodologies, value-theory sizing, and reconciliation techniques directly in the references. They contain step-by-step calculation logic, when to apply each approach, and how to avoid common analyst pitfalls [7]. We also cover triangulation methods, penetration rate realism, CAGR alignment, competitor revenue cross-checks, and unit economics benchmarks to ensure sizing credibility.
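As a sketch of the step-down math the engine performs, here is a minimal Python version. The 5% capture rate is an assumed illustrative figure, and the account and ACV numbers reuse the hypothetical AI compliance example; the Pack's actual engine and field names may differ.

```python
# Illustrative bottom-up step-down; not the Pack's actual engine.

def step_down(segment_accounts: int, acv_usd: float, capture_rate: float) -> dict:
    """SAM = accounts you can actually serve at your ACV;
    SOM = the slice a realistic capture rate wins near-term."""
    sam = segment_accounts * acv_usd
    som = sam * capture_rate
    return {"sam": sam, "som": som}

sizing = step_down(segment_accounts=4_000, acv_usd=15_000, capture_rate=0.05)
# sam: $60M, som: $3M
```

Because the output is a plain dict, serializing it to JSON for downstream packs is one `json.dumps` call away; no spreadsheet export in the loop.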
Our advice is to keep narrowing markets down into smaller segments until you reach a target market that is only 5-10 times bigger than your long-term goals [1]. This Pack forces that narrowing through the reconciliation matrix. TAM, SAM, and SOM are metrics that help businesses understand the potential size of their market and set realistic goals for growth [4]. With this Pack, you set goals based on math, not hope.
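That 5-10x heuristic is easy to automate as a sanity check. A minimal Python sketch, with the dollar figures and the long-term goal chosen purely for illustration:

```python
# Sanity-check the "target market should be 5-10x your long-term goal"
# heuristic. Bounds and dollar figures are illustrative.

def narrow_enough(target_market: float, long_term_goal: float,
                  lo: float = 5.0, hi: float = 10.0) -> bool:
    """True when the segment is tight enough to set credible goals."""
    return lo <= target_market / long_term_goal <= hi

narrow_enough(480_000_000, 60_000_000)      # 8x: tight enough
narrow_enough(12_000_000_000, 60_000_000)   # 200x: keep narrowing
```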
What's in the Market Sizing Pack
- skill.md — Orchestrator skill that defines the end-to-end market sizing workflow, explicitly references all templates, references, scripts, validators, tests, and examples by relative path, and instructs the agent on methodology selection, calculation execution, validation, and reconciliation.
- templates/calculation-template.yaml — Production-grade YAML schema for structuring market sizing inputs, assumptions, top-down/bottom-up branches, reconciliation logic, and final TAM/SAM/SOM outputs. Enforces consistent variable naming and units for programmatic consumption.
- templates/investor-brief.md — Structured markdown template for translating calculation results into investor-ready strategic briefs. Includes sections for methodology justification, assumption transparency, risk factors, and sensitivity analysis.
- templates/reconciliation-matrix.csv — CSV template for side-by-side comparison of top-down, bottom-up, and triangulated market estimates. Includes columns for variance percentage, confidence scoring, and reconciliation notes to ensure methodological alignment.
- references/canonical-frameworks.md — Embedded canonical knowledge on TAM/SAM/SOM definitions, top-down vs bottom-up methodologies, value-theory sizing, and reconciliation techniques. Contains step-by-step calculation logic, when to apply each approach, and how to avoid common analyst pitfalls.
- references/validation-metrics.md — Embedded reference for validation and sanity-checking techniques. Covers triangulation methods, penetration rate realism, CAGR alignment, competitor revenue cross-checks, and unit economics benchmarks to ensure sizing credibility.
- scripts/run-sizing.py — Executable Python engine that parses the calculation template, computes TAM/SAM/SOM using the selected methodology, applies validation rules, and outputs structured JSON/CSV results. Accepts CLI flags for methodology selection and output format.
- validators/check-assumptions.sh — Bash validator that executes the sizing script, parses output, and enforces business logic constraints (e.g., SOM ≤ SAM ≤ TAM, realistic penetration rates, non-negative growth). Exits non-zero on structural or logical failures.
- tests/validate-sizing.test.sh — Integration test runner that executes the validator against the worked example, checks exit codes, verifies output schema compliance, and reports pass/fail status. Ensures the pipeline remains stable across updates.
- examples/worked-example.yaml — Realistic production dataset for an AI compliance SaaS targeting mid-market enterprises. Contains populated assumptions, top-down/bottom-up inputs, and expected output ranges for testing and reference.
Install and Ship
Stop guessing. Start reconciling. Upgrade to Pro to install the Market Sizing Pack and ship market analysis that survives due diligence.
References
- [1] Market Sizing: Meet SAM and TAM — bu.edu
- [2] The Difference Between Top-Down and Bottom-Up TAM — scalepath.io
- [3] TAM, SAM & SOM: How To Calculate The Size Of Your ... — antler.co
- [4] TAM, SAM, and SOM: Made Simple for Growing Businesses — salesforce.com
- [5] Market Sizing for Startups: Bottom-Up and Top-Down ... — alloypartners.com
- [6] Product Management Prompts: TAM/SAM/SOM Calculator — productboard.com
- [7] Top-Down Market Sizing: Step-by-Step TAM/SAM/SOM ... — data-mania.com
- [8] Market Sizing for Startups: TAM, SAM, SOM Explained — forumvc.com
Frequently Asked Questions
How do I install Market Sizing Pack?
Run `npx quanta-skills install market-sizing-pack` in your terminal. The skill will be installed to ~/.claude/skills/market-sizing-pack/ and automatically available in Claude Code, Cursor, Copilot, and other AI coding agents.
Is Market Sizing Pack free?
No. Market Sizing Pack is a Pro skill and requires the $29/mo Pro plan. Browse 37,000+ free skills at quantaintelligence.ai/skills.
What AI coding agents work with Market Sizing Pack?
Market Sizing Pack works with Claude Code, Cursor, GitHub Copilot, Gemini CLI, Windsurf, Warp, and any AI coding agent that reads skill files. Once installed, the agent automatically gains the expertise defined in the skill.