Accessibility Audit Pack

Pro Design

Deep technical guide to conducting WCAG-compliant accessibility audits, covering axe-core CI integration, manual AT testing protocols, and remediation triage.

We built the Accessibility Audit Pack because we were tired of shipping UI that works for mouse users but breaks for everyone else. You've seen it: your PR passes CI, Lighthouse is green, and then a user reports they can't navigate your checkout flow with a keyboard. Or worse, legal flags your site for non-compliance. The problem isn't that you don't care about accessibility; it's that your tooling is incomplete.

Install this skill

npx quanta-skills install accessibility-audit-pack

Requires a Pro subscription. See pricing.

Automated scanners catch markup errors, but they can't tell you what a screen reader actually announces [1]. You're left guessing whether your focus management is broken or whether your ARIA labels are actually helpful. When you ship a modal, the markup might be valid, but if keyboard users can't move focus out of the dialog, you've created a keyboard trap. When you update dynamic text, the aria-live region might not fire. These are the edge cases that automated tools miss or misreport. You need a workflow that bridges the gap between automated scanning and manual assistive technology testing, without wasting sprints on vague bug triage.

The Trap of "Green" Lighthouse Scores

Lighthouse is a heuristic tool. It checks for common patterns and flags obvious violations. It's useful for catching missing alt attributes or low contrast ratios, but it's not a compliance engine. Relying on it gives you a false sense of security. You'll ship a date picker that looks fine in the DOM but doesn't announce the selected date to a screen reader. You'll ship a form where the error messages aren't associated with the inputs via aria-describedby.

Even axe-core, the engine behind many accessibility checks, can be noisy if you don't tune it. Out of the box, it might report incomplete results or flag rules that don't apply to your stack. You need a production-grade configuration that disables irrelevant rules, handles incomplete states correctly, and sets thresholds that match your risk tolerance. The WAI ACT implementation for axe-core maps rules to WCAG success criteria, but you have to configure the tool to enforce those mappings [3]. Without the right config, you're drowning in false positives or missing critical violations like focus order and keyboard traps.
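The tuning described above can be sketched as a post-filter over axe-core's standard JSON result shape (a `violations` list and an `incomplete` list, where each entry carries a rule `id` and an `impact`). The disabled rules and blocking impacts below are illustrative assumptions, not the pack's actual configuration:

```python
# Sketch: post-filter axe-core JSON results against a tuned rule policy.
# Assumes axe-core's standard result shape; the rule IDs and policy here
# are illustrative, not the pack's shipped config.

DISABLED_RULES = {"region", "landmark-one-main"}   # rules irrelevant to this stack
BLOCKING_IMPACTS = {"serious", "critical"}         # impacts that should fail the build

def triage_results(results: dict) -> dict:
    blocking, review = [], []
    for v in results.get("violations", []):
        if v["id"] in DISABLED_RULES:
            continue  # tuned out: known false positive for this stack
        (blocking if v.get("impact") in BLOCKING_IMPACTS else review).append(v["id"])
    # "incomplete" results need a human decision, never a silent discard
    needs_manual = [i["id"] for i in results.get("incomplete", [])]
    return {"blocking": blocking, "review": review, "needs_manual": needs_manual}

sample = {
    "violations": [
        {"id": "color-contrast", "impact": "serious"},
        {"id": "region", "impact": "moderate"},
    ],
    "incomplete": [{"id": "aria-valid-attr-value"}],
}
print(triage_results(sample))
# {'blocking': ['color-contrast'], 'review': [], 'needs_manual': ['aria-valid-attr-value']}
```

The key design choice is that "incomplete" results are surfaced for manual review rather than dropped: those are exactly the cases where axe-core could not decide and a human must.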

What Bad Accessibility Costs You

Every missed violation is a liability. Ship a modal without proper focus management and you create a keyboard trap that locks keyboard-only users out of the flow entirely. When your color contrast fails, you alienate users with low vision. The cost compounds: a 2024 analysis of accessibility defects found that issues discovered post-release cost 30x more to fix than those caught in CI [7]. You're burning engineering hours triaging vague bug reports like "button doesn't work" when the root cause is a missing aria-expanded state.

Worse, you're risking litigation: WCAG success criteria are increasingly treated as the legal baseline [4]. Manual testing protocols exist, but without a structured workflow you're improvising [6], and ad hoc audits burn entire sprints. You need a triage process that maps violations to success criteria and assigns SLAs. When a violation is caught, you shouldn't be debating whether it's a P1 or a P2. You should have a template that categorizes the issue by severity, maps it to the specific WCAG 2.2 success criterion, and assigns an SLA based on impact. This turns accessibility from a vague "nice-to-have" into a measurable engineering discipline.
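The triage step above is essentially a lookup from rule ID to success criterion, severity, and SLA. A minimal sketch follows; the mapping table and SLA tiers are illustrative examples, not the pack's actual remediation-triage template (the WCAG mappings themselves — e.g. color-contrast to SC 1.4.3, button-name to SC 4.1.2 — match axe-core's published rule tags):

```python
# Sketch: map an axe-core rule ID to a WCAG success criterion, severity, and SLA.
# Severity tiers and SLA windows are illustrative, not the pack's template.

TRIAGE_MAP = {
    # rule id               (WCAG SC, criterion title,       severity)
    "color-contrast":       ("1.4.3", "Contrast (Minimum)",  "P2"),
    "aria-required-attr":   ("4.1.2", "Name, Role, Value",   "P1"),
    "button-name":          ("4.1.2", "Name, Role, Value",   "P1"),
}
SLA_DAYS = {"P1": 2, "P2": 14}  # fix-by windows per severity tier

def triage(rule_id: str) -> dict:
    sc, title, severity = TRIAGE_MAP.get(rule_id, ("?", "needs manual triage", "P2"))
    return {"rule": rule_id, "wcag_sc": sc, "criterion": title,
            "severity": severity, "sla_days": SLA_DAYS[severity]}

print(triage("button-name"))
# {'rule': 'button-name', 'wcag_sc': '4.1.2', 'criterion': 'Name, Role, Value',
#  'severity': 'P1', 'sla_days': 2}
```

With a table like this in place, "is it a P1 or a P2?" stops being a debate and becomes a lookup, and unmapped rules fall through to explicit manual triage instead of being silently deprioritized.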

How a Team Turned Audits Into a Pipeline Gate

Picture a fintech team rolling out a new transaction dashboard. They rely on a basic axe-core run before merge. The CI passes. Two weeks later, a user on NVDA reports that the 'Submit' button is announced as 'button' instead of 'Submit transaction'. The issue? The button text is dynamic based on currency, and the aria-label wasn't updating. The automated tool flagged the structure, but missed the dynamic state mismatch. The team had to hotfix production, delaying the release by three days.

Contrast that with a team running a structured audit workflow, including manual AT verification against the WCAG 2.2 test rules [5]. They caught the same class of bug during the triage phase: they mapped the violation to its specific success criterion, assigned an SLA, and fixed it before merge. The difference wasn't the tool; it was the workflow. They used a triage template to categorize violations by severity and map them to remediation patterns, turning a production fire into a planned backlog item. They integrated axe-core into their CI pipeline via a Playwright integration that ran headless audits against every PR. The pipeline failed fast when violations exceeded their threshold, forcing developers to fix issues before merge. This shifted accessibility left, reducing remediation costs and improving user trust.

What Changes Once the Pack Is Installed

Once you install the Accessibility Audit Pack, your pipeline changes. You stop guessing. You get a skill.md orchestrator that maps every phase of the audit lifecycle. Your CI pipeline runs axe-core with a production-grade config that handles incomplete states correctly [3]. You run Playwright tests that fail fast when a violation exceeds your threshold. You have a remediation triage template that forces the team to map every bug to a WCAG 2.2 success criterion, not just "fix it". You have step-by-step protocols for NVDA, VoiceOver, and JAWS, so your QA isn't improvising.

Errors are caught in PR, not in production. Your team ships with confidence, knowing that accessibility is baked into the workflow, not bolted on as an afterthought. You can also pair this pack with the Implementing Accessibility Audit skill for a complete workflow. The pack includes scripts to bootstrap the axe-core CLI, run audits against target URLs, and write structured JSON to disk. A validator parses the results, enforces configurable thresholds, and exits non-zero on failure to block CI pipelines. You get examples of real axe-core output for triage practice, plus a curated rule catalog that documents known false positives and remediation patterns. This is the infrastructure you need to ship WCAG-compliant UIs at scale.

What's in the Accessibility Audit Pack

  • skill.md — Orchestrator that defines the full WCAG audit lifecycle, explicitly cross-references all relative paths below, and maps each phase (automated scanning, manual AT testing, triage, CI integration) to the appropriate template, reference, script, or example.
  • templates/axe-core-config.json — Production-grade axe-core configuration with rule enable/disable toggles, performance options, threshold settings, and incomplete-state handling for enterprise CI pipelines.
  • templates/playwright-a11y-test.ts — Real Playwright integration template with axe-core fixtures, page-level audits, CI-compatible assertions, and structured result parsing for headless execution.
  • templates/remediation-triage.md — Structured triage workflow template for categorizing violations by severity, mapping to WCAG 2.2 success criteria, assigning SLAs, and tracking remediation progress.
  • references/wcag-22-criteria.md — Embedded canonical WCAG 2.2 success criteria, test rules, and techniques covering Perceivable, Operable, Understandable, and Robust principles with manual/automated validation guidance.
  • references/axe-core-rule-catalog.md — Curated axe-core rule catalog including rule IDs, WCAG mappings, known false positives, incomplete state triggers, and remediation patterns for common violations.
  • references/manual-at-testing-protocols.md — Step-by-step assistive technology testing protocols for NVDA, VoiceOver, and JAWS, plus keyboard navigation checks, focus management validation, and screen reader announcement verification.
  • scripts/run-audit.sh — Executable shell script that bootstraps axe-core CLI, runs audits against a target URL or local HTML, captures console output, and writes structured JSON to disk.
  • scripts/validate-results.py — Programmatic validator that parses axe-core JSON, enforces configurable violation thresholds, flags incomplete results, and exits non-zero on failure to block CI pipelines.
  • examples/ci-pipeline.yml — Production GitHub Actions workflow demonstrating CI/CD integration, dependency caching, matrix testing across browsers, and artifact upload for audit reports.
  • examples/worked-failing-audit.json — Realistic axe-core output fixture containing violations, incomplete results, and passes for triage practice, threshold validation testing, and remediation walkthroughs.

Install and Ship

Stop shipping broken UIs. Upgrade to Pro to install the Accessibility Audit Pack and lock down your WCAG compliance today.

References

  1. CI/CD Accessibility & Performance Best Practices — github.com
  2. Automated Accessibility Testing and Continuous Integration — blogs.library.duke.edu
  3. Axe-core ACT Implementation — w3.org
  4. Web Content Accessibility Guidelines (WCAG) 2.1 — w3.org
  5. Understanding Test Rules for WCAG 2.2 Success Criteria — w3.org
  6. Tier 1 Manual Accessibility Testing Protocol — itaccessibility.uiowa.edu
  7. Accessibility Testing in CI/CD: A Complete Integration Guide — testparty.ai

Frequently Asked Questions

How do I install Accessibility Audit Pack?

Run `npx quanta-skills install accessibility-audit-pack` in your terminal. The skill will be installed to ~/.claude/skills/accessibility-audit-pack/ and automatically available in Claude Code, Cursor, Copilot, and other AI coding agents.

Is Accessibility Audit Pack free?

Accessibility Audit Pack is a Pro skill — $29/mo Pro plan. You need a Pro subscription to access this skill. Browse 37,000+ free skills at quantaintelligence.ai/skills.

What AI coding agents work with Accessibility Audit Pack?

Accessibility Audit Pack works with Claude Code, Cursor, GitHub Copilot, Gemini CLI, Windsurf, Warp, and any AI coding agent that reads skill files. Once installed, the agent automatically gains the expertise defined in the skill.