Implementing Accessibility Audit

A structured workflow for conducting web accessibility audits using automated tools and manual testing. Essential for WCAG compliance.

We built this so you don't have to stitch together a broken workflow every time a compliance deadline hits. If you're running accessibility audits today, you're likely juggling axe-core CLI flags, a stale Lighthouse script, and a markdown checklist that nobody actually uses. WCAG 2.2 introduced nine new success criteria [1], and your existing tooling doesn't know how to validate them. You're getting false positives from unconfigured rules, missing shadow DOM violations, and spending hours manually correlating JSON outputs just to figure out if a build is safe to ship.

Install this skill

npx quanta-skills install implementing-accessibility-audit

Requires a Pro subscription. See pricing.

This isn't about "checking a box." It's about having a deterministic, repeatable pipeline that catches violations before they hit production. When your audit config is left at defaults, you're leaving critical barriers in place for users who rely on assistive technology. Every missed violation is a blocked user, a support ticket, and a potential legal exposure. We've seen teams waste weeks retrofitting accessibility because they treated it as a manual review task instead of an automated gate.

The Cost of Scattered Audit Tooling

Running audits in silos costs you more than engineering hours. It costs you trust. When axe-core, Lighthouse, and manual testing live in different repos with different output formats, you can't aggregate risk. You end up with a "green" Lighthouse score while axe-core is flagging critical contrast failures in your shadow DOM components. You can't prove conformance to auditors because your evidence is fragmented across console logs and unversioned config files.

The financial and operational impact is real. Every hour spent manually reviewing violations is an hour not spent shipping features. Every missed WCAG 2.2 criterion like Focus Appearance or Dragging Movements [1] is a direct violation of modern standards. When you lack a unified validation layer, you can't enforce thresholds. A build with 50 minor violations might pass, while a build with 3 critical violations gets flagged inconsistently. This ambiguity leads to technical debt that compounds with every sprint. You're not just delaying releases; you're building a product that actively excludes a significant portion of your user base.
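A severity-aware gate resolves that ambiguity. As an illustration (the function name, threshold values, and default limits below are assumptions, not the skill's shipped policy), a per-severity threshold check might look like this:

```javascript
// Hypothetical severity gate: fail on any critical violation and on more
// than a few serious ones, no matter how many minor findings accumulate.
const DEFAULT_THRESHOLDS = { critical: 0, serious: 3, moderate: 20, minor: Infinity };

function auditPasses(violations, thresholds = DEFAULT_THRESHOLDS) {
  // Tally violations per axe-core impact level.
  const counts = { critical: 0, serious: 0, moderate: 0, minor: 0 };
  for (const v of violations) {
    if (v.impact in counts) counts[v.impact] += 1;
  }
  // Pass only if every severity stays at or under its threshold.
  return Object.keys(counts).every((level) => counts[level] <= thresholds[level]);
}

// 50 minor violations pass; 3 critical violations fail.
console.log(auditPasses(Array.from({ length: 50 }, () => ({ impact: 'minor' })))); // true
console.log(auditPasses([{ impact: 'critical' }, { impact: 'critical' }, { impact: 'critical' }])); // false
```

With an explicit policy like this, "50 minor" and "3 critical" builds get opposite, deterministic outcomes instead of inconsistent flagging.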

A Fintech Team's Three-Tool Nightmare

Imagine a team shipping a customer portal. They run axe-core for automated checks, but their config doesn't scope to iframes or shadow DOM, so they miss half the violations. They run Lighthouse for performance, but the output is a raw JSON file that no one parses. They have a manual checklist from 2021 that doesn't include the new WCAG 2.2 criteria [1].

They launch. Three days later, a screen reader user files a complaint about focus trapping in a modal. The team has to hotfix, roll back, and spend a week reconstructing the audit state to reproduce the issue. A 2024 W3C evaluation methodology guide [3] highlights how a structured, step-by-step process prevents this chaos. Without a unified workflow, you're reacting to incidents instead of preventing them. This is exactly why we designed a skill that forces tooling, configuration, and reporting into a single, auditable pipeline.

What Changes Once the Audit Is Locked

Install this skill and your audit workflow stops being a guessing game. You get a deterministic pipeline that orchestrates axe-core, Lighthouse, and manual validation in one shot. The run-audit.sh script handles environment setup, executes the scans, aggregates the JSON outputs, and invokes the validator. If critical violations exceed your thresholds, the build fails. No more "maybe it's okay."
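The aggregation step in the middle of that pipeline is where most home-grown setups fall apart. As a sketch of what merging the two JSON outputs could look like (the field names and report shape here are illustrative assumptions, not the skill's actual schema):

```javascript
// Illustrative aggregation: fold axe-core and Lighthouse JSON into one
// report object a downstream validator can consume.
function aggregateResults(axeResult, lighthouseResult) {
  const violations = (axeResult.violations || []).map((v) => ({
    source: 'axe-core',
    id: v.id,
    impact: v.impact,
    // axe-core reports one node entry per affected DOM element.
    selectors: (v.nodes || []).map((n) => n.target),
  }));
  // Lighthouse scores the accessibility category from 0 to 1.
  const a11yScore = lighthouseResult.categories?.accessibility?.score ?? null;
  return {
    generatedAt: new Date().toISOString(),
    lighthouseAccessibilityScore: a11yScore,
    totalViolations: violations.length,
    violations,
  };
}
```

Once both tools feed a single report, thresholding and CI gating become one comparison instead of two incompatible parsing jobs.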

You'll get RFC 9457-style structured error reporting for violations, with precise DOM selectors and impact scoring. The validate-audit-results.js validator enforces pass/fail criteria based on WCAG severity levels [2]. You can configure axe-core to disable noisy rules and focus on wcag2aa violations [2]. The Lighthouse runner uses Puppeteer to trigger interactions, catching issues that static analysis misses. You'll cover all 13 guidelines and the 9 new WCAG 2.2 success criteria [1] without lifting a finger to map them. Every violation comes with remediation hints, context selectors, and a clear path to fix. You're not just catching bugs; you're building a compliance artifact that auditors can actually review.
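To make "RFC 9457-style" concrete: RFC 9457 problem details use `type`, `title`, `status`, and `detail` members plus arbitrary extensions. A minimal sketch of mapping a standard axe-core violation object into that shape (the `type` URI and the extension member names are hypothetical, not the skill's actual output):

```javascript
// Sketch: convert an axe-core violation into an RFC 9457-style problem
// detail. The base URI and extension members are illustrative assumptions.
function toProblemDetail(violation) {
  return {
    type: `https://example.com/a11y/${violation.id}`, // hypothetical URI scheme
    title: violation.help,
    status: 422,
    detail: violation.description,
    // Extension members carrying audit context:
    impact: violation.impact,
    selectors: (violation.nodes || []).flatMap((n) => n.target),
    helpUrl: violation.helpUrl,
  };
}
```

The `id`, `help`, `description`, `impact`, `helpUrl`, and `nodes[].target` fields read here are standard members of an axe-core violation result.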

If you're also looking to integrate accessibility-audit-pack for deeper manual AT testing protocols, this skill provides the automated backbone to feed it. For teams that need to correlate accessibility risks with security posture, pairing this with an owasp-security-audit-pack ensures you're not trading one vulnerability class for another.

What's in the implementing-accessibility-audit Skill

  • skill.md — Orchestrator skill that defines the 4-phase accessibility audit workflow (Setup, Automated Scan, Manual Review, Reporting). Explicitly references and chains all subordinate files: templates/axe-config.json, templates/lighthouse-audit.js, templates/wcag-checklist.md, scripts/run-audit.sh, validators/validate-audit-results.js, references/wcag-2.2-core-criteria.md, references/axe-core-testing-patterns.md, and examples/audit-report.json. Provides decision trees for tool selection, threshold configuration, and remediation prioritization.
  • templates/axe-config.json — Production-grade axe-core configuration for CI/CD pipelines. Grounded in Context7 axe-core docs: configures runOnly tags (wcag2a, wcag2aa), disables noisy rules, sets resultTypes to ['violations'] for performance, enables performanceTimer, and configures selectors/ancestry/xpath for precise DOM reporting. Includes frame and shadow DOM scoping defaults.
  • templates/lighthouse-audit.js — Node.js audit runner template using Puppeteer and Lighthouse navigation API. Grounded in Context7 Lighthouse docs: demonstrates headless browser setup, navigation with URL and interaction triggers, flags for onlyCategories, formFactor, and screenEmulation. Includes structured console logging and JSON export for CI integration.
  • templates/wcag-checklist.md — Structured markdown checklist for manual WCAG 2.2 testing. Maps to the 4 principles (Perceivable, Operable, Understandable, Robust) and 13 guidelines. Includes specific test steps for the 9 new WCAG 2.2 success criteria (e.g., Focus Appearance, Dragging Movements, Target Size). Designed for copy-paste into issue trackers.
  • scripts/run-audit.sh — Executable shell script that orchestrates the full audit workflow. Runs axe-core CLI against a target URL, executes the Lighthouse Node template, aggregates JSON outputs, and invokes the validator. Exits non-zero if critical violations exceed thresholds. Handles environment setup, artifact generation, and cleanup.
  • validators/validate-audit-results.js — Programmatic validator that parses combined axe-core and Lighthouse audit JSON. Enforces pass/fail thresholds based on WCAG severity levels (Critical, Serious, Moderate, Minor). Exits with code 1 and detailed violation summaries if thresholds are breached. Uses strict schema validation against the audit report structure.
  • references/wcag-2.2-core-criteria.md — Curated authoritative reference embedding WCAG 2.2 canonical knowledge. Covers the 4 principles, 13 guidelines, and detailed breakdown of the 9 new success criteria introduced in 2.2. Includes official test rules, authoring techniques, and conformance levels. No external links; all guidance is embedded for offline use.
  • references/axe-core-testing-patterns.md — Curated authoritative reference embedding axe-core canonical knowledge. Covers context scoping (include/exclude, fromFrames, fromShadowDom), rule configuration (runOnly, rules enable/disable), partial runs for cross-origin iframes, and performance optimization (resultTypes, preload). Includes real API usage patterns from dequelabs/axe-core.
  • examples/audit-report.json — Worked example of a realistic, production-ready audit report combining axe-core and Lighthouse outputs. Demonstrates proper violation structure, context selectors, impact scoring, and remediation hints. Serves as the ground truth schema for the validator and template configuration.
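For orientation, the options named in the axe-config.json description map onto real axe-core configuration keys. A trimmed illustration of that kind of config follows; the specific rule disabled here (`region`, a common source of noise) is an example choice, not necessarily what the skill ships:

```json
{
  "runOnly": { "type": "tag", "values": ["wcag2a", "wcag2aa"] },
  "rules": {
    "region": { "enabled": false }
  },
  "resultTypes": ["violations"],
  "performanceTimer": true,
  "selectors": true,
  "ancestry": true,
  "xpath": true
}
```

Restricting `resultTypes` to violations skips serialization of passing checks, and the `selectors`/`ancestry`/`xpath` flags control how precisely each flagged node is located in the report.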

Install and Ship

Stop stitching together broken audit workflows. Upgrade to Pro to install implementing-accessibility-audit and lock your compliance pipeline. Ship faster, sleep better, and actually pass your next audit.

References

  1. Web Content Accessibility Guidelines (WCAG) 2.2 — w3.org
  2. Understanding WCAG 2.2 — w3.org
  3. W3C Accessibility Guidelines Evaluation Methodology — w3.org
  4. W3C WCAG 2.2 Now Available — access-board.gov
  5. WCAG 2 Overview — w3.org

Frequently Asked Questions

How do I install Implementing Accessibility Audit?

Run `npx quanta-skills install implementing-accessibility-audit` in your terminal. The skill will be installed to ~/.claude/skills/implementing-accessibility-audit/ and automatically available in Claude Code, Cursor, Copilot, and other AI coding agents.

Is Implementing Accessibility Audit free?

Implementing Accessibility Audit is a Pro skill — $29/mo Pro plan. You need a Pro subscription to access this skill. Browse 37,000+ free skills at quantaintelligence.ai/skills.

What AI coding agents work with Implementing Accessibility Audit?

Implementing Accessibility Audit works with Claude Code, Cursor, GitHub Copilot, Gemini CLI, Windsurf, Warp, and any AI coding agent that reads skill files. Once installed, the agent automatically gains the expertise defined in the skill.