
How A11Y Cat works

The Chrome extension workflow, from toolbar click to local evidence export.

What A11Y Cat Extension is

A11Y Cat Extension supports accessibility review on the current browser tab. It is an evidence and triage tool: automated results support human accessibility review, but do not replace expert testing, real assistive technology testing, or a complete WCAG conformance assessment.

What type of review this is

Automated accessibility scan

Bundled local axe-core rules and strict A11Y Cat checks identify issues on the current page that carry enough evidence to appear as scanner results.

Guided manual review

Review queues organise items that need human judgement before they are treated as failures.

Screen Reader Review aid

The workflow helps organise expected reading order, visible-content checks, notes, and review evidence. It is not a substitute for evidence from real NVDA, JAWS, VoiceOver, TalkBack, or Narrator testing.

Design screenshot review

Image-only review can assess visible layout, hierarchy, spacing, content clarity, and likely risks. It cannot verify the DOM, selectors, focus order, ARIA, accessible names, tab order, labels, or real screen reader output.

Contrast review

Confirmed contrast failures appear in scanner results only when the evidence is strong. Gradients, images, transparency, overlays, CSS effects, text over media, logo-like text, and third-party overlays require manual inspection.
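The strong-evidence case rests on the published WCAG 2.x contrast-ratio formula, which a scanner can compute exactly when it has flat foreground and background colours. A minimal sketch of that standard formula (function names are illustrative, not A11Y Cat's API):

```javascript
// WCAG 2.x linearisation of one sRGB channel in [0, 255].
function channelLuminance(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] colour.
function relativeLuminance([r, g, b]) {
  return 0.2126 * channelLuminance(r) +
         0.7152 * channelLuminance(g) +
         0.0722 * channelLuminance(b);
}

// Contrast ratio between two colours: ranges from 1:1 to 21:1.
function contrastRatio(fg, bg) {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

contrastRatio([0, 0, 0], [255, 255, 255]); // black on white ≈ 21, the maximum
```

The formula is only trustworthy when both colours are known and opaque, which is exactly why gradients, transparency, and text over media fall back to manual inspection.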

Metadata and SEO-supporting checks

Metadata, structured data, and preview checks support page-quality review. They are not direct accessibility conformance proof.

Broken link review

Link checks report supported status signals and limitations. Cross-origin, authenticated, blocked, or timeout states can remain unverified.
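A hypothetical sketch of how a link checker might map request outcomes to those states. The function and state names are illustrative assumptions, not A11Y Cat's actual API; the point is that several outcomes honestly resolve to "unverified" rather than "broken":

```javascript
// Maps a link-check outcome to a report state.
// `status` is an HTTP status code, or null when the request could not
// complete (cross-origin block, auth wall, network error, timeout).
function classifyLinkResult(status) {
  if (status === null) return 'unverified';            // nothing was proven
  if (status >= 200 && status < 300) return 'ok';
  if (status >= 300 && status < 400) return 'redirect'; // needs follow-up review
  if (status === 401 || status === 403) return 'unverified'; // authenticated or blocked
  if (status >= 400) return 'broken';
  return 'unverified';
}
```

Treating 401/403 and failed requests as unverified rather than broken keeps the report from overclaiming what the checker could actually observe.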

Language and spelling review

Language and spelling checks use supported local evidence and dictionaries. They are review aids, not complete linguistic validation across all languages.

Diagnostics and limitations review

Diagnostics explain runtime, permission, frame, state, storage, and coverage boundaries. They are support evidence, not page failures.

Export and evidence review

CSV, JSON, diagnostics, workflow, screen-reader-review, and HTML QA report exports are generated locally only after user action.

Workflow

  1. User opens the extension

    The user clicks the Chrome toolbar action on the page they are reviewing.

  2. Chrome grants temporary activeTab access

    activeTab gives the extension temporary access to the current tab after explicit user action. The release manifest does not declare broad host permissions.

  3. The MV3 service worker injects packaged local scripts

    The service worker uses chrome.scripting.executeScript to inject local extension files into the active tab.

  4. Local axe-core and the A11Y Cat runtime inspect the page

    axe-core is bundled as a local extension asset, not loaded from a remote CDN. The runtime inspects the current rendered DOM, styles, text, metadata, links, headings, images, language, spelling, contrast signals, and diagnostics where supported.

  5. The scan reflects the current rendered state

    Hidden, delayed, authenticated, responsive, modal, error, and interaction-dependent states need explicit review and may require rescanning after the state changes.

  6. Findings are classified

    A11Y Cat separates confirmed issues from needs-review items, advisory signals, limitations, and diagnostics.

  7. The UI displays grouped evidence and workflow controls

    The panel groups evidence, counts, filters, issue actions, review queues, diagnostics, local data controls, and export controls.

  8. Optional modules add supporting checks

    Metadata, links, language, spelling, headings, images, contrast, diagnostics, review workflows, and screen-reader-review support add context where evidence is available.

  9. Local state may be saved

    Settings, scan history, workflow status and notes, virtual review state, contrast inspector state, benchmarks, and spelling allowlist entries can be stored in browser-controlled extension storage where supported.

  10. Exports are created only by user action

    CSV, JSON, diagnostics, workflow state, screen-reader-review, and HTML QA report files are generated locally when the user triggers an export.
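The permission model in step 2 can be illustrated with a minimal MV3 manifest fragment. This is a sketch, not the extension's actual manifest: the field values are assumptions, and the point is the shape, `activeTab` plus `scripting` (required by `chrome.scripting.executeScript`), with no `host_permissions` entry declaring broad site access.

```json
{
  "manifest_version": 3,
  "permissions": ["activeTab", "scripting", "storage"],
  "action": { "default_title": "A11Y Cat" }
}
```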
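The injection in step 3 can be sketched as follows. `chrome.scripting.executeScript` is the real MV3 API, but the file names are hypothetical placeholders for the packaged assets, and the code only runs inside a service worker with the `scripting` permission and activeTab access:

```javascript
// Sketch: inject the bundled scanners into the tab granted by activeTab.
// File names are illustrative, not the extension's actual bundle layout.
async function injectScanners(tabId) {
  await chrome.scripting.executeScript({
    target: { tabId },
    files: ['axe.min.js', 'a11ycat-runtime.js'], // packaged local assets, no CDN
  });
}
```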
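A user-triggered export like step 10's CSV can be sketched as a pure local transform with no network involvement. Column and field names here are assumptions, not the extension's actual export schema:

```javascript
// Minimal local CSV export sketch. Every field is quoted so selectors
// and snippets containing commas or quotes survive intact.
function toCsv(findings) {
  const esc = (v) => `"${String(v ?? '').replace(/"/g, '""')}"`;
  const header = ['classification', 'rule', 'selector', 'summary'];
  const rows = findings.map((f) =>
    [f.classification, f.rule, f.selector, f.summary].map(esc).join(',')
  );
  return [header.map(esc).join(','), ...rows].join('\n');
}
```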

Result confidence model

Confirmed issue
Meaning: Highest-confidence issue evidence for the current rendered state, usually from axe-core or strict A11Y Cat checks that pass reliability gates.
How to use it: Treat as a remediation candidate, then retest and confirm user impact in context.

Needs review
Meaning: Evidence that needs human judgement, visual inspection, interaction, or assistive technology testing before it becomes a defect.
How to use it: Inspect manually and record the decision before reporting it as a failure.

Advisory
Meaning: Helpful quality, metadata, language, spelling, or review-readiness signal.
How to use it: Use as guidance. Do not count it as a definite WCAG failure.

Limitation
Meaning: A known boundary of what the extension could not inspect, calculate, or prove.
How to use it: Use another method for the uncovered state and avoid coverage overclaims.

Diagnostic
Meaning: Technical support information about permissions, injection, storage, runtime, exports, or scan coverage.
How to use it: Use for troubleshooting and traceability. Do not treat it as a page failure.
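As a sketch of how such a triage split might look in code. The field names and routing order below are assumptions for illustration, not the extension's real data model:

```javascript
// Illustrative triage: route a finding into one of the documented
// result classes based on the evidence attached to it.
function classifyFinding(finding) {
  if (finding.diagnostic) return 'diagnostic';       // support info, not a page failure
  if (finding.uninspectable) return 'limitation';    // boundary of what was provable
  if (finding.evidence === 'strong' && finding.gatesPassed) return 'confirmed';
  if (finding.evidence === 'partial') return 'needs-review';
  return 'advisory';                                 // guidance, not a WCAG failure
}
```

Checking diagnostics and limitations first mirrors the model above: support evidence must never be promoted into a defect just because it lacks a stronger label.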

What A11Y Cat can help with

  • Finding high-confidence axe-core issues on the current rendered page.
  • Keeping confirmed issues separate from manual review and advisory evidence.
  • Reviewing headings, images, labels, metadata, language, spelling, links, reflow/text-scale signals, and contrast evidence where supported.
  • Organising Screen Reader Review notes before or alongside real assistive technology testing.
  • Capturing diagnostics and limitations when permissions, frames, states, or runtime conditions reduce coverage.
  • Exporting local evidence for QA, engineering, support, and remediation handoff.

What A11Y Cat cannot prove

  • It cannot prove full WCAG conformance or certify accessibility.
  • It cannot replace expert manual testing, keyboard testing, or real assistive technology testing.
  • It cannot prove NVDA, JAWS, VoiceOver, TalkBack, or Narrator behaviour unless real evidence from those tools exists.
  • It cannot automatically test every route, viewport, modal, menu, validation error, authentication path, delayed state, iframe, or dynamic interaction.
  • It cannot infer DOM structure, selectors, focus order, ARIA, accessible names, form labels, or screen reader output from an imported design image alone.

Privacy and local-first behaviour

The documented extension package processes scan results in the browser. It does not declare a developer-operated scan server, developer scan database, analytics pipeline, telemetry endpoint, hosted AI API, or remote axe-core CDN for scan processing.

Exports can include sensitive page evidence such as URLs, selectors, snippets, diagnostics, screenshots where captured, and notes. Users create exports intentionally and control where those files go.