Scan workflow and result classification

How A11Y Cat moves from active-tab inspection to classified evidence and review queues.

Full scan workflow

  1. Open the extension

     Use the browser action on the active tab. Restricted pages, browser chrome, and unsupported schemes may prevent scanning.

  2. Scan current page state

     A11Y Cat inspects the rendered DOM and available runtime data for the current page state, not every route, breakpoint, state, iframe, or authenticated journey.

  3. Classify results

     Outputs are grouped as confirmed issues, needs-review items, advisory results, limitations, and diagnostics so reviewers do not treat every item as equivalent.

  4. Triage and verify

     Use severity, selectors, snippets, source data, and review queues to decide what needs engineering remediation or manual accessibility verification.

  5. Export or clear evidence

     Export CSV, JSON, diagnostics, QA reports, or local state only when needed. Clear local data when review evidence is no longer required.
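The five steps above can be read as one pipeline: scan the current page state, classify the outputs, order them for triage, then export only when evidence is needed. A minimal sketch follows; all function and type names are illustrative assumptions, not A11Y Cat's real internals.

```typescript
// Illustrative pipeline over the five workflow steps; names are assumptions.
interface ScanResult {
  class: string;   // one of the five result classes described below
  detail: string;
}

function runScanWorkflow(
  scanCurrentState: () => ScanResult[],          // step 2: current state only
  exportEvidence: (r: ScanResult[]) => void,     // step 5: CSV/JSON/etc.
  keepEvidence: boolean,
): ScanResult[] {
  const results = scanCurrentState();
  // Step 3/4: order results so reviewers see confirmed issues first and
  // never treat every item as equivalent.
  const triageOrder = [
    "confirmed", "needs-review", "advisory", "limitation", "diagnostic",
  ];
  results.sort(
    (a, b) => triageOrder.indexOf(a.class) - triageOrder.indexOf(b.class),
  );
  if (keepEvidence) exportEvidence(results);     // export only when needed
  return results;
}
```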

Result classification model

  • Confirmed issue
    Meaning: Highest-confidence issue evidence for the current rendered state, usually from axe-core or strict A11Y Cat checks that pass reliability gates.
    User action: Treat as a remediation candidate, then retest and confirm user impact in context.

  • Needs review
    Meaning: Evidence that needs human judgement, visual inspection, interaction, or assistive technology testing before it becomes a defect.
    User action: Inspect manually and record the decision before reporting it as a failure.

  • Advisory
    Meaning: A helpful signal about quality, metadata, language, spelling, or review readiness.
    User action: Use as guidance. Do not count it as a definite WCAG failure.

  • Limitation
    Meaning: A known boundary of what the extension cannot inspect, calculate, or prove.
    User action: Use another method for the uncovered state and avoid coverage overclaims.

  • Diagnostic
    Meaning: Technical support information about permissions, injection, storage, runtime, exports, or scan coverage.
    User action: Use for troubleshooting and traceability. Do not treat it as a page failure.
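The five result classes form a closed set, which downstream code can model as a discriminated union so nothing can invent a sixth bucket. This is an illustrative sketch, not the extension's actual types; `ResultClass` and `reviewerAction` are hypothetical names.

```typescript
// Hypothetical model of the classification set; not A11Y Cat's real types.
type ResultClass =
  | "confirmed"     // high-confidence evidence that passed reliability gates
  | "needs-review"  // requires human judgement before it counts as a defect
  | "advisory"      // quality signal, never a definite WCAG failure
  | "limitation"    // a boundary of what the scan could inspect or prove
  | "diagnostic";   // troubleshooting and traceability information

// Maps each class to the reviewer action described above.
function reviewerAction(cls: ResultClass): string {
  switch (cls) {
    case "confirmed":
      return "Treat as a remediation candidate, then retest in context.";
    case "needs-review":
      return "Inspect manually and record the decision before reporting.";
    case "advisory":
      return "Use as guidance; do not count as a definite WCAG failure.";
    case "limitation":
      return "Cover the uncovered state with another method.";
    case "diagnostic":
      return "Use for troubleshooting; do not treat as a page failure.";
  }
}
```

Because the union is closed, the TypeScript compiler flags any switch that misses a class, which keeps reviewer guidance in step with the model.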

Contrast result routing

Confirmed contrast issues should appear in the main scanner only when evidence is strong enough. Confirmed axe-core color-contrast violations remain main scanner findings. Custom contrast output is confirmed only when strict evidence gates pass; otherwise it belongs in Diagnostics/Limitations or dormant contrast-review evidence for future roadmap work.

The former Contrast Inspector UI is not exposed in the active extension. Uncertain contrast examples include gradients, background images, transparency, overlays, dynamic backgrounds, CSS effects, text over media, decorative or logo-like text, and third-party injected widgets such as BugHerd. These require inspection before being treated as failures.
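The routing rule above can be sketched as a small decision function. Only the routing outcomes come from this page; the shape of `ContrastFinding` and its field names are assumptions for illustration.

```typescript
// Illustrative sketch of contrast result routing; field names are assumptions.
interface ContrastFinding {
  source: "axe-core" | "custom";
  passesEvidenceGates: boolean; // strict evidence gates for custom output
  uncertainContext: boolean;    // gradients, overlays, text over media, etc.
}

type ContrastRoute =
  | "main-scanner"
  | "needs-review"
  | "diagnostics-or-limitations";

function routeContrastFinding(f: ContrastFinding): ContrastRoute {
  // Confirmed axe-core color-contrast violations remain main scanner findings.
  if (f.source === "axe-core" && !f.uncertainContext) return "main-scanner";
  // Custom output is confirmed only when strict evidence gates pass.
  if (f.source === "custom" && f.passesEvidenceGates && !f.uncertainContext) {
    return "main-scanner";
  }
  // Uncertain backgrounds require inspection before being treated as failures.
  if (f.uncertainContext) return "needs-review";
  return "diagnostics-or-limitations";
}
```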

Image-only design review boundary

For imported PNG, JPG, or WebP design screenshots, findings must be labelled as visual review or design review. A screenshot can support review of visible layout, hierarchy, spacing, contrast concerns, content clarity, and likely risk. It cannot prove DOM structure, selectors, keyboard focus order, ARIA, accessible names, semantic HTML, tab order, form labels, or real screen reader output.
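A labelling guard can make this boundary explicit in code: findings from screenshot sources are forced into a visual-review label and can never carry DOM-level claims. This is a hedged sketch under assumed names; A11Y Cat's real enforcement may differ.

```typescript
// Hypothetical guard: screenshot findings may only make visual-review claims.
type FindingSource = "dom-scan" | "screenshot";

// Claims a screenshot cannot prove, per the boundary described above.
const DOM_ONLY_CLAIMS = new Set([
  "dom-structure", "selector", "keyboard-focus-order", "aria",
  "accessible-name", "semantic-html", "tab-order", "form-label",
  "screen-reader-output",
]);

function labelFinding(source: FindingSource, claim: string): string {
  if (source === "screenshot" && DOM_ONLY_CLAIMS.has(claim)) {
    throw new Error(`A screenshot cannot prove "${claim}"`);
  }
  return source === "screenshot" ? `visual review: ${claim}` : claim;
}
```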

Reviewer actions

  • Use confirmed issues as remediation candidates, then confirm impact in the real user journey.
  • Use needs-review items for manual inspection, visual checks, or assistive technology testing.
  • Use advisory results to improve quality or prepare follow-up tasks.
  • Record limitations when scan coverage is reduced.
  • Use diagnostics to reproduce scanner behaviour or support a bug report.