UI Feature Guide

The operational hub for visible extension features, workflows, evidence, and limitations.

How to use this guide

Each feature is documented with its purpose, use case, implementation approach, inputs, outputs, user actions, evidence value, and limitations. Use this page as the hub, then follow the child pages for deeper scanner, export, privacy, and troubleshooting detail.

Result classification quick reference

| Classification | Meaning | Needs human review? | Primary action |
| --- | --- | --- | --- |
| Confirmed | Highest-confidence issues that can be presented in the main scanner results for the current rendered state. | Review context and user impact before release decisions. | Prioritise remediation, retest, and record evidence. |
| Needs review | Items that need human judgement before being treated as failures. | Yes. | Inspect manually, then decide whether a defect exists. |
| Advisory | Helpful signals that may improve quality but should not be treated as definite WCAG failures. | Usually. | Use as quality guidance or backlog input. |
| Limitation | A known boundary of what the extension could not test or prove. | Yes, if the uncovered state matters. | Use another test method and avoid coverage overclaims. |
| Diagnostic | Technical support information for debugging or understanding scan coverage. | Not as a page failure. | Use for troubleshooting, support, and repeatability. |
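The classification scheme above can be read as a simple triage rule: keep confirmed issues, human-review items, and diagnostics in separate queues. The sketch below is illustrative only; the classification names come from the table, but the data structure and helper function are hypothetical and are not part of the extension.

```python
# Illustrative triage sketch for the classification scheme above.
# The classification names come from the quick-reference table; the
# Result dataclass and triage() helper are hypothetical examples.
from dataclasses import dataclass

# Classifications that require human judgement before release decisions.
NEEDS_HUMAN_REVIEW = {"needs review", "advisory", "limitation"}

@dataclass
class Result:
    classification: str  # e.g. "confirmed", "needs review", "diagnostic"
    summary: str

def triage(results):
    """Split results so confirmed issues are never mixed with review items."""
    confirmed = [r for r in results if r.classification == "confirmed"]
    review = [r for r in results if r.classification in NEEDS_HUMAN_REVIEW]
    diagnostics = [r for r in results if r.classification == "diagnostic"]
    return confirmed, review, diagnostics

confirmed, review, diagnostics = triage([
    Result("confirmed", "Low contrast on body text"),
    Result("advisory", "Heading levels skip from h2 to h4"),
    Result("diagnostic", "Scan covered all rendered nodes"),
])
print(len(confirmed), len(review), len(diagnostics))  # 1 1 1
```

The point of the separation is the one the table makes: filtering and queueing never promote a "needs review" or "advisory" item to "confirmed"; only human judgement does.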

Specialist workflow boundaries

Contrast Inspector roadmap

Confirmed contrast issues appear in the main scanner only when the evidence is strong enough. The former Contrast Inspector UI is not exposed in the active extension. Uncertain cases, such as gradients, images, transparency, overlays, dynamic backgrounds, CSS effects, text over media, decorative or logo text, and BugHerd-style overlays, remain review or diagnostic evidence until the workflow is rebuilt.

Design screenshot review

Image-only review can discuss visible layout, hierarchy, spacing, contrast concerns, content clarity, and likely accessibility risks. It cannot verify DOM, selectors, focus order, ARIA, names, semantic HTML, tab order, labels, or real screen reader output.

Screen Reader Review aid

Supports note-taking, expected reading order checks, visible-content review, and evidence organisation. It is not a replacement for NVDA, JAWS, VoiceOver, TalkBack, Narrator, keyboard-only testing, or real user-task testing.

Exports and local history

Exports are generated locally after user action and can include sensitive URLs, selectors, snippets, diagnostics, notes, and screenshots where captured.

Primary visible features

| Feature | User action | Evidence value | Limitation |
| --- | --- | --- | --- |
| Toolbar launch and scan workflow | Open the action on an authorised tab and run a scan. | Current-page automated and review evidence. | Restricted pages and unvisited states may not scan. |
| Results and filters | Use classification, severity, source, and search filters. | Keeps confirmed issues separate from review and diagnostics. | Filters do not change classification. |
| Language and spelling | Review flagged terms, skipped languages, and allowlist decisions. | Supports editorial and language review. | Not complete language validation. |
| Screen Reader Review aid | Use expected reading order and notes to plan real AT testing. | Organises review evidence. | Does not prove real screen reader behaviour. |
| Exports, diagnostics, and history | Export evidence or clear local state when appropriate. | Supports handoff and reproducibility. | Exported files leave browser control. |

Feature-to-guide map

| Feature area | Primary guide | Privacy or evidence note |
| --- | --- | --- |
| Scan workflow and result queues | Understanding results | May include URLs, selectors, snippets, and issue state in exports. |
| Scanner modules | Scanner modules | Reads current page state; output can be exported. |
| Review workflows | Review workflows | User review state may be stored locally or exported. |
| Screen Reader Review aid | Screen Reader Review aid | Organises review evidence but does not prove real AT behaviour. |
| Exports, local state, and history | Exports and local data | Exports can contain sensitive page evidence. |
| Diagnostics and support | Diagnostics and limitations | Diagnostics can include runtime and page context for support. |