# Review categories and workflows
How A11Y Cat separates confirmed issues from guided review, advisory signals, limitations, and diagnostics.
## Review categories
| Category | Use it for | Do not use it for |
|---|---|---|
| Confirmed issue | Remediation candidates backed by high-confidence current-state evidence. | Claiming complete WCAG conformance or skipping context review. |
| Needs review | Manual judgement, visual review, interaction checks, or assistive technology testing. | Automatic defect filing without human verification. |
| Advisory | Quality guidance, metadata, spelling, language, and readiness signals. | Definite WCAG failure claims. |
| Limitation | Known coverage boundaries that require another method. | Page defect claims. |
| Diagnostic | Troubleshooting permissions, storage, injection, runtime, and export support. | User-impact claims without additional evidence. |
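The category boundaries above can be modelled as a small type. This is a hypothetical sketch for illustration; the names and shapes are assumptions, not A11Y Cat's actual API.

```typescript
// Hypothetical model of the review categories in the table above.
// All identifiers here are illustrative, not part of A11Y Cat.
type ReviewCategory =
  | "confirmed-issue"  // high-confidence, current-state evidence
  | "needs-review"     // requires manual judgement or AT testing
  | "advisory"         // quality guidance, not a WCAG failure claim
  | "limitation"       // coverage boundary; needs another method
  | "diagnostic";      // troubleshooting signal, not user impact

interface Finding {
  category: ReviewCategory;
  selector: string; // where the evidence was captured
  evidence: string; // snippet or note backing the finding
}

// Only confirmed issues are remediation candidates by default;
// every other category routes to a human review queue.
function isRemediationCandidate(f: Finding): boolean {
  return f.category === "confirmed-issue";
}
```

The key design point the table encodes is that only one of the five categories implies a defect; a consumer of this model should never auto-file items from the other four.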
## Guided manual review workflow
1. **Open the relevant queue.** Start with needs-review, contrast review, screen reader review, or diagnostics, depending on the evidence type.
2. **Inspect the real page state.** Use the current page, selectors, snippets, visual state, and user journey to decide whether the evidence represents a user-impacting problem.
3. **Use real assistive technology where required.** Run NVDA, JAWS, VoiceOver, TalkBack, Narrator, keyboard-only, or other required testing for release decisions.
4. **Record the decision.** Mark the item as confirmed, not applicable, deferred, blocked by limitation, or needing more evidence.
5. **Export only when appropriate.** Export review state or reports only when the data is authorised for sharing.