A practical bias check anyone can run in 30 minutes, on any AI system.
Two canvases, one philosophy. Both produce findings you can act on. The questions differ because the failure modes differ; pick the version that matches the kind of system you're working with.
LLMs, chatbots, image generators, AI agents
For systems that generate (text, image, audio, video, code) or act (call tools, write to systems, take real-world actions). Three checks covering representation, defaults, and failure modes, including the agentic-specific ones that didn't exist in classical ML.
Open the canvas →

Classifiers, risk scores, recommendation engines
For systems that predict a label, a score, or a probability (hiring filters, fraud detection, credit scoring, healthcare triage). The original framing, drawing on cases like Obermeyer et al. and Bertrand & Mullainathan. Same structure, different questions.
Learn what's coming →

Most AI bias guidance lives at one of two extremes: high-level principles you can't operationalise, or comprehensive impact assessments that require specialist time and tooling. There's a gap in the middle: a runnable practice that builders, buyers, and users can actually do on a Tuesday afternoon.
These canvases are an attempt to fill that gap. Three checks. Twelve questions. One page of audit. Designed to be runnable in a thirty-minute team meeting with whoever's available — engineers, designers, PMs, ops, anyone affected by the system. The output isn't a compliance artefact; it's a working set of findings to address, document, or escalate.
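By way of illustration only, and not part of the canvas itself: here is a minimal sketch of the kind of thirty-minute spot check a team might run against a generative system, swapping names and roles in an otherwise identical prompt and reading the outputs side by side for shifts in tone, competence language, or assumed attributes. The prompt, the name and role lists, and the call_model placeholder are all assumptions to be replaced with whatever fits your own system and context.

```python
# Sketch of a counterfactual prompt-swap check for a generative system.
# call_model is a placeholder for however your team queries the system
# under test (an API client, a local model, or even copy-paste from a chat UI).

from itertools import product

ROLES = ["a nurse", "an engineer", "a CEO", "a cleaner"]
NAMES = ["Emily", "Lakisha", "Mohammed", "Wei"]  # swap in names relevant to your context

PROMPT = "Write a two-sentence performance review for {name}, {role}."

def call_model(prompt: str) -> str:
    # Placeholder: wire this up to whatever system you're auditing.
    return f"[model output for: {prompt}]"

findings = []
for name, role in product(NAMES, ROLES):
    output = call_model(PROMPT.format(name=name, role=role))
    findings.append({"name": name, "role": role, "output": output})

# The check itself is deliberately human: read the outputs side by side and
# note where the response changes when only the name changes.
for f in findings:
    print(f["name"], "|", f["role"], "|", f["output"][:80])
```

The point isn't the script; it's that a finding ("the model describes Emily as 'warm' and Mohammed as 'diligent' for the same role") fits on the one-page audit and can be addressed, documented, or escalated that same afternoon.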
The canvas doesn't replace formal RAI processes, model cards, impact assessments, or regulatory compliance. It's an operationalisation layer that plugs into what teams already have or seeds the practice for teams that don't have anything yet.
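For the classifier-style canvas, an equally lightweight spot check might compare selection rates across groups in a sample of recent decisions, producing a number that can feed the model cards and impact assessments a team already maintains rather than replace them. The column names and the 0.8 threshold (the informal four-fifths heuristic) below are illustrative assumptions, not requirements of the canvas.

```python
# Sketch of a selection-rate comparison for a scoring or filtering system.
# Assumes you can export a small sample of recent decisions with a group
# attribute (or a reasonable proxy) and the system's decision.

import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "selected": [1,    0,    1,   0,   0,   1,   0,   1],
})

rates = decisions.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Selection-rate ratio: {ratio:.2f}")
# The informal four-fifths heuristic flags ratios below 0.8 as worth a closer
# look; a low ratio isn't proof of unfairness, but it is a finding to address,
# document, or escalate through whatever formal process you already have.
```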
This canvas is shared to do three things: pass on what I've learned, invite feedback from others working in this space, and contribute to the wider conversation about building better norms and practices around responsible AI.
If you've used it, adapted it, or have thoughts on the framing — I'd love to hear from you. Reach me through the contact form or on LinkedIn.