For classifiers, risk scores, and recommendation engines.
The predictive-systems version of the canvas is in development. Same structure as the generative canvas — three checks, twelve questions, runnable in thirty minutes — but reframed around the distinct failure modes of systems that predict labels, scores, or probabilities.
Hiring filters, fraud detection, credit scoring, healthcare triage, recommender systems. The checks draw on the classic predictive-bias literature — Obermeyer et al. (2019) on the algorithm that underserved Black patients by using cost as a proxy for health, Bertrand & Mullainathan (2004) on resume callbacks by name, and ProPublica's COMPAS investigation (2016) on recidivism prediction.
Check 1 asks who is missing from the training data and labels. Check 2 asks what proxies the model is really using — the original proxy check. Check 3 asks where the model fails differently for different people.
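Check 3 boils down to disaggregating error rates: a model with good overall accuracy can still fail very differently across groups. A minimal sketch of that computation in Python (the function name, toy data, and group labels are illustrative, not part of the canvas itself):

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Compute false positive and false negative rates per group.

    y_true, y_pred: binary labels (0/1); groups: a group label per example.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        c = counts[g]
        if t == 1:
            c["pos"] += 1
            if p == 0:
                c["fn"] += 1  # qualified but rejected
        else:
            c["neg"] += 1
            if p == 1:
                c["fp"] += 1  # unqualified but accepted
    return {
        g: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else None,
            "fnr": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

# Toy data: overall accuracy looks fine, but every error lands on group B.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(error_rates_by_group(y_true, y_pred, groups))
# Group A: fpr 0.0, fnr 0.0 — Group B: fpr 0.5, fnr 1.0
```

The point is not the metric itself but the habit: never report a single aggregate number when the people behind the predictions differ in ways the model might exploit.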
If your work is predictive — classifiers, scores, predictions, recommendations — sign up via my contact form and I'll let you know when it's published. In the meantime, the generative & agentic canvas may still be useful if any part of your system is generative.
This work is shared to do three things: pass on what I've learned, invite feedback from others working in this space, and contribute to the wider conversation about building better norms and practices around responsible AI.
I'm particularly interested in hearing from practitioners working on predictive systems: what would make the predictive canvas most useful to you? Reach me through the contact form or on LinkedIn.