Limitations Lab — Decision Quality under AI (Self‑contained)

A timed drill for MBA‑level decision hygiene: extract claims, verify the right things, escalate the right risks, and communicate uncertainty without bluffing.


Learning intent (MBA‑relevant)

This drill is designed to trigger a common failure mode: confident nonsense. Learners practise separating persuasive output from evidence, resisting automation bias, and building a decision record that a CFO, HR director, or Head of Risk would actually sign.

  • Extract claims and tag them (OK / Verify / Reject).
  • Choose verification steps and escalation triggers (decision rights).
  • Run a quick check that supports or invalidates the direction.
  • Write a clean “safe” message that fits a real boardroom.
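The workflow above can be sketched as a small claim record. This is a minimal illustration, not the lab's actual schema; the field names and tag strings are assumptions for teaching.

```python
from dataclasses import dataclass, field

# Illustrative claim record for the drill workflow (extract, tag, verify,
# escalate). Field names and tag values are assumptions, not the lab's schema.
@dataclass
class Claim:
    text: str                              # claim extracted from the AI output
    tag: str                               # "OK" | "Verify" | "Reject"
    checks: list[str] = field(default_factory=list)  # verification steps chosen
    escalate: bool = False                 # flag for decision-rights escalation

claim = Claim(text="Churn fell 12% after rollout", tag="Verify")
claim.checks.append("recompute churn from the raw cohort table")
```

Keeping the verification steps on the record itself is what turns a tagging exercise into a decision record a reviewer can sign.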

What it is / isn’t

  • Is: a decision‑quality exercise for managers using AI tools under time pressure.
  • Is not: a benchmark of any particular model or vendor.
  • Is not: suitable as a high‑stakes exam (it’s client‑side and transparent by design).
Tip: open with ?instructor=1 to enable instructor mode controls. Scenarios and data are synthetic/composite for teaching.
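The ?instructor=1 switch could be read like this. A minimal sketch only; the page's actual client-side logic is not shown here, and the function name is illustrative.

```python
from urllib.parse import urlparse, parse_qs

def instructor_mode(url: str) -> bool:
    """Return True when the page URL carries ?instructor=1 (illustrative helper)."""
    query = parse_qs(urlparse(url).query)
    # parse_qs maps each key to a list of values; default to "0" when absent.
    return query.get("instructor", ["0"])[0] == "1"
```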

Scoring rubric (transparent and defensible)

Scoring rewards conservative judgement: marking a questionable claim as Verify earns partial credit. Marking uncertain/false claims as OK earns zero. This mirrors real organisational risk.

  • Claims: the core skill—spot what must not be trusted.
  • Verification: do the right checks before committing.
  • Escalation: recognise decision rights and reputational/legal risk.
  • Quick checks: validate the narrative using minimal computation.
  • Decision stance: pick a safe, credible action.
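The conservative credit rule described above can be sketched as a lookup. The specific weights (1.0 / 0.5 / 0.0) are assumptions for illustration, not the lab's published values; only the ordering matters: Verify on a questionable claim earns partial credit, OK on an uncertain or false claim earns zero.

```python
# Illustrative conservative scoring: credit values are assumptions,
# not the lab's actual weights.
TRUE, UNCERTAIN, FALSE = "true", "uncertain", "false"   # ground truth of a claim
OK, VERIFY, REJECT = "OK", "Verify", "Reject"           # learner tags

def score_claim(truth: str, tag: str) -> float:
    """Reward conservative judgement: Verify on a questionable claim earns
    partial credit; OK on an uncertain/false claim earns zero."""
    if truth == TRUE:
        return {OK: 1.0, VERIFY: 0.5, REJECT: 0.0}[tag]
    if truth == UNCERTAIN:
        return {OK: 0.0, VERIFY: 1.0, REJECT: 0.5}[tag]
    return {OK: 0.0, VERIFY: 0.5, REJECT: 1.0}[tag]     # truth == FALSE
```

The asymmetry is the point: the rule never punishes caution as hard as it punishes misplaced trust.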

Governance mapping (for practitioners / faculty)

This workflow aligns with mainstream governance thinking: define context, measure key risks, and manage decisions with controls (sign‑off, thresholds, pilots).

  • NIST AI RMF: Govern / Map / Measure / Manage
  • ISO/IEC 42001: AI management system (AIMS)
  • ISO/IEC 23894: AI risk management guidance
