Can You Trust AI?


A game about knowing when to trust, verify, or reject machine-made answers.

Why this exists

Most people think the AI question is: “Is the model accurate?” That matters, but it is incomplete. A better question is: Do you know when to trust it? This experiment is designed to help you see the difference between answers that are safe to trust, answers that should be verified, and decisions where human judgment should stay in charge.

What you’ll leave with

  • A picture of your trust pattern

    How you lean when AI sounds sure, unsure, or in between—not a trivia score.

  • A clearer sense of when to trust vs verify

    A practical read on where AI fits: when outputs are usable as-is, when they need checking, and when you should lead.

  • A better mental model for real life

    Framing you can reuse the next time a fluent answer shows up in work, health, money, or media.

How it works

This is an interactive judgment experiment—not a benchmark of model accuracy. Each round shows a question and one AI-style answer. You choose Trust, Verify, or Reject. Feedback follows every choice; at the end you get a calibration read and a practical framework for using AI.
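The round loop described above can be sketched in miniature. This is a hypothetical illustration, not the experiment's actual implementation: the `Round` fields, the "best call" labels, and the over/under-trust tally are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the Trust / Verify / Reject round loop.
# Field names and scoring are illustrative assumptions, not the real app.
from dataclasses import dataclass

@dataclass
class Round:
    question: str
    answer: str     # the AI-style answer shown to the player
    best_call: str  # "trust", "verify", or "reject" (the calibrated choice)

def play(rounds, choose):
    """Run each round through a chooser and tally the trust pattern."""
    tally = {"overtrust": 0, "undertrust": 0, "calibrated": 0}
    rank = {"trust": 2, "verify": 1, "reject": 0}  # higher = more trusting
    for r in rounds:
        pick = choose(r)
        if pick == r.best_call:
            tally["calibrated"] += 1
        elif rank[pick] > rank[r.best_call]:
            tally["overtrust"] += 1   # trusted more than was warranted
        else:
            tally["undertrust"] += 1  # checked or rejected a safe answer
    return tally
```

A player who always picks Trust, for instance, would land as calibrated on trust-safe rounds and as overtrusting on rounds that deserved verification, which is the kind of lean the final calibration read is meant to surface.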


Part of the Human Actually project. Session data stays in your browser unless you use the contact form.