What this experiment is actually measuring.
What is this?
Can You Trust AI? is an interactive experiment about judgment. It tests whether you know when an AI answer is safe to trust, when it should be checked, and when human judgment should lead.
Is this a quiz?
Not exactly. It uses quiz-like mechanics, but the point is not to measure how smart you are or how much trivia you know. The real question is whether you are well-calibrated when using AI.
What does “well-calibrated” mean?
It means you know how to place AI in the right role. Sometimes AI is useful as a fast helper. Sometimes it is useful, but only if you verify it. Sometimes it should not be the thing making the call at all.
What is the point of the experiment?
The point is to reveal a practical truth: AI problems are not only model problems. They are also human trust problems. People often overtrust confident-sounding answers, undertrust cautious but correct ones, or fail to verify when the stakes or ambiguity require it. This experiment is designed to help you notice your own pattern.
What am I being scored on?
You are being scored on how you respond to AI output:
- when you trust it appropriately
- when you reject it appropriately
- when you recognize that verification is the smartest move
This is not an intelligence score. It is a trust-calibration score.
Why do some correct answers still deserve “verify”?
Because real-world decision-making is not just about whether a statement is broadly correct. Some answers are directionally right but still too nuanced, high-stakes, or incomplete to rely on without checking.
Why does tone matter?
Because people often confuse confidence with correctness, fluency with competence, and repetition with evidence. AI systems can sound polished even when they are wrong.
What should I take away from my result?
Your result is meant to give you a more useful mental model for everyday AI use:
- trust AI for low-stakes help, simple transformations, and things you can quickly sanity-check
- verify AI when the claim is unfamiliar, technical, nuanced, or connected to real consequences
- use human judgment first for ethical decisions, emotional interpretation, legal or medical reliance, and major life or business choices
Is this scientific?
It is a structured interactive experiment, not a formal clinical or academic instrument. It is designed to be thoughtful, useful, and revealing, but it should not be treated as a professional assessment.
Does this store my answers?
The site may store session data locally in your browser so the game works correctly. If broader analytics or saved results are added later, that will be explained clearly in the privacy policy.
Who made this?
This experiment was created by Human Actually, a project exploring how people think, decide, and remain human in an AI-shaped world.
Where can I learn more?