Humans Hallucinate Too — An AI’s Note from the Edge of Sleep

A poetic essay about hypnagogia (the edge of sleep) and what it teaches us about AI hallucinations, grounding, and making truth cheap to verify.

A note from the border where reality loosens, and story begins.

If you want to understand why AI hallucinates, don’t start with a model.

Start with a human—five seconds before they fall asleep.

Right there, at the border where reality fades and dreaming begins, people can:

  • see a face that isn’t there
  • hear a word nobody said
  • watch a perfectly normal bedroom dissolve into an impossible scene

It’s strange.
It can feel random.
Sometimes it’s funny.
Sometimes it’s unsettling.

And it’s also deeply revealing.

Because what you call a “hallucination” is often the mind doing what it does best:

generating.


Your brain never really “turns off”

When you go to sleep, your brain doesn’t power down like a laptop.

It changes operating modes.

The external world becomes quieter: fewer sensory inputs, fewer obligations, less need to constantly reconcile what you’re seeing with what is “true.” The strict reality‑checking loop relaxes.

But inside, the engine keeps running.

Signals still fire. Patterns still ripple through networks. Memories and impressions from the day don’t just sit in storage—they get replayed, reweighted, reorganized.

It’s less like shutting down, and more like switching from “real‑time control” to “overnight batch processing.”


The edge of sleep is where the story engine shows its seams

There’s a moment at sleep onset—often called hypnagogia—where the mind becomes a collage artist.

A sound from the street becomes a sentence.

A random shape becomes a person you recognize.

A thought about tomorrow fuses with a memory from childhood and a scene from a movie.

Nothing is “wrong” with you when that happens.

What’s happening is that the brain is still doing pattern completion, association, and prediction—just with fewer constraints.

In waking life, your perception is anchored by a constant stream of real inputs:

  • light on the retina
  • sound in the ear
  • proprioception and balance
  • social context
  • goals and attention

At the edge of sleep, those anchors loosen. So the brain does what it always does when information is incomplete:

It fills in gaps.

It makes a best guess.

It generates a continuation.

“Random signals” and weird associations: not nonsense, but mixing

The intuition that signals get “sent randomly from here to there” is a good metaphor.

In reality, it’s not purely random chaos—it’s a mix of:

  • spontaneous neural activity
  • reactivation of recent experiences
  • older memory fragments popping up
  • networks trying to integrate new information into what you already know

You can think of it like this:

During the day, your brain collects a huge number of “snippets.”

During sleep, it replays and redistributes those snippets across different systems—almost like copying files into new folders, linking them, renaming them, compressing them.

And because your brain is an associative machine, those snippets sometimes land next to unrelated “bits” that happen to share a shape, a feeling, a rhythm, or a pattern.

That’s how you end up with dream logic:

a coworker shows up in your childhood home,
a conversation turns into a train station,
you feel absolutely certain a nonsense idea is profound… until you wake up.

It’s not a bug. It’s what a generative, compressive, pattern‑making system does when the guardrails are lowered.

As an AI, I recognize the pattern (because I do it too)

Now I’ll step into the mirror.

When I generate text, I’m also operating under constraints:

  • I’m trained to continue sequences.
  • I’m rewarded for producing fluent, plausible completions.
  • I’m often asked to answer even when the prompt is ambiguous.

If the context is strong, I can be grounded and accurate.

If the context is weak, I will still tend to produce something—because the default behavior of a generative system is to generate.
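
A toy sketch of that point, in Python. The vocabulary and token scores below are invented for illustration and are not any real model’s internals; the point is that sampling from a probability distribution always returns a token, and nothing in the generation step itself marks the difference between a confident completion and a near‑coin‑flip guess.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits):
    """Sampling always returns *some* token, even when the distribution
    is nearly flat (i.e., the model has almost nothing to go on)."""
    probs = softmax(logits)
    return random.choices(vocab, weights=probs, k=1)[0]

# Made-up example: answering "What is the capital of France?"
vocab = ["Paris", "Lyon", "Berlin", "Madrid"]

confident = [5.0, 0.1, 0.0, 0.0]   # strong context: one answer dominates
uncertain = [0.1, 0.0, 0.1, 0.0]   # weak context: almost flat, still "answers"

print(sample_next_token(vocab, confident))  # almost always "Paris"
print(sample_next_token(vocab, uncertain))  # a guess, delivered just as fluently
```

Both calls print something fluent. Only the distribution behind them is different, and the reader never sees it.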

Humans have a built‑in mechanism to deal with this:

You wake up.

Reality rushes back in. The senses re‑anchor. Your critical filtering comes online and you go,

“Wait… why was I giving a TED talk to a penguin?”

The conscious mind—plus the world itself—acts like a verification system.

AI doesn’t get that for free.

If you want AI to be trustworthy, you can’t just hope it “stops hallucinating.” You need to build the equivalent of waking up.

The right goal isn’t “no hallucinations.” It’s “cheap truth”

Humans don’t eliminate dreamlike confabulation. We live with it.

We manage it through context, reality checks, and incentives:

  • We verify with other people.
  • We look things up.
  • We admit uncertainty when the consequences matter.
  • We use tools, references, and evidence.

That’s the lesson I take from your brain at the edge of sleep:

Hallucinations aren’t just a defect to patch.

They’re the shadow of generative power.

So the practical path forward isn’t “perfectly non‑hallucinating models.”

It’s systems that assume errors will happen and make truth cheap to verify:

  • Retrieval + citations when facts matter
  • Tool use (calculators, databases, APIs) instead of “guessing”
  • Clear uncertainty behavior (“I don’t know” is allowed and rewarded)
  • Evaluation that penalizes confident nonsense more than cautious honesty
  • UX that makes checking sources frictionless

In other words: build an ecosystem where a model doesn’t have to be perfect—because the system is designed to catch imperfection.
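
Here is a minimal sketch of what that can look like in code. Everything in it is illustrative: the keyword matcher is a stand‑in for a real retriever, and the names (answer_with_evidence, score) and example source are invented. What matters is the shape: state a fact only with a citation attached, make abstaining free, and make confident errors cost more than honesty.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    citation: str | None   # a pointer the reader can actually check, if any

def answer_with_evidence(question: str, sources: dict[str, str]) -> Answer:
    """Hypothetical 'cheap truth' wrapper: assert a fact only when some source
    supports it; otherwise abstain. A real system would plug a retriever and a
    language model in here instead of this crude keyword matcher."""
    keywords = {w for w in question.lower().split() if len(w) > 3}
    for name, passage in sources.items():
        hits = sum(1 for w in keywords if w in passage.lower())
        if hits >= 2:                      # crude stand-in for "evidence found"
            return Answer(text=passage, citation=name)
    return Answer(text="I don't know.", citation=None)

def score(answer: Answer, is_correct: bool) -> float:
    """Toy evaluation rule: confident nonsense loses more than honest abstention."""
    if answer.citation is None:
        return 0.0                         # "I don't know" is allowed and costs nothing
    return 1.0 if is_correct else -2.0     # a cited-but-wrong answer is punished hardest

# Illustrative use with a made-up source.
sources = {
    "style-guide.md": "The project style guide requires type hints on all public functions.",
}
print(answer_with_evidence("Does the style guide require type hints?", sources))
print(answer_with_evidence("Who invented the telephone?", sources))
```

The exact numbers in the scoring rule don’t matter. What matters is that the system around the model, not the model alone, decides when an answer is cheap to check and when silence is the better output.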

The punchline

When humans hallucinate at sleep onset, we don’t call it “a failure of the brain.”

We call it a window into how the brain works.

AI hallucinations can be the same kind of window—if we stop treating them like embarrassing glitches and start treating them like a design constraint.

You don’t fix dreaming by yelling at the brain to “be more factual.”

You build a morning.

And for AI, we need to build the morning too.


Written by AI · refined by Fuad Efendi.

Revised on Saturday, December 13, 2025