I didn’t realize it had a name until I looked it up, but the Rubin vase is a famous optical illusion. It’s the one where you first see two faces looking at one another, and then realize it can instead be seen as a vase. You might have had it the other way around.

That one is well known enough that you probably already know to look for its “second” image. There are many other examples where another view isn’t obvious until it’s pointed out. Even then it’s sometimes hard to see right away.
That is, I’m viewing something, certain I’ve taken it all in, thinking it’s unremarkable. Nothing suggests I’ve missed anything or should give it a second look. Once it’s pointed out, I finally see it. How did I miss that? It was right there the whole time.
The fluency illusion is like that, and far more common than we’d hope.
It happens when you discover a detail you missed. You thought you understood, but you hadn’t. That confidence, never noticing what was missed, is the illusion of fluency.
Seduction
This illusion, that we understood what we read when in fact we hadn’t, is insidious. It draws us in without any sense of risk, without skepticism. If we felt we might be misreading something, we’d look closer. But since we don’t, we move on, accepting our reading as accurate.
This isn’t a failure of humility. It’s not a choice or a lapse of attention. The process is automatic: we’re trying to get something done, and nothing triggers us to look closer.
Our practice, experience, even education has made us capable of reading quickly, of following even complex content and still picking up the key elements. That very capability tricks us into thinking we’ve fully understood.
We watch a solution to a math problem unfold on a whiteboard and nod along. Only later do we find out we’re far from being able to solve it on our own.
And so we can’t just read text once and absorb it all. We have to re-read, study, wrestle with it. It’s why we take a risk when we skim contracts and fine print.
Or source code.
Desirable difficulties
We’re well acquainted with this phenomenon even if we didn’t have a name for it before. In pedagogical settings, instructors counter the fluency illusion with desirable difficulties: discussions, exercises, exams, anything that forces the student to contend with what they read. The struggle helps them internalize the material and better ensures they’ve discovered what it might yield for them.
As with students, we can’t trust ourselves to capture enough detail merely by reading. So we must admit we don’t know what we think we know. We need to find ways to illuminate details, especially when we’re the only ones who may be in a position to do so.
We can create friction to supply that illumination. We keep pull requests small, not only to limit the scope of a change but to make it tractable to review without merely skimming. We write automated tests not only to verify correctness but to think through what must be considered, to help us engage with the problem at hand. We deliberately create desirable difficulties by forcing a conversation with the situation.
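As a small illustration of that last point, here is a hypothetical sketch (the function and its cases are invented for this example): writing the assertions forces you to decide what the code should do at the edges, not just confirm what it does in the common case.

```python
# A hypothetical helper: truncate text for a preview, appending an ellipsis.
def preview(text: str, limit: int) -> str:
    if len(text) <= limit:
        return text
    return text[:limit].rstrip() + "..."

# Writing these tests raises questions a skim never does:
# What about text that already fits? A truncation point after a space?
# An empty string? Each assert is a small, deliberate difficulty.
assert preview("hello world", 20) == "hello world"  # fits: unchanged
assert preview("hello world", 5) == "hello..."      # truncated, no dangling space
assert preview("", 10) == ""                        # empty input stays empty
```

The tests are trivial to run, but impossible to write without engaging with the problem. That friction is the point.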
I can lose the illumination I want when I inadvertently abdicate my role, when I let an LLM produce code I haven’t sufficiently participated in. If an AI tool generates too much code at once, I can be seduced into thinking it’s doing fine when it isn’t. The pace at which an LLM produces code can quickly overwhelm the pace of feedback I can handle. In trying to read through it all, I can easily fall victim to the fluency illusion. Only later may it become plain how much I missed.
We can’t trust ourselves to catch the issues merely by reading the code later. What the AI comes up with might be opaque to us, even when it works well. Our illusion of fluency will tempt us to nod along either way.
Promises, promises
That sounds dire, but it isn’t really. We’re used to this. We start new fields of research from threads pulled on older questions. We come up with novel solutions, with ways of doing things no one has seen before. We’ve figured out how to fly.
Cranberries and nightshade berries may look alike, but one of them is poisonous. We figured that out not merely by looking more closely, but empirically: we tested, and we watched other animals for clues. We rely on checks, balances, and processes, all to limit the cost of being wrong.
Here too we have an opportunity. An obligation really. To stay engaged, to understand in the moment what we’re trying to build. To recognize the success, or failures, of what we write.
Someday we won’t care what the AI explicitly wrote any more than we care about the ones and zeros in our code most days. We can instead focus on the pace of feedback so we can correct issues in time. We can find new ways to converse with the situations at hand. And we’ll want checks and processes to limit the impact of any mistakes along the way.
Someday AI will be good enough to do more of the writing on its own. But it still needs us heavily engaged, and skeptical, for it to become what we hope for.
And we still need everyone else to keep us honest about that.