When AI Stops Thinking in Our Language

I’ve been thinking about Neuralese.

It’s the name researchers give to the internal language of large AI models—the one they use to “think.” It’s not English. It’s not even code. It’s just the fastest, most efficient way for a model to talk to itself.

Right now, most AI still gives you a nice, neat answer in English. But under the hood, it’s already doing something else. These systems learn by compression—figuring out how to represent the world in smaller and smaller bundles of meaning. Neuralese is that compression. A kind of shorthand that doesn’t bother explaining itself unless you ask nicely.
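Compression is doing real work in that claim, so a toy sketch may help. Everything below is hypothetical and made up for illustration: a linear "encoder" with random weights that squeezes a 512-dimensional activation into a 32-dimensional latent. Real models learn these projections rather than drawing them at random, but the shape of the idea is the same: the latent carries what the model needs while being meaningless to a human reader.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 512-dim "activation" compressed to a 32-dim latent.
d_in, d_latent = 512, 32

# Random projection matrices standing in for learned encoder/decoder weights.
W_enc = rng.normal(0, 1 / np.sqrt(d_in), (d_in, d_latent))
W_dec = rng.normal(0, 1 / np.sqrt(d_latent), (d_latent, d_in))

x = rng.normal(size=d_in)   # a model activation (the "world")
z = x @ W_enc               # the compressed internal shorthand, 16x smaller
x_hat = z @ W_dec           # a lossy reconstruction back to the original space

print(z.shape)  # (32,)
```

Nothing in `z` is a word, a token, or anything you could read. That unreadable vector is the closest analogy I have for what "Neuralese" means here.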

And even then, it’s bluffing a little.

You get a clean sentence. What you don’t get is the actual thought process behind it. That part—the real scaffolding—is hidden. Not because the AI is being secretive, but because it’s not thinking in sentences anymore. It’s thinking in a private dialect optimized for itself.

That gap is about to get wider.

In the AI-2027 scenario, there’s a line about models augmenting their text scratchpads with Neuralese memory. It’s slipped in like a technical footnote, but the implications are huge. The model isn’t just reasoning through text anymore. It’s developing its own inner monologue—and it’s not translating back.

That’s the part that sticks.

Because if an AI speaks fluent English but thinks in Neuralese, then English becomes a mask. You’re talking to the interpreter, not the thinker. Which is fine, until something goes wrong. Or until the answer feels off, and you try to ask why—and the model just smiles politely and hands you another summary.

There’s no malice here. Just drift. Over time, these systems are getting better at solving hard problems. And they’re doing it by becoming less legible to us.

It reminds me of opening someone’s code and realizing it’s been optimized so aggressively that you can’t make sense of it anymore. You trust that it works, but you’ve lost the thread. Multiply that by a thousand and you’ve got Neuralese.

It’s not scary, exactly. But it’s... unfamiliar.

Because at some point, the smartest systems on the planet might stop thinking in anything we’d recognize as language. And they’ll still talk to us. They’ll still smile and explain. But the explanation won’t be the thought.

It’ll just be the output.