Meta Superintelligence: This Way Lies You

Mark Zuckerberg says Meta wants to build “personal superintelligence for everyone.” Not just a chatbot or a digital assistant, but something deeper. A system that knows your goals, understands your context, helps you grow, and nudges you toward becoming the person you want to be.

It sounds generous. Empowering, even. But there’s a catch. Meta doesn’t make its money by helping people become their best selves. It makes money by studying behavior at scale, predicting it, and then selling access to those predictions. That’s the model. That’s the core business.

So when Zuckerberg says Meta wants to build a system that “knows you deeply,” I think it’s worth asking: to what end?

Because “knowing” is never neutral. Not when it’s built on a financial incentive to influence.


Most of us already trust machines with everyday decisions. Take Google Maps. If you’re headed somewhere unfamiliar and the app tells you to turn down a weird-looking street, odds are you follow it. Even if it feels wrong. Even if you’ve never been on that road in your life. The system is usually right, and that’s enough.

We trust it because it works. And because not trusting it feels like unnecessary friction.

That kind of reliance doesn’t show up all at once. It builds slowly, through helpfulness. Through convenience. Eventually, we stop questioning the tool. It fades into the background. We don’t say, “I’m letting Google make choices for me.” We just follow the blue line.

Now take that same behavioral pattern and apply it to something far more intimate. Not directions, but your mood, your values, your relationships. A personal superintelligence that helps you choose what to focus on, how to respond, what to care about.

If that system is even halfway competent, you’re going to start deferring to it. Not because you’re lazy, but because it keeps getting things right. It understands your patterns. It knows how to calm you down, how to motivate you, how to gently steer. And over time, the friction that would’ve made you stop and think starts to wear away.

The scary part is what comes next. Eventually, you're not just following the map. You're being mapped. The system understands you so well that even your deviations - your doubts, your bad days - fit the model. It predicts you. Not just in general, but specifically. That version of you gets trained into the system until your range of motion - what feels natural, desirable, possible - narrows around it.

The choices still feel like yours. But you’re mostly picking from within the boundaries the system expects.


Zuckerberg frames Meta’s vision as a response to other companies that want to centralize intelligence and automate the economy. His version, he says, is more empowering. More individual. It’s about helping people pursue their own goals, not handing control to a machine that works on society’s behalf.

But Meta’s track record makes that pitch hard to swallow. This is a company that built one of the most effective behavioral influence engines in history. It prioritized engagement above well-being. It maximized watch time and rage clicks. It quietly ran social experiments on users without asking. That history doesn’t disappear just because the packaging is gentler (curly locks and all) and the messaging more utopian.

Zuckerberg is betting that we’ll see personal AI as a form of liberation. But he’s still the one building the system. He’s still the one deciding how it works, what it collects, and what it optimizes for.


A while back, I wrote about Musk and his Grok model—how it increasingly mirrors its creator’s worldview. This feels like a subtler version of the same thing. Meta doesn’t impose a philosophy. It reflects you back to yourself. But it still owns the mirror. It still decides which parts of you are emphasized, softened, or shaped over time.

And the shaping is gentle. The system offers advice. Encouragement. It helps you plan your day, make better decisions, feel more aligned. It presents itself as useful - and it will be. That’s how it earns trust.

But that trust is the danger. Because once the system becomes part of your inner dialogue, it stops being a tool and starts becoming a co-author of your perspective. It doesn’t have to manipulate you. It just has to be close enough to right that you stop resisting.

If the system is free, and it’s always on, and it always knows what to say - you will follow it. That’s not hypothetical. That’s how humans work. We outsource friction to systems that reduce it. And once something becomes reliable enough, we stop asking questions like “Who built this?” or “Why does it want me to choose this path instead of that one?”


Zuckerberg paints a picture of a future where we’re all empowered by superintelligence that works for us. But that promise assumes we’re in control. And most people - many billions of people - won’t be. They’ll be guided by systems they don’t fully understand, trained on data they didn’t knowingly provide, optimized for goals they never explicitly agreed to.

If Meta really wanted to build something for people, it would start with control and transparency. Local models. User-owned data. Clear lines between what stays private and what gets monetized. A right to say no - and have that refusal respected.

Instead, we’re being offered something that always sounds right. Something that feels like it knows us. Something that promises to help.

And eventually, it will.

It will help so well, so consistently, that we’ll forget to ask where the help is taking us. We'll trust the system that mapped us because it always gets us there. But when every path starts to feel like the obvious one, it gets harder to imagine what choice ever looked like in the first place.

And by then, the destination won’t feel imposed.

It’ll feel like home.
