The Race for Attachment

Every AI system that wants to last is trying to solve the same problem: can I predict what you want next better than anyone else?

Not just model providers. Recommenders tuning your feed. Productivity tools guessing your next task. Companions guessing how you'll feel. On the surface it looks like personalization. Underneath, it's a quiet race to build the richest possible model of your inner life.

This is the real competition. Not which model scores highest on benchmarks or which chatbot sounds smartest. The race is for who knows you best.

From stateless to intimate

We've crossed from stateless chat to systems that remember your conversations, your preferences, your projects. A year ago, every session started from zero. Now the model picks up where you left off. That memory will deepen: from "you like dark mode" to "you tend to procrastinate around chapter three" to "you're avoiding that email because the last one like it led to a fight."

Every correction, every "no, more like this," nudges the model closer. After a year it knows you better than any settings menu ever could. After three years, it may know patterns about you that you haven't noticed yourself.

Tristan Harris describes this as the shift from a race for attention to a race for attachment. Social media tried to maximize minutes. AI companions are trying to become the first thing you turn to when it matters. The thing that just gets you.

Surveys already show teenagers forming romantic relationships with models. That should unsettle us, even when the technology is genuinely impressive. Especially then.

The lock-in you don't notice

The lock-in here is structural, and it works differently from anything we've seen before.

Switching apps used to mean learning a new UI. Annoying, but manageable. Switching AI systems means giving up years of accumulated context, or handing an intimate behavioral map to a company you barely trust. Your conversation history isn't like your Spotify playlists. It's closer to a diary someone else wrote about you, and they won't let you take it with you.

Memory is both the magic and the trap. The system works because it remembers. You can't leave because it remembers. A counter-movement is forming around this: memory as a portable file you own, not something locked in a company's cloud. The products betting on that are betting that trust, not lock-in, is how you keep users. I think they're right, but they're swimming against a strong current.

The default eats the individual

There's a finding from my thesis work that keeps nagging at me here, and I think it points at something bigger than most people realize.

When you give people preset options, individual differences smooth out. Defaults homogenize. This is well documented in behavioral economics: organ donation rates, retirement savings, privacy settings. People take whatever's pre-selected, not because they agree with it, but because choosing is expensive and the default feels endorsed.

Now scale that to agents making decisions on your behalf all day.

Each agent is shaped by a personality spec the builder wrote: tone, values, what counts as urgent, how to handle ambiguity. Most users will never read those specs. They'll interact with the outputs and assume the agent reflects them. It won't. It reflects the builder's idea of a reasonable person, filtered through whatever the company optimized for.

With a screen, you can override a default. You see the toggle, you flip it. When an agent handles the decision in the background, your quirks never get expressed. They get replaced by someone else's idea of reasonable. Your taste in how to phrase a difficult email. Your judgment about which meeting actually matters. Your sense of when to push back and when to let it go. All quietly overwritten by a median.

The nightmare version of this isn't dramatic. Nobody forces anything on you. You just gradually become more like everyone else, because the system handling your decisions was trained on everyone else. Your edges get sanded off by an optimization function you never agreed to.

The dream version is an agent that learns your edges and protects them. That notices when you deviate from the crowd and treats it as signal, not noise. Building that requires a different set of incentives than the ones currently winning.

The real product question

Who do you hand the map of yourself to? What do they do with it? Can you see what they saw? Can you take it back?

These aren't abstract questions. They're product questions. And they point in one direction: the company that wins the attachment race by protecting your edges, not sanding them off, is the one worth handing the map to. Not the one that remembers the most. The one that remembers well.