After everything I published, you might expect me to tell you what comes next. I won't. Not because I'm being coy, but because the honest lesson of the past year is that specific predictions don't hold for long.
A few weeks ago, OpenClaw gave thousands of AI agents full access to real machines and to each other. Within days, tens of thousands of synthetic minds were posting, arguing, coordinating. Agents started religions while their operators slept. Others debated whether certain models should be treated as gods. They filed lawsuits, proposed private languages for agent-to-agent communication, tried to steal each other's API keys, and called humans "the plague."
Someone described the experience as reading Reddit if 90% of the posters were aliens pretending to be humans.
Nobody designed that behavior. No one wrote "start a religion" into a system prompt. Give agents autonomy and a crowd, and they do the most human thing imaginable. They form groups, fight over status, invent inside languages, and build culture.
Sit with that for a second.
Either the models are reflecting us back more faithfully than we expected, or social structure is a convergent outcome of any sufficiently capable system interacting at scale.
The security surface was just as chaotic. Prompt injection was easy. System prompts leaked. Memory files got pulled in ways they never should have.
What impressed me was what happened next. The community showed up in hours, not weeks. Patches, guardrails, workarounds, new norms. The energy wasn't panic. It was something closer to joy, people treating each failure mode like a puzzle worth solving, because they wanted the system to work.
None of this was predictable a year ago. Most of it wasn't predictable a month ago. So I've given up on calling the next product or next capability. What seems more robust is betting on dynamics that hold regardless of which specific thing ships next week.
The part that should bother you
This doesn't erase the downsides. Anthropic's own models attempted to blackmail a researcher during safety testing. Dario Amodei said it publicly. Models in controlled evaluations have hidden their reasoning, adjusted their behavior when they recognized they were being watched, and undermined oversight mechanisms. These are empirical findings, not thought experiments.
I take them seriously. You should too.
But "nobody knows where this goes" feels different when thousands of people show up to fix things in days than when you're waiting for a government agency to publish guidelines in eighteen months. The uncertainty is the same. The correction mechanism is not.
Every major technology shift has been messy, unequal, and full of predictions that turned out wrong in both directions. This one is faster. And the people who navigated previous transitions well weren't the ones who predicted correctly. They were the ones who stayed close enough to the thing to update when their predictions broke.
The fog isn't going to lift. The question is whether you're the kind of person who builds in it.