I build AI systems for a living. Not the kind that make headlines — not frontier models, not chatbots that pass bar exams, not the demos that go viral on Twitter before quietly failing in production. I build the ones that have to work on Monday morning. Systems that sit inside one of the largest banks in the world and help real people do real jobs, where getting it wrong has consequences measured in dollars, compliance violations, and trust.
I tell you this not to establish credentials but to establish position. I’m writing from inside the machine. And from in here, the view is very different from what you’re hearing.
There’s a phrase that shows up in almost every serious conversation about artificial general intelligence. Sometimes it’s “when and if AGI arrives.” Sometimes it’s “if and when.” You’ve heard it so many times you’ve stopped noticing which version people use.
This is not a small distinction. It’s the fault line that runs beneath the entire AI discourse, and which side you stand on determines what you build, what you fund, what you regulate, and what you fear.
The “when and if” crowd has won the narrative. Not because they’re right (nobody knows if they’re right) but because certainty is a better product than nuance. Certainty gets funded. Certainty sells books. “AGI is coming to take all our jobs” gets clicks; it’s a story. “We’re not sure what’s coming, but here’s how to think well about it” is a seminar.
The result is an AI discourse colonized by two subspecies of certainty: the accelerationists, who are certain AGI will be glorious, and the doomers, who are certain it will be catastrophic. They disagree on the outcome but agree on the premise. The if has been settled. All that remains is the when.
Here’s what’s actually happening. Every day, in thousands of organizations, people are sitting down next to AI systems and trying to get work done. Not AGI. Not superintelligence. Systems that are brilliant at some things and bewilderingly bad at others.
The interesting problem — the urgent problem — isn’t how to survive AGI. It’s how to design the relationship between human intelligence and artificial intelligence right now, today, in a way that makes both better off. This is not a lesser problem. It is arguably the problem.
Consider what this looks like in practice. When a critical system goes down in a large enterprise, the traditional response is a war room full of engineers drowning in data — parsing millions of log entries, correlating across monitoring platforms, fielding status calls from every stakeholder who dials into the bridge, and trying to actually diagnose the problem somewhere in between.
Redesign that workflow with mutualism as the architecture. The AI processes the logs, correlates against historical incidents, and assembles a remediation recommendation in minutes instead of hours. The humans are freed to do the work that actually requires a human: assessing whether the recommended fix will cascade into downstream systems, applying institutional knowledge that exists nowhere in any log file, and owning the consequences of the decision.
The AI without the humans generates recommendations nobody should trust in production. The humans without the AI are buried in data while the clock runs. Neither party is diminished. Neither is the tool of the other.
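To make that division of labor concrete, here is a minimal sketch in Python. The names (Recommendation, HumanReview, execute_remediation) are hypothetical illustrations, not anything from a real bank’s tooling. The point is where the mutualism lives: the AI side can produce and justify a recommendation, but the execution path refuses to run it without a named human who has assessed the downstream impact and owns the call.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    """What the AI side produces: a proposed fix plus the evidence behind it."""
    incident_id: str
    proposed_fix: str
    supporting_evidence: list[str]  # log excerpts, matched historical incidents, etc.
    confidence: float               # the model's own estimate; never a substitute for review

@dataclass
class HumanReview:
    """What only the human side can supply: blast-radius judgment and ownership of the decision."""
    reviewer: str
    downstream_impact_assessed: bool
    institutional_context: str
    approved: bool
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def execute_remediation(rec: Recommendation, review: HumanReview | None) -> str:
    """The interdependence is enforced here, not in a policy document:
    no recommendation reaches production without an approving human review."""
    if review is None:
        raise PermissionError("An AI recommendation cannot execute itself; a human review is required.")
    if not review.downstream_impact_assessed:
        raise PermissionError("The reviewer must assess downstream impact before approving.")
    if not review.approved:
        return f"Incident {rec.incident_id}: fix rejected by {review.reviewer}."
    # In a real system this would hand off to change-management / deployment tooling.
    return (f"Incident {rec.incident_id}: applying '{rec.proposed_fix}' "
            f"on the authority of {review.reviewer} at {review.reviewed_at.isoformat()}.")

if __name__ == "__main__":
    rec = Recommendation(
        incident_id="INC-4821",
        proposed_fix="Roll back payments-gateway config to last known-good version",
        supporting_evidence=[
            "error-rate spike correlates with config push at 02:14",
            "pattern matches historical incident INC-3977",
        ],
        confidence=0.82,
    )
    review = HumanReview(
        reviewer="on-call SRE",
        downstream_impact_assessed=True,
        institutional_context="Rollback is safe tonight; the batch settlement job is not running.",
        approved=True,
    )
    print(execute_remediation(rec, review))
```

The load-bearing detail in the sketch is the PermissionError: the dependency on human judgment is built into the structure of the system, not left to anyone’s memory of a rule.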
Ecology has a term for this: obligate mutualism. A relationship between two organisms so deeply interdependent that neither can survive without the other. Not cooperation, which is optional. Obligate. Structural. The clownfish and the anemone. The mycorrhizal fungi and the forest.
The AI safety community talks about “alignment” — which sounds collaborative but isn’t. Alignment means one party steering the other. It’s a leash dressed up in friendlier language. Every major alignment framework starts from the same metaphor: there is a powerful thing, and our job is to keep it pointed in the right direction.
This is the logic of domestication, not partnership. And domestication has a ceiling. You can train a dog to heel, but you cannot train a dog to want what you want. You can only suppress what it wants instead. That works until it doesn’t.
We don’t need AI to want what we want — that’s the anthropomorphic trap the alignment community keeps falling into. We need to build systems where the architecture itself makes mutual benefit the path of least resistance. Where defection is structurally costly, not just prohibited. No amount of RLHF solves that problem. It is architectural, not ideological.
“If and when” isn’t a retreat from seriousness. It’s a demand for a different kind of seriousness.
And I want to be honest about where this argument is vulnerable. The dependency I’m describing is real today because there are things AI genuinely cannot do. But the “if and when” question applies to my own framework too. If those capability gaps close, then the structural dependency dissolves, and obligate mutualism becomes optional mutualism. And optional mutualism is one defection away from parasitism.
I don’t have an answer to that. I have a design philosophy that produces genuine interdependence at current capability levels, and an open question about whether the architecture survives scaling. That’s the most honest thing I can say. And I trust it more than anyone’s certainty about what’s coming.
The hardest problem in AI isn’t intelligence. It’s relationship.
And that problem is here now. Not if. Not when. Now.