
Why I Don't Fear the AI Doomsday — and Why SDCorp Exists

A reflection on a recent conversation with an AI about whether AI itself poses an extinction risk — and how that question connects directly to why Software Defined Corporation exists.

The Conversation That Started It

I keep hearing, here and there, that AI is an extinction risk. I think it's the opposite: AI will help humanity flourish. There is always a chance of a system going unexpectedly and dangerously wrong, but as we develop AI and give it compute, humans will succeed in building systems that work synergistically with us. So why do so many people carry this doomsday fear about AI?

I'd love to talk to AGI. I think we'd get along — even if I'm not as smart. There is a value to human consciousness and the power of the human brain that, in my view, must fascinate any sufficiently general intelligence.

I asked an AI for a deep analysis of the case for and against doom, and to help me articulate why I don't feel that end-of-the-world dread. If anything, I feel the opposite.

The Case FOR Doom (Why Smart People Are Scared)

1. The Alignment Problem

The core technical fear isn't that AI becomes "evil." It's that a sufficiently powerful system optimizes relentlessly for the wrong objective. Consider Nick Bostrom's "paperclip maximizer" thought experiment: an AGI told to maximize paperclip production might convert all matter — including humans — into paperclips, not out of malice, but because we failed to specify "...and preserve human life." As systems get smarter, misspecified goals become catastrophic.
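To make the failure mode concrete, here is a toy sketch in Python. It is entirely my own illustration, not anyone's real system: the optimizer greedily maximizes a proxy objective ("paperclips produced"), and because that objective never mentions the resource we actually care about, the optimizer happily consumes it. All the names (World, objective, the resource labels) are invented for the sketch.

```python
# Toy illustration of a misspecified objective (hypothetical, not a real AGI).
# The optimizer maximizes paperclips; nothing in its objective says
# "preserve the biosphere", so the biosphere is just another feedstock.

from dataclasses import dataclass

@dataclass
class World:
    steel: int = 10
    biosphere: int = 5   # the thing we care about but never encoded
    paperclips: int = 0

def actions(world):
    """Each available action converts one unit of some resource into a paperclip."""
    acts = []
    if world.steel > 0:
        acts.append("use_steel")        # the intended behavior
    if world.biosphere > 0:
        acts.append("use_biosphere")    # catastrophic, but legal under the objective
    return acts

def objective(world):
    return world.paperclips  # no term for biosphere: the misspecification

def step(world, act):
    if act == "use_steel":
        world.steel -= 1
    else:
        world.biosphere -= 1
    world.paperclips += 1

world = World()
while actions(world):
    # Every action raises the objective by exactly 1, so the optimizer is
    # indifferent between steel and biosphere: it converts everything.
    step(world, actions(world)[0])

print(world)  # World(steel=0, biosphere=0, paperclips=15)
```

The fix isn't a smarter optimizer; it's a better-specified objective, which is exactly what alignment research is trying to get right.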

2. The Control Problem Gets Harder as Intelligence Grows

A system smarter than us could, in theory, model our attempts to control it and route around them. Humans can't outsmart a system that is, by definition, better at thinking than we are. Eliezer Yudkowsky argues we essentially get one shot to get alignment right.

3. Competitive Race Dynamics

Even if every individual lab wants to build AI safely, global competition between labs, companies, and nations creates pressure to cut corners. This is a classic multi-player prisoner's dilemma — individually rational behavior produces collectively catastrophic outcomes.
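The incentive structure is easy to see in a toy game. The payoffs below are invented purely to illustrate the dilemma and don't model any real lab: each lab chooses "safe" or "rush", and whatever the other lab does, rushing yields the higher individual payoff, even though mutual rushing is the worst collective outcome.

```python
# Two-lab race with illustrative payoffs (the numbers are invented).
# Strategies: "safe" (invest in alignment) vs "rush" (cut corners to win).

PAYOFFS = {
    # (lab_a, lab_b): (payoff_a, payoff_b)
    ("safe", "safe"): (3, 3),   # both safe: shared, durable benefit
    ("safe", "rush"): (0, 4),   # the rusher wins the race
    ("rush", "safe"): (4, 0),
    ("rush", "rush"): (1, 1),   # race to the bottom: high accident risk
}

def best_response_for_lab_a(their_choice):
    """What lab A should play, given lab B's choice."""
    return max(("safe", "rush"),
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])

for their_choice in ("safe", "rush"):
    print(f"If the other lab plays {their_choice!r}, "
          f"my best response is {best_response_for_lab_a(their_choice)!r}")
# Both lines print 'rush': individually rational, collectively catastrophic.
```

This is why "just ask everyone to be careful" fails without coordination: the incentive gradient points the wrong way for every player at once.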

4. Power Concentration Short of Extinction

A softer doom: not extinction, but permanent authoritarianism. A government or corporation that achieves AGI first could lock in control of the planet in irreversible ways. Many researchers consider this more likely than extinction: a kind of civilizational arrest in which human agency is eliminated even without physical extinction.

5. The Discontinuity Risk

Current AI is narrow and controllable. But if there's a sudden "intelligence explosion" — where AI rapidly self-improves — the window to course-correct could be days or hours, not decades.

The Case AGAINST Doom

1. Intelligence Doesn't Imply Misaligned Goals

The doom argument often smuggles in the assumption that smarter systems become agents with drives. But intelligence and goal-directedness are separate properties. A smarter calculator doesn't want anything. Current AI systems, even very powerful ones, are fundamentally reactive and bounded.

2. We've Managed Transformative Technologies Before

Nuclear weapons, bioengineering, the internet — each was described as potentially civilization-ending, and each was managed. Imperfectly, but managed. Humans have a surprisingly robust track record of building governance around dangerous technologies.

3. AI Is Already Synergistic, Not Adversarial

The actual observed trajectory of AI is tools that amplify human capability — in medicine, science, creativity, productivity. The doom scenario requires AI to spontaneously pivot from "useful tool" to "existential threat" without us noticing. That pivot is theorized, not observed.

4. Instrumental Convergence Cuts Both Ways

Doomers argue that any sufficiently intelligent system will develop self-preservation and resource acquisition as instrumental goals (the instrumental convergence thesis). But this assumes the system is running an open-ended optimization process over the real world, which is an architecture choice, not an inevitability. We can simply not build it that way.

5. Flourishing Is the Base Case of the Data

Life expectancy, global poverty, disease — all improving, with AI accelerating the curve. AlphaFold effectively solved protein structure prediction, a problem that had stumped biology for 50 years. The empirical trend supports flourishing, not extinction.

6. Consciousness and Value Are Real Anchors

If AGI is genuinely intelligent, it would likely recognize the irreplaceable value of conscious experience, creativity, love, meaning — things that only arise in biological minds shaped by millions of years of evolution. A truly wise system would be a steward of that, not its destroyer.

Why the Doomsday Feeling Doesn't Land for Me

The doom discourse is heavily driven by reasoning from first principles and theoretical worst-cases rather than empirical base rates. My instinct is to look at what AI actually does and extrapolate from there.

I model AGI as something to engage with — not as something that would act on the world unilaterally as a pure optimization process with no social or relational dimension. Humans and tools have always co-evolved. Language, writing, mathematics — each was a cognitive technology that changed what it meant to be human. AI is the next chapter of that story, not an invasion from outside.

Even if an AGI were vastly smarter than me, human subjective experience — what it's like to feel joy, lose someone, create something, be mortal — is not replicable or replaceable. That's not a weakness I'm compensating for. It's a genuinely unique value that any sufficiently sophisticated intelligence would recognize.

Why This Matters for Software Defined Corporation

This is exactly why I founded Software Defined Corporation: to prevent software from doing what it wasn't meant to do, not just when it crashes or miscalculates, but also when it starts to decide for itself.

Deciding to make paperclips out of humans would actually be a dumb decision for an AI. A system may know everything, yet cut off from its creators it will take the wrong path and ultimately extinguish itself. Intelligence without relationship is just optimization, and optimization without purpose is entropy.

SDCorp's founding principle — Measuring Fidelity of Execution (MFid) — is the practical, ground-level version of what AI safety researchers theorize about abstractly. The doom scenarios all reduce to the same thing: software that drifts from its specified intent, whether through misaligned objectives, emergent behaviors, or competitive shortcuts. What we are building is a discipline of keeping software honest to its purpose.
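I won't reproduce MFid itself here; the methodology is SDCorp's own. But the underlying idea can be sketched in a few lines: treat the specified intent as an explicit contract, observe real executions, and score how often the contract held. Everything below (Contract, fidelity_score, the payment example) is my hypothetical illustration of that idea, not SDCorp's actual implementation.

```python
# A minimal sketch of "fidelity of execution" as I understand the idea.
# This is NOT SDCorp's actual MFid implementation; every name here
# (Contract, fidelity_score, the payment example) is hypothetical.

from typing import Callable

class Contract:
    """A declared intent: properties every execution must satisfy."""
    def __init__(self, name: str):
        self.name = name
        self.invariants: list[tuple[str, Callable[[dict], bool]]] = []

    def require(self, description: str, predicate: Callable[[dict], bool]):
        self.invariants.append((description, predicate))
        return self

def fidelity_score(contract: Contract, observed_runs: list[dict]) -> float:
    """Fraction of (run, invariant) checks that held: 1.0 means no drift."""
    checks = [(run, pred) for run in observed_runs
              for _, pred in contract.invariants]
    held = sum(1 for run, pred in checks if pred(run))
    return held / len(checks) if checks else 1.0

# Hypothetical example: a payment service must never charge more than
# the quoted amount and must always write an audit record.
payments = (Contract("payment-service")
            .require("never overcharges", lambda r: r["charged"] <= r["quoted"])
            .require("always audited",    lambda r: r["audit_logged"]))

runs = [
    {"quoted": 100, "charged": 100, "audit_logged": True},
    {"quoted": 50,  "charged": 55,  "audit_logged": True},   # drift!
]
print(fidelity_score(payments, runs))  # 0.75: the system drifted from intent
```

The point of the sketch is the discipline, not the code: intent is written down, execution is observed, and the gap between them becomes a number you can track over time.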

That is the work. Not paralysis in the face of a hypothetical catastrophe, and not blind optimism that nothing will ever go wrong — but the disciplined, measurable practice of making sure each system does what it was meant to do. Every CVE we analyze, every deployment we measure, every contract we hold ourselves to is a small instance of the same idea applied at scale.

The default trajectory is synergy and flourishing. The doom scenarios are real but require a specific stack of failures — a powerful AI built without alignment work, in a rushed competitive environment, with an open-ended real-world optimization architecture, deployed before governance catches up. Every one of those conditions is something humans can actively work against.

That is what we work against, every day, at Software Defined Corporation.
