The Algorithm Thinks It's God: What a 1960s Philosopher Knew About AI That We're Still Ignoring
How a Substack Note About Butterflies Led Me Here
Some time ago I published a note on the butterfly effect and systems thinking—exploring how everything is interconnected and how that intersection is always now.
Jose Antonio Morales replied with something that stopped me:
“Your note made me remember something a bit different, but maybe connected.
Alan Watts once said that time starts now: the past is how we explain the present, and the future is just a mental projection.
He even suggested that we can change the past in the present…”
I read it three times. Then José sent me a NotebookLM he’d built—a curated collection of Alan Watts’s lectures and essays on systems, time, and the illusion of separation.
I spent three days in that rabbit hole. And when I came back up, I couldn’t unsee the connection:
If the past is just “how we explain the present”—what does that mean for algorithms trained on historical enforcement data?
If “facts” are interpretations shaped by the now—what are our AI systems doing when they treat historical audits as immutable ground truth?
Who Was Alan Watts?
Before Silicon Valley existed, before “algorithm” was a household word, a British philosopher living in a California houseboat was warning us about exactly this moment.
Alan Watts (1915-1973) wasn’t a computer scientist—he was a renegade Anglican priest-turned-Zen interpreter who spent his life translating Eastern philosophy for Western minds obsessed with control. He wrote 25 books, gave thousands of lectures, and became the intellectual hero of the 1960s counterculture. But his most radical idea wasn’t about meditation or mysticism—it was about why our institutions can’t see themselves.
And now, sixty years later, we’re encoding that blindness into policy AI.
Watts kept saying: the past isn’t fixed—it’s how we explain the present. But institutions do the opposite: they freeze one interpretation of the past (who we audited, who we suspected, who we caught) and call it “training data”—then build systems that eternalize that interpretation as objective truth.
The Hallucination We Call “Objective”
Here’s the thing Watts kept trying to tell us: you think you’re separate from the world you’re trying to control, but you’re not.
He called it “the hallucination of the skin-encapsulated ego”—the bone-deep sensation that “I” am a ghost trapped inside a meat suit, looking OUT at the world. Western civilization is built on this illusion: observer vs. observed, controller vs. controlled, algorithm vs. society.
But here’s where it gets dangerous for policy AI:
When you train an algorithm on “high-risk taxpayers,” you believe you’re building an objective observer. You’re not. You’re freezing your historical blind spots into code. The model doesn’t “find” fraudsters—it finds who you’ve always suspected, then hands you a spreadsheet that says “the machine discovered this”.
Watts would call this institutional narcissism: the ego projecting itself outward, then worshiping its own reflection.
The Menu vs. The Meal (Or: Why Your Metrics Aren’t Reality)
Watts had a favorite metaphor he borrowed from semantics: “Don’t confuse the menu with the meal”.
The menu (words, symbols, numbers) is useful—but if you try to eat the menu, you starve. AI operates purely in the land of menus: data, scores, probabilities. It cannot taste the meal—the lived experience of the citizen whose life you just upended with an audit notice.
When a policy prioritizes the algorithm’s metric over the human’s reality, Watts would say you’ve committed the fundamental error of civilization: preferring the symbol to the thing itself.
Example: Your model says “audit this person”—score 0.87, very confident. But the model can’t see that this person just lost their job, their industry changed tax rules three times this year, and they filed late because their kid was in the hospital. Those are “wiggles”—complex, organic, unmeasurable things. Your model only sees lines and boxes.
Watts described nature as fundamentally “wiggly”—fluid, interconnected, non-linear. We throw conceptual nets (words, math, algorithms) over it to make sense of it. The danger comes when we forget the net isn’t the world.
The Dictator Who Doesn’t Know He’s a Dictator
In a 1960s lecture that feels eerily prescient, Watts warned about a “supercontroller” with access to everyone’s data while keeping their own thoughts private—a black-box dictatorship.
Sound familiar?
When your procurement office buys a proprietary risk-scoring model, they’re creating exactly this: the algorithm sees all your data, but you can’t see its reasoning. And because we’ve hallucinated that the algorithm is “separate” from us, we don’t demand transparency.
Watts collaborated with cyberneticist Gregory Bateson on the concept of the “double bind”—contradictory instructions that make you crazy. Public policy AI creates double binds constantly:
“Be productive, but the system denied your business license without explanation”
“Comply with tax rules, but the algorithm flagged you and won’t tell you why”
“Trust institutions, but the welfare model cut your benefits and no human will listen”
These aren’t bugs—they’re the system protecting its ego.
Karma Isn’t Cosmic Punishment—It’s a Feedback Loop You Can’t See
Watts redefined karma for Western minds: it’s not mystical payback, it’s a vicious feedback loop.
You make a decision based on incomplete data → that decision creates new conditions → those conditions feed back into your next decision → you spiral. In Buddhist terms, it’s saṃsāra—the wheel you can’t get off.
In policy AI terms, it's this (a toy simulation of the loop follows the list):
You train a model on “who we’ve audited before”
Model learns “audit poor people, they’re legible”
You audit more poor people
Model sees “see? they’re high-risk, we keep finding violations”
Repeat until scandal
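To make that loop concrete, here's a minimal toy simulation—a sketch, not a real risk model. Both groups violate at exactly the same underlying rate; the only difference is how much enforcement history each one carries. The group names, rates, and budget are all invented for illustration:

```python
# Toy simulation of the enforcement feedback loop described above.
# All numbers and group names are invented; the point is the mechanism.
import random

random.seed(0)
TRUE_RATE = 0.05  # both groups violate at exactly the same underlying rate
records = {"group_a": 40, "group_b": 4}  # group_a simply has more enforcement history

def run_year(records, audit_budget=200):
    total = sum(records.values())
    for group, past_hits in list(records.items()):
        # "Model": allocate audits in proportion to historical findings (the training data)
        audits = round(audit_budget * past_hits / total)
        hits = sum(random.random() < TRUE_RATE for _ in range(audits))
        records[group] += hits  # new findings become next year's "ground truth"
        print(f"  {group}: {audits} audits -> {hits} new violations on record")

for year in range(1, 4):
    print(f"Year {year}")
    run_year(records)
# group_a stays "high-risk" forever, not because it offends more,
# but because it's the only place the system keeps looking.
```

The gap never corrects itself, because the only evidence the system can collect is evidence about the people it already audits.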
You’re not discovering fraud—you’re chasing your own tail. The system can’t see itself because the system is itself.
Watts: “You don’t come INTO this world, you come OUT of it—like leaves from a tree”. Translation: your algorithm isn’t separate from the enforcement regime—it’s the enforcement regime looking at itself in a funhouse mirror.
What Watts Would Tell Your CTO
If Alan Watts walked into your procurement meeting, here’s what he’d say (probably while sipping tea and smirking):
1. Stop pretending you’re the observer
You’re not deploying AI “into” the tax system—you ARE the tax system. The algorithm is your institutional memory made executable. Own that.
2. Build randomness into the machine
Watts loved spontaneity and surprise—they keep systems sane. Force 20% of audits to be random, not model-driven. This breaks the karmic loop. It’s humility by design.
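A minimal sketch of what that could look like in a selection pipeline. The 20% share, the candidate format, and the function name are assumptions for illustration, not a prescription:

```python
# Humility by design: a fixed share of the audit budget is drawn uniformly
# at random instead of from the model's ranking.
import random

def select_audits(candidates, budget, random_share=0.20, seed=7):
    """candidates: list of (case_id, model_score) pairs; returns case ids to audit."""
    rng = random.Random(seed)
    n_random = round(budget * random_share)
    n_model = budget - n_random

    # Model-driven slice: the top-scored cases
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    model_picks = [case_id for case_id, _ in ranked[:n_model]]

    # Random slice: a uniform sample from everyone the model did NOT pick
    leftover = [case_id for case_id, _ in candidates if case_id not in model_picks]
    random_picks = rng.sample(leftover, min(n_random, len(leftover)))

    return model_picks + random_picks
```

The random slice isn't waste: it's the only unbiased measurement of the base rate you'll ever get, which is exactly what point 4 needs.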
3. Treat the model as a hypothesis, not a verdict
Watts saw life as a game, not a war. Your AI should generate scenarios, not commandments. Let humans bring the “wiggly” context the machine can’t scan: morality, empathy, political reality.
4. Make the system falsifiable
“This model stops working if X happens”—say it out loud. If you can’t name conditions under which you’d stop trusting the algorithm, you’re not doing science—you’re doing religion.
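One way to say it out loud is to write the stopping condition down as a check you actually run. The 1.5x lift threshold and the names here are assumptions, chosen only to show the shape of a falsifiable claim; the random-audit slice from point 2 supplies the baseline:

```python
# Falsifiable claim: "we keep trusting the model only while audits it flags
# find violations at some minimum multiple of the rate found by random audits."

def model_still_earns_trust(model_hits, model_audits, random_hits, random_audits,
                            min_lift=1.5):
    model_rate = model_hits / model_audits
    baseline_rate = random_hits / random_audits
    if baseline_rate == 0:
        return model_rate > 0  # degenerate case: any signal beats none
    return model_rate / baseline_rate >= min_lift

# Example: 120 violations in 800 flagged audits vs 15 in 200 random audits.
print(model_still_earns_trust(120, 800, 15, 200))  # 0.15 / 0.075 = 2.0 -> True
```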
5. Remember: the menu isn’t the meal
Your dashboard shows “efficiency up 40%”—but have you talked to a citizen who got flagged? The map is never the territory. Go walk the territory.
The Punchline Watts Would Love
Here’s the cosmic joke: we built AI to eliminate human bias, and ended up encoding institutional ego at infinite scale.
Watts would laugh—then remind us that the solution isn’t to abandon technology, but to build technology that knows it doesn’t know. Not a monarch who commands, but a navigator who suggests.
The algorithm shouldn’t think it’s God.
It should know it’s just another wave in the ocean, trying to understand wetness.
Thanks to Jose Antonio Morales for commenting on the butterfly effect note and for the NotebookLM deep-dive—it reframed everything.



It's been a long time since I found someone capable of connecting dots that appear so far apart and making the connection obvious for everyone to understand. Wonderful post, Marcela!
Some months ago, in a philosophical conversation with friends, I suggested that AI is not only artificial intelligence, but also artificial ego—not in the sense of being egotistical, but as a language-limited entity.
We hope AI can think for itself and generate original ideas, but it can't be inspired, motivated, or encouraged by emotions and circumstances. AI's thoughts and reasoning are constrained, as far as I understand, by language and human knowledge.
A human ego—our personality—is similarly limited by experience and knowledge. We can't see our blind spots, we assume falsehoods, and take circumstances for granted. We react to fear and seek different forms of pleasure.
Your article powerfully illustrates what Watts warned about: when systems can't see themselves, they mistake their reflections for reality. The "skin-encapsulated ego" he described isn't just a human problem—we've now encoded that same hallucination into our institutions. AI and the human ego, human systems and policies, are all limited in the same fundamental ways. Yet we insist, at our own expense, on trusting metrics—from currency to KPIs—as if they represent absolute truths, confusing the menu with the meal.
The algorithm doesn't think it's God—but we've built systems that treat it as if it were.
I'm very glad that my earlier comment inspired this wonderful original creation of yours.
That was an interesting read!