Where were you when our simulation glitched?
In my version, it happened a little before 10 a.m. on Labor Day, 2024. For the second time in just over a month, xAI, through its mouthpiece and CEO Elon Musk, announced a new supercomputer called Colossus had come online. Somehow, they’d strung together one hundred thousand H100 GPUs in only 122 days. Elon meant for us to be awestruck by the Remarkable Feat of building the world’s largest supercomputing cluster in an unreasonably short time.
I scratched my head. xAI announced Colossus in July. Did this mean xAI just brought another massive cluster online in Memphis, Tennessee?
I had that weird sensation of flashing lights at the edge of my vision, the sound of static, and the shakiness you feel during an electrolyte deficit.
I queried my faithful AI assistant, Perplexity, and it found plenty of links to articles reporting on the contents of this single tweet. (Apparently, reading tweets qualifies as reporting these days.) Surprisingly, none of the articles mentioned the July 22nd announcement or attempted to clarify what was different now from what had happened before.
I’m not on social media, but it looks like A LOT of people saw this post.
Going purely on September 2nd’s reporting, it was almost as if the event on July 22nd hadn’t happened.
Or had it? I had a moment of doubt.
I had so many questions. What is the difference between bringing a massive training cluster online and starting training with that cluster? Wouldn’t bringing it online precede starting the training? July 22 comes before September 2 on the Gregorian calendar. Did the timeline somehow flip?
It took a few days (proper reporting usually does) before NPR produced this article, recounting surprise announcements at last-minute press conferences, xAI forcing local utility officials to sign NDAs, and the local city council left entirely in the dark about the presence of a giant new data center in their backyard. Had no one in Memphis been among the almost twenty million Twitter users who’d viewed the July 22 post? That seemed implausible.
The article focuses on the data center's environmental impacts–bravo, reporters!–while completely ignoring the distinctly glitch-like inconsistencies in the storyline–boo!
Local councilwoman Yolanda Cooper-Sutton suspects something is up.
“I have an old saying from my grandparents: What it won't get in the wash, it’ll take care of in the rinse,” she says. “So, if there's any secrets and if there's a dead cat on the line — it’ll soon show up.”
That is a very esoteric way of saying that something doesn’t seem right and that she plans to get to the bottom of it. Had she noticed the glitch? Maybe she heard the same buzzy static, saw the same weirdly flashing lights, and felt the same unnerving shakiness that I had.
I believe there’s a strong probability that our race to superintelligence will finally hurl humanity off the cliff of climate change. Understandably, the July announcement caught my eye. I posted about it, referencing an article I wrote in 2023, in which I speculated that the AI industry would collide head-on with climate change to devastating effect.
Running a 100,000-GPU cluster requires about 150 MW of electricity–enough to power roughly 100,000 homes–and consumes about 1.3 million gallons of water daily. The local power authority claims the giant data center’s water and power requirements will not impact the local community. That seems unlikely, given the grid clearly can’t service a 150 MW data center.
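As a sanity check on those figures, here’s a rough back-of-envelope in Python. The per-GPU wattage and average household draw are my assumptions, not xAI’s numbers:

```python
# Rough sanity check on the cluster's power draw.
# Assumes ~1,500 W all-in per GPU (the H100 card plus its share of host
# servers, networking, and cooling) -- a hypothetical figure.
GPUS = 100_000
WATTS_PER_GPU_ALL_IN = 1_500

cluster_mw = GPUS * WATTS_PER_GPU_ALL_IN / 1_000_000
print(f"Cluster draw: ~{cluster_mw:.0f} MW")    # ~150 MW

# Assumes an average US household draws ~1.5 kW (also a rough figure).
AVG_HOME_KW = 1.5
homes = cluster_mw * 1_000 / AVG_HOME_KW
print(f"Powers roughly {homes:,.0f} homes")     # ~100,000 homes
```

Whatever the exact per-GPU overhead, the order of magnitude is megawatts-times-hundreds, not gigawatts.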
We know this because xAI allegedly fired up at least 18 methane gas generators–without proper permits–to close the power gap. (Since methane is colorless and odorless, natural gas producers add a chemical that smells like rotten eggs so you can detect leaks.) Together, these generators can emit roughly 130 tons of nitrogen oxides into the air each year.
That sounds super fun for the surrounding Boxtown community. I bet the residents look forward to Elon realizing part two of his boast: “Moreover, it will double in size…in a few months.”
1.3 million gallons of water per day is 3% of the total capacity of the local water supply. In the short term, that doesn’t sound like a huge number. But the larger these data centers and clusters get, the more resource-intensive they become. Suppose the additional fifty thousand H200s more than double the water consumption. And then Google, Meta, Microsoft, and a host of other AI players decide Memphis is a good home for their massive data centers. And then the data centers after that have million-GPU clusters, the ones after that ten million, and the ones after that one hundred million. By the end of the decade, three orders of magnitude seem inevitable if companies can pop out 100K clusters in three months.
Take that, Memphis water supply.
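The scaling above can be sketched in a few lines, assuming (optimistically) that water use grows only linearly with GPU count–the opposite of the resource-intensity trend just described:

```python
# Projected daily water draw as cluster sizes scale, assuming
# (optimistically) that water use grows only linearly with GPU count.
DAILY_GALLONS_100K = 1.3e6   # reported draw for a ~100K-GPU cluster
SUPPLY_SHARE_100K = 0.03     # the stated 3% of local supply capacity

for scale in (1, 10, 100, 1_000):       # 100K GPUs up to 100M GPUs
    gpus = 100_000 * scale
    gallons = DAILY_GALLONS_100K * scale
    share = SUPPLY_SHARE_100K * scale
    print(f"{gpus:>12,} GPUs: {gallons:>14,.0f} gal/day (~{share:.0%} of supply)")
```

Even under linear growth, the third order of magnitude demands thirty times the entire local supply.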
I predict that ten years from now, Memphis will be forced to negotiate with emergent national superpower Cairo, IL, a formerly sleepy burg nestled at the confluence of the Ohio and Mississippi rivers, to open its newly constructed dams and resupply Memphis’s dwindling groundwater.
My head is itching. Is your head itching?
So then I saw this:
This before-and-after picture implies that on June 1, that ginormous data center had exactly zero computers in it. Let’s run some quick math.
May 3rd is 122 days before September 2nd, and March 22nd is 122 days before July 22nd.
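You can verify the date arithmetic with Python’s datetime module:

```python
from datetime import date

# Count back 122 days from each announcement date.
print((date(2024, 9, 2) - date(2024, 5, 3)).days)    # 122
print((date(2024, 7, 22) - date(2024, 3, 22)).days)  # 122

# And if the building was empty on June 1 and online by July 22:
print((date(2024, 7, 22) - date(2024, 6, 1)).days)   # 51
```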
Again, I heard brief bursts of static, like bugs flinging themselves against one of those electric tennis racquet bug zappers. My screen flickered.
Pretend for a second they turned this thing on at the end of July. On the first day of June, there were no computers in this data center, which means they built the world’s biggest cluster in only fifty-one days. Why isn’t Elon boasting about that? Fifty-one is a way tastier ego sandwich than one hundred twenty-two.
What was Colossus doing between July 22nd and September 2nd?
Am I the only person paying attention to duplicative events on an inverted timeline?
Simulation Theory
Computers weren’t yet all that powerful in 2003, but we had networked them together in such a way as to reveal their potential. If you believe Moore’s Law, even though it’s not a law so much as an observation of sustained economic value, you could easily envision a world where computers would become so powerful that we would lose control of them.
Only four years before, we’d left movie theaters, Rage Against the Machine still buzzing in our heads, pondering whether we’d just woken up to the reality that we might be living in a construct created by machines to subdue humanity. I felt weird, and I wasn’t alone.
Nick Bostrom, who looks like he could have been an extra in The Matrix, imagined the possibility of enormous computing power and thought up ways future generations of possibly posthuman civilizations might use such computational power.
His paper, Are You Living in a Computer Simulation?, explored the consequences of future technological capabilities by proposing that at least one of the following is true: a) humanity is very likely to go extinct before reaching a "posthuman" stage; b) posthuman civilizations are extremely unlikely to run ancestor simulations; c) we are almost certainly living in a computer simulation.
Bostrom continues:
It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.
It’s not hard to devise a plausible scenario: a researcher in that posthuman civilization becomes curious about the conditions that led to the evolution from human to posthuman. Perhaps they are trying to understand how humans narrowly escaped extinction.
To test their hypotheses, the researcher runs a series of computer simulations to recreate, say, the last million years of human evolution, looking for the spark that forks the line. They run those simulations over and over again, tweaking parameters and refining experiments to understand the nature of human consciousness and the differences between it and posthuman consciousness.
In that scenario, postulates Bostrom, your consciousness is far more likely to exist within a simulation than not. He does expect you to accept at least one difficult proposition: that consciousness is “substrate-independent” and can be replicated in non-biological substrates, including computers. Since philosophers and neuroscientists cannot demonstrate exactly how our brains give rise to consciousness, this proposition is less problematic than it first appears.
Still, those philosophers, theoretical physicists, and cosmologists disagree about the theory’s plausibility. It’s a reasonable disagreement because they can think up ways to test the theory, but those methods are subject to constraints that put definitive proof in doubt.
Let’s assume that even though the future computational capabilities are vast and beyond our comprehension, they are still finite. There are open questions about the ability of finite computational resources to run simulations of an infinite universe. (Indeed, there are open questions about whether our universe is infinite or finite.)
There may be a base reality running a simulation capable of running n simulations of its own. Since there is no way of knowing whether the reality running a given simulation is base reality or just another simulation, there’s no clear path to proving or disproving whether we are in one. We struggle with a similar infinite regress problem when pondering the origins of our universe–what caused the first cause?
It’s unsurprising, then, that brilliant theoretical physicists and philosophers struggle to comprehend a system of infinite simulations of infinite universes, eventually trudging reluctantly to a ‘turtles all the way down’ position that is as unsatisfactory as it is unlikely.
Are we living in an undetectable simulation?
We are already running our own simulations
If we believe the weird Colossus data center episode is a glitch, does this glitch happen each time we reach the point in our simulation where we build the thing capable of running the simulation?
As it turns out, we are already running small-scale simulations of complex human interactions and the formation of societal structures.
In one simulation called Sid, researchers set loose one thousand AI non-player characters inside a Minecraft (!?) world. These AI NPCs built an economy, formed a government, created a religion, and started doing remarkably human-like things, like abandoning their dreams for the good of their community. Sacrifice! Altruism!
The simulations run daily. In one–and this is by far my favorite part–an AI priest establishes a religion known as Pastafarianism and becomes the economy’s most active member by bribing the other AI Agents to convert. Jim and Tammy Faye, eat your hearts out!
The Minecraft simulation is small-scale, and the company behind it had only the paltry sum of $9M at its disposal, so the training cluster wasn’t anywhere near the size of Colossus. Lacking access to greater funds means this “simulation” is pretty rudimentary, and those AI agents aren’t pushing the boundaries of superintelligence. Yet.
Bostrom asks us to assume the future posthuman society has access to computational resources that are more powerful than we can conceive, running a vast number of ancestor simulations. Perhaps each of those simulations can run n number of its simulations. Inside each simulation layer reside beings with consciousness and subjective experiences.
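To see why this counting works against us, here is a toy tally of realities in such a nested tree. The branching factor and depth are arbitrary assumptions, as is the premise that every layer hosts comparable numbers of conscious observers:

```python
# Toy Bostrom-style counting: one base reality, where each layer of
# reality runs n simulations, each of which runs n more, depth layers deep.
def simulated_fraction(n: int, depth: int) -> float:
    """Fraction of realities in the tree that are simulations."""
    total = sum(n ** level for level in range(depth + 1))  # 1 + n + n^2 + ...
    return (total - 1) / total  # everything except the one base reality

for n, depth in [(10, 1), (10, 3), (1_000, 2)]:
    print(f"n={n}, depth={depth}: {simulated_fraction(n, depth):.4%} simulated")
```

Even with modest branching, almost every reality in the tree is a simulation–which is the arithmetic heart of Bostrom’s third proposition.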
Now, suppose a Sid-like AI model trained on the Colossus cluster gains the ability to run powerful simulations. Suppose those simulations closely mimic our reality, eventually spawning an intelligence capable of inventing technology to run even more powerful simulations. If both the original simulation and its descendants are indistinguishable from reality, and we lack the intelligence to prove whether or not our own reality is a simulation, does it matter?
In the immortal words of Morpheus…
Simulation Theory, redux
Our key postulation is that building the clusters required to run simulations indistinguishable from our current reality could very likely hasten the extinction of the human race. Each model generation and training cluster advances the technology by an order of magnitude. We push through those orders of magnitude, blindly chasing the allure of better algorithms and more parameters, sucking up resources to make the machines powerful, cool, and comfortable while humans die of thirst.
That takes care of Bostrom’s first proposition.
Would an advanced posthuman superintelligence bother to simulate human existence? As technological innovation accelerates exponentially, so does the understanding of the universe. Our superintelligent progeny are just as likely to be obsessed with conceiving new phenomena that mere humans cannot hope to understand as they are to look backward. After all, we humans spend our computational resources pressing forward, not looking back. It follows that a superintelligent species would do the same. But since we can only speculate, let’s give the human-existence-simulation scenario a 50-50 probability.
If humans become extinct before evolving to posthuman, that probability drops to zero.
That resolves Bostrom’s second proposition.
Which means your consciousness and subjective experience are simulated.
Sorry.
On the other hand
I concede the possibility that the simulation didn’t glitch and Elon Musk just lied—more than once. Or xAI’s social marketers were confused. The work started in March, May, or June, and it took 122 days or longer. The cluster is either fully online or lacks the power to operate reliably.
Lying about when Colossus went online and how long it took achieves nothing of consequence, which perfectly aligns with the kind of lying done by public figures these days.
Isn’t it weird that Memphis city officials would claim to be entirely in the dark about a large-scale infrastructure project that had been going on for months? The data center is at the edge of town, but there’s only one road going in and out, and surely trucks laden with construction materials, giant computers, networking equipment, and generators piqued the locals’ interest. Are they lying, too?
Did social media truncate our memory span so much that we could not remember identical news stories six weeks apart? Perhaps our programming recently changed, and humans no longer recognize inconsistencies in our simulation. Or, who knows?
The final analysis
Humans are racing to achieve superintelligence, knowing that effort might exhaust our natural resources. We cannot evolve into a superintelligent posthuman race if we are extinct, and posthumans are likely disinterested in our hubris anyway. Therefore, Bostrom’s third proposition has the highest probability: we live in a simulation.
Our simulation glitched at the exact moment we achieved the computational resources to train AI models capable of running advanced simulations. It follows that this achievement changes the rules of the simulation, or perhaps exhausts the finite computational resources running something suddenly infinite.
Is there a parallel me somewhere blissfully unaware? Is there a parallel simulation where we never make it to an agrarian society, instead hunting and gathering in stasis with our environmental partners (you know, animals and plants)?
If we find a way to prove any of this, will that mark the point at which we become posthuman?