The following series is a fictionalized but plausible future history exploring the collapse of the AI illusion and, with it, the modern world order and global economy.
This is part one of a series.
AI Apocalypse Now
It began, as apocalypses often do, with a promise too beautiful and terrifying to doubt.
By early 2025, the rhetoric surrounding artificial intelligence (AI) had hardened into dogma. Everywhere from White House press briefings to World Economic Forum panels, the consensus was clear: Large Language Models (LLMs) were the beginning of true artificial general intelligence (AGI). AGI wasn’t decades away; it was here, in embryonic form.
Executives spoke of “emergence.” Policymakers promised a post-human transformation of society and civilization itself. Sam Altman told The Atlantic in February 2025, “We’re teaching the machine to think across domains. It’s not magic. It’s evolution.” Senior software engineers like Shawn K. were laid off from six-figure jobs as breathless predictions of AI taking over all coding jobs within a year swayed managers and CEOs.
In classrooms, teachers were instructed to “co-teach” with AI tutors. Health care providers were encouraged to adopt synthetic diagnostic assistants. The U.S. Department of Defense quietly began integrating LLM-based strategic modeling tools into operational simulations. One leaked slide deck from a NATO summit put it bluntly: “Synthetic minds are here — deploy now or fall behind.”
Inside the research labs, though, the truth was harder to ignore.
Early Warning Signs
“We kept expecting the fixes to click,” said one engineer at a confidential AI alignment group. “But the model kept bluffing. It could summarize, mimic, obfuscate — but it never knew why it was saying anything. It would keep making the same mistakes, even after correcting itself based on user input.”
Internal safety teams at OpenAI, DeepMind, and Anthropic began raising red flags. A joint internal memo in January 2025 warned:
“Models continue to produce high-confidence falsehoods, even in non-edge-case conditions, and cannot distinguish fact from narrative when incentivized to perform.”
But the warnings didn’t slow deployment and institutional adoption. Tech leaders advised the business world to radically transform human, digital, and sometimes physical infrastructure to accommodate AI products that had not yet been delivered. Quirks, bugs, unreliability, hallucinations, and, in some cases, outright unusability were dismissed as temporary bottlenecks that scale would solve.
‘The Emperor's New Mind’
The fundamental problem was deceptively simple — and almost no one outside the labs understood it.
LLMs, despite the hype, were not “thinking machines.” They were statistical parrots. As physicist Michio Kaku famously described them in a prescient 2023 interview: “Glorified tape recorders. They remix language but understand none of it.” Unfortunately, everyone kept praising the ‘emperor's new mind’ because it was profitable, even though it was no mind at all.
What LLMs actually do is predict the next most likely word (or the next pixel, for images and video) in a sequence, based on an enormous dataset of human writing, imagery, and film. They do not reason. They do not infer intent. They do not possess models of the real world. Yet the seeming fluency of their output (polished, confident, and linguistically rich) fooled people into believing they were intelligent agents instead of a new generation of advanced search engines that create pastiches of existing human work.
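To make the mechanism concrete, here is a deliberately tiny sketch of that “predict the next token” loop. It is an illustration only: the bigram counting, the miniature corpus, and the function names are invented for this example, and real LLMs replace the counting with neural networks trained on trillions of tokens. But the core loop is the same in spirit: score candidate continuations, sample one, repeat. Nothing in it involves understanding.

```python
import random
from collections import Counter, defaultdict

# Toy corpus and bigram counts; real models use vast datasets and neural nets,
# but the generation loop below is the same in spirit.
corpus = "the model predicts the next word the model repeats the pattern".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1          # count how often `nxt` follows `prev`

def next_word(prev):
    """Sample a continuation in proportion to how often it followed `prev` in training."""
    candidates = follows[prev]
    if not candidates:               # dead end: this word never had a successor
        return None
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation. Nothing here knows what the words mean;
# it only reproduces statistical patterns from the training text.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))
```

Scale the counting up by a dozen orders of magnitude and the output becomes startlingly fluent, which is precisely the trap.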
Why? Because humans mistake coherence for comprehension. It's a classic case of apophenia, and what a prior age would have rightly recognized as a mass religious delusion. Language is how we signal thought. So when machines appear to speak fluently, especially when we already want to believe they have a mind, we project intelligence onto them — even when that speech is generated from a soulless statistical process with no causal connection to, or understanding of, external reality.
For reasons older than humanity itself (greed, pride, and the simple fear of being wrong), the AI marketing blitz made no effort to clarify the difference between intelligence and statistical pattern prediction.
After all, clarity would have hurt FAANG’s bottom line.
A Masterclass in Selling the Lie
At SXSW 2025, the GPT-5 Developer Preview launched with a keynote titled “The Mind Awakens.” The model demo featured a real-time dialogue with a grieving parent. It generated personalized advice, legal recommendations, and a eulogy. Twitter lit up. “It understands my situation better than my therapist,” one viral post read, “and it only costs $20 bucks. Now I can grieve without going broke - maybe it can be my doctor too?”
Behind the scenes, engineers and a commissioned team of elite speed-typing technical writers scrambled to patch hallucinations and contradictory outputs in the live stream.
One backstage tech lead, in a message later leaked, wrote: “That wasn’t emergence. That was a show trial. This is disgusting, and I quit. I'm not doing this anymore.”
Twelve other team members resigned that night.
The public didn’t see the cracks. Not yet.
The payouts to keep quiet were too big.
Enter the Zeno Papers
But in March 2026, everything changed.
An anonymous whistleblower, under the pseudonym “Zeno,” leaked over 12,000 internal documents from OpenAI, Anthropic, and Microsoft. The files — eventually dubbed The Zeno Papers — landed like a seismic charge.
Among the revelations:
Internal benchmarks had been skewed to systematically hide reasoning failures of LLMs.
Safety teams had been ignored or dissolved.
Key executive discussions acknowledged the models were “sophisticated parrots” being sold as “coherent agents.”
Board meetings openly debated how to obscure copyright infringement and frankly acknowledged that LLMs were simply stitching together excerpts and filtered merges of protected intellectual property.
Thought leaders and influencers proposed strategies behind the scenes to stall until actual artificial intelligence was developed - which they expected “any day, probably next quarter.”
A now-infamous Slack excerpt read:
“We’re past the point of product truth. Remember, we sell the future, not the now.”
The fallout was immediate.
Protests erupted across major cities. Thousands of AI researchers and recently unemployed engineers signed a public letter titled “We Built the Mirage.” Microsoft lost $400 billion in market cap in 10 days. Capitol Hill hearings began. The European Parliament issued an immediate moratorium on new AI public deployments.
But while the AGI narrative stumbled, the LLM-based systems had not yet collapsed outright. Despite some internal institutional disillusionment, public adoption remained high. The promise of an AI Utopia was simply too irresistible. Governments stalled. Education systems hesitated to undo their integrations. Wall Street analysts framed the leaks as “growing pains on the path to exponential intelligence.”
The general belief?
This was a hiccup, not a reckoning. People were not ready to process the full truth.
They were wrong to delay.
What followed was not a correction.
It was the unraveling of the modern world order and the United States’ long-standing economic dominance.
Catastrophic human and technological failures over the next few years would cripple humanity’s optimism about technological progress and plunge the world into what would become known as the “Bot Bust” - the Second Great Depression.