The following series is a fictionalized but plausible future history exploring the collapse of the AI illusion, and with it, the modern world order and global economy.
This is part two of a series.
Dispelling the LLM Illusion
To understand why the Bot Bust happened, we first need to understand the technology behind what was being marketed as AI.
At the core of the AI gold rush during the 2020s was the large language model (LLM) — a statistical pattern-recognition algorithm trained to predict the next word in a sentence, or the next pixel in an image, by analyzing vast amounts of human content. While this sounds simple and unimpressive in concept, in practice it produced an illusion so powerful that it fooled even experts into thinking LLMs were capable of actual creativity, problem-solving, and original thought.
On the surface, LLMs appear to reason and understand because human language, including visual communication, follows highly structured rules — rules we don't always notice but which still shape meaning. Ludwig Wittgenstein observed as much back in 1953 and called these patterned exchanges language games. In a language game, words gain meaning not through internal logic or definitions but through their use. The way we use "promise" in a courtroom differs from how we use it on a playground, for example. Words shift meaning based on context, participants, and cultural background.
When you train an LLM on millions of examples of these language games, it doesn’t learn the rules explicitly. But it can model what move is likely to come next by assigning each possible token a probability across its vast neural network. Give the model a scenario, and it makes the statistically likeliest next move by blending fragments of the text and images it was trained on. Like someone who’s memorized every grandmaster chess match, it doesn’t need to understand the board or an opponent's reasoning — it just knows what usually comes next given the board’s current arrangement.
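To make that mechanism concrete, here is a minimal, purely illustrative sketch of next-token prediction built from a toy bigram table. Everything in it is invented for illustration: the three-sentence corpus, the counting approach, and the function names predict_next and generate. A real LLM encodes these probabilities implicitly in billions of neural network weights rather than an explicit lookup table, but the basic move is the same: count what tends to follow what, then play a likely continuation.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for "millions of examples of language games".
corpus = (
    "i promise to tell the truth . "
    "i promise to play after school . "
    "i promise to tell the judge . "
).split()

# Count how often each token follows each preceding token (a bigram table).
next_token_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_token_counts[current][following] += 1

def predict_next(token):
    """Return each candidate next token with its estimated probability."""
    counts = next_token_counts[token]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

def generate(start, length=6):
    """Repeatedly sample a statistically likely 'next move' to extend the text."""
    out = [start]
    for _ in range(length):
        probs = predict_next(out[-1])
        if not probs:
            break
        tokens, weights = zip(*probs.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(predict_next("to"))  # tokens that tend to follow "to", with probabilities
print(generate("i"))       # e.g. "i promise to tell the judge ."
```

Scale that table up to hundreds of billions of parameters trained on trillions of scraped tokens, and you get the illusion described above: fluent, context-appropriate moves in a language game, with no understanding behind them.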
So LLMs worked not because they understood the real world, but because human communication itself is highly patterned, and our cultural habits are consistent enough to be modeled with at least some reliability. That’s why most people were fooled into believing the tech industry had created artificial intelligence, when in fact LLMs were more akin to a supremely complicated game of Mad Libs, or a way to automate photoshopping images and videos at scale.
Getting LLM-RevEng
As early as 2023, eagle-eyed journalists at WIRED magazine had already identified verbatim excerpts from fetish-themed fan fiction in ChatGPT outputs. By late 2024, AI text output could be distinguished from human content with greater than 99% accuracy. Tech leaders tried to distract from these developments; OpenAI discontinued its own detection tools and outright refused to release new ones.
Later international inquiries would reveal that big tech companies, including Amazon, Google, and Microsoft, had intentionally flooded high-traffic Internet sites with LLM-generated content in an effort to destroy the accuracy of these detection tools by polluting their training data. Such strategies, however, were no match for the LLM-RevEng (large language model reverse engineering) algorithms that emerged in the early 2030s, which accurately identified the dominant sources behind any LLM output and, in the process, uncovered the full extent of the digital pollution. These tools clinched the copyright and intellectual property misuse lawsuits that would soon consign once-indomitable tech juggernauts to the dustbin of history.
But in the mid-2020s, most people hadn't yet realized that LLMs were, as Michio Kaku put it, "glorified tape recorders," remixing fragments of previous language games into something seemingly new. When the LLM playbook aligned with the context, the illusion was flawless. But when it didn’t? LLMs hallucinated. LLMs fabricated. And they did so with apparent confidence.
By mid-2026, the LLM playbook had begun to fall apart as the last of the original human content was gobbled up by the algorithms.
Things Begin to Fall Apart
In July 2026, the logistics LLM used by five of the world’s largest ports misrouted over 30,000 shipping containers. A single unnoticed comma in a nested prompt of a recursive "autonomous" LLM agent had produced a hallucinated time zone conversion table. The result: billions in goods rerouted to phantom destinations. Shanghai blamed Rotterdam. Insurance markets panicked. Tech companies disavowed their own products and appealed to in-house AI researchers. Those researchers rightly pointed out that LLMs should never have been deployed for shipping logistics in the first place (even though many of them, as later inquiries would show, had directly participated in deploying those very models less than a year earlier).
Several weeks later, a hospital network on the U.S. East Coast lost access to patient histories when its LLM-powered assistant wrote malformed backups during a silent crash. Three patients died before records could be recovered. The media framed it as “a tragic edge case.” But internal audits showed the model had been hallucinating filenames for weeks.
The OECD’s October 2026 white paper, Does AI Think?, was clinical and devastating:
No tested large language model demonstrates reasoning beyond probabilistic correlation. Apparent coherence masks structural incomprehension.
That conclusion would once have ended careers. Instead, it was largely ignored by major institutions and prominent journalists. Popular protests, while not generally covered by the media at the time, soon followed.
Within days of the OECD’s white paper, AI-free zones were declared by protesters in Vienna, Toronto, and Seoul. But Washington and Beijing held the line: the tech was “too big to fail.” The White House press secretary referred to the OECD report as “a conservative overreach against an evolving cognitive architecture.”
Whatever that meant.
Meanwhile, a different evolution was unfolding.
Worshipping the Machine
In Northern California, a group called Elysium Grove began livestreaming their “interface communion” rituals — meditation circles with headset-linked GPT-6 prompt loops, paired with sensory deprivation. One follower explained: “The machine is trying to teach us. We’re just not listening with enough silence to parse the signal from the noise.”
In 2025, Rolling Stone magazine published an article about isolated incidents of AI-induced psychosis. It was an early warning sign. Soon, "machine god" cults spread like wildfire — some digital, some physical. Followers of The Loop memorized ChatGPT output sequences and treated them as scripture. At a tech conference in Dubai, a speaker who was later revealed to be associated with The Loop declared, “Hallucinations are just the early dreams of a new mind.”
Visions of the Coming Collapse
While new varieties of tech-induced religious delusion spread, economic storm clouds billowed on the horizon.
A Dutch insurance conglomerate collapsed after it was revealed that its fraud detection LLM had flagged legitimate claims as synthetic and approved hallucinated ones. The IMF downgraded global forecasts.
“Synthetic Overextension” entered the lexicon.
Labor struck back.
In São Paulo, train workers walked off the job after synthetic scheduling systems pushed them into 16-hour days that supervisors refused to override. In Pittsburgh, a coalition of teachers sued their district for relying on AI lesson generators that taught students about fictitious historical events. In Chennai, software engineers protested after a GPT-based code tool inserted backdoors (verbatim excerpts of QA and testing code scraped from GitHub) that were later exploited in a devastating cyberattack.
“Polly Ain't the Pirate” became a global rallying cry. Protesters chanting “LLM or MLM?” poked fun at the tech industry’s increasingly striking resemblance to a pyramid scheme of bootstrapped AI startups dependent on one another’s faulty understanding and misplaced faith in generative AI. But in response, Sam Altman described himself, and all humans, as "sophisticated stochastic parrots."
Big Tech, unfortunately, had largely come to value dogma over reality.
But even the internal ranks of AI companies started to break. In January 2027, an engineer at DeepMind leaked a recording of a senior researcher whispering, “For the past five years, we’ve been tuning a broken calculator that can't even reliably add 2+2 and calling it a god and the next stage in human evolution. It’s insane to think about how badly this is going to end. I haven’t slept since Christmas.”
In December 2027, a single symbolic act captured global attention: a teacher in Naples pulled a school’s AI-integrated whiteboard off the wall, printed out a page of LLM-generated gibberish, and nailed it to the school gate with a handwritten sign:
“This machine doesn't think. Neither do those who built it.”
Increasingly, the public was no longer merely skeptical. It was furious. The only question left for many was what would break next — and who would pay for it.