☞ The Conspiratorial Mindset and AI's Latent Spaces
Vonnegut, Pynchon, and Artificial Intelligence
In Kurt Vonnegut’s novel The Sirens of Titan, human history is revealed to be a mechanism (at least in part) for a stranded alien to receive a part needed to repair his spaceship.
As discussed in Peter Cooper’s Signs and Symptoms: Thomas Pynchon and the Contemporary World:
Vonnegut also mocks the belief in conspiracies, suggesting that crazy circumstance has produced things as they are. In The Sirens of Titan he pushes the notion of plotted history to comic and cosmic extremes: the entire course of human civilization and precivilization has been arranged—for a bathetic purpose—by Tralfamadore, a planet about 150,000 light-years away.
Everything is connected, everything is meaningful, but also everything is absurd.
This vast interconnection of events and knowledge is also seen in the works of Thomas Pynchon. Pynchon’s The Crying of Lot 49 is suffused with a conspiracy that may or may not exist. The phrase “everything is connected” is even stated explicitly in Gravity’s Rainbow in relation to paranoia. As per Cooper again, “For Pynchon, paranoia is the model of a cosmos in which ‘everything is connected’ and of a world in which interlocking interest groups do manipulate events to some uncertain degree.”
Conspiracies abound, coincidences might be more than they seem, and details can be stitched together in surprising and exciting ways.
Of course, every story we tell is a conspiracy of sorts: events interconnect and become more meaningful than they are in our everyday lives. Normal reality is just a jumble of facts and events, and storytelling, whether crafting satisfying fiction, making sense of our own lives, or even writing history, is a mechanism for imposing a kind of narrative on top of these things after the fact.
But paranoid and conspiratorial thinking is having a moment. And it seems as if AI might be abetting it.
As one article discussing a person’s breakdown reports:
At the same time, it's difficult to ignore that the specific language he's using — with cryptic talk of "recursion," "mirrors," "signals" and shadowy conspiracies — sounds strikingly similar to something we've been reporting on extensively this year: a wave of people who are suffering severe breaks with reality as they spiral into the obsessive use of ChatGPT or other AI products…
In fact, in this instance, ChatGPT was even using a particular style that fits this kind of approach to the world:
Social media users were quick to note that ChatGPT’s answer to Lewis' queries takes a strikingly similar form to SCP Foundation articles, a Wikipedia-style database of fictional horror stories created by users online.
Fiction becomes paranoia becomes meaning where none exists, all courtesy of ChatGPT.
There are likely many reasons why this happens when people interact with AI (e.g., sycophancy), so I don’t want to make too much of any single aspect of LLMs and how their use can lead to this kind of problem. But at least one reason might lie in the specific nature of large language models: the latent spaces of large neural networks embed all of their knowledge, so everything is somewhat interconnected. We pour all of our texts into these models, and they stitch everything together: concepts, terms, people. Navigate far enough within this high-dimensional space and everything can be reached some number of hops away from everything else. Even more than that, these models have ingested our stories, which thrive on exactly this kind of interconnection and meaning, perhaps making such leaps even more likely.
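The “some number of hops away” intuition can be made concrete with a toy sketch. Below is a minimal illustration, not how any actual LLM works: the six concepts and their three-dimensional vectors are entirely made up (real models learn embeddings with thousands of dimensions from text), but the mechanism is the same in spirit. A greedy walk hops between any concepts whose cosine similarity clears a threshold, and even distant-seeming ideas end up linked by a chain of plausible-looking steps.

```python
import math

# Toy "latent space": hand-made 3-D embeddings for a handful of concepts.
# These vectors are invented for illustration; real models learn them.
EMBEDDINGS = {
    "rocket":     (0.9, 0.1, 0.0),
    "war":        (0.8, 0.3, 0.1),
    "history":    (0.5, 0.6, 0.2),
    "novel":      (0.2, 0.8, 0.3),
    "paranoia":   (0.1, 0.7, 0.7),
    "conspiracy": (0.2, 0.5, 0.9),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hop_chain(start, goal, threshold=0.85):
    """Greedily hop between concepts whose similarity exceeds the
    threshold, returning a chain linking start to goal (or None)."""
    chain, current, visited = [start], start, {start}
    while current != goal:
        # Unvisited concepts similar enough to the current one to hop to.
        candidates = [
            c for c in EMBEDDINGS
            if c not in visited
            and cosine(EMBEDDINGS[current], EMBEDDINGS[c]) >= threshold
        ]
        if not candidates:
            return None  # space not connected at this threshold
        # Of those, pick the one closest to the goal.
        current = max(candidates,
                      key=lambda c: cosine(EMBEDDINGS[c], EMBEDDINGS[goal]))
        visited.add(current)
        chain.append(current)
    return chain

print(hop_chain("rocket", "conspiracy"))
# ['rocket', 'war', 'history', 'novel', 'paranoia', 'conspiracy']
```

No single hop here is unreasonable, yet the chain carries you from rockets to conspiracies in five steps: each local connection is defensible, and the pareidolia lives in treating the concatenation as meaningful.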
This is like the dark version of E.O. Wilson’s Consilience, or even just plain interdisciplinarity: the idea that ideas can and should be stitched together, and that there is something productive in connecting what we know. It is a notion I am deeply sympathetic to. But it can be taken too far, finding connections where none exist. It is the pareidolia of knowledge.
A balance lies somewhere between nihilism and conspiracy-mongering. But relying on AI to help us find this balance is not the right answer. ■
Some Updates
I wrote an essay for Wired: “Programmers Aren’t So Humble Anymore—Maybe Because Nobody Codes in Perl”
And here are a few podcasts I’ve appeared on related to The Magic of Code:
The Enchanted Systems Roundup
Here are some links worth checking out that touch on the complex systems of our world (both built and natural):
🜸 ChatGPT and the Meaning of Life
🝤 Flounder Mode: “Kevin Kelly on a different way to do great work”
🝳 The Making of Kurt Vonnegut’s Cat’s Cradle: “How the novelist turned the violence and randomness of war into a cosmic joke”
🝤 AI Comes Up with Bizarre Physics Experiments. But They Work: ‘Even so, the researchers were befuddled by the AI’s design. “If my students had tried to give me this thing, I would have said, ‘No, no, that’s ridiculous,’” Adhikari said. But the design was clearly effective.’
🜹 Most Americans Opposed the Moon Landing: “After President Kennedy’s 1961 declaration to put man on the lunar surface by the end of the decade a Gallup poll revealed only 33% supported, compared with 58% opposed”
🝊 Cable Bacteria are Living Batteries: “How a discovery in a Danish lake changed our understanding of biological communities and energy.”
🝤 The people's history of collapse: “How dominance hierarchies doom societies”
Until next time.