Have you ever puked and had diarrhea at the same time? I have, and when it happened, I was listening to a fan-made audiobook version of Harry Potter and the Methods of Rationality (HPMOR), a fan fiction written by Eliezer Yudkowsky.
No, the dual-ended bodily horror was not incited by the fanfic, but the two experiences are inextricable in my mind. I was shocked to discover years later that the 660,000-word fanfic I marathoned while sick has some bizarre intersections with the ultra-wealthy technorati, including many of the figures involved in the current OpenAI debacle.
Case in point: in an easter egg spotted by 404 Media (one too minor for anyone else – even me, someone who’s actually read the thousand-odd-page fanfic – to notice), there is a once-mentioned Quidditch player in the sprawling story named Emmett Shear. Yes, the same Emmett Shear who co-founded Twitch and was just named interim CEO of OpenAI, arguably the most influential company of the 2020s. Shear was a fan of Yudkowsky’s work, following the serialized story as it was published online. So, as a birthday present, he was gifted a cameo.
Shear is a longtime fan of Yudkowsky’s writings, as are many of the AI industry’s key players. And of all those writings, this Harry Potter fanfic remains Yudkowsky’s most popular work.
HPMOR is an alternate universe rewriting of the Harry Potter series, which begins with the premise that Harry’s aunt Petunia married an Oxford biochemistry professor, instead of the abusive dolt Vernon Dursley. So, Harry grows up as a know-it-all kid obsessed with rationalist thinking, an ideology which prizes experimental, scientific thinking to solve problems, eschewing emotion, religion or other imprecise measures. It’s not three pages into the story before Harry quotes the Feynman Lectures on Physics to try to solve a disagreement between his adoptive parents over whether or not magic is real. If you thought actual Harry Potter could be a little frustrating at times (why doesn’t he ever ask Dumbledore the most obvious questions?), get ready for this Harry Potter, who could give the eponymous “Young Sheldon” a run for his money.
It makes sense that Yudkowsky runs in the same circles as many of the most influential people in AI today, since he himself is a longtime AI researcher. In a 2011 New Yorker feature on the techno-libertarians of Silicon Valley, George Packer reports from a dinner party at the home of billionaire venture capitalist Peter Thiel, who would later co-found and invest in OpenAI. As “blondes in black dresses” pour the men wine, Packer dines with PayPal co-founders like David Sacks and Luke Nosek. Also at the party is Patri Friedman, a former Google engineer who got funding from Thiel to start a non-profit that aims to build floating, anarchist sea civilizations inspired by the Burning Man festival (after fifteen years, the organization does not seem to have made much progress). And then there’s Yudkowsky.
To further connect the parties involved, behold: a ten-month-old selfie of now-ousted OpenAI CEO Sam Altman, Grimes and Yudkowsky.
Yudkowsky is not a household name like Altman or Elon Musk. But he tends to crop up repeatedly in the stories behind companies like OpenAI, or even behind the great romance that brought us children named X Æ A-Xii, Exa Dark Sideræl and Techno Mechanicus. No, really – Musk once wanted to tweet a joke about “Roko’s Basilisk,” a thought experiment about artificial intelligence that originated on LessWrong, Yudkowsky’s blog and community forum. But, as it turned out, Grimes had already made the same joke about a “Rococo Basilisk” in the music video for her song “Flesh Without Blood.”
HPMOR is quite literally a recruitment tool for the rationalist movement, which finds its virtual home on Yudkowsky’s LessWrong. Through an admittedly entertaining story, Yudkowsky uses the familiar world of Harry Potter to illustrate rationalist ideology, showing how Harry works against his cognitive biases to become a master problem-solver. In a final showdown between Harry and Professor Quirrell – his mentor in rationalism who turns out to be evil – Yudkowsky broke the fourth wall and gave his readers a “final exam.” As a community, readers had to submit rationalist theories explaining how Harry could get himself out of a fatal predicament. Thankfully, for the sake of happy endings, the community passed.
But the moral of HPMOR isn’t just to be a better rationalist, or as “less wrong” as you can be.
“To me, so much of HPMOR is about how rationality can make you incredibly effective, but incredibly effective can still be incredibly evil,” my only other friend who has read HPMOR told me. “I feel like the whole point of HPMOR is that rationality is irrelevant at the end of the day if your alignment is to evil.”
But, of course, we can’t all agree on one definition of good vs. evil. This brings us back to the upheavals at OpenAI, a company that is trying to build an AI that’s smarter than humans. OpenAI wants to align this artificial general intelligence (AGI) with human values (such as the human value of not being killed in an apocalyptic, AI-induced event), but it just so happens that this “alignment research” is Yudkowsky’s specialty.
In March, thousands of notable figures in AI signed an open letter calling on “all AI labs to immediately pause for at least 6 months” their training of the most powerful AI systems.
Signatories included Meta and Google engineers, founders of Skype, Getty Images and Pinterest, Stability AI founder Emad Mostaque, Steve Wozniak, and even Elon Musk, a co-founder of OpenAI who stepped down in 2018. But Yudkowsky did not sign the letter, and instead penned an op-ed in TIME Magazine to argue that a six-month pause isn’t radical enough.
“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter,” Yudkowsky wrote. “There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.”
While Yudkowsky argues for the doomerist approach when it comes to AI, the OpenAI leadership kerfuffle has highlighted the wide range of beliefs around how to navigate technology that is possibly an existential threat.
Acting as the interim CEO of OpenAI, Shear – now one of the most powerful people in the world, and not a Quidditch seeker in a fanfic – is posting memes about the different factions in the AI debate.
There are the techno-optimists, who support the growth of tech at all costs, because they think any problems caused by this “grow at all costs” mentality will be solved by tech itself. Then there are the effective accelerationists (e/acc), which seems to be kind of like techno-optimism, but with more language about how growth at all costs is the only way forward because the second law of thermodynamics says so. The safetyists (or “decels”) support the growth of technology, but only in a way that is regulated and safe (meanwhile, in his “Techno-Optimist Manifesto,” venture capitalist Marc Andreessen decries “trust and safety” and “tech ethics” as his enemy). And then there are the doomers, who think that when AI outsmarts us, it will kill us all.
Yudkowsky is a leader among the doomers, and he’s also someone who has spent the last few decades running in the same circles as what seems like half of the board of OpenAI. One popular theory about Altman’s ousting is that the board wanted to appoint someone who aligned more closely with its “decel” values. So, enter Shear, who we know is inspired by Yudkowsky and also considers himself a doomer-slash-safetyist.
We still don’t know what’s going on at OpenAI, and the story seems to change about once every ten seconds. For now, techy circles on social media continue to fight over decel vs. e/acc ideology, using the backdrop of the OpenAI chaos to make their arguments. And in the midst of it all, I can’t help but find it fascinating that, if you squint at it, all of this traces back to one really tedious Harry Potter fanfic.