Elon Musk's Artificial God
In the vision of Musk and other lords of Silicon Valley, the purpose of AI technology is to insulate them, and only them, from any hint that they might live in a society.

Growing up, I was a big fan of Sid Meier’s Alpha Centauri, a 4X sci-fi strategy game about colonising an alien world. You play as one of several competing political factions—a departure from the Civilisation series where one plays as various and sundry historical nations. There’s an environmentalist government, a hyper-rationalist scientific one, an uber-capitalist nation, and so on. Back then, like the budding Liberal Currents editor I clearly was, I almost always chose the UN Peacekeeper faction—essentially the guardians of liberal democracy on the new world.
However, it’s a curious marker of my growth that, over time, I’ve come to identify with the leader of the religious fundamentalist faction, Sister Miriam Godwinson. In the fiction of the setting, she becomes one of the loudest (and only) voices warning about the unethical deployment of all the exotic new technologies you research throughout the game. With unstinting faith and the kind of eloquence born to the pulpit, Godwinson inveighs against the development of self-aware cities that thoroughly police political opposition: “will we next create false gods to rule over us?” she asks. “How proud we have become, and how blind.”
I found it fascinating that the most religious leader was portrayed as the most perceptive about the misapplication of machine learning in the game’s world.
Godwinson’s dictum is often on my mind these days when I contemplate our tech overlords’ approach to AI. While the quote was as speculative as the fiction that birthed it, those words come close to the reality of Silicon Valley’s millenarian approach to artificial intelligence. High on their own supply (often quite literally), they want to believe in the divine power of AI. In our ability to both create and tame a god before it destroys us.
It’s a grand unified theory that explains not only their critihype, but their aspirations for the technology, its political economy, and how it’s currently being used in the ongoing Musk-Trump autogolpe of our civil service. We are trapped in their strange substitute for religion, peopled with illusions of cybernetic demons, its own Pascal’s Wager, and a soteriology that just so happens to leave them wealthier than they could ever have dreamt.
Laced with the vulgar utilitarianism of effective altruism and its offspring, this sci-fi theology sounds absurd to anyone not steeped in its mysteries, but it helps us understand why people like Elon Musk want to replace everything with AI—and sharpen our arguments about the technology, which is sometimes poorly understood by its critics. At every turn, what bedevils us ordinary people about generative AI is not the nuts-and-bolts of the tech, but its exterminationist applications that are designed to slash labour costs, blunt the power of workers, and insulate this cadre of far-right pseudo-geniuses from consequence or contest.
If they pray hard enough to the god they’re building, perhaps no one will ever tell them “no” again.
“Does God exist? Well, I would say, ‘Not yet.’” So said the computer scientist Ray Kurzweil in a 2009 documentary about himself. The idea that humans could create God has been so synonymous with hubris that it barely deserves mentioning—and yet, in true torment nexus fashion, it is something that many lords of the Valley have come to believe, implicitly or explicitly.
But before we get to the Godhead itself, we should navigate the theological river that leads us there.
Over the last decade, a peculiar species of consequentialist philosophy has become quite popular among a narrow group of men (and it is mostly men) at the acme of Big Tech: effective altruism (EA). In brief, EA holds that there are rational ways of figuring out how to do the most good in the world—particularly with wealth. It offers an approach to making your dollar go farthest when donated to charity. So, for instance, consider the fact that humble mosquito nets are a highly cost-effective way to stop the spread of malaria. At its most benign, EA offers data-inclined people a way to literally do the greatest good for the greatest number.
In practice, however, it has proven to be a fountainhead for far worse ideas, as well as a fig leaf that men like Sam Bankman-Fried used to cover their indiscretions. It also cast wealthy tech workers and executives as a revolutionary class, uniquely poised to save the world, and uniquely suited to assess and judge what needed to be done—by using EA as much as possible, of course.
There are two moral mainsprings to EA thought. The first is utilitarianism—a philosophy born in the Industrial Age which held that one should prioritise doing the greatest good for the greatest number. The needs of the many outweigh the needs of the few; the ends justify the means. For EA adherents, modern science gives us a variety of ways to measure the greatest good, enabling those with wealth to figure out, objectively, how best to spend it—while, of course, validating the possession of wealth as a moral precondition. The more money you make, the more good you can do with it. In EA’s approach, money becomes the vital unit of measurement.
The second mainspring is known as “longtermism,” the idea that we should privilege the billions and trillions of unborn humans in our thinking about how to shape global policy. It is, therefore, a top priority to avert existential threats that risk our extinction. At its loftiest, such ideas animate works like Kim Stanley Robinson’s novel The Ministry for the Future, which posits an eponymous UN agency tasked with herding the world’s geopolitical cats into something like an effective coalition to stop climate change.
In practice, however, most longtermists—despite initially being steeped in the optimistic visions of philosophers like William MacAskill and his bestseller What We Owe the Future—have a rather different idea of what constitutes an existential threat to humanity. But what could be more existential than climate change? Here’s longtermist philosopher Nick Bostrom, from his paper “The Future of Humanity”:
“In absolute terms, [non-runaway climate change] would be a huge harm. Yet over the course of the twentieth century, world GDP grew by some 3,700%, and per capita world GDP rose by some 860%. It seems safe to say that … whatever negative economic effects global warming will have, they will be completely swamped by other factors that will influence economic growth rates in this century.”
Or, as he put it more succinctly later, threats like climate change or nuclear war were “a giant massacre for man, a small misstep for mankind.” He was arguing that these events were not true existential risks because they would invariably leave survivors. The real goal, he suggests, is to follow Kurzweil’s thinking—with a few caveats—toward a glorious posthuman future where machine intelligence is so vast that it allows us to transcend the final fetters of our human frailty. He concludes “that the annual risk of extinction will decline substantially after certain critical technologies have been developed and after self-sustaining space colonies have been created.” Thus, QED, the real way to avoid existential risk is to develop AI and space colonies.
But, crucially, and citing AI doom prophet Eliezer Yudkowsky, Bostrom argues, “Superintelligent machines might be built and their actions could determine the future of humanity—and whether there will be one.” More than climate change or any other reasonable threat to our civilisation or our very way of life, then, AI presents us with the existential risk par excellence. Wealth is no longer the vital unit of measurement—power is. And what could be more powerful than a god chained to your will? Artificial general intelligence becomes simultaneously an existential risk and the ultimate chalice, justifying anything done here and now in pursuit of it.
This millenarian vision might turn out to be one of the most influential ideas of the 21st century—and to look at the state of our world, you’ll get no points for figuring out whether it’s been for the best.
Elon Musk seemed to cope with an entire American state telling him to eat shit by tweeting “it increasingly appears that humanity is a biological bootloader for digital superintelligence.” It reflects a belief he’s genuinely held for some time, along with a concomitant terror of the technology.
Several years ago, at a conference, Musk said “If there’s a super intelligent [AI] engaged in recursive self-improvement […] it could be like, ‘Well, the best way to get rid of spam is to get rid of humans. The source of all spam.’” At the National Governors Association in 2017, he said, “I have exposure to the most cutting-edge AI, and I think people should be really concerned by it. AI is a fundamental risk to the existence of human civilization.”
Such sci-fi musings are common among longtermists who believe that AI is either our salvation or damnation—or, quite often, both. Musk was an early funder of OpenAI on the basis of a commonplace principle in Silicon Valley: someone else will do it badly, so we need to do it first, and better. That “we” just kept getting smaller and smaller for Musk. OpenAI touted its ethical credentials in true EA fashion. “Our goal,” said OpenAI’s original 2015 mission statement, “is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” Their current mission is similar, if a bit more flashy (and without any of that pesky ‘unconstrained by [profit]’ business, of course).
But Musk, of all people, was a loud critic of OpenAI’s decision to become for-profit and he’s even attacked the company for not open sourcing the code for ChatGPT (which is more than a little ironic). Now he’s making his own AI, with blackjack and hookers.
“The goal of xAI is to understand the true nature of the universe,” said the original version of the company’s mission statement. Now, they’ve added: “AI’s knowledge should be all-encompassing and as far-reaching as possible. We build AI specifically to advance human comprehension and capabilities.” The seeming contradiction between Musk’s scaremongering about a Skynet-like obliteration of the human race, and his deep investment in the development of AI is resolved when you engage with the existential threat question—and also, with longtermism’s most overtly eugenicist progeny: pro-natalism.
While it’s tempting to regard Musk as the insane outlier he often positions himself as, here he’s very much in line with Bostrom and other longtermists, who seek to downplay the real risks of climate change and instead use longtermism’s unfalsifiable epistemology to justify their own ideological obsessions.
When the post-Singularity future is anything you want it to be, you both tell a powerfully motivating story and armour your own ideology against argument or criticism. This speculation multiplies the number of existential catastrophes. Why focus on climate change when you can invent a vastly cooler cataclysm that not only engages your richest fantasies, but also casts you as uniquely poised to save the world—with your wealth, your genius, and your charisma? It’s irresistible, not least because it reflects a billionaire’s ego back at them in the spotlight of a thousand distant suns. They’re right. They’re important. Their most esoteric obsessions are essential for the human race’s survival.
To wit, Musk, AI, and pro-natalism. Musk is (deservedly) much mocked for seeking to sire so many children and for his misogynistic yearning to make them all sons. But the reason stems from this eugenicist conviction that the ‘right’ people aren’t reproducing enough—namely, the wealthy elite of the West, especially in the tech world.
AI, meanwhile, completes the symphony of self-aggrandisement. While pro-natalism allows men like Musk to believe their genes are uniquely essential for staving off an existential disaster, AI offers a positive focus: a potential good that can emerge from the salvation of Earth, even as it acts as a Scylla and Charybdis of risk to sail between. The vision is clear: save the world by having more of the (“right”) kind of kids, then stop AI from killing us, tame it, and bring about a utopia. But for whom?
This AI religion, like many others, offers the hope of eternal life. It could, in the wildest reckonings of the Singularity, allow us to cheat death. And, with optional extras like pro-natalism, the worship of one’s own genetic material and the illusion of security it offers provide at least some substitute for immortality in the face of human frailty.
But there’s an even deeper emotional need being satisfied here by this technological-imaginary that these men have crafted into an apocalyptic religion.
In 2007, the far-right, self-described monarchist philosopher Curtis Yarvin (often known by his pseudonym Mencius Moldbug and much beloved of a subset of Silicon Valley) wrote up a curious vision of the future which defies simple description beyond ‘the king has a magic wand that allows him to turn every weapon on Earth on and off at will.’
Lest you think this is a (relatively) youthful indiscretion, he revisited this notion in a 2021 Substack post about his vision of monarchy subtitled “It's not fascism. But it's still pretty based,” in case there were any lingering doubts about his ideological commitments.
“What would be cooler, though, is if the power of nuclear hell was—on the blockchain? [Laughter.]
“You guessed it. If you’re the king, you actually have, like taped behind your balls, a non-fungible token (NFT) which controls the nuclear deterrent. Now that’s power. [A few laughs, some nervous muttering.]”
I admit, I’m impressed that the transcript of this speech, delivered from atop a public park’s picnic table, preserved the “nervous muttering.” Yarvin is not a serious person. Yet his ‘vision,’ such as it is, flatteringly laundered through The New York Times, is taken seriously by many conservatives in the tech world. If it sounds like sci-fi, that’s because it is. All of it is: longtermism, the role played by AI, the infamous Roko’s Basilisk thought experiment which serves as Pascal’s Wager for people who are 25% Mountain Dew by volume.
It’s a compelling story, if nothing else.
These men are trying to erase uncertainty for themselves and promise the perfection of solipsism to their followers: a world where no one will tell them no, make them feel bad, or make jokes about them like I did two paragraphs earlier. This is the dark turn from the bright promise of EA. What if you chained a god—and then, instead of saving humanity, you used it to enslave them yourself?
After all, in all the unworkable madness of Yarvin’s cryptographic police state, what is the beating heart? The fiction that you—yes you—could be the one flipping the kill switch for all those guns in enemy hands. Or, failing that, you’d be on the right side of that counter-coup. With all the rebellious ideological enemies who’d made you feel so bad left to rot with their broken guns and drones.
When Elon Musk talks about saving humanity—and he does so a lot, lately, including when he said that the Wisconsin Supreme Court election could determine “the future of Western civilisation”—he speaks with the dear wish that he’ll be acclaimed for doing so like an avenging hero. (He allegedly almost needed a wellness check after being booed at a Dave Chappelle show.) What does the dream of eternal life offered by controlling an AI god, by bending longtermism’s infinite historical arc, by spreading one’s seed across the world, offer someone like this?
Well, if there’s one thing that unites Elon Musk with his legions of fans, it’s the idea that victory isn’t enough. They must also be loved.
It feels notable that Musk tweeted, “Remember when you could get canceled for not using the right pronouns? That was dumb,” late at night on April 3rd, after the world’s stock markets had spent the day bleeding red with all the rage of a freshly-eaten face. As the world confronted the next phase of Trumpian terror, Musk could console himself with the knowledge that no one would cancel him for refusing to acknowledge his daughter.
This is of a piece with the widespread, credible suspicion that the fuzzy math used by Trump to calculate tariff rates was actually crunched by generative AI; it also ties neatly to Musk’s desire to use generative AI to replace the US civil service, or to automate the coding required to replace the Social Security Administration’s current systems. The libidinal urge being satisfied here is nothing less than the defence of one’s ego, the desire to never feel wrong or inadequate.
Musk and his ilk believe they’re indispensable to the salvation of humanity by Grok? Well then, anyone who disproves that merely by existing—merely by, say, being a longstanding expert in their field and a load-bearing member of some government department that ensures your food doesn’t try to kill you—anyone who makes them feel dumb or out of their depth cannot be suffered. Such people should be fired, humiliated, mocked, and, above all, replaced with AI.
The edifice of their work, their civic-mindedness, their duty should be destroyed and replaced with a tacky, brushed-aluminium idol that always tells its owner exactly what he wants to hear. How else to explain Musk’s constant retweeting of and reactions to AI-generated art of himself as a space marine, NBA superstar, or a Roman centurion?
In a 2020 paper, information scholar Jason Young examined the affective roots of disinformation’s spread. In short, because economic dislocation eroded or even destroyed traditional sources of community and meaning, people turned to the internet in search of both, and often found elaborate but coherent stories that explained the world to them.
Such stories—like the idea that the COVID-19 pandemic was “planned,” or that a secret conspiracy of liberal paedophiles governed the world—were frequently rank disinformation. But they satisfied a need. To be included. To be safe. To be loved. To be important to the future of the world. Young writes,
“We desperately crave any secure relations of reciprocity that we can find in our lives. We increasingly shift our attention to emotional, rather than economic or political, registers to find these relationships, and social media has become a primary platform to seek out promises of emotional fulfilment.”
Drawing on the work of the late Lauren Berlant, an English scholar and cultural critic, Young suggests that disinformation functions as a form of “cruel optimism” amidst social dislocation. Disinformation’s “affective desire for belonging is so strong that false promises—in the form of misinformation [and disinformation]—perversely reaffirm our commitment to the community that is lying to us.” This, then, is the cruelty: the chasing of an illusion in a futile quest for meaning and belonging in a world that denies it to you.
Caught in that trap, you’re then easy prey for a really good story. Like, say, one with science fiction overtones.
The dark secret of our age is that, while millions of people are in thrall to the dynamics Young describes, the most consequential are the ones you would least expect: the powerful, whose sheer resources should enable them to transcend the depths of disinformation. “Within this platform,” Young continues, “we become deeply attached to imagined communities, even when those communities proliferate false information.”
Men like Musk have literally bought imagined communities, and used them to train machines designed to feed their egos. Whatever the potential or possibility of generative AI, the way it is being used bends sharply towards this crude emotional need. While Young’s paper purports to explain the behaviour of the masses, it also explains—perhaps too well—the behaviour of those who would seek to rule them. Musk, in his incessant posting on the platform he paid for, has sunk deep into a place where he is more solipsistic than ever. More intent than ever on bringing his AI god to life where it will finally, finally protect him from the Ninth Circle of Divorce that has ravaged his mind for years.
In this particular abyss, we also find the explanation for the obsession with sexbots among this group of men: a yearning for that perfect compliance, the woman who will never say no, just as in the darkest fantasies of a fictional tech baron in 2014’s Ex Machina. So close to human, with all the allure of dominating another human being, but never cognisant enough to yearn for freedom.
Musk and his fellow travellers are steeped in a mythological narrative—a sci-fi story—about AI and what it can do for (or to) humanity that has motivated their every action and contoured itself perfectly around their neuroses, to all our detriment. AI is not a trivial issue here, at least as a political football. Recall that venture capitalist and late Trump-convert Marc Andreessen cited President Biden’s supposed antipathy to AI as a reason for him to shift to supporting the GOP in 2024. Men like Andreessen were increasingly consumed by their fears—including of their own workers, as we'll see in a minute—but foremost among them was the fear that Biden and Harris would fail to bring about the forthcoming AI-driven future, the AI god.
To Musk, Andreessen, and others there is a vital need to surge AI research forward precisely to summon this entity that will reshape the world according to their most pressing emotional needs.
One is reminded of the brilliantly mad speech by Ned Beatty’s Mr. Jensen in Network:
“And our children will live, Mr. Beale, to see that perfect world in which there's no war or famine, oppression or brutality—one vast and ecumenical holding company, for whom all men will work to serve a common profit, in which all men will hold a share of stock, all necessities provided, all anxieties tranquilized, all boredom amused.”
But we are trapped in an even darker vision for capitalism, one without even the utopian promises of eliminating suffering. We don’t get to enjoy that. In the vision of Musk and other lords of Silicon Valley, the purpose of technology is to insulate them, and only them, from any hint that they might live in a society. In a world where some duty might be demanded from them.
Return to Andreessen’s interview. Paired with his maniacal insistence that the Biden Administration was going to “kill” AI is his utter horror at the labour power of tech workers. He characterised 2020 as a time when “the employee base [was] going feral. There were cases in the Trump-era where multiple companies I know felt like they were hours away from full-blown violent riots on their own campuses by their own employees.”
AI offers men like Andreessen, Musk, and others the dream of a world without workers who organise. Forget union-busting: what about employment-busting? You’ll never again have to deal with people who are smarter than you, who challenge your decisions, who might recognise that they outnumber you a thousand to one. Instead of the Jensen-esque vision of a “perfect world” for all, one suspects people like Andreessen want to fire every white-collar worker who’s ever looked at them funny and put them to work building iPhones, all in a futile attempt to somehow maintain their own standard of living while ridding themselves of an entire turbulent priest caste.
AI would replace them, of course, just as it has been used to justify the savage cuts that hit the entire tech sector, which are best understood in sociological rather than economic terms: workers were becoming too powerful, therefore they should be fired and have the fear of (AI) god struck into them.
To read Ray Kurzweil’s work today is to hear the click-and-hiss of a time capsule opening, sealed in an era when techno-optimism didn’t feel quite so foolish, when there seemed to be limitless vistas of potential for all of us.
The technocratic generation that succeeded Kurzweil, however, took that vision and said that the god he posited should instead serve Mammon. And, above all, serve Mammon’s ego, tender as a newborn hummingbird.
This essay has not been about AI per se as much as what it means, for the meaning of technology binds us far more tightly than the hard numbers of technical specifications. There are lively debates about what large language models can achieve—assume a spherical LLM!—but we are not truly living in that world of potential. Instead, our destinies are shaped by the political and economic decisions made about such technologies.
For the moment, there is a significant sector of AI funders and enthusiasts that is devoted to an act of petty revenge against everyone who wronged its owners. This is not the destiny of LLMs or GAI, but it is the current reality we’re confronted with, regardless of the technology’s potential in some abstract universe where history never happened.
Just as the far right’s vision of gender offers a false promise of release from the exigencies of existence, its vision of technology offers the false promise of armour against the slings and arrows of society. It infects their vision of everything. Musk infamously said of his now-failing Cybertruck that “if you’re ever in an argument with another car, you will win.” I always thought this was a fascinating metaphor for a car accident, and it revealed much about how he sees the world. Never being wrong, never having to say you’re sorry.
This is the political allure of Trumpism—to live life without apology to or consideration for others—made into an anti-cyberpunk vision of technology. But the tech offers more than armour: it offers theology. It offers the blessed alchemy of a dark summoning from the depths of a hell you are convinced you can control. And what lonely person deep in the night hasn’t yearned for that kind of power?
Magic may not be real, but technology can be indistinguishable from magic. Magic picture machines, magic authors, magic civil servants, magic teachers and professors, magic programmers; none of whom will ever tell you you’re wrong, who will always bow to you, who will act as a buffer between you and all the mean people who ever said that you might be a canned ham of a human being.
Summon the god. Pray to Him (for it must be a Him, surely). And all your pain will be taken away. You’ll never suffer the agony of living in a society ever again.
Featured image is "Robot Figurine on a Wooden Swing," Nikita Popov 2022.