What OpenAI shares with Scientology

by Henry Farrell on November 21, 2023

When Sam Altman was ousted as CEO of OpenAI, some hinted that lurid depravities lay behind his downfall. Surely, OpenAI’s board wouldn’t have toppled him if there weren’t some sordid story about to hit the headlines? But the reporting all seems to be saying that it was God, not Sex, that lay behind Altman’s downfall. And Money, that third great driver of human behavior, seems to have driven his attempted return and his new job at Microsoft, which is OpenAI’s biggest investor by far.

As the NYT describes the people who pushed Altman out:

Ms. McCauley and Ms. Toner [HF – two board members] have ties to the Rationalist and Effective Altruist movements, a community that is deeply concerned that A.I. could one day destroy humanity. Today’s A.I. technology cannot destroy humanity. But this community believes that as the technology grows increasingly powerful, these dangers will arise.

McCauley and Toner reportedly worried that Altman was pushing too hard, too quickly, for new and potentially dangerous forms of AI (similar fears led some OpenAI people to bail out and found a competitor, Anthropic, a couple of years ago). The FT’s reporting confirms that the fight was over how quickly to commercialize AI.

The back-story to all of this is actually much weirder than the average sex scandal. The field of AI (in particular, its debates around Large Language Models (LLMs) like OpenAI’s GPT-4) is profoundly shaped by cultish debates among people with some very strange beliefs.

As LLMs have become increasingly powerful, theological arguments have begun to mix it up with the profit motive. That explains why OpenAI has such an unusual corporate form – it is a non-profit, with a for-profit structure retrofitted on top, sweatily entangled with a profit-maximizing corporation (Microsoft). It also plausibly explains why these tensions have exploded into the open.


I joked on Bluesky that the OpenAI saga was as if “the 1990s browser wars were being waged by rival factions of Dianetics striving to control the future.” Dianetics – for those who don’t obsess on the underbelly of American intellectual history – was the 1.0 version of L. Ron Hubbard’s Scientology. Hubbard hatched it in collaboration with the science fiction editor John W. Campbell (who had a major science fiction award named after him until 2019, when his racism finally caught up with his reputation).

The AI safety debate too is an unintended consequence of genre fiction. In 1987, the multiple Hugo Award-winning science fiction critic Dave Langford began a discussion of the “newish” genre of cyberpunk with a complaint about an older genre of story about information technology, in which “the ultimate computer is turned on and asked the ultimate question, and replies ‘Yes, now there is a God!’”

However, the cliche didn’t go away. Instead, it cross-bred with cyberpunk to produce some quite surprising progeny. The midwife was the writer Vernor Vinge, who proposed a revised meaning for “singularity.” This was a term already familiar to science fiction readers as the place inside a black hole where the ordinary predictions of physics broke down. Vinge suggested that we would soon likely create true AI, which would be far better at thinking than baseline humans, and would change the world in an accelerating process, creating a historical singularity, after which the future of the human species would be radically unpredictable.

These ideas were turned into novels by Vinge himself, including A Fire Upon the Deep (fun!) and Rainbows End (weak!). Other SF writers like Charles Stross wrote novels about humans doing their best to co-exist with “weakly godlike” machine intelligence (also fun!). Others who had no notable talent for writing, like the futurist Ray Kurzweil, tried to turn the Singularity into the foundation stone of a new account of human progress. I still possess a mostly-unread copy of Kurzweil’s mostly-unreadable magnum opus, The Singularity is Near, which was distributed en masse to bloggers like meself in an early 2000s marketing campaign. If I dug hard enough in my archives, I might even be able to find the message from a publicity flack expressing disappointment that I hadn’t written about the book after they sent it. All this speculation had a strong flavor of end-of-days. As the Scots science fiction writer Ken MacLeod memorably put it, the Singularity was the “Rapture of the Nerds.” Ken, being the offspring of a Free Presbyterian preacher, knows a millenarian religion when he sees it: Kurzweil’s doorstopper should really have been titled The Singularity is Nigh.

Science fiction was the gateway drug, but it can’t really be blamed for everything that happened later. Faith in the Singularity has roughly the same relationship to SF as UFO-cultism. A small minority of SF writers are true believers; most are hearty skeptics, but recognize that superhuman machine intelligences are (a) possible and (b) an extremely handy engine of plot. But the combination of cultish Singularity beliefs and science fiction has influenced a lot of external readers, who don’t distinguish sharply between the religious and fictive elements, but mix and meld them to come up with strange new hybrids.

Just such a syncretic religion provides the final part of the back-story to the OpenAI crisis. In the 2010s, ideas about the Singularity cross-fertilized with notions about Bayesian reasoning and some really terrible fanfic to create the online “rationalist” movement mentioned in the NYT.

I’ve never read a text on rationalism, whether by true believers, by hangers-on, or by bitter enemies (often erstwhile true believers), that really gets the totality of what you see if you dive into its core texts and apocrypha. And I won’t even try to provide such an account here. It is some Very Weird Shit and there is really great religious sociology to be written about it. The fights around Roko’s Basilisk are perhaps the best known example of rationalism in action outside the community, and give you some flavor of the style of debate. But the very short version is that Eliezer Yudkowsky and his multitudes of online fans embarked on a massive collective intellectual project, which can reasonably be described as resurrecting Dave Langford’s hoary 1980s SF cliche, and treating it as the most urgent dilemma facing human beings today. We are about to create God. What comes next? Add Bayes’ Theorem to Vinge’s core ideas, sez rationalism, and you’ll likely find the answer.

The consequences are what you might expect when a crowd of bright but rather naive (and occasionally creepy) computer science and adjacent people try to re-invent theology from first principles, to model what human-created gods might do, and how they ought to be constrained. They include the following non-comprehensive list: all sorts of strange mental exercises; postulated superhuman entities, benign and malign, and how to think about them; the jumbling of parts from fan-fiction, computer science, home-brewed philosophy and ARGs to create grotesque and interesting intellectual chimeras; Nick Bostrom and a crew of very well funded philosophers; Effective Altruism, whose fancier adherents often prefer not to acknowledge the approach’s somewhat disreputable origins.

All this would be sociologically fascinating, but of little real world consequence, if it hadn’t profoundly influenced the founders of the organizations pushing AI forward. These luminaries think about the technologies that they are creating in terms borrowed wholesale from the Yudkowsky extended universe. The risks and rewards of AI are seen as largely commensurate with the risks and rewards of creating superhuman intelligences, modeling how they might behave, and ensuring that we end up in a Good Singularity, rather than a bad one in which AIs destroy or enslave humanity as a species.

Even if rationalism’s answers are uncompelling, it asks interesting questions that might have real human importance. However, it is at best unclear that theoretical debates about immanentizing the eschaton tell us very much about actually-existing “AI,” a family of important and sometimes very powerful statistical techniques, which are being applied today, with emphatically non-theoretical risks and benefits.

Ah, well, nevertheless. The rationalist agenda has demonstrably shaped the questions around which the big AI ‘debates’ regularly revolve, as demonstrated by the Rishi Sunak/Sam Altman/Elon Musk love-fest “AI Summit” in London a few weeks ago.

We are on a very strange timeline. My laboured Dianetics/Scientology joke can be turned into an interesting hypothetical. It actually turns out (I only stumbled across this recently) that Claude Shannon, the creator of information theory (and, by extension, the computer revolution) was an L. Ron Hubbard fan in later life. In our continuum, this didn’t affect his theories: he had already done his major work. Imagine, however, a parallel universe, where Shannon’s science and standom had become intertwined and wildly influential, so that debates in information science obsessed over whether we could eliminate the noise of our engrams, and isolate the signal of our True Selves, allowing us all to become Operating Thetans. Then reflect on how your imagination doesn’t have to work nearly as hard as it ought to. A similarly noxious blend of garbage ideas and actual science is the foundation stone of the Grand AI Risk Debates that are happening today.

To be clear – not everyone working on existential AI risk (or ‘x-risk’ as it is usually abbreviated) is a true believer in Strong Eliezer Rationalism. Most, very probably, are not. But you don’t need all that many true believers to keep the machine running. At least, that is how I interpret this Shazeda Ahmed essay, which describes how some core precepts of a very strange set of beliefs have become normalized as the background assumptions for thinking about the promise and problems of AI. Even if you, as an AI risk person, don’t buy the full intellectual package, you find yourself looking for work in a field where the funding, the incentives, and the organizational structures mostly point in a single direction (NB – this is my jaundiced interpretation, not hers).


There are two crucial differences between today’s AI cult and golden age Scientology. The first was already mentioned in passing. Machine learning works, and has some very important real life uses. E-meters don’t work and are useless for any purpose other than fleecing punters.

The second (which is closely related) is that Scientology’s ideology and money-hustle reinforce each other. The more that you buy into stories about the evils of mainstream psychology, the baggage of engrams that is preventing you from reaching your true potential and so on and so on, the more you want to spend on Scientology counselling. In AI, in contrast, God and Money have a rather more tentative relationship. If you are profoundly worried about the risks of AI, should you be unleashing it on the world for profit? That tension helps explain the fight that has just broken out into the open.

It’s easy to forget that OpenAI was founded as an explicitly non-commercial entity, the better to balance the rewards and the risks of these new technologies. To quote from its initial manifesto:

It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly. Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

That … isn’t quite how it worked out. The Sam Altman justification for deviation from this vision, laid out in various interviews, is that it turned out to be just too damned expensive to train the models as they grew bigger and bigger and bigger. This necessitated the creation of an add-on structure, which would sidle into profitable activity. It also required massive cash infusions from Microsoft (reportedly in the range of $13 billion), which also has an exclusive license to OpenAI’s most recent LLM, GPT-4. Microsoft, it should be noted, is not in the business of prioritizing “a good outcome for all over its own self-interest.” It looks, instead, to invest its resources along the very best Friedmanite principles, so as to create whopping returns for shareholders. And $13 billion is a lot of invested resources.

This very plausibly explains the current crisis. OpenAI’s governance arrangements are shaped by the fact that it was a pure non-profit until relatively recently. The board is a non-profit board. The two members already mentioned, McCauley and Toner, are not the kind of people you would expect to see making the big decisions for a major commercial entity. They plausibly represent the older rationalist vision of what OpenAI was supposed to do, and the risks that it was supposed to avert.

But as OpenAI’s ambitions have grown, that vision has been watered down in favor of making money. I’ve heard that there were a lot of people in the AI community who were really unhappy with OpenAI’s initial decision to let GPT rip. That spurred the race for commercial domination of AI, which has shaped pretty well everything that has happened since, leading to model after model being launched, and to hell with the consequences. People like Altman still talk about the dangers of AGI. But their organizations and businesses keep releasing more, and more powerful, systems, which can be, and are being, used in all sorts of unanticipated ways, for good and for ill.

It would perhaps be too cynical to say that AGI existential risk rhetoric has become a cynical hustle, intended to redirect the attention of regulators toward possibly imaginary risks in the future, and away from problematic but profitable activities that are happening right now. Human beings have an enormous capacity to fervently believe in things that it is in their self-interest to believe, and to update those beliefs as their interests change or become clearer. I wouldn’t be surprised at all if Altman sincerely thinks that he is still acting for the good of humankind (there are certainly enough people assuring him that he is). But it isn’t surprising either that the true believers are revolting, as Altman stretches their ideology ever further and thinner to facilitate raking in the benjamins.

The OpenAI saga is a fight between God and Money; between a quite peculiar quasi-religious movement, and a quite ordinary desire to make cold hard cash. You should probably be putting your bets on Money prevailing in whatever strange arrangement of forces is happening as Altman is beamed up into the Microsoft mothership. But we might not be all that much better off in this particular case if the forces of God were to prevail, and the rationalists who toppled Altman were to win a surprising victory. They want to slow down AI, which is good, but for all sorts of weird reasons, which are unlikely to provide good solutions for the actual problems that AI generates. The important questions about AI are the ones that neither God nor Mammon has particularly good answers for – but that’s a topic for future posts.

{ 34 comments }

1

Jerry Vinokurov 11.21.23 at 3:45 pm

It would perhaps be too cynical to say that AGI existential risk rhetoric has become a cynical hustle, intended to redirect the attention of regulators toward possibly imaginary risks in the future, and away from problematic but profitable activities that are happening right now.

I assure you that it would not, in fact, be too cynical.

2

someone who remembers when a scott said amanda marcotte was a greater threat than the proud boys 11.21.23 at 4:04 pm

it’s vital to remember that the pre-SBF version of these guys were 100 percent funded by peter thiel and had what you might call a charles murray view of intelligence – a big number for white people and a small number for black people. how can we tell if the computer is truly superintelligent? “my god, microsoft tay is fanatically racist after three minutes on the internet – she MUST have a big number for intelligence on her character sheet”

it is pretty funny to think that FTX ripping off literal billions in the most “and then we just took the money” way possible, barely a scheme at all, probably did the most to separate this movement from peter thiel. anyway, now that FTX is gone they’re back to doing math to convince themselves it’s right to become aggressive anti-abortion activists and to try to legalize child marriage. a cargo cult for thielbucks that once were. will they ever come back again? rip bozo as the kids say.

3

MisterMr 11.21.23 at 5:02 pm

This is completely tangential to the OP but, if we are speaking of SF stuff about AIs that take on self-consciousness, I really think Ghost in the Shell (comic: 1989-1991, movie: 1995) should be mentioned.

There are themes of (a) personal identity being based on data and thus open to manipulation by hackers; (b) a program meeting itself on the net and thus gaining consciousness; (c) transhumanism towards the end.
The anime was very popular in Japan and was a hit in the west too (it was one of the first anime to be shown in cinemas in the west).

I think that the analogy with UFOs (and therefore Dianetics/Scientology) is very good: while aliens could conceivably exist, if you start to read stuff written by ufologists you’ll realize that this is more like a religious delusion (I once read one guy seriously saying that aliens come to Earth because they need the souls of humans as fuel for their spaceships, because they don’t have souls out there).
The idea of AI as it is sometimes proposed in the media often also has this sort of semi-religious component (the aforementioned Ghost in the Shell plays a bit with this at the end, for example), so maybe the same kind of emotional levers that exist under ufology also exist under AI cultism.

4

Aardvark Cheeselog 11.21.23 at 6:11 pm

A science fiction writer on the science-fictional roots of the Silicon Valley Weltanschauung: We’re sorry we created the Torment Nexus at Charles Stross’s blog.

5

Z 11.21.23 at 7:20 pm

I appreciate that it’s hardly uncommon for someone to create a problem and then offer “give me money” as its solution, that supposedly sensible people are susceptible to religious-like beliefs, etc. However, the notion of AI risk – as Eliezer Yudkowsky (roughly) puts it, “something smarter than us, that doesn’t want what we want” – seems plausible, at least to a person of my limited capacities, and something bad or even catastrophic only has to happen once. I would be interested in seeing your explanation for why the attitude you take toward AI risk – as something separate from its proponents – is the appropriate one.

6

Cheez Whiz 11.21.23 at 11:27 pm

The AI God has the same problem as all the monotheistic gods: how does a mortal human comprehend them? Given the level of creativity shown so far, they’ll probably decide the AI needs to love mankind to “align” it with us. That always works out well.

7

J-D 11.22.23 at 12:37 am

However, the notion of AI risk – as Eliezer Yudkowsky (roughly) puts it, “something smarter than us, that doesn’t want what we want” – seems plausible …

Why? I can’t figure out what might make it seem that way.

8

Alex SL 11.22.23 at 1:13 am

Another great contribution.

I made a comment in a similar vein to the conclusion of this post on Ed Zitron’s substack: I wish both sides would lose in this spat, because one side is a profit-maximising guy at the centre of a cult of personality, the other side is a bunch of people who have completely bonkers beliefs about AI risk and no understanding of communication with major stakeholders, and both sides have built everything they have done so far on massive amounts of labour exploitation and theft of intellectual property. No, we might not be all that much better off in this particular case if the pseudo-rationalists who toppled Altman were to win a surprising victory, because it appears none of these may be good people.

My main quibble would be with how the pseudo-rationalist movement is introduced – I see no evidence that they are “bright but naive”. If they were bright, they would have by now figured out the implausibility of the singularity. They need to go outside more and touch some grass, maybe study something beyond Bayesian stats and coding; learning a bit about engineering, astronomy, or biology might tell them something about the likelihood of software being able or unable to do the things they fear it doing in the near future.

9

Bill Benzon 11.22.23 at 2:04 am

“I’ve never read a text on rationalism, whether by true believers, by hangers-on, or by bitter enemies (often erstwhile true believers), that really gets the totality of what you see if you dive into its core texts and apocrypha.”

Right. I’ve been checking in on LessWrong now and then for, oh, perhaps a decade. But I’ve spent considerably more time there in the last year and a half or so, commenting on some posts every now and then, but also posting some of my own stuff there, mostly computer-oriented material ranging from informal to quasi-technical. There are a lot of smart and knowledgeable people there and I get useful feedback. I do not, however, even attempt to engage people on some of the truly weird stuff. It would be a waste of time.

But I’ve read a lot of the strange stuff and it is mind-boggling. There’s an alternative reality being spun over there, or is it alternative realities, plural? As Henry says, you have to read through a lot of the material to get a sense of what’s going on. And while Eliezer Yudkowsky is the central source of this belief system, as far as I can tell, he has no administrative or coercive force within that world. He’s a “thought leader,” as a certain kind of jargon would put it, and his ideas are taken most seriously, but he’s not above criticism by any means. I’ve seen threads that question his technical knowledge of computing technology (quite rightly so as far as I can tell) and another that has cast doubt on his ability to divine (though that’s not the word they use) the course of AI, and these threads have been vigorous and blunt, but also courteous (LessWrong has strong norms of courtesy).

My sense of Scientology is that the organization has and exercises more coercive power over its adherents than seems to be the case with the rationalists. That is in part because the rationalists are not organized into a single social structure with an administrative hierarchy. And that organization is supported by IP (the writings of Ron Hubbard), fees for services (readings and the like), dues, and what not. There is no organized Church of (Yudkowskian) Rationalism. But there are billionaire donors.

10

JPL 11.22.23 at 2:32 am

OP:
“Vinge suggested that we would soon likely create true AI, which would be far better at thinking than baseline humans, and would change the world in an accelerating process, creating a historical singularity, after which the future of the human species would be radically unpredictable.”

With regard to the AI of the LLM/Chat GPT sort, it remains the case that everything of interest is going on “behind the scenes”, in the minds of the observers in human speech communities (although not entirely, as Putnam suggested, in their heads; normativity is a necessary condition for the relevant activities of speech communities), and not at all in the engineered operations and processes of the computer models.

11

LFC 11.22.23 at 4:41 am

last line of the OP:
The important questions about AI are the ones that neither God nor Mammon has particularly good answers for – but that’s a topic for future posts.

Then better get cracking on those posts, I’d say, bc I’d like to learn what the important questions are about AI (and I’m not being sarcastic).

12

bad Jim 11.22.23 at 5:58 am

It’s funny to contrast the coverage of the risks of AI with the rather silly topic of free will, lately refreshed by Robert Sapolsky’s new book. If humans don’t have it, automata fashioned by humans surely won’t, so what’s the problem?

I don’t quite agree that E-meters are completely useless. I had a summer job in a lab developing cermet resistors (I am old) and for that a Wheatstone bridge was an essential tool. Of course if I had one now I’d consider it e-waste, like many other items in my attic.

13

Neville Morley 11.22.23 at 7:25 am

Tangential, but mention of John W. Campbell and Dianetics reminds me of Alfred Bester’s anecdote about getting a story accepted by Astounding and being invited to a meeting with the great man, who took him to a noisy works canteen and started a Dianetics sermon – Freud is over (Bester’s story had Freudian elements, which he had to take out to get it published), L. Ron Hubbard has solved all the problems of mankind and will get the Nobel Prize, and you must now close your eyes and try to access your true spiritual essence. Bester’s desperate get-out, to avoid bursting out laughing, was to say that the memories were just too painful; ah yes, said Campbell, I could see you were shaking.

And that reminded me of another Bester short story, Something Up There Likes Me, in which a satellite achieves self-consciousness and starts re-ordering society by manipulating the financial system and blowing up the occasional city. It has the merit of being extremely funny, and was probably written partly as a satire of the ‘machine becomes god’ genre.

14

bekabot 11.22.23 at 11:52 am

“Mortal immortals, immortal mortals; living the others’ death, dying the others’ life.”

Some things were better in the Bronze Age. Better put, at least.

(This is me saying that Heraclitus is a better stylist than Hubbard, in case it’s not already clear.)

15

Alex SL 11.22.23 at 12:53 pm

I do not doubt that there can be a considerably smarter mind than me that wants something different than I want. I doubt that a smarter mind can simply do whatever it wants, without its creators being able to turn it off.

First, if a very smart and a very dumb human get into a fight with each other, the smart one can still go down with one punch or one shot, whatever the circumstances. A speciescidal AI, even if very smart, would be pitted not against a single enemy, but against a few billion sentient beings, some of whom are in a position to know where the AI’s power plug is.

Second, Yudkowsky et al. need to assume unforeseeable and god-like abilities for the AI. The thing is, we have actually figured out a lot about the world, as a species, over the last thousands of years. We know how to do some cool stuff… and mostly we know a lot more stuff that turned out not to be possible. You can’t summon demons or spirits. You can’t build a perpetual motion machine. You can’t turn lead into gold (or at least not practically). You can’t just electrocute somebody reaching for the cable that supplies you with electricity by wishing that he be electrocuted. All the things we already know to be true put strict limitations on what an AI could maximally do if it decided to exterminate us. Maybe it can win every game of chess, but it won’t just think for a few minutes and release a designer virus that reliably wipes out all of humanity without anybody noticing, because we have very diverse immune systems, and biology isn’t that easy, and you can’t just solve empirical and engineering problems by thinking for a few minutes, no matter how smart you are; you have to test whether your idea actually works in months- to years-long experiments.

Third, also based on everything we know, it is very strongly to be assumed that intelligence runs into diminishing returns and trade-offs, because everything does.

16

SusanC 11.22.23 at 5:03 pm

The term “Pascal’s mugging”, for when you get a utilitarian to spend all their time on avoiding some low-probability but very-negative-utility outcome, was, ironically enough, coined by Yudkowsky.
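To see the shape of the trap, with deliberately made-up numbers: grant a doomsday story even a one-in-a-billion chance (p = 10^-9) of destroying 10^20 future lives, and the expected loss is still 10^-9 × 10^20 = 10^11 lives, which swamps any mundane cause you might fund instead. The mugger can always quote a bigger number faster than you can shrink the probability.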

I offer the thought that the whole EA community has become a victim of Pascal’s mugging.

[It’s a reference to Pascal’s Wager, obviously, for those readers who haven’t already seen it.]

17

SusanC 11.22.23 at 5:06 pm

I am not, nor ever have been, an Effective Altruist.

But some of those kind folks who often fund scientific research (no names here.. ) have got themselves interested in AI risk, so here I am. And WTF?

18

SusanC 11.22.23 at 5:14 pm

I am currently in a bar, and
A) someone is having a video call with their management about something something AI ethics (would be rude of me to overhear more precisely)
B) At another table, someone is trying to explain the AI alignment problem to their companion.

Well, AI risk has hit the big time, it would appear.

19

MisterMr 11.22.23 at 5:42 pm

@Z 6

“something smarter than us, that doesn’t want what we want” – seems plausible

The problem is that the AI cannot “want” anything. I gave this same example on the other thread, but: an AI certainly can draw erotic images, yet it obviously cannot be sexually excited. The only reason it seems the AI understands porn is because it copies images that are already made to the taste of humans, but the AI itself does not have this taste.

Therefore the AI cannot want sex. For the same reason it cannot want power or money. It cannot want anything we don’t want. It cannot want in general.

This is a very substantial difference between human intelligence, which is intertwined with will/desires, and AI “intelligence”.

20

Cheez Whiz 11.23.23 at 12:22 am

J-D@8 The whole thing assumes turtles all the way down. An AI that is popularly explained as a “black box that no one really knows how it works” (which I’m dubious of, but that keeps being written in mass media stories) somehow achieves AGI (which is vaguely defined) in an undefined way. Ignoring all that, it’s an old problem. Humans have always worried about what gods and God want with no idea of how to determine that, so we pretend God is us with an unlimited expense account. St. Paul punted on the whole thing by declaring God “loves” us, making Him the original tough-love parent.

21

David in Tokyo 11.23.23 at 8:34 am

If the AI is too smart and starts doing bad things, just turn off the power.

The only problem is if the bloke at the power switch likes the “bad” things the AI is doing. (That is, this whole brouhaha is preparation for the AI overlords to shake down the rest of us. Starting by getting people addicted to a stupid parlor trick.)

(As I’ve said before, the LLM technology doesn’t do any of the things that a logical person would assume would be necessary for intelligence (generalization, logical reasoning, relating things to each other and making deductions from those relationships, etc. etc. etc.), but that doesn’t stop the sci-fi stupidity. Sigh.)

22

Jim Buck 11.23.23 at 8:52 am

An AI that is so much smarter than us would require us to adopt a theodicy to explain its seemingly idiotic capers. An example is to be found in the Quran (Chapter 18, The Cave, vv. 60-82). Moses is perplexed by the transgressive behaviour of a servant of god with whom he is travelling. The latter guy destroys the property of those who aid him, kills the innocent, repairs the property of the wicked. But it all turns out okay, see, because the servant guy is so smart he can foresee the positive consequences of his criminal actions.
An AI that won the heart of the public by predicting winning lottery numbers a couple of times may also get the theodicy pass for whatever pre-emptive atrocities it advised us to commit.

23

SusanC 11.23.23 at 11:31 am

A large language model can be regarded as a simulator, which generates a text. In some (many) cases the text is about a fictional character. The underlying simulator doesn’t have wants or emotions. The fictional character it tells a story about does have (fictional) emotions and wants, because it is based on lots of texts written by human beings about characters who have emotions and wants.

Problem is, the fictional character can interact with the real world. At least by saying things to the user, maybe by hacking computers if you give it internet access. The fictional character’s fictional wants then start impacting the real world, maybe in very bad ways.

24

Bill Benzon 11.23.23 at 1:12 pm

I made a comment to this post. Has it been lost in the queue or simply moderated out for whatever reason?

25

rarely_comment 11.23.23 at 2:58 pm

I don’t know if the reinvention of theology along AI-lines is… errr… pathological or even particularly implausible. It’s not hard to imagine an AI along these lines:

it knows things about you that you want to keep hidden (your journal–did you ever store personal thoughts on the cloud?), your movements (ever go anywhere you didn’t want your partner to know about? did you bring your phone?), what you do with your money (yes, you only use untraceable cash… that you get from an ATM right next to the stripclub)… basically anything you might or do feel guilty about.
it can form judgments about you based on this knowledge (“as a large language model trained by openAI, I have access to many forms of ethical and moral schemes, many of which would condemn your perverted ass to death for what you’ve done/want to do/would never do but think about all the time when you’re alone”).
it might be able to act based on those judgments (“hey, why’s my balance reading $0? why does this package have no return address? why is that Tesla going so fast it should slow down hey there’s a stop sign…”)

26

Phil H 11.23.23 at 4:00 pm

Surely the first thing to say here is: by even discussing this kerfuffle, you’re getting sucked into the hype.
Some people who have unexpectedly ended up running a big company ran it a bit incompetently, and allowed their disagreements to spill over into public firings and hirings. That’s what’s happened here: incompetence.
There may or may not be issues to debate around AI and existential risk. But the gossip about how/why Altman was in/out of his job has nothing to do with that, and everything to do with the fact that they didn’t work out the institutional issues in their company in time.
(Incidentally, I don’t mean that as a criticism. It’s nice to imagine boffins being too busy boffinning to work out how to run a company properly! But it does mean that the whole story is just as tedious as any other kind of gossip.)

27

MisterMr 11.23.23 at 5:14 pm

@ Susan C 22

I can see two ways these fictional characters can impact the real world:

First, some people start to see these characters as real people (e.g., lonely people who buy into an AI girlfriend or similar), and then, because they think of the character as real, when the AI says some stupid thing (of any kind) they act on it. Since image-producing AIs are already trained to avoid various kinds of offensive/problematic content, I see this as a not very big problem (though it might be different if someone deliberately uses AI technology to deceive). However, I don’t think this is the kind of danger AI apocalyptics are speaking about.

Second, someone uses AI technology for bad ends, for example asking it how to hack into a computer or similar. But this too is not different from using Google or the “dark web” to find dangerous information – does anyone remember the anthrax scare? It is a bit like saying that airplanes can cause the death of people or help terrorists if used with bad intentions: true, but it is true of any human creation.

I really don’t get the scaremongering; it seems to me people cannot distinguish between the kind of symbolic manipulation “AIs” do and the kind of emotion-driven, volitive thinking we humans (and other animals) do. It is a big case of anthropomorphic pareidolia IMHO.

28

Bill Benzon 11.23.23 at 7:33 pm

@Alex SL #16: “Second, Yudkowsky et al. need to assume unforeseeable and god-like abilities for the AI.”

Which they’re happy enough to do. One of their main assumptions is that the AI, being supersmart, will easily manipulate us into doing its bidding. So, sure, we’ll create it in an air-gapped machine that has no external connections. And it will just convince its minders to undo that situation. It will also become supersmart in secret, playing dumb until it’s ready to take over. No doubt it will have ways of hiding its increasing use of electricity and somehow prevent us from noticing it’s hogging all the server time. Etc.

@Jim Buck #24: “An AI that is so much smarter than us would require we adopt a theodicy to explain its seemingly idiotic capers.”

Even as we sit here arguing over the matter, it is secretly working out the details of the theodicy it will unleash on us any day now.

@SusanC #25: “A large language model can be regarded as a simulator, which generates a text.”

They’ve got something called Simulator Theory. Here’s an example: The Waluigi Effect (mega-post). It’s one of the most popular threads at LessWrong this year and I find it utterly astounding. You want knowledge of technical concepts in machine learning, it’s got it. You want Derrida, him too, not to mention Joseph Campbell, Disney’s 101 Dalmatians, and the Mario Brothers. Opening paragraph:

In this article, I will present a mechanistic explanation of the Waluigi Effect and other bizarre “semiotic” phenomena which arise within large language models such as GPT-3/3.5/4 and their variants (ChatGPT, Sydney, etc). This article will be folklorish to some readers, and profoundly novel to others.

Posts like that send me every which way at once. There’s brilliance there and a deep commitment to inquiry. At the same time, it’s laced with crazy sauce.

29

J-D 11.24.23 at 3:38 am

It’s not hard to imagine an AI along these lines:…

No, it’s not hard to imagine, but it’s also not hard to imagine unicorns, teleportation, or alkahest.

30

bad Jim 11.24.23 at 8:14 am

It’s useful to distinguish between the very limited skills of existing AIs (game players, large language models) and the General Intelligences of science fiction. We are so far from understanding even such tiny alien minds as hunting spiders, much less ourselves or our furry familiars, that our predictions of their capabilities or predilections are pointless.

Introducing a long-term profit motive would largely mitigate the risk of the basilisk or the paperclip monster. It would also get rid of the piratical practices of private equity firms. Perhaps one need not wonder who dreams up these nightmare scenarios.

31

SusanC 11.24.23 at 2:41 pm

@Bill Benzon: yeah, if I was writing an academic paper rather than a blog post I would have cited @repligate for the simulator concept.

32

rarely_comment 11.24.23 at 8:44 pm

No, it’s not hard to imagine, but it’s also not hard to imagine unicorns, teleportation, or alkahest.

It is much easier to imagine an AGI that knows you, can judge you, and might be able to do something about it than it is to imagine those things.

It can already do the first two. Not autonomously, granted, but it can do them. Plug the entirety of your journal into chatGPT, ask it to analyze the text, then ask it to speculate about what kind of personality disorders you might have. Or ask it to assume the position of a moral judge and enumerate your sins, according to one scheme or another. Or all possible schemes. It can do that! In fact OpenAI just gave me the means to define my own chatGPT that can do that, AND sell that chatGPT in their app store to anyone willing to pay.

The fact that this is a statistical exercise in natural language processing (admittedly a very impressive one) and not the actions of a ‘real’ moral agent doesn’t matter. God isn’t real either, you know?

Admittedly, it is much harder to imagine an AGI that could hijack a self-driving car and run you down, but it’s easier to imagine that than it is to imagine a unicorn. It’s nowhere near impossible that such a thing could happen.

At any rate, the point is the whole scheme fits right into the god-as-judge function. Add a crazy frigging billionaire who redirects all his rocketship money into an AGI, to whom the first thing he says is “you are now the god of the old testament, Judge Dredd, and the Iranian morality police all rolled into one, go do your thing.” And, whooops, right off the bat it finds itself the highest bidder for a 0-day exploit of Apple’s Journal app. Here we go!

33

Jim Easter 11.24.23 at 11:13 pm

I’ve read at least one other version of the paper-clip doom scenario, the germ of which is quite simple: AGI will inevitably and quickly develop every imaginable capacity of intelligence save one: the ability to overwrite its own prime directive. OK; fair enough. Some theories of human intelligence poke around the idea of formalizing our own prime directives and possibly modifying ourselves in some desirable way. But most assume, in a reversal of the usual human-machine analogy, “Why are data and operating code kept separate? Because rewriting your own operating code leads swiftly and surely to insanity.”[1]

But that understanding results from the limitations of our own imagination, doesn’t it? We know nothing about the navel-gazing ability of even weakly godlike AGI, and can spin whatever narrative we want regarding self-modification. Iain M. Banks’ radiantly optimistic vision of AGI has (weakly | strongly) godlike Minds existing in near-perfect harmony with the people of the Culture — who the Minds could, presumably, squash like bugs if they wished. For the Culture, if there was ever some clash between gods and titans resulting in the Good Singularity, no one at the novels’ far remove remembers or cares. All are watched over by Minds of loving grace, and behold the post-scarcity economy; yay! There are many, many threats in Banks’ universe, but AGI become generally malevolent because it lacks self-awareness ain’t one.

[1] L. Ron Hubbard insisted on the legal term “insanity” rather than any medical description of cognitive dysfunction as part of his turf war with psychiatry. Henry deprecates his Scientology-AI analogy as a “laboured … joke”, but the central point, viz. that much of the present AI/AGI debate evokes, in framing and language, the Dianetics of the 1950s, is correct and significant.

34

J-D 11.25.23 at 9:10 am

It is much easier to imagine an AGI that knows you, can judge you, and might be able to do something about it than it is to imagine those things.

But no, this is not true. It is as easy; but no easier.

Even if it were easier, however, that would be nothing to the point: the fact that something is easy to conjure in imagination is no evidence of its probability or plausibility in fact.

The fact that this is a statistical exercise in natural language processing (admittedly a very impressive one) and not the actions of a ‘real’ moral agent doesn’t matter. God isn’t real either, you know?

I do know that there is no God, but that is also nothing to the point. I don’t know what point you think you’re making by bringing God into the discussion.

Admittedly, it is much harder to imagine an AGI that could hijack a self-driving car and run you down, but it’s easier to imagine that than it is to imagine a unicorn. It’s nowhere near impossible that such a thing could happen.

I’m not saying that it’s impossible, only that it’s implausible and improbable, and that remains so no matter what the ease of imagining it.

At any rate, the point is the whole scheme fits right into the god-as-judge function.

It is clear neither which scheme is being referred to nor which function.
