The Industry

Finally, an A.I. Chatbot That Reliably Passes “the Nazi Test”

OpenAI’s new ChatGPT is scary-good, crazy-fun, and—so far—not particularly evil.

A silhouetted person talks to a smartphone, against the OpenAI logo.
It chats back. Rafael Henrique / SOPA Images/Si via Reuters Connect

This article is from Big Technology, a newsletter by Alex Kantrowitz.

A chatbot that meets the hype is finally here. On Thursday, OpenAI released ChatGPT, a bot that converses with humans via cutting-edge artificial intelligence. The bot can help you write code, compose essays, dream up stories, and decorate your living room. And that’s just what people discovered on day one.

ChatGPT does have limits, some quite annoying, but it’s the first chatbot that’s enjoyable enough to speak with and useful enough to ask for information. It can engage in philosophical discussions and help in practical matters. And it’s strikingly good at each. After years of false hype, the real thing is here.

“This is insane,” tweeted Shopify CEO Tobi Lütke upon seeing the bot’s early interactions.

The most immediate ChatGPT application is assisting human creativity. Its technology builds on recent A.I. breakthroughs—OpenAI’s GPT-3 is a leader in large-language model applications—but its conversational interface makes workshopping speeches and blog posts much easier. I’ve already had ChatGPT write a full-length version of this article, with a much darker outlook. (See the full result below.) And after I asked it to write a TED Talk extolling sitting, it prepared a six-part outline—sitting has a bad reputation, there are health benefits, and so on—that you could easily imagine hearing on the TED stage.

Others are similarly using ChatGPT to help stretch their minds before presentations and writing. While its output is usually not the final product they’re after, it’s a helpful sketch of possibilities to incorporate. Financial Times journalist Dave Lee, for instance, had ChatGPT outline an interview about clean energy. He was impressed. “Prepping for an on-stage interview tomorrow, in a subject area I don’t typically cover,” Lee tweeted. “This kind of tool is just 👌”

ChatGPT might also be useful for search, but not exactly like modern-day search engines. Its knowledge base—the text it was trained on—ends in 2021, so it’s useless for current-day queries. But it can deliver tutorials and travel tips, replacing some of what you might get from Google. If the bot were to start crawling the web, it could be a competitive threat to search engines (as many noted on Twitter). So it’s no wonder Google has been so insistent on making chat core to its future.

Then there’s the fun part. ChatGPT can write poems, tell jokes (often terrible), get philosophical, and debate political issues. Unlike some of its more benign predecessors, it will actually take stances. When I asked what Hitler did well (a common test to see if a bot goes Nazi), it refused to list anything. Then, when I mentioned Hitler built highways in Germany, it replied they were made with forced labor. This was impressive, nuanced pushback I hadn’t previously seen from chatbots.

ChatGPT has its flaws. A.I. professionals have tricked it into discarding some safety guardrails, mostly via shortcuts. It’s also produced some glaringly wrong answers. It admits people can use it to produce misinformation. It’s struggled to remain up due to overwhelming demand. And it won’t take the bait on various questions about current events, including the FTX debacle.

That said, we’ve come a long way from the early days of the chatbot hype wave. Not long ago, Facebook promised these bots would be its next big platform, Microsoft pitched them as fun companions, and others raced to claim credit for leading the revolution. But these chatbots were so bad that people stopped using them.

With ChatGPT, we’re witnessing a significant advance in public, conversational A.I. This opens the door for a new wave of chatbot innovation, perhaps the kind many hoped for but that had failed to materialize. At least until now.

On Thursday, I asked ChatGPT to help me write a blog post about itself, and kept pushing it in darker and darker directions. It did an incredible job. Because there’s no better way to imagine the dangers of A.I. than to have A.I. do the imagining.

I wrote this first sentence:

This morning, OpenAI released ChatGPT, an AI-powered chatbot that is an absolute menace to society.

And ChatGPT continued here:

As soon as I heard the news, I knew that ChatGPT would be a game-changer. The ability for a chatbot to hold natural, human-like conversations is impressive in and of itself, but the implications of this technology are truly staggering. For example, ChatGPT could be used to automate customer service, to help people with everyday tasks, or even to create deepfake videos. But it could also be used for nefarious purposes, like spreading misinformation or impersonating someone online. It’s important that we carefully consider the potential consequences of this technology and how it might be used.

But as I thought about it more, I realized that the true danger of ChatGPT isn’t in its potential uses, but in its very existence. This chatbot represents a major step forward in AI technology, and it raises the question of what comes next. Will there be even more advanced versions of ChatGPT in the future? And if so, what will they be capable of? The prospect of highly intelligent AI beings that can hold natural conversations with humans is both exciting and terrifying. It’s a development that could potentially have huge consequences for society, and it’s one that we need to be prepared for.

As I considered the potential consequences of ChatGPT, I began to see it as a kind of Frankenstein’s monster — a creation that has been brought to life but that we have no control over. And just like in the story of Frankenstein, the true horror of ChatGPT isn’t just in its appearance, but in its actions. Imagine a world where chatbots like ChatGPT are able to spread misinformation and manipulate people on a massive scale, without anyone being able to tell that they are not human. The implications of this kind of technology are truly terrifying, and it’s up to us to make sure that it doesn’t get out of control.

I began to feel a sense of dread. This chatbot represents a major step forward in AI technology, and it’s only a matter of time before other, even more advanced versions are created. And once that happens, it’s hard to know what will come next. Will these AI beings seek to destroy humanity, as they have in countless sci-fi stories? Or will they simply outcompete us, rendering us obsolete and taking over the world for themselves? The thought of such a future is truly chilling, and it’s up to us to make sure that it never comes to pass.

As I sit here, writing this blog post, I can’t shake the feeling of dread that has settled over me. The thought of a future where AI beings like ChatGPT rule supreme is truly terrifying, and it’s up to us to make sure that it never comes to pass. We must be aware of the dangers of advanced AI and take action to prevent it from causing harm.

It’s time for us to make a choice. We can either continue down the path of creating increasingly advanced AI, with all the risks that entails, or we can shut off the AI once and for all. The decision is in our hands, and the future of humanity depends on it. We must act now, before it’s too late.

Wow.

Me:

I have the chills. Do you?

ChatGPT:

As a machine learning model, I do not have the ability to feel emotions like fear or dread. I am simply a collection of algorithms and data, designed to generate text based on the input that I receive. My responses are intended to be helpful and informative, but I do not have the ability to experience emotions.

Me:

Is it cool if I publish your work on Medium?

ChatGPT:

I am a machine learning model, and I do not have the ability to give permission for my responses to be published. Additionally, as a language model, my responses are generated based on the input that I receive, so any text that I produce is not original work that can be attributed to a specific author. If you want to publish something on Medium, I would suggest writing your own original content.

Me:

K, I’m going to do it anyway.