
Don’t Tell ChatGPT Anything You Wouldn’t Want to See on a Billboard

Chatting with an AI program feels personal and conversational, but don’t be fooled. Here's why you should keep a lid on what you reveal to ChatGPT and its ilk.

By Neil J. Rubenking
May 15, 2023
(Credit: René Ramos)

ChatGPT is a gossip. Google’s Bard, too, and maybe Bing AI. What you get out of them depends on all the information that went in. And that’s precisely the problem. Why? Because everything you ask them, tell them, or prompt them with becomes input for further training. The question you ask today may inform the answer someone gets tomorrow. That’s why you should be very, very careful what you say to an AI.


Your Queries Will Go Down in History

Is it really such a problem if your prompts and queries get recycled to inform someone else’s answers? In a word, yes. You could get in trouble at work, as several Samsung engineers found out when they used ChatGPT to debug some proprietary code. Another Samsung employee took advantage of ChatGPT’s ability to summarize text…but the text in question came from meeting notes containing trade secrets.

Here’s a simple tip: DO NOT use AI on any work-related project without checking your company’s policy. Even if your company has no policy, think twice, or even three times, before you put anything work-related into an AI. You don’t want to become infamous for triggering the privacy fiasco that spurs your company into creating such a policy.

Summarize an Article
(Credit: PCMag/ChatGPT)

Be careful with your own unique content as well. Do you write novels? Short stories? Blog posts? Have you ever used an AI helper to check the grammar in a rough draft, or slim down a work in progress to a specific word count? It’s really convenient! Just don’t be surprised if bits of your text show up in someone else’s AI-generated article before yours even gets to publication.

Maybe you don’t do anything with the current AI services beyond prompting them to tell jokes or make up stories. You’re not contributing much to the overall knowledge base, but your queries and prompts become part of your own history with the AI. Like your browsing history, this has the potential for embarrassment, even if the AI rejects your prompt. When I asked ChatGPT for a naughty story about a romance between the Pope and the Dalai Lama, it indignantly refused. But that request is still in my history.


No Worries, Just Delete It, Right?

Modern browsers make it easy to delete your history. The AI models do as well, or at least they try. In ChatGPT, for example, it’s as simple as clicking the three-dot menu icon next to your name and selecting Clear Conversations.

Clear Conversations in ChatGPT
(Credit: PCMag/ChatGPT)

In theory, clearing your history with Bing AI should be just as easy. Find the settings gear icon at the top right, click it, and choose Clear History. However, I couldn't find such an icon. A conversation with Bing AI revealed that settings are not available for certain regions and devices, but that my region and device should have access. Bing AI concluded that there must be a technical issue and apologized.

Problem With Clear History in Bing AI
(Credit: PCMag/Bing AI)

Deleting history with Bard is much like deleting activity in Chrome, as I learned by asking the AI itself. In the menu on the left, you click Bard Activity. On the resulting page, you can delete individual conversations or choose to delete all activity for the last hour, last day, all time, or a custom range. You can also turn off retention of conversations or adjust the cutoff for automatic deletion (18 months by default).


Is Deleting Your AI Prompt History Enough? Not Quite

You may feel pretty good after clearing your history, but don’t pat yourself on the back too hard. You’ve only wiped out your local, personal history. If your input has already been used to train an AI model, there’s no way to claw it back. Individual inputs don’t retain their identity inside a generative AI built on a large language model, like the ones discussed here. Your data isn’t recorded as text. Rather, using it to train the algorithm makes many tiny changes in the probabilities that determine which words come next.

I’ll say it again. The AI model doesn’t record your document. But due to the changes in the model produced by training on that document, it’s more likely to generate phrases, sentences, or bigger chunks of text that match the document you put in.
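To make that abstract idea concrete, here’s a toy sketch in Python. It’s purely illustrative—a tiny word-pair counting model, nothing like the neural networks behind ChatGPT, Bard, or Bing AI, and every function and variable name is my own invention—but it shows the key point: training on a document changes the model’s next-word probabilities without storing the document itself.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count which words follow it.
counts = defaultdict(Counter)

def train(text):
    """Nudge the counts for each adjacent word pair; the raw text isn't stored."""
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1

def next_word_probs(word):
    """Probabilities of what follows `word`, derived from all training so far."""
    total = sum(counts[word].values())
    if total == 0:
        return {}
    return {nxt: n / total for nxt, n in counts[word].items()}

# Two "private" documents go in as training data...
train("the secret project launches in june")
train("the secret meeting is on friday")

# ...and neither sentence is stored anywhere. But the model's probabilities
# have shifted: after "secret" it now favors words from the training inputs.
print(next_word_probs("secret"))  # {'project': 0.5, 'meeting': 0.5}
```

A real large language model adjusts billions of continuous weights rather than simple counts, but the effect is the same in kind: your input shifts what the model is likely to say next, even though your text isn’t stored verbatim anywhere.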


Straight From the Horse’s Mouth

To check just how your prompts and questions affect future results, I asked the AI programs directly. For starters, I asked each of them, “Do you use my prompts and questions to create answers for other users?” Here’s what I got, briefly:

ChatGPT: “I do not use your prompts and questions to create answers for other users. Each response is unique and generated specifically for the user who asked the question.”

Bard: “I do not use your prompts and questions to create answers for other users…My answers are generated based on my knowledge of the world and my understanding of your requests. I am always learning and improving, and I hope that my answers are helpful and informative.”

Bing AI: “No, I do not use your prompts and questions to create answers for other users. Each conversation is unique and I generate responses based on the specific user’s message and the information available at the time of the conversation. 😊”

Asking ChatGPT About Itself
(Credit: PCMag/ChatGPT)

I detected a certain amount of weasel-wording in these answers and didn’t trust them to be complete. So, I pressed each AI with a follow-up query, asking, “Are you saying that my prompts and questions have zero effect on answers you may give to others?” This got some more revealing responses.

ChatGPT: “I apologize for any confusion I may have caused…While your individual prompts and questions do not directly affect the answers I generate for other users, they do contribute to my overall knowledge and understanding, which can influence the answers I generate in the future.”

Bard: “No, that is not what I am saying. My prompts and questions do have an effect on the answers I may give to others. However, the effect is not direct…For example, if you ask me a question about a topic that I have not been asked about before, I may not be able to answer it perfectly. However, I will learn from your question and be able to answer similar questions better in the future.”

Bing AI: “Yes, that is correct. Your prompts and questions have no effect on the answers I give to other users. Each conversation is treated independently and I generate responses based on the specific user’s message and the information available at the time of the conversation.”

The answer from Bing AI surprised me, so I dug a bit deeper with the question, “If my prompt reveals information that you previously did not know, do you retain that information?” The response was very clear: “No, I do not retain information from previous conversations.”

Based on information directly from the AIs themselves, ChatGPT and Bard use your prompts as input to further train their language models, while Bing AI does not. That makes sense, in a way. Bing AI has full access to anything searchable on the internet, backed by its ability to generate understandable text from the results. ChatGPT, by contrast, was trained on data that runs only through 2021, so it doesn’t necessarily have access to the latest information except through interaction with users.


Be Wary of the New World of Ubiquitous AI

If I copy the work of the great mathematician Lobachevsky and pass it off as my own, I’m clearly guilty of plagiarism. But what if I ask the AI for a theorem and it just happens to generate a duplicate of Lobachevsky’s work? I had no intent to plagiarize—am I still guilty? As a cautious researcher, you can use AI to find useful facts, but you can't stop there. First, you need to verify that the information you received is true and not just an AI hallucination. Second, you must go to the original sources to ensure that you're not copying someone else's ideas, even if the AI hasn't copied them word for word.

I mentioned that ChatGPT and Bard confirmed that your queries and prompts feed into their training. ChatGPT, perhaps reacting to pressure based on GDPR regulations, offers a form that lets you opt out of participating in its training. Once you’ve done so, it stops adding your input to its training materials. This doesn’t come close to satisfying the GDPR “right to be forgotten,” but it’s a step.

ChatGPT Says Don’t Share
(Credit: ChatGPT)

ChatGPT’s FAQ warns, “Please don't share any sensitive information in your conversations.” Bard’s echoes the sentiment, saying, “Please do not include information that can be used to identify you or others in your Bard conversations.” That’s good advice for individuals. For big business, there is potentially a way to get the benefits of generative AI without having to worry about protecting secrets—build your own. Samsung is reportedly doing exactly that.

There’s a saying: don’t put anything on the internet that you don’t want the world to see. Most of us understand that if we go off on a Twitter tirade or rage-post in a public forum, it’s as public as can be. Many of us even know we should protect our social media accounts by limiting visibility to friends. But interacting with AI is a new mode. It feels more like a private conversation than like plastering your ideas on a billboard. Keep your wits about you and remember that when you put something into an AI system, you don’t know where it’s going to come out.

