Editorial

ChatGPT is fun, but not an author

Science
26 Jan 2023
Vol 379, Issue 6630
p. 313
PHOTO: CAMERON DAVIDSON
In less than 2 months, the artificial intelligence (AI) program ChatGPT has become a cultural sensation. It is freely accessible through a web portal created by the tool’s developer, OpenAI. The program—which automatically creates text based on written prompts—is so popular that it’s likely to be “at capacity right now” if you attempt to use it. When you do get through, ChatGPT provides endless entertainment. I asked it to rewrite the first scene of the classic American play Death of a Salesman, but to feature Princess Elsa from the animated movie Frozen as the main character instead of Willy Loman. The output was an amusing conversation in which Elsa—who has come home from a tough day of selling—is told by her son Happy, “Come on, Mom. You’re Elsa from Frozen. You have ice powers and you’re a queen. You’re unstoppable.” Mash-ups like this are certainly fun, but there are serious implications for generative AI programs like ChatGPT in science and academia.
ChatGPT (Generative Pretrained Transformer) was developed with a technique called Reinforcement Learning from Human Feedback, which trains the language model to be highly conversational. Nevertheless, as the website states, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Several examples show the glaring mistakes it can make, including referencing a scientific study that does not exist.
Many concerns relate to how ChatGPT will change education. It certainly can write essays about a range of topics. I gave it both an exam and a final project that I had assigned students in a class I taught on science denial at George Washington University. It did well finding factual answers, but the scholarly writing still has a long way to go. If anything, the implications for education may push academics to rethink their courses in innovative ways and give assignments that aren’t easily solved by AI. That could be for the best.
More worrisome are the effects of ChatGPT on the writing of scientific papers. In a recent study, abstracts created by ChatGPT were submitted to academic reviewers, who caught only 63% of these fakes. That’s a lot of AI-generated text that could find its way into the literature soon.
For years, authors at the Science family of journals have signed a license certifying that “the Work is an original” (italics added). For the Science journals, the word “original” is enough to signal that text written by ChatGPT is not acceptable: It is, after all, plagiarized from ChatGPT. Further, our authors certify that they themselves are accountable for the research in the paper. Still, to make matters explicit, we are now updating our license and Editorial Policies to specify that text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works. Of course, there are many legitimate data sets (not the text of a paper) that are intentionally generated by AI in research papers, and these are not covered by this change.
Most instances of scientific misconduct that the Science journals deal with occur because of an inadequate amount of human attention. Shortcuts are taken by using image manipulation programs such as Photoshop or by copying text from other sources. Altered images and copied text may go unnoticed because they receive too little scrutiny from each of the authors. On our end, errors happen when editors and reviewers don’t listen to their inner skeptic or when we fail to focus sharply on the details. At a time when trust in science is eroding, it’s important for scientists to recommit to careful and meticulous attention to details.
The scientific record is ultimately one of the human endeavor of struggling with important questions. Machines play an important role, but as tools for the people posing the hypotheses, designing the experiments, and making sense of the results. Ultimately the product must come from—and be expressed by—the wonderful computer in our heads.


Authors

H. Holden Thorp, Editor-in-Chief, Science journals. [email protected]; @hholdenthorp

Published in print: 27 January 2023

