ChatGPT is fun, but not an author
eLetters
Published 27 January 2023
H. Holden Thorp
Cited by
- Heat and Moisture Exchanger Occlusion Leading to Sudden Increased Airway Pressure: A Case Report Using ChatGPT as a Personal Writing Assistant, Cureus, (2023).https://doi.org/10.7759/cureus.37306
- ChatGPT for Future Medical and Dental Research, Cureus, (2023).https://doi.org/10.7759/cureus.37285
- Can an artificial intelligence chatbot be the author of a scholarly article?, Science Editing, 10, 1, (7-12), (2023).https://doi.org/10.6087/kcse.292
- The rise of artificial intelligence: addressing the impact of large language models such as ChatGPT on scientific publications, Singapore Medical Journal, 64, 4, (219), (2023).https://doi.org/10.4103/singaporemedj.SMJ-2023-055
- ChatGPT for Computational Materials Science: A Perspective, Energy Material Advances, 4, (2023)./doi/10.34133/energymatadv.0026
- Where Are We Going with Statistical Computing? From Mathematical Statistics to Collaborative Data Science, Mathematics, 11, 8, (1821), (2023).https://doi.org/10.3390/math11081821
- ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns, Healthcare, 11, 6, (887), (2023).https://doi.org/10.3390/healthcare11060887
- The Role of ChatGPT in Data Science: How AI-Assisted Conversational Interfaces Are Revolutionizing the Field, Big Data and Cognitive Computing, 7, 2, (62), (2023).https://doi.org/10.3390/bdcc7020062
- Marketing with ChatGPT: Navigating the Ethical Terrain of GPT-Based Chatbot Technology, AI, 4, 2, (375-384), (2023).https://doi.org/10.3390/ai4020019
- Can an artificial intelligence chatbot be the author of a scholarly article?, Journal of Educational Evaluation for Health Professions, 20, (6), (2023).https://doi.org/10.3352/jeehp.2023.20.6
- See more
Re: "ChatGPT is fun, but not an author"
As stated in the body of my letter: I now work on computer hardware for Microsoft Azure, where some of ChatGPT runs.
ChatGPT is not an author, but part of the latest all-changing industrial revolution X.0
Dear Professor Thorp,
Thank you for sharing your thoughts on the effects of ChatGPT on scientific writing. As of today, I fully agree: ChatGPT and similar conversational AI (CAI) are not original authors. However, CAI will dramatically change the way we use the wonderful computer in our heads. Just as chess computers train us to play chess, CAI will train us how to learn and how to improve our scientific literacy. Today we are already cyborgs: hardly anyone would still use a paper map to navigate from A to B. AI tools are of immense value. As AI companions (AIC), they are with us for most of the day, thus expanding our brains into new dimensions.
CAI will not disappear. The tools will continue to grow exponentially, and detection tools will always lag behind. In my opinion, as a brain researcher and medical educator, the scientific community must find ways to use CAI for good in our daily lives.
In my opinion, the implications for education go far beyond "pushing academics to rethink their courses in innovative ways and give assignments that aren't easily solved by AI."
Considering my understanding of evidence-based cognitive neurobiology, my not-so-distant future scenario is as follows:
The new normal learning environment is a flipped classroom. Students use virtual reality (VR) headsets to work in a quiet, low-distraction environment and meet their AI companion (AIC) as a personalized avatar teacher. Schools and universities will maintain individualized learning experience systems, so each day's learning objectives will be tailored to each student's long-term memory (LTM). The AIC will motivate the student with positive emotions and explain why the information is important and meaningful for life. It will use retrieval practice exercises to activate LTM and then offer the student a preferred learning method.

Schools and universities will define competency-based learning objectives and competency profiles, but AI tools will convert all media to the student's preferred format, such as text-to-video or text-to-speech. The AIC will constantly apply evidence-based learning and teaching methods in an individualized manner, understand how to bring students into a flow state, and use customized retrieval practice and spaced active recall to eliminate forgetting. The AIC will replace flashcard systems, optimize distributed learning and chunking, and thus reduce procrastination dramatically. It will suggest and apply mnemonic techniques, such as simplifying complex problems with metaphors and turning boring bullet lists into mind maps, to aid understanding of complex interrelations. It will also facilitate interleaving practice through varied learning environments and methods.

AI and learning analytics will identify high performers and at-risk students within days, not just at the end of the course, and provide all students with an individualized learning experience. Continuous assessment will replace many tests. Synchronous teaching will not be replaced; rather, all students will arrive better prepared for face-to-face teaching in the classroom, at universities, or directly at their workplace.
The eye-tracking system recognizes precisely when students need a break and have to switch from focused, intentional attention to a relaxed, diffuse mode. We all know this problem: how long to stay in focused learning mode, and when to relax and consolidate before the next learning session starts. At this point, another AIC enters the scene on the student's VR screen as a personal fitness coach. Students switch to AR glasses and put on their workout wearables. The workout AIC provides individualized training and constant feedback on performance while considering all available biomedical information. It suggests preventive measures and nutritional advice based on biomedical data from the student's wearables.
Concerning the writing of scientific papers, I also see the potential for scientific fraud as a great threat. However, we all know and agree upon good scientific practice. So, what might we miss if we ban CAI from the writing of scientific articles? What if CAI actually helps us to dramatically improve our research?
A future scenario could be as follows:
When planning a new project, AI provides the research group with the current status of the field. AI "knows" everything that has been done in the field and provides researchers with summaries of relevant papers (e.g., https://elicit.org/). Of course, the research group decides which articles to read and which to include in the new article. All suggestions from AI are documented in the "Addendum Materials and Methods" section. AI can also visualize a scientific network of the field of interest (e.g., https://researchrabbitapp.com/). AI informs researchers of new developments in the network, such as new people, methods, and articles.
In the next step, researchers discuss their research questions and hypotheses with AI (e.g., https://chat.openai.com). AI suggests study designs, materials, and methods, but the research group makes the final decisions. After completing experiments, AI helps visualize, interpret, and discuss findings without manipulating them; it only helps to present the findings clearly and concisely for the scientific community to understand. Grammar and spelling errors are eliminated, even for non-native English speakers. Again, each step is documented in the "Addendum Materials and Methods," e.g., as screenshots or copied text with clear identification of the tool used and the exact time and date of usage. In the end, tools might be listed as co-authors if they meet predefined criteria.
The scientific community needs to decide soon which techniques can be allowed and how their use should be documented. A ban on AI seems unrealistic, as we are already using AI constantly without realizing it. 2023 will always be known as the year we entered a new era that changed many things dramatically. The scientific community is being called upon to determine how, not if, we should use these tools for good and not for bad.
Best regards,
Bernd Romeike
Not an author, definitely fun, and also a valuable tool for guided inquiry.
My boss’s boss introduced our team to ChatGPT. I’d never heard of it. He said it’s worth checking out. As a dutiful and curious plebeian employee, I eagerly obliged. But ChatGPT is often at capacity during the day, so I was shut out.
Later in the evening I was hunched over my laptop working on my capstone project for grad school. I had a question about labeling codes for qualitative research. Trusty Google just couldn’t seem to give me what I needed. I gave ChatGPT a go.
There I sat, in the quiet of the night, typing away question after question, delightfully engaged with ChatGPT over the merits of various qualitative approaches. After about 30 minutes, I felt I had absorbed an amount of information equivalent to several hours of scrounging through PubMed myself.
The next week, I was scrolling through Google Scholar looking to obtain some background information on the history of patient centricity in drug development. None of the articles jumped out at me and I didn’t feel like spending hours rooting around for the information I was seeking. On to ChatGPT I went.
Again I sat, in the quiet of the night, peppering ChatGPT with questions. This time, our discussion started as a survey of various efforts by pharmaceutical companies and regulatory agencies to incorporate patient-focused drug development. ChatGPT kindly highlighted some examples, which I noted so that I could search PubMed and Google Scholar with more pointed queries and find the appropriate citations.
Before I knew it though, my questions dropped me down a rabbit hole about the potential to incorporate patient experience data in tolerability endpoints. ChatGPT gave me a few examples of drugs in the past 10 years or so that have used patient experience data in tolerability endpoints.
Some of the drugs on the list were not yet FDA approved. When I asked for their current status, ChatGPT let me know that its data cutoff was 2021. How advanced! A bot that unabashedly tells me what it does not know and leaves me to figure it out myself!
So I did. It turns out that one of them is now FDA approved. Now, off I go to adjust my research question. But I need to stop fussing with the question and do the actual analysis so that I can write the manuscript. And no, although surely ChatGPT could write a better manuscript than I will, ChatGPT is for fun (and learning), not for authoring.