In November 2022, San Francisco-based tech company OpenAI unveiled ChatGPT. It works like this: you type a prompt, for example, “Write a blog post about the environmental impacts of COVID-19,” and ChatGPT generates a written piece based on it, complete with scholarly-looking citations if you ask for them. The chatbot was a hit upon its release. People used it to answer millions of questions, and Microsoft announced it would incorporate the chatbot into its search engine, Bing. Though it has only been available for a short time, ChatGPT is already making an impact on the way we write.
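For readers who want to try that prompt-in, text-out exchange from code rather than the chat interface, the sketch below shows roughly what it looks like using OpenAI’s Python client. The model name and setup here are illustrative assumptions, not details from this article.

```python
# Minimal sketch of sending a prompt to an OpenAI chat model and printing the reply.
# Assumes the `openai` Python package (v1.x) is installed and that an API key is
# available in the OPENAI_API_KEY environment variable. The model name below is an
# illustrative assumption; substitute whichever chat model you have access to.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name, for illustration only
    messages=[
        {
            "role": "user",
            "content": "Write a blog post about the environmental impacts of COVID-19.",
        }
    ],
)

# The generated piece comes back as the first choice's message content.
print(response.choices[0].message.content)
```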
ChatGPT and other AI writing bots have the potential to dramatically change how we work, communicate, and express ourselves creatively. However, individuals and institutions have questioned their reliability and raised ethical concerns about their use. Of great concern to many is the use of ChatGPT and other AI writing bots in the classroom. If a student can use AI to help them write an essay, what does that mean for their overall learning experience? A recent article from the Rutgers-Camden magazine explored this issue, noting that educators expressed similar concerns when calculators became commonplace. Rutgers-Camden English professor James Brown, quoted in the article, said that he believes ChatGPT and similar tools can be used to enhance the learning experience. “If AI tools are used with human guidance, there’s no limit to their future applications,” Brown states.
Not everyone shares Brown’s more favorable view of AI in scholarly writing. A few recent cases of AI authorship in scientific publishing have brought the issue to the forefront. In December 2022, a pair of computational biology researchers used GPT-3, a predecessor of ChatGPT, to edit their papers. In another case, ChatGPT was listed as an author on a research paper posted on bioRxiv and as a co-author of an editorial published in Nurse Education in Practice. Some defend these uses of ChatGPT and similar writing bots. However, the software is not perfect. A Nature article from February of this year describes AI chatbots, which are built on large language models (LLMs), as “fundamentally unreliable.” They are capable of producing text that looks reliable, complete with fictitious citations from scholarly publications. This unreliability stems from how LLMs are built: they “learn” by analyzing “enormous databases of online text,” which means they pick up on all the errors, outdated information, and prejudices found in that text. In some cases, the AI can even generate hate speech based on the “education” it received from the database. These issues necessitate human moderation of AI-generated text; publishing the chatbot’s work without any editing risks spreading misinformation.
There are also concerns about AI’s use outside of academia. If ChatGPT can write pretty much anything, from scholarly papers to legal case summaries to emails and marketing copy, how will it impact people’s jobs? Will it replace human writers entirely, or will it be used to “augment” or “improve” work done by humans? Defenders of AI argue that, however sophisticated it becomes, it cannot replace human “original thought” or creativity. However, the recent surge in popularity of AI-generated visual art and creative writing raises major questions about the future of creative work. In 2022, an AI-generated image won first prize in the “emerging digital artists” category at the Colorado State Fair. Earlier this year, the science fiction magazine Clarkesworld had to close submissions after being flooded with hundreds of AI-written short stories. A note on its Submissions page now states, “We will not consider any submissions written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future.”
While AI writing tools have the potential to make everyday tasks easier and streamline the writing process, the technologies themselves are imperfect, and their use in various circumstances raises important ethical concerns. Given the novelty of ChatGPT and other AI chatbots (and artbots), it is impossible at present to know what their true impact will be. What we can be sure of is that AI is here to stay, and so we must figure out how we will live and work with it.