LLMs Dream of Electric Prose: An Interview About Writing With ChatGPT
In late November 2022, OpenAI launched ChatGPT, a fine-tuned, chatbot-style implementation of its latest large language model (LLM), to the general public. It instantly became popular, and for good reason. ChatGPT can answer complex questions, compose entire articles, and do homework, which explains how it reached over one million users in five days. People are using it to write children’s books, to code, and even as their very own personal assistant. While all of this is exciting, some writers still feel a little apprehension about what it means for the future of writing.
In the days before Christmas, I spent an afternoon playing around with ChatGPT. I asked it to devise a menu for my holiday dinner, to share its thoughts on how the world will end, and to write the first paragraph of this article. (Note: We edited that paragraph a bit; every writer makes mistakes.) Its answers were slightly delayed (it seemed like all one million users were on ChatGPT at the same time) but surprisingly funny, eerily human-like, and completely soulless. I got a full menu featuring every traditional Christmas ingredient, but no matter how much I rephrased my prompts, I could not get an introduction that sounded anything but, well, robotic.
Despite its limitations, I still think ChatGPT is a decent conversational partner and a capable writer’s assistant. So I decided to ask ChatGPT itself about writing and what the future looks like for human writers.
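(If you’d rather conduct your own interview from code instead of the browser, a minimal sketch is below. It assumes the `openai` Python package and an `OPENAI_API_KEY` environment variable; ChatGPT itself is a web interface, so the model name here is a stand-in for a ChatGPT-class chat model.)

```python
# A minimal sketch of posing one interview question to an OpenAI model
# from code rather than the browser. Assumes the `openai` Python package
# (pre-1.0 interface) and an OPENAI_API_KEY environment variable; the
# model name is an assumption, standing in for any chat-capable model.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Do you think that experiencing emotions is integral to writing?",
        }
    ],
)

# Print the model's reply, i.e., the interviewee's answer.
print(response.choices[0].message.content)
```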
What Does ChatGPT Have to Say for Itself?
How are you feeling right now?
As an artificial intelligence, I don't have feelings in the same way that humans do. I don't experience emotions or physical sensations. I am simply a tool that has been designed to assist users with tasks and answer questions to the best of my ability. My main goal is to provide helpful and accurate information to those who interact with me.
Do you think that experiencing emotions is integral to writing?
Emotions can certainly play a role in writing, as they can add depth and authenticity to a piece of writing. However, it is not necessary for a writer to feel emotions in order to write about them. Many writers draw upon their own personal experiences and emotions when crafting a piece of writing, but they can also draw upon their imagination and research to write about emotions and experiences that they have not personally felt.
In fact, some writers find it easier to write about certain emotions or experiences if they are not currently feeling them, as it allows them to approach the topic with a certain level of detachment and objectivity. Ultimately, the role of emotions in writing depends on the individual writer and their goals for the piece.
What do you draw on when you’re writing?
As a language model, I don't have personal experiences or emotions, so I don't draw upon those when I write. Instead, I rely on the information that has been fed into me during my training to generate responses and provide information to users.
When generating a response, I analyze the input that I receive and use my understanding of language and the information that I have learned to generate a relevant and accurate response. I don't have personal preferences or biases, so I strive to provide neutral and unbiased responses to the best of my ability. My main goal is to assist users by providing helpful and accurate information.
Does that give you an advantage over human writers?
As a language model, I don't have personal experiences or emotions, which means that I don't have the same range of life experiences and insights that human writers have. This can make it more difficult for me to connect with readers on an emotional level and to create writing that is relatable or authentic in the same way that writing by a human might be.
On the other hand, I don't have biases or personal preferences, which means that I can provide neutral and unbiased responses to questions and information requests.
Overall, it is difficult to say whether being a language model gives me an advantage or disadvantage compared to human writers, as it really depends on the specific context and the goals of the writing. In some cases, my lack of personal experiences and emotions might be seen as a disadvantage, while in other cases, my ability to provide neutral and accurate information might be seen as an advantage.
People who use you have noticed some bias in your responses, especially around race and gender.
As an artificial intelligence, I do not have personal opinions or biases, and I do not seek to promote any particular perspective or agenda. However, it is possible that the data I have been trained on may contain biases that are present in the language and information that was used to create it. This can sometimes result in biased responses when I am asked about certain topics. My creators at OpenAI are aware of this potential issue and are working to mitigate it by training me on a diverse and balanced set of data sources.
(I got the same response when I asked whether that affects its ability to be a neutral and unbiased writer.)
Since you’re better than human writers at some aspects of writing, should we be scared?
It is true that AI tools like myself have certain strengths and capabilities that can be useful in the writing process. However, it is important to note that AI tools like myself are not able to fully replace human writers, as we do not have the same range of life experiences, emotions, and creativity that humans do.
Human writers bring their own unique perspective and voice to their writing, which is something that AI tools like myself are not able to replicate. Additionally, human writers have the ability to connect with readers on an emotional level and to create relatable and authentic content in a way that AI tools are not able to, and will not be able to in the near future.
What about the far future?
It is difficult to predict exactly what the future will hold, and it is possible that AI tools may become more advanced and capable in the future. Furthermore, even as AI tools become more advanced, it is likely that the unique perspective and voice of human writers will continue to be valued by readers and audiences. Therefore, it is unlikely that AI tools will completely replace human writers in the far future.
That being said, it is always possible that the role of AI tools in the writing process may change and evolve over time, and it is important for writers to be aware of these developments and to consider how they may be able to use AI tools to assist with their work.
Explaining Everything, Disclaiming Itself
One thing I noticed during my conversations with ChatGPT was how the answers to subjective or somewhat difficult questions almost always came with a caveat. When I asked about emotions in writing, answers started with “As a language model…” and answers to questions about bias began with “As an artificial intelligence…” This aura of ethereal detachment feels like an intentional decision by ChatGPT’s designers: interspersing these robotic admissions ensures human readers treat its answers with some skepticism. For example, while answering one of my questions, ChatGPT explained that because it was an artificial intelligence, it could present facts and figures without any bias. This is a very bold claim. I then brought up the known issues of gender and racial bias in AI, and, all too predictably, my interview subject fell back on its “I’m just an artificial intelligence and only repeat what has been fed to me” stance. This makes me think ChatGPT may have a future in public relations, if not in journalism.
On one hand, being repeatedly told that ChatGPT is just a language model, with the limitations of one, helped me lower my expectations when asking these contentious questions. I knew the answers I got would probably read more like a PR statement than an actual answer to my question. And honestly, I prefer getting these caveats and warnings over a declarative answer to the subjective, human-centric questions that AI is just not fully equipped to answer. In my opinion, it is much safer to know upfront what you’re getting. While that may mean adjusting your expectations, it’s just one of the drawbacks of philosophizing with ChatGPT.
Still, I can’t help feeling a bit bamboozled by ChatGPT. While it is true that its responses are limited to the data it has been fed, and as such might contain some bias or fall short on the more subjective questions, I still expected it to at least stand by its answers, even the difficult ones. Replying with those caveats does not in any way absolve ChatGPT of its responsibility to be the unbiased model that OpenAI says it is. If anything, it emphasizes how restricted ChatGPT actually is and how much work still needs to be done.
If you have any feedback about this post, or anything else around Deepgram, we'd love to hear from you. Please let us know in our GitHub discussions.