
LLMs: The Rubber Duck Debugger for Creative Work

Rubber duck debugging (or rubberducking) is one of the most rudimentary yet powerful lessons a developer can learn in their programming journey. For those unfamiliar with this quirky concept, it refers to the act of talking through a block of code in plain language, as if you were explaining it to a rubber duck. By doing so, a developer can better conceptualize their approach and work through any sticking points while writing code. Colloquially, the term is used as a metaphor for active methods of debugging, as opposed to passively debugging code in silence and thought.

Rubberducking was one of the first programming lessons instilled in me during my introduction to computer science, and it helped me stay even-keeled when faced with a host of bugs to squash in my code. Sometimes we think much faster than we speak, which leads to missing the simplest of errors. The so-called rubber duck paradigm is a simple yet powerful one that even the most senior developers employ in their workflow. Can this notion be extended to areas besides programming?

The rubber duck paradigm prompts us to explore similar problem-solving methods in other areas, and beyond programming, large language models (LLMs) may have a substantial impact on a range of creative fields.

In fact, we used GPT-4 to write a draft of this very transition, saving me (the author) and my editor at least 10 minutes of brainstorming when we both got stuck.

[Image: Text conversation between my editor and GPT-4]

This is exactly what I mean when I suggest that LLMs are like a rubber duck debugger for creative work: we started with an obstacle, used words to describe the problem, and, rather than imagining a dialog with an inanimate object (e.g., an actual rubber duck), carried on a collaborative discussion with an LLM to at least get inspiration for solving the problem, if not solve it outright.

The Creative AI Revolution

Large language models have recently demonstrated a propensity for accomplishing tasks across many creative mediums. For some background: LLMs are massive deep learning models trained on a huge corpus of data; they take in a textual prompt from a user and generate original outputs based on that prompt. They also possess emergent abilities, meaning that new, sometimes unexpected capabilities appear as these deep neural networks grow in size.

To say the least, these models have been transformative when it comes to completing certain creative tasks. With models like ChatGPT, DALL-E, GPT-4, and Stable Diffusion, users can rapidly generate original artifacts in many different creative mediums, including text, images, audio, and even code (which is itself a form of text). This has spurred many LLM-driven products in fields like writing, art, design, and music. These cutting-edge models, with performance superior to incumbent creative AI, have ushered in the possibility of creative output at an unprecedented speed.

An Accelerant, Not a Replacement

The recent rapid development of AI-powered tools enables creative professionals, ranging from writers to software engineers and seemingly everyone in between, to accelerate their workflows, improving productivity on many cognitively intense creative tasks. Specifically, advancements in generative AI (AI that yields unique, original output) allow users to iterate quickly on their ideas and generate a batch of suggestions on demand. This is especially evident in text-based LLMs. The ability to engage in conversational dialogue and “speak” with these models really evokes the rubber duck paradigm: much in the same way a programmer may treat her rubber duck as an interlocutor, so too can creative professionals have a dialogue with large language models. Doing so effectively is, of course, a whole skill in itself, known as “prompt engineering.”

Generative LLMs like ChatGPT can serve as a sort of rubber duck for creatives. However, the rubber duck is now able to respond to the user! Walking through a creative problem or sticking point with another entity, whether it’s a person, a rubber duck, or a language model, externalizes the task at hand, making it more tractable and addressable since it’s no longer just in your head. LLMs can serve as a sounding board for creative ideas much like how rubberducking helps to verbalize thoughts during the creative process.
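To make this concrete, here is a minimal sketch of what rubber-ducking a creative sticking point with an LLM might look like in code. It uses OpenAI’s Python client as one possible interface; the model name, prompts, and system message are illustrative assumptions rather than a recipe, and any conversational model would work just as well.

```python
# A minimal sketch of "rubber-ducking" a creative sticking point with an LLM.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set;
# the model name, system message, and prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Describe the obstacle in plain language, just as you would to a rubber duck.
sticking_point = (
    "I'm writing a blog post about rubber duck debugging and want to "
    "transition into talking about LLMs, but the segue feels abrupt. "
    "Can you suggest a few ways to bridge the two ideas?"
)

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": "You are a thoughtful editor who asks clarifying "
                       "questions before suggesting fixes.",
        },
        {"role": "user", "content": sticking_point},
    ],
)

# Unlike a rubber duck, the model talks back.
print(response.choices[0].message.content)
```

The value here is less in the specific API call than in the pattern: articulating the problem in words externalizes it, and the model’s reply gives you something to react to, refine, or reject.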

Bridging the Gap Between Creativity and Technology  

LLMs, in various instantiations, clearly demonstrate the ability to augment human productivity by acting as a rubber duck for creatives. ChatGPT’s viral success and conversational UI have already enabled users to write, code, and answer questions much faster than conventional manual methods. GPT-4 and its recent plugin beta even allow users to invoke actions from the model, such as retrieving information from the internet. Being able to use these text-based models as a sounding board for working through thoughts about a creative task shows how the rubber duck paradigm carries over into LLM tools. Copilots, such as GitHub’s and Microsoft 365’s, also give users a software agent to ask questions of and make requests to on their respective platforms; they are yet another instantiation of LLMs serving as creative rubber ducks.

LLMs let people draw inspiration, iterate quickly, and get fast answers, reducing friction in the creative process. Their ability to help users scaffold their way past obstacles makes them pivotal in bridging the gap between creativity and technology. These revolutionary interfaces for human-computer interaction better equip people to accomplish creative tasks. Much like the all-powerful proverbial rubber duck, these text-based models let creatives actively work through their ideas, and unlike the duck, they can offer paths to improvement.


