
budding

Large Language Models as Epistemic Rubber Ducks

Using large language models for reflective thinking and metacognition, rather than for generating facts or final outputs

Assumed Audience
People who have read my piece on Language Models as Failed Oracles, or who are already intimately familiar with the current failures and shortcomings of large language models

In [[Language Models as Failed Oracles]] I laid out a series of reasons our current language models are not the oracles we were promised. Some of those problems will be solved by brute-force scaling of compute. Others, such as the lack of stable, situated knowledge from a unified viewpoint, feel inherent to language models.

Until we develop more robust language models and interfaces that are transparent about their reasoning and confidence level, we need to change our framing of them. We should not be thinking and talking about these systems as superintelligent, trustworthy oracles. At least, not right now.

Several interesting metaphorical frames could lead us down very different pathways for how these models develop. We could think of them as giant, searchable databases, or as research assistants, among other framings.

We should instead think of them as rubber ducks.

Epistemic rubber ducking

Rubber ducking is the practice of having a friend or colleague sit and listen while you work through a problem. They aren't there to tell you the solution or to actively solve it for you. They might prompt you with questions and occasionally make affirming sounds. But their primary job is to help you solve the problem yourself. They're like a rubber duck, quietly listening while you talk yourself out of a hole.

The term comes from programming, where you're frequently faced with poorly defined problems that require a bit of thinking out loud. Simply answering the question "what am I trying to do here?" is often enough to get started on a solution.

Language models are well suited to rubber ducking. Their fluent mimicry of human conversation makes them good reflective thinking partners, but not independent sources of truth.

And not just any rubber ducking...

[decorate the text with floating rubber ducks and sparkles]

Epistemology is the study of how we know what we know, also called “theory of knowledge.” It deals with questions like how valid a claim is, how strong its supporting arguments and counter-arguments are, whether the evidence comes from a reliable source, and whether cognitive biases might be warping our opinions.

Epistemic rubber ducking, then, is talking through an idea, claim, or opinion you hold, with a partner who helps you think through the epistemological dimensions of your thoughts. This isn't simply a devil's advocate incessantly pointing out all the ways you're wrong.

A useful epistemic duck would need to be supportive and helpful. It would simply ask questions and suggest ideas, none of which you're required to accept or integrate, but which are there if you want them. It could certainly prod and critique, but in a way that helps you see the other side of the coin and realise the gaps and flaws in your arguments.
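To make that behaviour concrete, here's a minimal sketch of an epistemic duck as a system prompt wrapped around a chat model. It assumes the OpenAI Python client; the model name, prompt wording, and the `duck_reply` helper are all illustrative assumptions, not a finished design:

```python
# A minimal sketch of an "epistemic rubber duck", assuming the OpenAI Python
# client (`pip install openai`). The model name and prompt wording are
# illustrative placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

DUCK_PROMPT = """You are an epistemic rubber duck.
The user will talk through an idea, claim, or opinion they hold.
Do not tell them whether they are right or wrong, and do not supply facts.
Instead, ask short, supportive questions that surface:
- what evidence the claim rests on and how reliable its sources are
- the strongest counter-argument they haven't yet addressed
- any cognitive biases that might be shaping their view
Offer questions and suggestions they are free to ignore."""

def duck_reply(user_thought: str) -> str:
    """Return the duck's reflective questions for one user message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": DUCK_PROMPT},
            {"role": "user", "content": user_thought},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(duck_reply("I think remote work is obviously better for everyone."))
```

The interesting part is the prompt rather than the plumbing: the model is asked to surface evidence, counter-arguments, and possible biases rather than to hand down answers, which keeps it in the duck role instead of the oracle role.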

A collection of speculative prototypes

What would this look like in practice?

[Placeholder images for now - will make these video demos. Likely going to explore more tiny ideas to include]

Branches

Argument maps

Daemons

Epi


From anthropomorphism to animals

There's a side quest I promised myself I wouldn't go down in this piece, but I'll briefly touch on it. I think we should take the duck-ness of language models as rubber ducks seriously. Meaning that conceiving of language models as ducks – an animal species with some capacity for intelligence – is currently better than conceiving of them as human-like agents.

I have very different expectations of a duck than I do of a human. I expect it can sense its environment and make decisions that keep it alive and happy and fat. I expect it has a more nuanced understanding of fish species and water currents and migration than I do. I don't expect it would be a very competent babysitter or bus driver or physics teacher.

In short, the duck has very different intellectual and physical capacities from you or me. The same will be true of various “AI” systems like language models. Their form of “thinking” will certainly be more human-ish than duck-ish, but it would be a mistake to expect the same of them as we do of humans.

Kate Darling has made a similar argument around robots: that we should look to our history with animals as a touchstone for navigating our future with robots and AI.

An alternate analogy I've heard floating around is “aliens.” Some people talk about ML systems as a kind of alien intelligence. Given our cultural narratives around aliens as parasitic killers prone to exploding out of your chest, I'm pretty averse to the metaphor. Having an alien intelligence in my system sounds threatening. It certainly doesn't sound like a helpful, collaborative thinking partner.

I think there's a lot more to explore here around the metaphors we use to talk about language models and AI systems, but I'll save it for another post.
