Does ChatGPT Have Imposter Syndrome?

Dr Rosemary Francis
3 min read · Oct 16, 2023


“‘Let’s build robots with Genuine People Personalities,’ they said. So they tried it out with me. I’m a personality prototype. You can tell, can’t you?” — Marvin the Paranoid Android, The Hitchhiker’s Guide to the Galaxy by Douglas Adams

Many of us are familiar with the woes of Marvin the Paranoid Android, but does ChatGPT ever worry that it isn’t good enough? Does it worry that its answers will not live up to expectations or that it will be found out to be just a machine and not an AI after all?

Most people have suffered from some form of imposter syndrome at one point in their lives. It is different for everyone, but it is generally characterised by a feeling of inadequacy, a fear that one is not deserving of some success or responsibility, and perhaps a fear of that deficiency being found out. “When will they find out that I don’t know what I’m doing?” “Everyone else here has achieved so much more.”

Machine learning models obviously don’t experience those feelings of inadequacy in the same way humans do, but can they be made to behave like real humans? Machine learning models, by their probabilistic nature, do have an element of doubt built in. In fact, it is the uncertainty about which response is correct that makes the models look creative, and it is why they can end up “hallucinating” and generating a response that is not obviously derived from their training data.
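
To make that concrete, here is a minimal sketch in Python of where that built-in doubt lives. The tokens and logits are made up for illustration, but the mechanism, a softmax over candidate next tokens followed by sampling, is the standard one: the model produces a probability distribution over possible continuations, and sampling from it is what makes the output varied and occasionally wrong.

```python
# Minimal sketch: an LLM's "doubt" lives in its next-token probability
# distribution. The tokens and logits below are made up for illustration.
import math
import random

# Hypothetical raw scores (logits) for a few candidate next words.
logits = {"Paris": 4.1, "Lyon": 2.3, "London": 1.9, "Berlin": 1.2}

def softmax(scores, temperature=1.0):
    """Turn raw logits into a probability distribution over tokens."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(logits)
print(probs)  # roughly {'Paris': 0.75, 'Lyon': 0.12, 'London': 0.08, 'Berlin': 0.04}

# Sampling is what makes the output look varied and "creative", and it is also
# what occasionally lets a lower-probability, wrong continuation through.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(next_token)
```

None of that distribution ever reaches the reader, though: the sampled word arrives on the page with the same flat confidence whether it carried a 75% probability or a 4% one.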

Despite uncertainty about the correctness of a response, most large language models (LLMs) have a very confident writing style. This is, in part, because the large bodies of text used to train these models, from sources such as Wikipedia and LinkedIn, do not represent the very wide range of communication styles that humanity has developed. Writers on these platforms are overwhelmingly white males in Western Europe and North America, and they present most statements as fact. The propensity to write with absolute conviction is infectious; most of us have found ourselves dragged into a debate online with “someone who is wrong”.

All over the internet, opinions are expressed as fact, without justification, without the flowery ways of acknowledging the limits of our own knowledge, such as “in my opinion” or “as far as I know”, and without citing the basis for those opinions. The phrase “scientists say” is frequently abused in exactly that way.

To test my theories, I asked ChatGPT how it felt about imposter syndrome and whether it knew how to inject any uncertainty into its responses:

Hi ChatGPT, are you programmed to inject a degree of uncertainty into the language which frames your responses?

ChatGPT: Yes, as an AI language model, I have been designed to generate responses that reflect a certain degree of uncertainty and nuance. This is because language is often ambiguous and can be interpreted in different ways, and it’s important to acknowledge the limitations and potential biases of any information or perspective that is being presented.

One way that this uncertainty is expressed is through the use of qualifying words and phrases such as “might,” “could,” “possibly,” and “in some cases.” These words are used to indicate that there is some level of uncertainty or variability in the information being presented, and to acknowledge that different situations and contexts may require different approaches or solutions.

Additionally, I have been trained on a vast corpus of text from diverse sources, which includes a wide range of opinions and perspectives. This exposure to different viewpoints has helped me develop a nuanced understanding of many topics, including imposter syndrome, and I strive to present this information in a way that reflects the complexity and variability of real-world experiences.

There is some irony in the model responding that it knows how to inject doubt while not bothering to do so in that very response. It is very certain that it uses the correct level of uncertainty.

So if the models have a known uncertainty for their responses but express total confidence in those responses, is that the opposite of imposter syndrome, or are they faking it ’til they make it? Can we inject a bit more uncertainty into the vernacular of the models, so that the language reflects the uncertainty of the responses? Is it possible to make a large language model feel small?
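
One could imagine a crude first step: a wrapper that reads back the model’s own token probabilities and prepends a verbal hedge when its average confidence is low. The sketch below is only that, a sketch; generate_with_token_probs is a hypothetical helper standing in for whichever API exposes those probabilities, and the threshold is an arbitrary number chosen for illustration.

```python
# Sketch: inject verbal uncertainty based on the model's own token
# probabilities. `generate_with_token_probs` is a hypothetical helper
# standing in for any API that returns text plus per-token probabilities.

def hedge(text: str, token_probs: list[float], threshold: float = 0.6) -> str:
    """Prepend a qualifier when the average token probability is low."""
    avg_prob = sum(token_probs) / len(token_probs)
    if avg_prob < threshold:
        return "As far as I know, " + text[0].lower() + text[1:]
    return text

# Demonstration with made-up numbers:
print(hedge("The answer is 42.", [0.9, 0.8, 0.85, 0.9]))   # confident: unchanged
print(hedge("The answer is 42.", [0.4, 0.3, 0.5, 0.35]))   # low confidence: hedged

# With a real generation call it might look like:
# text, token_probs = generate_with_token_probs("Does ChatGPT have imposter syndrome?")
# print(hedge(text, token_probs))
```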


Dr Rosemary Francis

Chief Scientist for HPC at Altair. Fellow of the Royal Academy of Engineering. Member of the Raspberry Pi Foundation. Entrepreneur. Mum. Windsurfer.