As 2023 comes to a close, Cambridge Dictionary crowns ‘hallucinate’ as Word of the Year and offers a fresh update to its meaning. The announcement carries particular significance following the surge of interest this year in generative artificial intelligence (AI) tools such as ChatGPT, Bard, and Grok. A shift of attention has occurred: the public now questions the limitations of AI and whether they can be overcome. To reflect this soaring attention, lexicographers have equipped ‘hallucinate’ with a meaning that extends beyond a psychological effect, giving the term AI relevance.
Of course, ‘to hallucinate’ means to see or feel things that do not actually exist. Hallucinations are pretty much summed up as an expected symptom of tripping on illicit substances. But the word and its meaning have been tweaked every so often.
In recent times, it’s taken on a life of its own, and people have started creating full-fledged relationships and whirlwind romances with people they barely know. These fantasy-sustained hallucinations are famously known as ‘delusionships’ on TikTok, a platform that, fittingly, also uses generative AI to create augmented reality (AR) effects.
It all begins with a little wishful thinking, and the next thing we know, we’re in a parasocial relationship that we have fully manifested within the comfort of our minds. As the word ‘hallucinate’ suggests, it all just feels too real!
However, that fun, imaginative bit is reserved for us humans. While still holding onto its original definition, ‘hallucinate’ now also describes a case of AI malfunction – yes, that is in fact possible.
The updated definition reads: “When an artificial intelligence hallucinates, it produces false information.” In other words, systems like ChatGPT, which generate human-like dialogue, can make errors and fail to produce fully accurate material.
These AI hallucinations go by another new name: confabulations. It is useful for AI users to know that these hallucinations only sometimes appear nonsensical; confabulations can also seem completely plausible despite being inaccurate or highly illogical. A chatbot might, for instance, confidently cite a research paper or court case that never existed.
How exactly does this concern us, you may wonder?
Well, at the rate at which AI is being integrated into our lives for ease and convenience, from smart living (such as Apple’s Siri and Amazon’s Alexa) to work assistance, it is essential to be aware of its possible shortcomings. Ultimately, as powerful as they are, these highly advanced tools are prone to errors. The dictionary post reminds us that we must learn “how to interact with [AI] safely and effectively” and that “this means being aware of both its potential strengths and its current weaknesses.”
Take this article as a nudge that our critical thinking skills are still very much needed when handling AI and that we should not become overly dependent on these tools. Alas, AI systems, just like you and me, hallucinate – but with them, it’s a tad more complicated!