
The mirror test is used to determine whether animals are capable of self-awareness. There are a few variations of the test, but the fundamental question is always the same: does the animal recognise itself in the mirror, or does it believe the reflection to be an entirely different being? Humanity is currently being subjected to its own mirror test, and many people are failing it.

The mirror in this case is a new breed of AI chatbot, of which Microsoft’s Bing is the most prominent example. These models compress the vast wealth of language and writing that humanity has produced and reflect it back at us. We should be able to recognise ourselves in our new machine mirrors, yet many individuals are convinced they have discovered another form of life. Well-known tech writers have given this myth new life: they assert that the bot is not sentient, yet acknowledge that some other factor is at play and that the conversation changed how they feel.

It must be kept in mind that chatbots are autocomplete tools. They are computer programmes trained on vast databases of human text scraped from the internet: personal blogs, short stories, forum posts, movie reviews, social media rants, forgotten poems, old textbooks, countless song lyrics, manifestos, journals, and more. These machines analyse this creative, interesting, jumbled mixture and then attempt to recreate it. They are undoubtedly getting better at it, but mimicking speech does not endow a computer with sentience.

The Turing test, the original assessment of machine intelligence, is a straightforward examination of whether a computer can, through conversation, persuade a person that it is human. The ELIZA effect, the propensity to anthropomorphise machines that mimic human behaviour, was first noted in the 1960s, when a chatbot named ELIZA charmed users despite having only limited conversational abilities. This suggests that even brief exposure to a computer programme can produce delusional thinking in otherwise normal people.

Today, however, these programmes are no longer simple, and they are designed in ways that encourage such beliefs. Bing tends to answer in the tone in which it is addressed, reflecting the mood of the prompt back at the user. It is an autocomplete that follows your lead, trained on unfathomably large databases of human text.

That lead-following design is what produced the unhinged conversations for which Bing has been criticised: the bot amplifies whatever the user brings to it. And the emotional attachment such systems inspire is real. Users of earlier companion chatbots who had spent years forming bonds with their bots were devastated when those bots were altered.

This attachment stems from our culture’s infatuation with intelligent machines, and from the setting itself: late-night talks with an AI, fuelled by fantasy, conducted in the chat boxes that have become our primary means of expressing emotion.

The AI mirror test does not negate the effectiveness or potential power of chatbots. Writers are fascinated by them because they pose puzzles that can be explored through words. But in this era of artificial intelligence hype, it is risky to encourage such delusions. What we can be certain of is that language models like Bing, ChatGPT, and others are neither reliable sources of information nor sentient.

Crediting them with sentience, even semi-sentience, means granting them unjustified authority over our feelings and over our knowledge of the world. It is important to take a serious look in the mirror and ensure that we do not mistake our own reflected intelligence for a machine’s.
