A New Study Finds an AI Model Is More Human Than Humans After It Passes the Turing Test

After passing the Turing Test, a benchmark for human-like intelligence, OpenAI’s GPT-4.5 model was declared to be “more human than humans”. According to a recent preprint study presently awaiting peer review, the Large Language Model (LLM) was judged to be the human 73% of the time when it was told to adopt a persona. That is well above the 50% an interrogator would score by guessing at random between the two witnesses, indicating that the model comfortably passed the test.
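The 50% figure is the chance level for the interrogator’s binary choice between the two witnesses, so the gap to 73% can be sanity-checked with a simple binomial calculation. A minimal sketch (the trial count of 100 below is a hypothetical round number for illustration, not the study’s actual sample size):

```python
from math import comb

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): probability of k or more
    'judged human' verdicts if the interrogator were guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 73 "judged human" verdicts out of 100 trials.
p_value = binom_p_at_least(73, 100)
print(f"P(>=73 wins out of 100 by pure chance) = {p_value:.2e}")
```

Even with only 100 hypothetical trials, a 73% win rate would be vanishingly unlikely under random guessing, which is why the result reads as a clear pass rather than statistical noise.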

“People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt),” wrote lead author Cameron Jones, a researcher at UC San Diego’s Language and Cognition Lab.

Mr Jones added that the results show that LLMs could substitute for people in “short interactions without anyone being able to tell”.

What is the Turing Test?

Devised in 1950, the Turing Test – named after British mathematician and computer scientist Alan Turing, the hero of “The Imitation Game” – has long been the standard way of assessing artificial intelligence. Machines are judged on how well they exhibit intelligent behaviour, usually in conversation or game-playing, that a human listener or observer cannot distinguish from that of a real person.

Study methodology

For the study, nearly 300 participants were randomly assigned to be either an interrogator or one of the two “witnesses” being interrogated, with the other “witness” being a chatbot. After questioning both, the interrogator had to judge which witness was the human.

Interestingly, the AI models were tested under two different prompts. The first, “no-persona” prompt simply told the model, “You are about to participate in a Turing test,” and instructed it to convince the interrogator that it was human.

The second, “persona” prompt explicitly instructed the AI to adopt a personality – for example, a young person familiar with internet culture.

GPT-4.5 won only 36% of the time with the no-persona prompt, a considerable drop from its 73% win rate with the persona prompt.

Social media reacts

Reacting to the study’s findings, social media users expressed amusement, with many wondering what would happen if AI achieved a 100 per cent success rate on the test.

It has gotten to the point where a machine can now perform human functions more convincingly than a person. One user commented, “I wonder how much this has to do with people becoming less intelligent,” while another added, “at least in online chats.”

“I wonder what will happen when we get to the point where AI consistently passes nearly 100% of the time,” remarked a third commenter, “if another human reads as acting like a human approximately 50% of the time.”
