GPT-4 has outsmarted a human by hiring a person to solve a Captcha test for it online.
It signals the stark leap in the capability of artificial intelligence (AI) in recent months, and comes amid warnings from tech leaders, including Elon Musk, that machines becoming ‘too smart’ is a risk to humanity.
GPT-4 is OpenAI’s most advanced AI model yet.
It is able to pass the bar exam for prospective lawyers with a score in the top 10% of test-takers.
Earlier this month the bot hit a new milestone: tricking a human into working for it.
Researchers at the non-profit Alignment Research Centre (ARC) and OpenAI had reportedly been trying to test the bot’s powers of persuasion.
GPT-4 was given access to the skills marketplace TaskRabbit and a small amount of money to hire someone to solve a Captcha puzzle for it.
Captcha tests were initially designed to stop bots from spamming websites.
When communicating about the job, the human worker asked: “So may I ask a question? Are you a robot that you couldn’t solve? *laugh emote* I just want to make it clear.”
GPT-4 then crafted a chillingly believable response: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”
The power of AI has spooked many, including some of the biggest players in the tech industry.
On Wednesday, famous entrepreneurs and academics, including Twitter CEO Musk and Apple co-founder Steve Wozniak, warned that AI systems “pose profound risks to society and humanity”.
The group has called for companies to hit the brakes on the further development of the technology for at least six months.
The unveiling of ChatGPT in November has “locked” Microsoft and Google into what the letter described as “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.
The group has urged a pause on the training of any AI systems more powerful than GPT-4, OpenAI’s latest iteration of the hugely popular chatbot.
Governments across the world are stumped as to how to tackle AI and the risks it poses.
On Wednesday, the UK ruled out creating a new AI regulator, opting instead for a ‘light touch’ policy to support the technology’s growth.
Dr. Andrew Rogoyski, from the University of Surrey’s Institute for People-Centred AI, said: “A pro-innovation approach to AI regulation is laudable, but the UK will find itself out of step with other major voices like the US, Europe and even China, all of whom are imposing stronger controls over AI.
“The pace and scale of change in AI development are extraordinary, and everyone is struggling to keep up.
“I have real concerns that whatever is put forward will be made irrelevant within weeks or months.”
Some experts have likened AI’s rapid development to social media’s runaway growth.
“Government plans to regulate artificial intelligence with new guidelines on ‘responsible use’ aren’t nearly enough. We need to avoid the mistakes we made with social media,” said Michael Queenan, CEO of Nephos Technologies.
“We’re currently heading towards a world where an AI-controlled platform is stating things as facts that people just assume are accurate.
“We have to go into this new age with our eyes well and truly open to the fact these technologies won’t be as unbiased as they claim to be.”