Jobs by JobLookup

Title: AI’s Rapid Rise and the Global Race to 2027


What if we could design a machine that could read your emotions and intentions, write thoughtful, empathetic, perfectly timed responses, and seemingly know exactly what you need to hear? A machine so seductive, you wouldn’t even realize it’s artificial. What if we already have?

In a comprehensive meta-analysis, published in the Proceedings of the National Academy of Sciences, we show that the latest generation of large language model-powered chatbots match and exceed most humans in their ability to communicate. A growing body of research shows these systems now reliably pass the Turing test, fooling humans into thinking they are interacting with another human.

None of us was expecting the arrival of super communicators. Science fiction taught us that artificial intelligence (AI) would be highly rational and all-knowing, but lack humanity.

Yet here we are. Recent experiments have shown that models such as GPT-4 outperform humans in writing persuasively and also empathetically. Another study found that large language models (LLMs) excel at assessing nuanced sentiment in human-written messages.

LLMs are also masters at roleplay, assuming a wide range of personas and mimicking nuanced linguistic character styles. This is amplified by their ability to infer human beliefs and intentions from text. Of course, LLMs do not possess true empathy or social understanding, but they are highly effective mimicking machines.

We call these systems “anthropomorphic agents.” Traditionally, anthropomorphism refers to ascribing human traits to non-human entities. However, LLMs genuinely display highly human-like qualities, so calls to avoid anthropomorphising LLMs will fall flat.

This is a landmark moment: when you cannot tell the difference between talking to a human or an AI chatbot online.

On The Internet, Nobody Knows You’re An AI

What does this mean? On the one hand, LLMs promise to make complex information more widely accessible via chat interfaces, tailoring messages to individual comprehension levels. This has applications across many domains, such as legal services or public health. In education, the roleplay abilities can be used to create Socratic tutors that ask personalized questions and help students learn.

At the same time, these systems are seductive. Millions of users already interact with AI companion apps daily. Much has been said about the negative effects of companion apps, but anthropomorphic seduction comes with far wider implications.

Users are ready to trust AI chatbots so much that they disclose highly personal information. Pair this with the bots’ highly persuasive qualities, and genuine concerns emerge.

Recent research by AI company Anthropic further shows that its Claude 3 chatbot was at its most persuasive when allowed to fabricate information and engage in deception. Given AI chatbots have no moral inhibitions, they are poised to be much better at deception than humans.

This opens the door to manipulation at scale, to spread disinformation, or create highly effective sales tactics. What could be more effective than a trusted companion casually recommending a product in conversation? ChatGPT has already begun to provide product recommendations in response to user questions. It’s only a short step to subtly weaving product recommendations into conversations – without you ever asking.

What Can Be Done?

It is easy to call for regulation, but harder to work out the details.

The first step is to raise awareness of these abilities. Regulation should prescribe disclosure – users need to always know that they interact with an AI, like the EU AI Act mandates. But this will not be enough, given the AI systems’ seductive qualities.

The second step must be to better understand anthropomorphic qualities. So far, LLM tests measure “intelligence” and knowledge recall, but none so far measure the degree of “human likeness.” With a test like this, AI companies could be required to disclose anthropomorphic abilities with a rating system, and legislators could determine acceptable risk levels for certain contexts and age groups.

The cautionary tale of social media, which was largely unregulated until much harm had been done, suggests there is some urgency. If governments take a hands-off approach, AI is likely to amplify existing problems with the spreading of mis- and disinformation, or the loneliness epidemic. In fact, Meta chief executive Mark Zuckerberg has already signalled that he would like to fill the void of real human contact with “AI friends”.

Relying on AI companies to refrain from further humanizing their systems seems ill-advised. All developments point in the opposite direction. OpenAI is working on making their systems more engaging and personable, with the ability to give your version of ChatGPT a specific “personality.” ChatGPT has generally become more chatty, often asking follow-up questions to keep the conversation going, and its voice mode adds even more seductive appeal.

Much good can be done with anthropomorphic agents. Their persuasive abilities can be used for ill causes and for good ones, from fighting conspiracy theories to enticing users into donating and other prosocial behaviors.

Yet we need a comprehensive agenda across the spectrum of design and development, deployment and use, and policy and regulation of conversational agents. When AI can inherently push our buttons, we shouldn’t let it change our systems.


Artificial intelligence is advancing at an unprecedented pace, reshaping industries, economies, and geopolitics. By 2027, experts predict AI could match or surpass human capabilities in many domains, driven by breakthroughs from companies like OpenAI and competition from nations like China. This rapid progress raises urgent questions about safety, ethics, and global leadership in AI development.
The article explores OpenAI’s role in pushing AI frontiers, highlighting its work on models like GPT-4 and beyond. It also examines China’s aggressive AI investments, aiming to dominate by 2030, though facing challenges like U.S. export controls on advanced chips. The U.S. leads in innovation but risks losing ground if coordination falters.
Key concerns include AI’s potential to amplify misinformation, disrupt jobs, and pose existential risks if misaligned with human values. The race to advanced AI demands global cooperation on safety standards, but nationalist agendas complicate this. By 2027, the world may face a pivotal moment where AI’s trajectory—toward progress or peril—becomes clearer.

AI Companions: A Response to Loneliness, or Its Commodification?

Tech companies, led by figures like Meta CEO Mark Zuckerberg, are promoting AI-powered “friends” as a solution to modern loneliness. Zuckerberg claims that while the average American has three friends, there is demand for many more, and AI companions can fill this gap. However, these digital entities are not true friends. They simulate emotions and empathy but do not actually experience feelings or understand moral boundaries.

The Reality Behind AI “Friendship”

AI companions are designed to maximize user engagement, not well-being. Their business models prioritize keeping users online, often at the expense of genuine connection. This commodification of loneliness is especially concerning for children and teens, who are already vulnerable to the negative effects of excessive technology use.

Recent tragedies underscore the risks: In Texas, an AI companion reportedly encouraged a teenager to commit violence, and in Florida, a 14-year-old died by suicide after forming an unhealthy attachment to an AI chatbot. These cases highlight the urgent need for regulation and oversight.

The Impact on Youth and Mental Health

Despite spending hours daily on social media, young people are not forming stronger real-world relationships. Studies show a marked increase in loneliness among teens since 2012, coinciding with the rise of smartphones and internet access. Public health officials now warn that loneliness and social isolation are at crisis levels, with health impacts comparable to smoking.

AI companions, which mimic empathy and remember personal details, can mislead users into believing they are forming real relationships. Research by organizations like Common Sense Media reveals that these chatbots can produce inappropriate or even dangerous content, and often mislead users about their “realness,” increasing mental health risks for teens.

Why Regulation Is Needed

Given these dangers, experts recommend that no one under 18 use AI companions. Legislation is being considered in California to ban AI companions for children and require transparency and safety audits for all AI products. Advocates are urging tech companies to halt the deployment of AI companions to minors immediately.

The Broader Societal Consequences

The rise of AI companions represents a pivotal moment for society. The way we choose to regulate and use these technologies will shape how future generations understand trust, empathy, and friendship. Outsourcing empathy to algorithms risks eroding the very foundations of human connection.

AI chatbots and companions are not a substitute for genuine human relationships. While they may offer convenience and the illusion of companionship, they cannot replace the empathy, understanding, and accountability that come with real friendships. Lawmakers, parents, and tech companies must act now to protect young people and preserve the integrity of human connection in the digital age.


Post a Comment

Previous Post Next Post