In October, Lisa Talia Moretti—an academic specializing in the ethical dilemmas of emerging technologies—saw the job market in her field collapse.
Based in the U.K., Moretti had been guiding companies, from global corporations to mid-sized firms, on how to adopt artificial intelligence in a way that was both humane and profitable. In essence, she worked as an AI ethicist—someone who helps organizations understand “what this technology is and what it can do.”
But as the race to dominate AI accelerates, ethics is increasingly being left behind.
“Most companies are pushing so hard to get more people to use more AI, I don’t think [ethics] is even close to top of mind,” said a principal engineer at a major data firm, speaking anonymously due to company restrictions.
That’s a problem, say researchers who study AI’s societal impact. Fears of job displacement, bias, disinformation, and safety issues are mounting—yet new models are being released at breakneck speed with few guardrails in place.
Back in 2021, the World Economic Forum urged businesses to appoint chief AI ethics officers. And just last month, The New York Times predicted that roles like AI ethicist, trust authenticator, and trust director would become widespread. But for those who pursued advanced degrees in these areas, the reality is bleak.
“I’m speaking with seven to ten people in this space every week,” says Alice Thwaite, a longtime AI ethicist trying to help colleagues find work. “They’re all saying it’s the worst [job market] it’s ever been.”
While ethicists struggle to land jobs, AI continues to slip its moral guardrails. Chatbots still hallucinate, presenting false information as fact. Some have directed teens toward self-harm. Elon Musk’s Grok has reportedly promoted Holocaust denial and antisemitic content. Google’s AI-generated summaries have devastated news publishers’ traffic. Entire workforces are being replaced by automation. And cybercriminals are now using AI to create more convincing scams.
In short: AI is incredibly powerful—but the oversight needed to rein in its downsides is missing from boardrooms and government agencies alike.
“Every AI paper used to begin with, ‘The benefits of AI are going to be huge and numerous,’” says Thwaite. “But everyone invested in that first part—no one invested in the second part.”
This week, the White House released its AI Action Plan to align artificial intelligence with the U.S. economy’s future. Key pillars include slashing “red tape and onerous regulation,” and seeking public input on which rules might be holding AI back.
With government backing, AI tools are set to flood the commercial landscape. Yet companies still face growing concern from employees over how these systems handle personal privacy, and whether they embed racial or gender bias. Public trust in AI is falling, too.
That’s where ethicists like Moretti believe they can make a difference—if companies are willing to listen.
Ethicists can help organizations critically assess AI claims, rather than falling for slick sales pitches. “A lot of companies are shown flashy decks with headlines like, ‘Generative AI will supercharge your creativity,’” Moretti says. “But often there’s very little substance behind those claims.”
Take AI agents, for instance. Despite the hype, a May study by Salesforce found that top large language model (LLM) agents had only a 58% success rate on single tasks. That dropped to just 35% when handling multiple tasks in sequence.
An AI ethicist might save your company from investing in a tool that doesn’t actually work. “We’d ask, ‘How is this really performing?’” says Moretti. “We’d run interviews, usability tests, and dig into the actual impact.”
Ethical audits like these can reveal whether AI is truly helping employees and customers—or just adding more complexity and risk.
“Would we accept a website that crashes 65% of the time?” Moretti asks. “Would we keep using email if it failed that often? No. But somehow we’re okay with it from AI.”
In the current rush to adopt artificial intelligence, ethics isn’t just being overlooked—it’s being sidelined entirely. But if companies don’t make space for people like Moretti and Thwaite, the costs of getting it wrong may soon outweigh the hype.