MIT Student Drops Out Because She Says AGI Will Kill Everyone Before She Can Graduate

"I was concerned I might not be alive to graduate because of AGI."



Many people are feeling uneasy about artificial intelligence — and not without reason.

The technology is energy-intensive and environmentally costly, provides corporations with cover to lay off workers, floods the internet with misinformation and low-quality content, bolsters government surveillance, and, in some cases, appears to worsen mental health issues.

Against this backdrop, while some college students are leaving school to join AI startups, one former MIT student says she dropped out for a far darker reason: she fears AI could wipe out humanity altogether.

“I was concerned I might not be alive to graduate because of AGI,” Alice Blair, who started at MIT in 2023, told Forbes. “In the vast majority of scenarios, given how we’re working toward AGI, the result is human extinction.”

Blair now works as a technical writer at the nonprofit Center for AI Safety and has no plans to return to MIT. She had hoped to find peers focused on AI safety at the university, but left disappointed.

“My future lies out in the real world,” she told Forbes.

Some share her sense of urgency. Harvard alum Nikola Jurković, who led his school’s AI safety club, framed the situation in pragmatic terms.

“If your career will be automated within the decade, then every year in college is one year shaved off your working life,” he said. Jurković estimates AGI could arrive within four years, with full economic automation just a couple of years after that.

AGI — artificial general intelligence, or a system that matches or surpasses human-level intelligence — has long been the industry’s endgame. OpenAI CEO Sam Altman even described the company’s recent, poorly received GPT-5 launch as a “generally intelligent” breakthrough toward that goal.

But not everyone buys it. Many researchers argue that progress in AI is slowing and fundamental problems — like hallucinations and flawed reasoning — remain unsolved.

“It’s extremely unlikely that AGI will arrive in the next five years,” said AI critic and researcher Gary Marcus. “Claiming otherwise is just hype.”

Marcus added that while AI poses real risks, the idea of imminent human extinction is exaggerated. Ironically, he argues, the industry benefits from fueling such doomsday fears. By portraying the technology as nearly godlike, executives like Altman can shape public perception and influence regulation — all while distracting from the tangible harms AI already causes.

And those harms are far more immediate than any apocalyptic scenario: job loss, environmental damage, and the steady erosion of trust in information itself.

