The “Godfather of AI” Predicted I Wouldn’t Have a Job. He Was Wrong.

Nobel Prize winner Geoffrey Hinton said that machine learning would outperform radiologists within five years. That was eight years ago. Now, thanks in part to doomers, we’re facing a historic labor shortage.

The Reality of AI in Radiology: A Resident's Perspective

Geoffrey Hinton, recently awarded the Nobel Prize in physics, made waves in 2016 with a bold prediction: radiologists would be obsolete within five years. As a second-year radiology resident, I find myself at the intersection of this prediction and reality. Eight years later, not only has Hinton's prophecy failed to materialize, but we're facing an unprecedented shortage of radiologists, with imaging centers backlogged for months.


The journey to becoming a radiologist is arduous: four years of medical school, a preliminary year in general medicine, four years of radiology residency, and up to two years of fellowship training. This extensive education speaks to the complexity of our work, which makes the prospect of algorithmic replacement both unlikely and deeply unsettling.


The field of radiology has become a focal point for AI development, with over three-quarters of FDA-approved AI-enabled medical devices designed for radiological applications. The Radiological Society of North America even maintains a dedicated journal for radiology AI research. Yet in practice, most AI applications remain theoretical: my own workstation contains just one basic AI tool.


The discourse around AI in radiology often presents a false dichotomy: either AI will completely replace radiologists, or it will prove to be another overhyped technology. Among my colleagues, opinions span the spectrum. Some envision AI as a tool that will enhance our efficiency and reduce errors, while others, haunted by Hinton's predictions, feel their careers are on borrowed time.


But the reality lies in the middle. As Curtis Langlotz aptly noted, "Radiologists who use AI will replace those who don't." Our field has always evolved with technology. Senior radiologists who once used pneumoencephalograms and film view boxes now interpret sophisticated MRIs. The key to our profession's longevity isn't resistance to change but adaptation.


As a young radiologist, I see AI assuming certain tasks while new imaging modalities and procedures emerge that call for human expertise. AI's potential extends beyond image interpretation to improving scan quality, reducing procedure times, and optimizing workflows. As futurologist Roy Amara observed, we overestimate technology's short-term impact while underestimating its long-term effects.


However, I worry about AI's impact on medical education. The notion that AI should handle "easy" cases overlooks how these foundational cases build the expertise needed for complex interpretations. Just as pilots need experience with routine flights, radiologists need exposure to basic cases. Research shows that less experienced radiologists are more likely to accept incorrect AI interpretations, suggesting we'll need more human training, not less.


Hinton has since updated his timeline, predicting AI parity with radiologists in 10 to 15 years, and has shifted his focus to broader existential AI risks. His earlier metaphor cast radiologists as Wile E. Coyote, already over the cliff's edge but not yet looking down. Perhaps the right response is simply to keep moving forward, adapting to change while maintaining our essential role in patient care.


The future of radiology isn't a battle between AI and physicians. Instead, it's about finding the optimal integration of human expertise and technological advancement. While AI will undoubtedly transform our field, the complexity of medical imaging and diagnosis ensures that skilled radiologists will remain essential to healthcare delivery.

Can you spot an article written by artificial intelligence? It’s not as easy as you might think. Whether or not you can detect robot-generated content, a new study finds that the mere suggestion that something was written by AI is enough to anger people.

Specifically, a team from the University of Florida and the University of Central Florida suggests our prejudices about artificial intelligence might be clouding our judgment. Their research shows that people automatically downgrade stories they believe were written by AI – even when they were actually penned by humans!

The team discovered that the latest version of ChatGPT can produce stories that nearly match the quality of human writing. However, there’s a catch: simply suggesting that AI wrote a story makes people less likely to enjoy reading it.

“People don’t like when they think a story is written by AI, whether it was or not,” explains Dr. Haoran “Chris” Chu, a public relations professor at the University of Florida who co-authored the study, in a media release.

The research, published in the Journal of Communication, involved showing participants different versions of the same stories – some written by humans, others by ChatGPT. To test people’s biases, the researchers cleverly switched up the labels, sometimes correctly identifying the author and other times deliberately mislabeling them.

The study focused on two key aspects of storytelling. The first, called “transportation,” is that familiar feeling of being so absorbed in a story that you forget your surroundings – like when you’re so engrossed in a movie that you don’t notice your uncomfortable theater seat. The second aspect, “counterarguing,” happens when readers mentally pick apart a story’s logic or message.


While AI-written stories proved just as persuasive as human-written ones, they weren’t quite as successful at achieving that coveted “transportation” effect.

“AI is good at writing something consistent, logical, and coherent. But it is still weaker at writing engaging stories than people are,” Chu notes.

The findings could have important implications for fields like public health communication, where engaging narratives are crucial for encouraging healthy behaviors such as vaccination. However, this new research suggests that being upfront about AI authorship might actually undermine these efforts due to reader bias.

There’s some good news for creative professionals, though.

“AI does not write like a master writer. That’s probably good news for people like Hollywood screenwriters – for now,” Chu concludes.
