Artificial intelligence is accelerating grade inflation across U.S. colleges, according to new research—and blurring the signal employers rely on to evaluate entry-level candidates.
A study released this week by the University of California, Berkeley, finds that courses emphasizing writing and coding—subjects where generative AI tools like ChatGPT are most easily applied—have seen a sharper rise in top grades since late 2022 than courses focused on other skills. In these AI-exposed classes, instructors awarded approximately 30% more A grades, while the shares of A-minuses and B-pluses declined.
"This doesn't necessarily mean students are learning more," says Igor Chirikov, senior researcher at Berkeley's Center for Studies in Higher Education and the study's author. "It suggests they're using AI to perform better on assignments."
Tracking the Shift
To isolate AI's impact, Chirikov analyzed more than 500,000 course grades from 2018 to 2025 at a large public university in Texas. He compared classes with heavy writing or coding components—common in humanities and engineering—with those less susceptible to AI assistance. Through 2022, grade distributions in the two groups moved largely in parallel. After ChatGPT's public launch, however, A grades climbed noticeably faster in AI-exposed courses, especially those with take-home assignments.
Why Employers Are Taking Notice
Grade inflation has long been a campus concern, but AI adds a new layer of uncertainty. As the job market tightens and entry-level applicant pools swell, employers are returning to traditional filters—including GPA—to narrow candidate lists.
According to the National Association of Colleges and Employers, 42% of employers used GPA in hiring decisions in 2023, up from 37% the prior year. Firms like Barclays and Morgan Stanley have reinstated GPA minimums for certain internship tracks. On Handshake, an entry-level career platform, nearly one-quarter of postings requesting a GPA now require a 3.5 or higher—up from 9% in 2020.
Yet if an "A" increasingly reflects a student's access to or skill with AI tools rather than mastery of core concepts, that metric becomes less reliable. "An A might now signal technological advantage, not foundational ability," Chirikov notes.
The Learning Trade-Off
Beyond hiring concerns, educators worry AI may undermine the cognitive development that comes from grappling with challenging material. "Learning requires productive struggle," Chirikov says. "If AI shortcuts that process, graduates may produce polished work without having built the critical thinking skills underneath."
Elite institutions are responding. A February report from Harvard College argued that current grading practices hinder employers' ability to compare student performance fairly; faculty are currently voting on a proposal to cap the proportion of A grades. Yale echoed the concern in April: "Grades exist to communicate what students have learned. At Yale, as at many peer institutions, they no longer do."
Navigating the Contradiction
Employers face a paradox: they discourage AI-generated application materials while simultaneously expecting candidates to be proficient with AI tools. "They're talking from both sides of their mouth," says Chelsea Schein, vice president of research strategy at Veris Insights, which tracks hiring trends.
Educators are adapting, too. At Wharton, where Schein teaches negotiations and business ethics, she has reduced the weight of take-home assignments—easily completed with AI—and increased emphasis on in-class assessments where AI use is restricted.
As AI reshapes how students learn and demonstrate competence, both academia and industry are scrambling to recalibrate evaluation methods. The challenge ahead: distinguishing between performance enhanced by technology and learning that endures beyond the prompt. Until then, the meaning of an "A" may depend less on what a student knows—and more on which tools they used to show it.
