Artificial intelligence can make colonoscopies more accurate — but it may come at a cost: doctors could lose the very skills the technology is designed to support.
A new study published in The Lancet Gastroenterology & Hepatology found that routine use of AI-assisted colonoscopy systems may lead to a roughly 20% drop in experienced gastroenterologists’ ability to detect adenomas — precancerous growths in the colon — when performing the procedure without AI.
The research was conducted at four colonoscopy centers in Poland between September 2021 and March 2022 as part of the ACCEPT project (Artificial Intelligence in Colonoscopy for Cancer Prevention). In late 2021, the centers began routine use of AI systems to detect polyps, after which colonoscopies were randomly assigned to be performed with or without AI assistance.
In total, 1,443 colonoscopies without AI were analyzed: 795 before AI’s introduction and 648 after. All were conducted by 19 experienced physicians, each with more than 2,000 procedures performed. The adenoma detection rate in non-AI exams fell from 28.4% (226 out of 795) before regular AI exposure to 22.4% (145 out of 648) afterward — a 20% relative decrease and an absolute drop of 6 percentage points. By comparison, AI-assisted colonoscopies during the same period had a 25.3% detection rate (186 out of 734).
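The reported percentages follow directly from the raw counts in the study; a quick sketch (using only the figures quoted above) reproduces the absolute and relative drops:

```python
# Reproduce the detection-rate arithmetic from the study's raw counts.
before_adenomas, before_total = 226, 795   # non-AI exams before routine AI use
after_adenomas, after_total = 145, 648     # non-AI exams after routine AI use

rate_before = before_adenomas / before_total   # ~28.4%
rate_after = after_adenomas / after_total      # ~22.4%

absolute_drop_pp = (rate_before - rate_after) * 100       # ~6 percentage points
relative_drop = (rate_before - rate_after) / rate_before  # ~0.21, i.e. roughly 20%

print(f"before: {rate_before:.1%}, after: {rate_after:.1%}")
print(f"drop: {absolute_drop_pp:.1f} pp absolute, {relative_drop:.0%} relative")
```

The 6-point absolute drop and the roughly 20% relative decrease are two views of the same change: the former subtracts the rates, the latter divides the difference by the baseline.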
“Our findings are concerning given the rapid spread of AI in medicine,” said Dr. Marcin Romanczyk of the Medical University of Silesia in Poland. He called for more studies on how AI affects professional skills across medical specialties to address potential unintended consequences.
Colonoscopy is one of the most effective tools for preventing colorectal cancer, allowing doctors to detect and remove adenomas before they turn malignant. AI-assisted systems have been met with enthusiasm in recent years, with several studies showing higher detection rates when they are used. But the new findings raise concerns over “skill erosion” — the gradual loss of expertise when physicians rely too heavily on such support.
The study also questions prior randomized controlled trials that found AI-assisted colonoscopy outperformed non-AI procedures. “It is possible that non-AI colonoscopy in those trials was different from standard non-AI colonoscopy, because physicians might have been negatively influenced by ongoing AI exposure,” said co-author Prof. Yuichi Mori of Oslo University Hospital in Norway.
Caution urged amid AI enthusiasm
In an accompanying editorial, Dr. Omer Ahmad of University College London, who was not involved in the research, wrote that the findings “temper the enthusiasm for rapid AI implementation” and underscore the need to consider unintended clinical consequences. The study, he said, offers “the first real-world clinical evidence of skill erosion, which could negatively affect patient outcomes.”
While acknowledging AI’s potential to improve medical results, Ahmad stressed the importance of preserving core skills needed to perform high-quality endoscopy.
Other experts warned against concluding that AI alone causes skill erosion. Prof. Venet Osmani of Queen Mary University of London noted that after AI’s introduction, the total number of colonoscopies performed nearly doubled, from 795 to 1,382 (648 without AI plus 734 with it) — a surge in workload that could also explain the drop in detection rates.
He also questioned whether true skill loss could occur in just three months, particularly among doctors with more than 27 years of experience, suggesting instead that behavior patterns may have shifted when AI was unavailable.
Although the study has limitations — including evaluating only one AI system — researchers say it is among the first to suggest that AI exposure could negatively affect medical outcomes. They argue that high-quality research in gastroenterology AI is urgently needed, especially given the technology’s rapid adoption.
“This is a rigorous study highlighting what many AI researchers fear — automation bias,” said Prof. Allan Tucker of Brunel University. “There are many AI systems and technologies out there, and some may be better than others at supporting or explaining decisions.”