Israeli doctor uses AI to uncover deadly mistake in ‘Pediatricians' Bible’ textbook

A pediatric specialist used AI to uncover a life-threatening error in a leading medical textbook after treating a toddler for a colchicine overdose, revealing a dangerously incorrect dosage recommendation and highlighting AI's potential as a safeguard in medicine

A toddler’s life-threatening overdose in the emergency room at Schneider Children’s Medical Center led to a startling discovery of a dangerous error in a leading pediatric medical textbook, uncovered not by experts but by artificial intelligence.
Dr. Shai Yitzhaki, a senior pediatric specialist at the hospital, used AI to reveal the mistake, sparking a broader discussion about the role of language models in medicine.
Dr. Shai Yitzhaki (Photo: Schneider Children's Medical Center)
During a hectic night shift in the emergency department, a two-year-old arrived in critical condition after receiving an excessive dose of colchicine, a drug used to treat familial Mediterranean fever. "The child was supposed to get 3 cc from a 10 cc syringe, but due to a tragic human error, he received three full 10 cc syringes instead," Dr. Yitzhaki told Ynet.
Calculating the dose in milligrams per kilogram of body weight, he found it neared life-threatening levels, risking severe multi-system damage or, at a slightly higher threshold, certain death. The toddler was rushed for immediate tests and treatment, then transferred to intensive care, where, fortunately, he stabilized quickly without extraordinary interventions.
The case lingered with Dr. Yitzhaki, prompting him to review colchicine overdose thresholds in the field's cornerstone textbook, which he calls "our Bible." Using an AI tool to analyze the relevant chapter, he was stunned to find it listed a safe dosage of 1–2 mg per kg, even though 0.8 mg per kg is known to be 100% fatal.
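The arithmetic behind such a check is straightforward: dose per kilogram is the volume given, times the drug concentration, divided by the child's weight. A minimal sketch of that calculation follows; the concentration and weight used are hypothetical illustrations, since the article does not report the actual figures from the case.

```python
# Sketch of the mg/kg dose check described above.
# The 0.8 mg/kg fatal threshold is from the article; the
# concentration and weight below are assumed for illustration.

FATAL_DOSE_MG_PER_KG = 0.8  # colchicine dose reported as uniformly fatal

def dose_per_kg(volume_ml: float, concentration_mg_per_ml: float,
                weight_kg: float) -> float:
    """Return the administered dose in mg per kg of body weight."""
    return volume_ml * concentration_mg_per_ml / weight_kg

# Hypothetical scenario: 30 mL given instead of the intended 3 mL,
# at an assumed 0.25 mg/mL, to an assumed 12 kg toddler.
dose = dose_per_kg(30, 0.25, 12)
print(f"{dose:.3f} mg/kg")  # 0.625 mg/kg
if dose >= FATAL_DOSE_MG_PER_KG:
    print("exceeds the fatal threshold")
else:
    print("below the fatal threshold, but dangerously close")
```

With these assumed numbers, a tenfold volume error turns a routine dose into one approaching the lethal threshold, which is why the textbook's erroneous "safe" range of 1–2 mg/kg was so dangerous.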
Initially doubting the AI, he manually checked the 4,000-page text, confirming the error. "It was right there—a lethal dose," he said. He promptly contacted the publisher, which removed the chapter from online databases and corrected the mistake within days.
Intrigued, Dr. Yitzhaki tested another AI tool, uploading the chapter and asking whether it contained life-threatening errors. The tool identified the same mistake, suggesting AI's potential to catch dangerous oversights. "We talk a lot about AI's risks, like incorrect treatments, but not enough about the dangers of not using it," he said.
The incident highlights both the promise and perils of AI in medicine. While Dr. Yitzhaki always verifies AI outputs against trusted sources, he acknowledges even those can be flawed, as this case proved. “AI can scan thousands of pages for errors humans miss,” he noted, emphasizing its role as a safeguard.
Younger doctors and students, however, risk over-relying on AI without sufficient clinical judgment, raising concerns about blind dependence. Dr. Yitzhaki advocates cautious exploration, urging doctors to test AI in their areas of expertise by asking professional questions, without patient data, to assess its accuracy and spark new ideas.
He believes AI will become integral to medical practice, much like ABS brakes evolved from luxury car features to standard equipment. “Not using AI could soon be seen as below the standard of care,” he said.
The challenge lies in validating models against real-world data and workflows, as rapid advancements outpace testing. Dr. Yitzhaki encouraged doctors to experiment with AI tools responsibly, building confidence through experience to harness AI’s potential while guarding against its limitations.