A toddler’s life-threatening overdose in the emergency room at Schneider Children’s Medical Center led to a startling discovery of a dangerous error in a leading pediatric medical textbook, uncovered not by experts but by artificial intelligence.
Dr. Shai Yitzhaki, a senior pediatric specialist at the hospital, used AI to reveal the mistake, sparking a broader discussion about the role of language models in medicine.
During a hectic night shift in the emergency department, a two-year-old arrived in critical condition after receiving an excessive dose of colchicine, a drug for familial Mediterranean fever. “The child was supposed to get 3 cc from a 10 cc syringe, but due to a tragic human error, he received three full 10 cc syringes instead,” Dr. Yitzhaki told Ynet.
Calculating the dose in milligrams per kilogram, he found it approached life-threatening levels, risking severe multi-system damage; at a slightly higher threshold it would have been almost certainly fatal. The toddler was rushed for immediate tests and treatment, then transferred to intensive care, where, fortunately, he stabilized quickly without extraordinary interventions.
The case lingered with Dr. Yitzhaki, prompting him to review colchicine overdose thresholds in the field’s cornerstone textbook, which he calls “our Bible.” Using an AI tool to analyze the relevant chapter, he was stunned to find it listed a safe dosage of 1–2 mg per kg, while 0.8 mg per kg is known to be 100% fatal.
Initially doubting the AI, he manually checked the 4,000-page text, confirming the error. “It was right there—a lethal dose,” he said. He promptly contacted the publisher, who removed the chapter from online databases and corrected the mistake within days.
Intrigued, Dr. Yitzhaki tested another AI tool, uploading the chapter and asking whether it contained life-threatening errors. The tool identified the same mistake, suggesting AI’s potential to catch dangerous oversights. “We talk a lot about AI’s risks, like incorrect treatments, but not enough about the dangers of not using it,” he said.
The incident highlights both the promise and perils of AI in medicine. While Dr. Yitzhaki always verifies AI outputs against trusted sources, he acknowledges even those can be flawed, as this case proved. “AI can scan thousands of pages for errors humans miss,” he noted, emphasizing its role as a safeguard.
Younger doctors and students, however, risk over-relying on AI without sufficient clinical judgment, raising concerns about blind dependence. Dr. Yitzhaki advocates cautious exploration, urging doctors to test AI in their areas of expertise by posing professional questions that include no patient data, both to assess its accuracy and to spark new ideas.
He believes AI will become integral to medical practice, much like anti-lock brakes evolved from a luxury car feature into standard equipment. “Not using AI could soon be seen as below the standard of care,” he said.
The challenge lies in validating models against real-world data and workflows, as rapid advancements outpace testing. Dr. Yitzhaki encouraged doctors to experiment with AI tools responsibly, building confidence through experience to harness AI’s potential while guarding against its limitations.