The Knesset’s Health Committee held a discussion on doctors’ use of artificial intelligence tools, revealing a lack of data on the issue. While the Health Ministry said it was not aware of any adverse events caused by AI, it admitted it lacks the budget to address the matter in a meaningful way.
Representatives from the HMOs and the Israeli Medical Association warned during Tuesday's discussion against overregulation that could hinder innovation, urging that doctors be given the space to learn and adopt new tools responsibly.
The discussion was initiated by Knesset lawmakers Matan Kahana (National Unity) and Yaakov Asher (United Torah Judaism), following a request by Rabbi Yossi Erblich, chair of the nonprofit Lema’anchem, who wrote to Health Minister Uriel Buso. Erblich cited multiple incidents where patients’ lives were put at risk due to improper use of AI. The concerns focused mainly on open tools like ChatGPT, rather than regulated, AI-powered medical devices that require Health Ministry approval.
Acting Committee Chair Ron Katz (Yesh Atid) opened the session by warning: “Even if we want to streamline processes and move forward, if there’s even a small chance of harming a patient’s health, we must raise the red flag.”
Dr. Yosef Walfish, chair of the Israeli Medical Association’s ethics bureau, said the organization is currently drafting ethical guidelines for AI use in medicine. “Alongside ethical concerns, we must not lose out because of fear of new technology,” he said. In response to a question on whether patients are informed about the use of AI tools, Walfish clarified that doctors have an ethical duty—rooted in the Patient Rights Law—to explain the rationale behind their decisions.
Liron Zohar, deputy head of the Health Ministry’s tech division, reiterated that the ministry has not identified any adverse events resulting from doctors' use of AI. “It’s possible such cases exist, but our review found none,” she said. Zohar emphasized that AI is a top priority for the ministry’s director-general and that AI is also being introduced into internal operations to improve public services. A cross-departmental working group has been formed, but she acknowledged that the ministry currently has no dedicated budget for AI implementation.
“There’s real concern about introducing change, but this field offers enormous potential—especially in health care,” Zohar added. She said a national AI strategy is being developed, and the ministry hopes to secure a dedicated budget. She also noted that the Health Ministry is participating in global working groups and will soon launch an initiative to define boundaries and guidelines. Regarding open tools like ChatGPT, she stressed that educating medical staff about both the limitations and potential of these technologies is essential.
Adam Arutz, head of information security and cyber protection at Leumit Health Services, said regulation is needed but warned: “This is an unregulated space, and while the Privacy Protection Authority and Justice Ministry are working on it, we must avoid overregulation that could block progress.” He added that Leumit uses internal AI tools with strict data controls and is developing an AI tool for mental health. With patient consent, therapy sessions will be recorded (without identifying information) and summarized by AI.
Still, Arutz warned of two key risks: overreliance on AI leading to professional errors, and serious privacy exposure. “Many AI companies have loose privacy policies or even sell user data,” he said, cautioning that uploading patient records to uncontrolled tools could be dangerous.
Dr. Lilach Tzuler, head of medical technologies and digital health at Clalit Health Services, explained that regulated AI-based devices and software already exist in Israel.
“This is not a chaotic field—there is structure in the health sector, even though it’s still new to all of us,” she said. Regarding open tools like ChatGPT, she added, “Even before ChatGPT we had Google, which we used as doctors—but always with critical judgment.” Medical training, she stressed, includes skepticism and evidence-based decision-making. “We need the flexibility to explore these tools before imposing heavy regulation that could hold Israel back as a global leader.”
Asher Rochberger from Beit Issie Shapiro’s Health Advancement Forum for People with Disabilities shared a personal story highlighting AI’s potential: His wife was diagnosed with a rare syndrome affecting just 50 people in Israel. “At least half the doctors we saw had never heard of it. In such a case, AI—drawing from global research—could help the average doctor understand the condition and provide the best treatment,” he said. He urged a balanced approach between potential benefits and inherent risks.
Concerns over AI 'hallucinations'
Professor Yosef Press, president of Lema’anchem and former director of Schneider Children’s Medical Center, said his organization has recently received patient inquiries on the topic. “If each of us asks ChatGPT a medical question, half will get different answers,” he warned, underscoring the need for careful oversight, especially since diagnoses vary with a patient’s age and health background. “AI should not be banned—it’s a technological leap—but doctors should not be issuing discharge letters or recommendations based solely on it.”
Dr. Gadi Neuman, vice president of Lema’anchem and former deputy director of Beilinson Hospital, said AI is a powerful and central tool based on vast data sets. “Its strength lies in data—but that’s also its weakness. Sometimes it hallucinates—making up facts or information.” He explained that these tools often rely on incomplete datasets, which could disproportionately harm marginalized or minority populations. Without regulation, he added, courts may ultimately have to decide whether a doctor’s AI-based decision was justifiable.
Knesset lawmaker Ron Katz, the acting committee chair, proposed developing a homegrown Israeli medical AI tool—“a kind of Waze for health care.” He said, “If we can build a dedicated system based on millions of medical records, it could become the most powerful and profitable AI tool in the world—paying taxes to Israel instead of relying on foreign platforms.”
Given the lack of data from the HMOs and the Health Ministry about the extent of AI use by doctors, Katz asked the ministry to conduct an anonymous survey on the issue, including whether patients are informed when AI tools are used in their care. He also requested the creation of a joint task force with the Israeli Medical Association’s ethics bureau and the nurses' union to formulate usage guidelines and provide recommendations for supportive regulation. The committee asked to receive an update from the Health Ministry within a month.