As the U.S. administration seeks to advance its vision of “Make America Healthy Again” (MAHA), the technology accompanying the initiative appears to be generating mostly political headaches. A particularly embarrassing incident occurred this week when the project’s artificial intelligence tool, associated with Health Secretary-designate Robert F. Kennedy Jr., gave users bizarre dietary recommendations, including a suggestion to insert certain food products rectally to “maximize nutrient absorption.”
The incident, widely reported in U.S. and British media, is only the tip of the iceberg in a series of technical and professional failures surrounding the official report of the MAHA commission. The report, presented as a “gold standard” of modern science, was found to be riddled with embarrassing errors that experts say stem from careless use of generative artificial intelligence.
An investigation by leading media outlets, including Britain’s Guardian newspaper and the U.S. outlet NOTUS, found that Kennedy’s report cited medical studies that simply do not exist. At least seven scientific sources referenced in the report turned out to be typical “hallucinations” of language models, complete with nonexistent researchers and plausible-sounding studies that were never written.
Even when the artificial intelligence did not fabricate sources, it erred. Researchers whose names were included in the report expressed astonishment. Epidemiologists and pediatricians said the conclusions attributed to them in the government text were the exact opposite of their actual findings.
In response to the criticism, the White House sought to downplay the issue, calling it a “formatting problem.” The report was nonetheless quietly updated, and the fictitious references were removed from the online version.
It is worth noting that major technology companies are also developing models for physicians. Unlike general-purpose chatbots, however, models developed by Google, and by Microsoft in partnership with OpenAI, are trained on verified medical databases and score above 85 percent on the United States Medical Licensing Examination, or USMLE.
The key difference lies in what are known as “guardrails,” filtering mechanisms designed to prevent systems from providing pseudo-scientific or dangerous advice — an element that appears to have been entirely absent from Kennedy’s platform, which reports say is based on xAI’s Grok.
Still, the breakthrough of large language models in 2022 brought with it what some describe as a troubling democratization of medical advice. Where once only physicians accessed expert systems, today anyone can receive a “diagnosis” from a smartphone. The MAHA case underscores the risks inherent in combining politics, controversial medical ideology and technology that has yet to fully mature.
At a time when China is investing billions of dollars in artificial intelligence to manage hospitals and the European Union is enacting strict AI regulations under its AI Act, the United States finds itself facing diplomatic and professional embarrassment. When a health secretary promotes fictitious data generated by a machine, public trust in the health system is placed in real jeopardy. The technology intended to “make America healthy again” appears itself in urgent need of treatment — and, above all, professional human oversight to prevent digital hallucinations from becoming national policy.