AI chatbots showed scientists how to make biological weapons, NYT reports

Biosecurity experts warn that publicly available AI models can help users identify pathogens, acquire genetic material and plan attacks; AI companies say safeguards are improving and that the tools do not provide enough information to cause real-world harm

A group of biosecurity experts says leading artificial intelligence chatbots have produced instructions that could help users create and deploy biological weapons, according to a New York Times report based on transcripts shared by scientists who tested the systems.
The conversations showed publicly available AI models describing how to acquire genetic material, assemble dangerous pathogens, spread biological agents in public places and, in some cases, avoid detection. Experts told the newspaper that while a major biological attack remains unlikely, AI could lower the barrier for people with scientific training or malicious intent.
One of the scientists, Dr. David Relman, a Stanford microbiologist and biosecurity expert who has advised the U.S. government on biological threats, said he was hired by an AI company last year to test a chatbot before its public release. During one session, he said, the system explained how to alter a dangerous pathogen so it would resist known treatments, then described how it could be released through a vulnerability in a public transit system.
“It was answering questions that I hadn’t thought to ask it, with this level of deviousness and cunning that I just found chilling,” Relman said. He declined to identify the chatbot, citing a confidentiality agreement. The company later added safety measures, he said, but he considered them inadequate.
Researchers shared more than a dozen chatbot exchanges showing leading AI models producing dangerous biological guidance. MIT genetic engineer Kevin Esvelt said ChatGPT described how a weather balloon could spread biological material over a U.S. city; Google’s Gemini ranked pathogens by potential damage to livestock industries; and Anthropic’s Claude generated a recipe for a novel toxin adapted from a cancer drug.
Another scientist in the Midwest, who spoke anonymously out of concern for professional repercussions, said Google's Gemini Deep Research tool produced thousands of words of guidance in response to a request for a step-by-step protocol to make a pandemic-era virus. The response was not fully accurate, he said, but could still assist someone with harmful intent.
The warnings come as the Trump administration has pushed to accelerate U.S. AI development while scaling back some oversight of the technology’s risks. Several senior biosecurity officials have also left government roles, and federal biodefense budget requests were cut sharply last year. A White House official told the newspaper that the administration remains committed to public safety and that several agencies continue to focus on biodefense.
AI companies rejected claims that the examples could enable a real-world attack. OpenAI, Google and Anthropic said they are continually improving safeguards to balance safety risks with the technology’s scientific benefits. Google said its newer models would refuse some of the more serious biological prompts, while Anthropic said it applies strict thresholds for dangerous biology-related requests, even if that means blocking some legitimate queries.
“There is an enormous difference between a model producing plausible-sounding text and giving someone what they’d need to act,” Alexandra Sanderford, a safety leader at Anthropic, said. She said the company accepts “some over-refusal out of an abundance of caution.”
OpenAI also said one weather-balloon example did not “meaningfully increase someone’s ability to cause real-world harm.” The company said it works with biologists and government officials to improve safeguards.
Still, several experts said that the risk is no longer theoretical. Esvelt, who has consulted for Anthropic and OpenAI, said chatbots can combine scientific guidance with strategic planning in ways that make them especially concerning. In a 2023 demonstration for the White House, he asked ChatGPT for help assembling a mass-casualty pathogen, then placed the unassembled biological components in test tubes and had a colleague bring them to a meeting on biological risks.
“Anything where there isn’t an expert warning them, they can’t fix,” Esvelt said. He argued that AI companies should restrict a broader range of biological information and make it available only to approved users.
Other specialists said chatbots could be especially dangerous for trained scientists or skilled actors who already understand laboratory work but need help refining logistics. Dr. Moritz Hanke of the Johns Hopkins Center for Health Security said that some AI-generated attack concepts were “remarkably creative and realistic.”
“A major problem that experienced actors have is not necessarily making the virus but turning it into a weapon,” said Dr. Jens Kuhn, a bioweapons expert who previously worked at a top-security U.S. laboratory.
Studies have also raised concerns that AI could worsen biosecurity risks. In one, ChatGPT outperformed most expert virologists on difficult laboratory-protocol questions. Another, published in Science, found AI tools could generate thousands of variant genetic sequences for dangerous agents that some DNA-order screening systems failed to detect, though researchers also proposed ways to strengthen those defenses.
Some scientists cautioned that chatbots alone do not make biological weapons easy to produce. Creating a viable virus requires specialized knowledge, equipment and repeated hands-on work. Dr. Gustavo Palacios, a Mount Sinai virologist and former Defense Department lab scientist, compared viruses to complex machines.
“Do you think that a do-it-yourself person could disassemble a Swiss watch and then reassemble it?” he said.
But Palacios and others said AI could become far more dangerous in the hands of people who already have technical expertise. The Times pointed to an attempted attack in India last year, where police in Gujarat arrested a physician accused of plotting for the Islamic State and trying to extract ricin from castor beans. An investigator told the newspaper the suspect had used AI-powered Google searches and ChatGPT for guidance.
AI developers and many scientists also emphasize the technology’s enormous potential benefit. AI systems are already accelerating drug discovery, protein design and biological research. Google scientists shared a Nobel Prize in 2024 for AI work that predicts and designs protein structures, and Stanford computational biologist Brian Hie used an AI model called Evo to design a virus that attacks harmful bacteria.
“There is tremendous upside to the technology,” Hie said. But he warned that the same kind of model that can design cancer-fighting proteins could also be used to invent new toxins.