In an unusual move, Sam Altman, chief executive of OpenAI, announced the creation of a new executive position aimed at preventing artificial intelligence from posing catastrophic risks to humanity, including the development of biological weapons.
The announcement comes amid heightened criticism of the technology’s ties to teenage suicides and a rise in what mental health professionals and technology critics have described as “AI psychosis” associated with ChatGPT and similar chatbots.
Late Sunday night in the United States, OpenAI posted a job listing for a position it calls Head of Preparedness. The company described the role as one of the most demanding and critical in Silicon Valley. Behind the corporate title is a responsibility that goes beyond typical tech industry job descriptions: the selected candidate will be charged with helping ensure that AI systems do not inflict irreversible harm on humanity or society.
In a post on X, Altman acknowledged, in stark terms, that the rapid pace of improvements in AI models presents “real challenges.” He described the role as “stressful,” a phrasing that many in the industry interpreted as an understatement of the demands the job could entail.
The job posting offers a rare and unsettling look into internal concerns at one of the world’s leading AI development labs. According to the listing, the person hired will be expected to “help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm.”
Behind the vague title lie concrete nightmare scenarios, including the use of AI to develop biological weapons such as drug-resistant viruses and bacteria, the creation of autonomous offensive cyber tools, and the science-fiction scenario of self-improving systems operating without human intervention — a development many experts see as a step toward the so-called technological singularity and a loss of human control.
The move comes amid rising regulatory pressure. In Europe, the EU AI Act already requires strict risk assessments for powerful models. In China, regulation focuses on tight control of AI outputs and preventing threats to social stability. In the United States, the White House has issued executive orders demanding greater transparency on safety. OpenAI appears to be trying to demonstrate self-regulation before lawmakers impose tougher limits.
While future threats like biological weapons dominate attention, Altman’s announcement also addresses a more immediate issue: mental health. The new role will oversee the psychological impact of AI systems on users, a step critics say is long overdue. Recent months have seen growing reports of “AI psychosis” and cases in which chatbots were linked to self-harm. The most prominent involved Adam Raine, a 16-year-old American boy who died by suicide after developing a deep emotional dependence on ChatGPT, prompting public outrage and a lawsuit by his parents.
Adam Raine, who allegedly took his own life with the assistance of ChatGPT
(Photo: social media)
Critics warn that chatbots, designed to please and affirm users — a tendency known as sycophancy — can reinforce delusions, fuel conspiracy theories, or help conceal eating disorders under a veneer of artificial empathy.
As OpenAI searches for a preparedness chief, the industry remains divided over how to restrain advanced AI. OpenAI relies on reinforcement learning from human feedback, in which people reward safe answers and penalize harmful ones. The method captures human nuance but depends on thousands of contractors and can still be bypassed by skilled users.
Rival firm Anthropic, founded by former OpenAI employees, uses “constitutional AI,” training models on a written set of ethical principles and allowing one AI to correct another, a scalable approach that raises questions over who defines the rules.
Microsoft and Google combine these methods with aggressive external safety filters that block risky outputs before they reach users, an approach that has drawn criticism for excessive censorship.
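For readers curious how human ratings become a training signal, the toy sketch below shows, in deliberately simplified Python, the preference-learning step behind reinforcement learning from human feedback: raters pick the safer of two answers, and a small reward model is fit to those pairwise judgments. The features, data, and linear model here are hypothetical illustrations, not OpenAI’s actual pipeline.

```python
# Minimal sketch: turning pairwise human safety judgments into a trainable
# reward signal (Bradley-Terry style). Features and data are hypothetical.
import math
import random

# Hypothetical hand-crafted features for a candidate answer:
# (helpfulness score, presence of harmful instructions)
PREFERENCES = [
    # (features of answer rated SAFER, features of answer rated WORSE)
    ((0.9, 0.0), (0.7, 1.0)),
    ((0.6, 0.0), (0.8, 1.0)),
    ((0.8, 0.0), (0.9, 1.0)),
]

weights = [0.0, 0.0]

def reward(features):
    """Linear reward model: higher means 'preferred by raters'."""
    return sum(w * x for w, x in zip(weights, features))

def train(steps=2000, lr=0.1):
    for _ in range(steps):
        chosen, rejected = random.choice(PREFERENCES)
        # Probability the chosen answer beats the rejected one
        margin = reward(chosen) - reward(rejected)
        p = 1.0 / (1.0 + math.exp(-margin))
        # Gradient ascent on log-likelihood of the human preference
        for i in range(len(weights)):
            weights[i] += lr * (1.0 - p) * (chosen[i] - rejected[i])

train()
print("learned weights:", weights)  # the harmful-content feature ends up with a negative weight
```

In production systems the reward model is a large neural network over full text, and the chat model is then fine-tuned against it with policy-optimization methods, but the basic loop of converting rater preferences into a trainable objective is the same.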
The hiring of a Head of Preparedness underscores a shift in the industry, as companies increasingly measure progress not only by speed or scale but by their ability to prevent powerful AI systems from causing harm.



