Is AI moving too fast? Industry leaders set panic threshold

Anthropic unveils new safeguards under ‘responsible scaling’ policy, setting red lines for models that could aid in weapons, mass automation or uncontrollable tech acceleration—vowing to pause development if risks cross critical thresholds

Anthropic, a major AI contender, has revised its "responsible scaling" policy to determine when its AI models become too powerful, necessitating enhanced safety measures. The key threshold? If an AI model could assist in developing weapons of mass destruction.
According to CNBC, internal tests would trigger additional security protocols if an AI model could enable a medium-resourced state to develop chemical or biological weapons. While it’s reassuring such scenarios are being assessed, the implications remain unsettling.
Anthropic CEO Dario Amodei at a Senate hearing (Photo: SAUL LOEB / AFP)
Other red flags include a model becoming capable of fully automating roles such as junior researcher at Anthropic, or of accelerating technological progress too rapidly to control.
This policy update comes amid an AI arms race fueled by massive investments and global competition. Anthropic, backed by Amazon and valued at $61.5 billion, trails industry leader OpenAI, which recently closed a record funding round at a $300 billion valuation. Both companies, along with tech giants like Google, Amazon and Microsoft, are vying for dominance in the generative AI market, projected to exceed $1 trillion within a decade. Unexpected competitors, like China’s DeepSeek, which went viral in the U.S., add to the pressure.
Anthropic appears to be following through on earlier promises, establishing a "risk management council" and internal security team. The company also employs Cold War-style counter-surveillance techniques, scanning offices for hidden espionage devices.
OpenAI CEO Sam Altman, speaking at the unveiling of the "Stargate" project. Leading the AI race? (Photo: Reuters)
In summary, Anthropic is attempting to define clear boundaries for dangerous AI capabilities, particularly those linked to weapons of mass destruction, while reinforcing its internal security. Whether these measures will be sufficient to control the rapid advancement of AI technology remains the trillion-dollar question.