Anthropic, a major AI contender, has revised its "responsible scaling" policy, which defines when its AI models become powerful enough to require enhanced safety measures. The key threshold? Whether an AI model could assist in developing weapons of mass destruction.
According to CNBC, additional security protocols would kick in if internal tests show an AI model could enable a moderately resourced state to develop chemical or biological weapons. While it's reassuring such scenarios are being assessed, the implications remain unsettling.
Other red flags include a model's ability to fully automate the work of junior researchers at Anthropic, or capabilities that would accelerate technological progress too rapidly.
This policy update comes amid an AI arms race fueled by massive investments and global competition. Anthropic, backed by Amazon and valued at $61.5 billion, trails industry leader OpenAI, which recently closed a record funding round at a $300 billion valuation. Both companies, along with tech giants like Google, Amazon and Microsoft, are vying for dominance in the generative AI market, projected to exceed $1 trillion within a decade. Unexpected competitors, like China’s DeepSeek, which went viral in the U.S., add to the pressure.
Anthropic appears to be following through on earlier promises, establishing a "risk management council" and internal security team. The company also employs Cold War-style counter-surveillance techniques, scanning offices for hidden espionage devices.
In summary, Anthropic is attempting to define clear boundaries for dangerous AI capabilities, particularly those linked to weapons of mass destruction, while reinforcing its internal security. Whether these measures will be sufficient to control the rapid advancement of AI technology remains the trillion-dollar question.