As AI agents gain autonomy, new security risks emerge across enterprises

Autonomous AI systems now execute code, access data and trigger other agents with little oversight, creating hidden attack chains that traditional defenses can’t detect; Noma Security’s new Agentic Risk Map aims to visualize and contain those threats before they cascade

As autonomous AI agents increasingly power enterprise workflows, security experts are warning that traditional tools are no longer enough to handle the growing risks.
These agents—unlike conventional large language models—operate independently across digital systems, accessing databases, executing code, triggering other agents and making real-time decisions that can cascade across departments and infrastructure.
2 View gallery
בינה מלאכותית
בינה מלאכותית
(Illustration: Shutterstock)
Through Model Context Protocol (MCP) servers, these agents connect to a widening universe of third-party tools and services, forming complex webs of inter-agent relationships. Security teams are finding it nearly impossible to map where a single compromise might lead—a problem amplified by “agent sprawl,” where autonomous agents are deployed across organizations without centralized oversight.
A compromised customer support agent, for example, could potentially initiate unauthorized financial transactions, exfiltrate sensitive data or send malicious emails to employees or customers—all without human intervention. With multiple systems involved and no clear visibility into how agents interact, organizations risk losing control over their digital environments.
In response to this growing challenge, Noma Security, a startup focused on AI security and governance, has launched a new tool called the Agentic Risk Map (ARM). The technology, described as the first of its kind, visually maps the potential “blast radius” of a compromised AI agent, giving organizations a clearer understanding of how risk spreads through interconnected systems.
“Security teams are flying blind when it comes to AI agent risks,” said Niv Braun, CEO and co-founder of Noma Security. “These agents don't just touch one system; they span departments, tools and workflows. A seemingly harmless Customer Support Agent, if compromised, can cascade into unauthorized money transfers, sensitive data exfiltration and malicious emails sent to customers or employees for lateral movement. Organizations need more than point solutions. They need complete visibility, proactive risk management, and runtime protection working together.”
2 View gallery
Noma Security
Noma Security
Noma Security
(Photo: Omer Hacohen)
The ARM tool is part of Noma’s broader platform for securing agentic AI. The system operates in three phases: discovering deployed agents (including shadow AI), assessing and managing their security posture and providing real-time monitoring and threat containment.
The platform supports a wide array of tools and agent frameworks used in enterprise settings, such as Microsoft Copilot Studio, ServiceNow, Salesforce Agentforce, Azure AI Foundry, Google Vertex AI and GitHub Copilot. It integrates with cloud providers, low-code platforms and agent development SDKs, giving organizations end-to-end visibility into their AI infrastructure.
Noma’s ARM visualizes how agents are connected, what permissions they have and which systems they touch—information security teams can use to enforce tighter controls, prevent cascading failures and run red-team tests before deployment. The platform continuously monitors agent behavior and alerts teams to unusual patterns, such as unauthorized tool invocations, suspicious data access or prompt injection attempts.
With backing from investors including Ballistic Ventures, Glilot Capital, Databricks Ventures and SVCI, Noma Security is used by several Fortune 500 companies and has been recognized by industry analysts as a leader in AI security and risk management.
Comments
The commenter agrees to the privacy policy of Ynet News and agrees not to submit comments that violate the terms of use, including incitement, libel and expressions that exceed the accepted norms of freedom of speech.
""