Could so-called AI agents really work alongside us to the degree some suggest? Nvidia CEO Jensen Huang believes his company might, one day, have 50,000 employees working in tandem with 100 million AI agents. However far off that vision may be, Nvidia will be in good company if agentic AI really can live up to the hype.
Picture all the Fortune 500 companies deploying fleets of their own AI agents – the number could be in the billions. But even with agentic robot "chiefs" autonomously managing many of those agents, people will still need to take responsibility for ensuring they adhere to the standards their "employers" expect of them.
Perhaps one way forward is to subject AI agents to the same kind of performance reviews that humans must undergo. AI agents could be measured against the core competencies of effectiveness, security and compliance to identify where they deliver value and where there are areas for improvement.
For Israeli readers, the idea is not abstract: tech startups already rely heavily on KPIs to guide rapid iterations. Bringing the same discipline to AI agents ensures they remain accountable and productive, rather than becoming uncontrolled variables inside the enterprise.
AI agents: managing a hundred million workers
As millions of AI agents enter the workforce, organizations need to develop frameworks that enable their teams to manage them at scale. Regardless of whether they have hundreds or millions of agents, human teams must remain in control. These human managers, therefore, need to be equipped to manage every stage of the AI agent lifecycle, from design and deployment to monitoring, retraining and, ultimately, retirement.
The human workforce must also remain accountable for the actions of these AI agents, which means being able to monitor and manage the sources of data they draw from. Enterprises should therefore simplify how agents access a centrally managed, governed and secured source of data that has been approved for use. Doing so could enable teams to deploy agents faster, while ensuring they comply with privacy and security standards.
In Israel, sectors such as fintech, cybersecurity and healthtech are already at the forefront of advanced AI adoption. That makes the need for strong governance particularly urgent. For example, fintech firms leveraging AI to make portfolio decisions or cyber startups deploying AI-driven detection systems must apply rigorous oversight to avoid compliance and security gaps.
With this in mind, a mature Application Programming Interface (API) strategy is essential to the management of an agentic AI workforce. In the same way the human workforce is given access to the tools they need to do their jobs – through a user account – AI agents need controlled access to systems and data to function effectively. APIs provide the tools that organizations need to support this, granting AI agents controlled access to the data they need to make decisions – and to the systems they need to complete the tasks they're assigned.
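As a rough illustration of what "controlled access" can look like in practice, the sketch below issues an agent a short-lived, scope-limited credential via an OAuth2 client-credentials flow rather than blanket access. The identity provider, endpoints, client ID and scopes are hypothetical, and this flow simply stands in for whatever mechanism an organization's API platform already provides.

```python
import requests

# Hypothetical identity provider and API endpoints; names are illustrative only.
TOKEN_URL = "https://auth.example.com/oauth2/token"
ORDERS_API = "https://api.example.com/v1/orders"

def get_agent_token(client_id: str, client_secret: str, scopes: list[str]) -> str:
    """Obtain a short-lived, scope-limited token for an AI agent via the
    OAuth2 client-credentials flow: the agent equivalent of a user account."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": " ".join(scopes),  # grant only what the agent's task requires
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# The agent can read orders, but it has never been granted write access.
token = get_agent_token("order-triage-agent", "<secret>", ["orders:read"])
orders = requests.get(
    ORDERS_API,
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
).json()
```

The point of the sketch is the shape of the control, not the specific protocol: each agent gets an identity, a narrow set of permissions and credentials that expire, so its access can be reviewed and revoked just like an employee's.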
Watch out for zombies: keeping AI agents in check
As APIs proliferate to support the agentic workforce, one of the biggest risks organizations face is the rise of shadow or "zombie" APIs. These are undocumented, unmanaged integrations, often created to provide temporary access to a siloed data store, or built by teams working outside the visibility of IT who are unaware of the risk.
Unmonitored APIs are rapidly becoming one of the most common access vectors for attacks against applications. They can also lead to inconsistent data flows, unauthorized access to sensitive information and spiraling technical debt. In the same way that organizations have processes to block unauthorized human workers from their environments, it’s essential they can find shadow or zombie APIs – and show them the door to prevent them from being used by AI agents.
The most effective way to banish zombie APIs is to create a central hub for all APIs and AI agents. A central hub helps to build a single source of truth, which organizations can use to review and manage the performance and behavior of every AI agent in their environment. As such, they can instantly identify every API that agents interact with and determine which data they are accessing. That makes it far easier to uncover and block any unauthorized behavior.
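To make the single-source-of-truth idea concrete, here is a minimal sketch of the kind of reconciliation such a hub performs: comparing the endpoints agents actually call (taken from gateway traffic logs) against the registered catalog, and flagging anything unregistered as a zombie candidate. The file names and column layout are assumptions for illustration only.

```python
import csv

# Illustrative inputs: an exported catalog of registered APIs and a gateway
# traffic log. The file names and columns are assumptions, not a product format.
def load_endpoints(path: str) -> set[str]:
    """Read a CSV with an 'endpoint' column and return the set of endpoints."""
    with open(path, newline="") as f:
        return {row["endpoint"] for row in csv.DictReader(f)}

registered = load_endpoints("api_catalog.csv")      # what the hub knows about
observed = load_endpoints("gateway_traffic.csv")    # what agents actually call

# Anything agents are calling that the catalog has never heard of is a
# candidate shadow/zombie API and should be reviewed or blocked.
zombie_candidates = observed - registered
for endpoint in sorted(zombie_candidates):
    print(f"Unregistered endpoint seen in live traffic: {endpoint}")
```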
Such a centralized API and AI management hub should also incorporate robust governance and security controls. Access policies can be enforced to ensure data pipelines are secured and AI agents adhere to the same privacy guidelines their human counterparts follow. Low-code integration tools are also essential, enabling teams to use standardized templates and natural language prompts to create agents with security, compliance and efficiency built-in.
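As a simple illustration of such an access policy, the sketch below denies access to any data source that is not in the governed catalog and otherwise checks an agent's granted scope against the policy for that dataset – the same gate a human account would pass through. Every name, scope and classification here is hypothetical.

```python
from dataclasses import dataclass

# A minimal, hypothetical policy check: field names and classification
# levels are illustrative, not drawn from any specific platform.
@dataclass
class AgentRequest:
    agent_id: str
    granted_scopes: set[str]
    dataset: str

DATASET_POLICY = {
    # dataset -> (required scope, data classification)
    "customer_profiles": ("pii:read", "restricted"),
    "product_catalog": ("catalog:read", "public"),
}

def is_allowed(req: AgentRequest) -> bool:
    """Allow access only if the dataset is governed and the agent holds the
    scope the policy requires for it."""
    policy = DATASET_POLICY.get(req.dataset)
    if policy is None:
        return False  # ungoverned data source: deny by default
    required_scope, _classification = policy
    return required_scope in req.granted_scopes

print(is_allowed(AgentRequest("support-agent", {"catalog:read"}, "customer_profiles")))  # False
print(is_allowed(AgentRequest("support-agent", {"catalog:read"}, "product_catalog")))    # True
```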
Regulation and governance in the Israeli context
Globally, regulators are paying attention to AI oversight, and Israel is no exception. The Privacy Protection Authority is increasingly active in shaping data governance and is expected to tighten its supervision over how AI interacts with personal information. Just as the EU has been advancing comprehensive AI and data regulations, Israel will likely require businesses to demonstrate that AI systems – including AI agents – operate within strict compliance frameworks. For enterprises in sensitive sectors such as healthcare or finance, this regulatory direction is not optional but essential.
For Israeli executives, the message is clear: AI agents must be treated like any other employee. They require supervision, measurable outcomes and structured feedback loops. Israeli startups are known for their speed of execution and their reliance on KPIs to push products forward. Extending this culture of rapid, metrics-driven evaluation to AI agents will help ensure these systems add measurable value rather than create unforeseen risks.
By implementing these controls while the agentic workforce is in its infancy, IT leaders can get on the front foot and ensure AI impacts their organization for the better. Enterprises that establish strong governance now – especially in highly regulated environments like Israel – will be far better positioned to harness AI’s potential responsibly and competitively.
- Markus Mueller is Field CTO for API Management at Boomi. Ariel Pohoryles is Director of Product Marketing for Data Management at Boomi.



