Shadow AI puts sensitive data at risk as Israeli firms struggle to keep up with new privacy standards

Analysis: As generative AI spreads in the workplace, many Israeli firms lack oversight of sensitive data use—risking legal, reputational and trust fallout; a new privacy law amendment aims to boost protections, but critics call it needless red tape

Nimrod Vax
Amendment 13 to the Privacy Protection Law aims to tighten supervision over how organizations in Israel manage and secure personal data. It sets a new bar for information security and privacy standards and imposes heavy sanctions on violators. With the use of advanced technologies, organizations can comply with the law, streamline processes, maintain public trust and even gain business advantage.
The problem is that the law assumes managers know what data they hold, where it is located and how it is being used. This assumption is entirely wrong. In reality, artificial intelligence is entering organizations through the back door, exposing us all to the leakage of sensitive information.
Generative AI has spread at a staggering pace across every type of organization, from large financial institutions and government agencies to retailers and small startups. Employees use ChatGPT, Gemini or Copilot to shorten processes, draft emails, analyze data and create presentations. Sometimes this happens officially and under management. More often, it takes place “through the back door,” without the knowledge of leadership or approval from IT and security teams. This phenomenon has earned the name “shadow AI”—the use of AI tools within an organization without governance or review.
This isn’t necessarily malicious. Most employees act in good faith, simply trying to boost productivity. But the consequences can be critical: sensitive data leaves the organization’s boundaries with no way to ensure how it will be stored, processed or reused.
Take, for example, a global technology company where a development team uploaded internal source code to an external AI service to solve a bug. That code was later stored on third-party servers and became accessible to developers outside the organization. In another case, a marketing employee at a financial institution uploaded an Excel sheet containing customer data into an analytics tool, not realizing the data was stored outside company servers and potentially exposed to unauthorized parties.
Amendment 13 demands that organizations maintain complete control over their data. But such control requires full transparency when it comes to AI usage. Without real-time monitoring, there’s no way to know whether, where and how sensitive data is being fed into external tools. The gap doesn’t just pose legal risks; it can erode customer trust and damage reputation.
The good news is that solutions to manage this risk already exist and are advancing as AI itself advances. Organizations can and should adopt them now. This includes mapping AI activity across the organization in real time; identifying which tools are in use, what data flows into them, and in what context; automatically scrubbing sensitive data before it is fed into AI tools, via tokenization or redaction; and tagging data intelligently by sensitivity, relevant regulation or permissible use so that policies are embedded directly into workflows. Most importantly, employees must be trained, not with generic “don’t do this” warnings, but with clear instructions on how to use approved tools properly.
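To make the scrubbing step concrete, here is a minimal sketch of tokenizing sensitive values before a prompt leaves the organization. The regex detectors, function names and token format are illustrative assumptions, not any vendor's actual implementation; production systems rely on full data-classification engines rather than hand-rolled patterns.

```python
import re
import uuid

# Hypothetical detectors for two illustrative PII types.
# Real deployments would use a proper classification engine.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b0\d{1,2}-?\d{7}\b"),
}

def tokenize(text):
    """Replace detected PII with opaque tokens; return the scrubbed text
    plus a vault mapping tokens back to originals (kept in-house)."""
    vault = {}
    for label, pattern in PATTERNS.items():
        def _swap(match):
            token = f"<{label}_{uuid.uuid4().hex[:8]}>"
            vault[token] = match.group(0)
            return token
        text = pattern.sub(_swap, text)
    return text, vault

def detokenize(text, vault):
    """Restore the original values after the AI response comes back."""
    for token, original in vault.items():
        text = text.replace(token, original)
    return text
```

Only the scrubbed text is sent to the external tool; the vault never leaves company servers, so the AI service sees placeholders while internal systems can still reconstruct the full response.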
The challenge is not the availability of technology but the ability of organizations to adapt, integrate these solutions into existing systems and procedures, and ensure they are embedded in daily practice. One point must be clear: artificial intelligence is not the enemy. On the contrary, it is a growth and innovation engine. But like any powerful technology, it demands careful risk management. Organizations that understand this early and implement the available tools today will be the ones that maintain their competitive edge while meeting both regulatory demands and customer expectations.
In an era where AI crosses organizational boundaries in every direction, from an employee’s phone to the public cloud, data control is not a luxury. It is a condition for survival.
  • The author is CPO and co-founder of BigID, Connecting the Dots Across Data & AI.