How AI threatens our privacy: new guide details risks and safeguards for personal data

AI systems rely on vast amounts of personal data to function; a new Privacy Protection Authority guide warns of hidden risks, from data-retaining models to information-hungry chatbots, and offers ways to use AI tools without compromising personal information

In an era in which artificial intelligence systems are becoming an integral part of our public, business and personal lives, a critical question is emerging: Are we sacrificing our privacy on the altar of progress? These systems, now embedded in health care, education, finance and government services, have an insatiable appetite for personal data. If we are not careful, we may discover that our most private information is appearing in responses generated by ChatGPT or Gemini.
Israel’s Privacy Protection Authority, part of the Justice Ministry, is shining a spotlight on an issue that largely flies under the radar. In a first-of-its-kind guide, “Implementing Privacy-Enhancing Technologies in Artificial Intelligence Systems,” the authority frames the struggle over privacy as an ongoing process. That process involves AI developers, the organizations that deploy these systems and users themselves.
Privacy risks in a world of artificial intelligence (Photo: Shutterstock)

So where is the problem?

First, AI consumes data on a massive scale. This information, including age, health status, financial history and photos, does not merely pass through the system but can become embedded within it. AI models are trained on data about specific individuals and may retain that information even if the original dataset is deleted. The result is that future chatbot responses could include personal details or images of us, without our ever knowing.
This risk is evident, for example, in systems used by banks to detect fraud and money laundering. To identify patterns of financial crime, AI systems must analyze data moving between banks, tax authorities and law enforcement. That data is extremely sensitive, including transaction amounts, bank account details, investments and credit information. Because of legal and regulatory limits, such data generally cannot be shared between systems unless privacy-enhancing technologies are used.
A similar challenge exists in clinical decision-support systems in hospitals. These systems rely on processing vast amounts of sensitive medical data from multiple sources. Without adequate safeguards, private medical information could leak or be exposed to unauthorized parties during development or research. In government services such as education or welfare benefits, AI systems can directly affect citizens’ rights. Without built-in privacy mechanisms, citizens may find their sensitive information handled without due care, potentially exposing the state to legal action.
The Privacy Protection Authority’s document proposes balancing the capabilities of AI systems with user protection through the use of privacy-enhancing technologies, known as PETs.

The guide focuses on three types of technologies

Data transformation: The goal is to make data less personally identifiable without undermining the AI system’s ability to draw conclusions. For example, instead of using real patient medical records, organizations can generate synthetic data that statistically reflects population characteristics without identifying individuals. Another method involves adding “noise” to datasets to prevent the identification of specific people.
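By way of illustration, here is a minimal Python sketch of the “noise” idea, in the spirit of differential privacy. The dataset, the epsilon value and the value range are all hypothetical, and a real deployment would calibrate them far more carefully; this is a sketch of the technique, not a production mechanism.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative only: ages from a fictitious medical dataset.
true_ages = np.array([34, 61, 47, 29, 55, 72, 40, 38])

def noisy_mean(values: np.ndarray, epsilon: float, value_range: float) -> float:
    """Return the mean with Laplace noise added, in the spirit of
    differential privacy: smaller epsilon means more noise, masking
    any single individual's contribution to the published statistic."""
    sensitivity = value_range / len(values)  # max effect of one record on the mean
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(values.mean() + noise)

# A randomized value near the true mean of 47; the exact figure of any
# one person can no longer be inferred from the published number.
print(noisy_mean(true_ages, epsilon=0.5, value_range=100))
```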
Access limitation and secure computation: These methods allow complex computations to be performed without exposing the underlying data. Using secure multiparty computation, for instance, several banks can cooperate to detect money laundering without any one bank seeing another’s raw customer data. Another technology, homomorphic encryption, enables AI systems to process encrypted data so that information remains confidential even during analysis.
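The sketch below illustrates the core trick behind one form of secure multiparty computation, additive secret sharing: three hypothetical banks learn a joint total without any of them revealing its own figure in the clear. The bank names and amounts are invented, and a real protocol would add authenticated channels and defenses against dishonest parties.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares that sum to it mod PRIME.
    Any subset of fewer than n shares reveals nothing about the secret."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Hypothetical per-bank totals of suspicious transaction volume.
bank_totals = {"Bank A": 120_000, "Bank B": 45_000, "Bank C": 310_000}
n = len(bank_totals)

# Each bank splits its own total and distributes one share to each party.
all_shares = [share(total, n) for total in bank_totals.values()]

# Each party sums the shares it holds; only these partial sums are published.
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]

# Combining the partial sums yields the joint total (475000) without any
# individual bank's figure ever being exposed.
print(sum(partial_sums) % PRIME)
```

Homomorphic encryption, mentioned in the guide, pursues the same goal by a different route: rather than splitting the data into shares, it allows computations to run directly on ciphertexts.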
Governance and monitoring: These tools are designed to ensure systems do not spiral out of control at any stage. They include mechanisms to monitor data leakage, audit trails to document system activity and advanced authorization controls that prevent improper reuse of data.
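As a rough illustration of an audit trail combined with purpose limitation, the sketch below logs every data access and rejects any purpose outside an approved list. The policy, user names and record IDs are invented for the example; it shows the pattern, not any particular product.

```python
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

ALLOWED_PURPOSES = {"fraud_detection", "model_training"}  # illustrative policy

def audited(func):
    """Record who accessed which data, when, and for what purpose,
    and refuse purposes outside the approved list (reuse control)."""
    @functools.wraps(func)
    def wrapper(user: str, record_id: str, purpose: str):
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "record": record_id,
            "purpose": purpose,
            "allowed": purpose in ALLOWED_PURPOSES,
        }
        audit_log.info(json.dumps(entry))  # every attempt is written to the trail
        if not entry["allowed"]:
            raise PermissionError(f"purpose '{purpose}' is not approved")
        return func(user, record_id, purpose)
    return wrapper

@audited
def fetch_record(user, record_id, purpose):
    return {"id": record_id}  # stand-in for a real data-store lookup

fetch_record("analyst1", "acct-4711", "fraud_detection")   # logged and allowed
# fetch_record("analyst1", "acct-4711", "marketing")       # logged, then rejected
```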
The recommendations make clear there is no single, magic solution to protecting privacy in AI systems. Effective protection requires combining multiple technologies throughout the system’s entire life cycle. At the same time, privacy-enhancing technologies can help companies develop more accurate models and collaborate with one another without exposing sensitive data.
Privacy in the world of AI (Photo: Shutterstock)
Even so, some privacy challenges in AI cannot be solved through software alone. There is no way to guarantee 100% anonymity, and anonymized data can sometimes be reidentified when combined with external information. It is also difficult to prevent AI models from retaining personal details included in their training data. Once a model leaves the training environment and enters real-world use, control becomes limited. It may learn new behaviors, absorb additional information that alters its performance or, at times, behave unpredictably. Human oversight, therefore, remains necessary and is likely to remain so in the future.
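The re-identification risk is easy to demonstrate. In the toy example below, built on entirely fictitious data, records stripped of names are matched against a public list using only quasi-identifiers such as ZIP code, birth year and sex, re-attaching an identity to a sensitive diagnosis.

```python
# "Anonymized" health records: names removed, but quasi-identifiers remain.
anonymized = [
    {"zip": "69012", "birth_year": 1985, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "69012", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

# Public auxiliary data, e.g. a voter roll, with names attached.
public_roll = [
    {"name": "Dana Levi", "zip": "69012", "birth_year": 1985, "sex": "F"},
]

# Joining on the shared quasi-identifiers re-identifies the record.
for rec in anonymized:
    for person in public_roll:
        if all(rec[k] == person[k] for k in ("zip", "birth_year", "sex")):
            print(person["name"], "->", rec["diagnosis"])  # Dana Levi -> diabetes
```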
The central message of the Privacy Protection Authority’s guide is that protecting privacy is not an obstacle to innovation but a condition for it. Privacy safeguards enable data sharing between organizations, support the development of more accurate models, allow the use of sensitive data without exposure, strengthen public trust in AI systems and ensure compliance with legal and regulatory requirements.
The guide aims to place Israel alongside the world’s leading countries in adopting advanced standards for responsible AI. Whether Israel succeeds will depend on local AI developers, product managers, government project leaders and legal advisers. The document serves as a wake-up call: The future of living alongside artificial intelligence depends on our ability to build privacy protections into systems from the very first lines of code.