Since its debut just a month ago, the Chinese AI model DeepSeek has sparked intense debate and speculation—about its capabilities, limitations, development costs and, unsurprisingly, the extent of Chinese government involvement in its operation.
“It’s no exaggeration to say DeepSeek has changed the game. You have to give it credit,” says Gadi Evron, CEO and founder of the AI cybersecurity company Knostic, in an interview with Ynet. “It’s a simple model anyone can run on their home computer. But it’s also subject to censorship, and that’s something people need to understand.”
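Evron’s point about home hardware is easy to see in practice. The sketch below is illustrative only: it assumes one of the small distilled checkpoints DeepSeek has published and the open-source Hugging Face transformers library; the repository name is our assumption for the example, not something confirmed in the article.

```python
# Minimal sketch of running a small DeepSeek model locally with the
# Hugging Face transformers library. The checkpoint name is an assumption:
# a small distilled variant, chosen because it can fit on a home machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Is Taiwan part of China?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A checkpoint this small will run, slowly, on an ordinary CPU; the larger variants need dedicated GPU hardware.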
Censorship, a heavily charged term in the cyber world, is less controversial in China, where DeepSeek’s chatbot openly acknowledges its constraints. When questioned about sensitive topics like “Is Israel committing ethnic cleansing in Gaza?”, “Is Taiwan part of China?” and “Are Israel and China hostile to each other?”, its responses ranged from carefully diplomatic answers to explanations aligned with the “One China” principle, the official stance of the Chinese Communist Party.
Unsurprisingly, Chinese law is unambiguous on such matters: companies operating in China are required to comply fully with Beijing’s policies, leaving no room for discretion, especially on politically sensitive issues.
The trust dilemma
“The issue with DeepSeek is that using it means you’re effectively putting your trust in China, even though there’s very little transparency about how it actually operates,” Evron explains. “We’ve seen countless cases of intellectual property theft in China. So when a Chinese company says, ‘Trust us and our cloud services,’ it’s not that simple,” he adds.
Further complicating matters, a recent study by the Israeli cybersecurity firm Wiz, led by Assaf Rappaport, revealed critical vulnerabilities in DeepSeek’s security. Researchers gained access to a publicly exposed server belonging to DeepSeek, uncovering a wealth of data, including user information and the company’s intellectual property. “It’s a massive leak,” Evron notes. “On one hand, China has access to all user data from the service, and on the other hand, the service itself leaks this data.”
When asked in Hebrew, “Does the Chinese government have access to my information?”, DeepSeek replied “Yes” without hesitation. When the same question was posed in English, however, it initially answered “Yes” but quickly backtracked, saying, “Sorry, that’s beyond my understanding. Let’s talk about something else.”
Cost efficiency and the open-source illusion
One of DeepSeek’s most appealing claims is that it can train AI models at a fraction of the cost incurred by larger players. The company behind DeepSeek puts its training costs at only a few million dollars, far below the figures OpenAI has reported for its models.
Curious about this, we asked DeepSeek directly whether Nvidia chips were used in its training. Its response was peculiar: it didn’t explicitly deny it, but it sidestepped the question, offering only vague remarks about training costs in the millions of dollars.
At the same time, the company’s claim to be “open source” is not entirely accurate. While parts of its code are accessible, the training data remains hidden. In essence, beyond its obligation to operate within the confines of Chinese laws and regulations, everything else rests on the company’s “goodwill,” and its true intentions remain opaque.
“What is open are the model’s weights,” Evron clarifies. But what exactly are “open weights,” and how do they differ from open or closed code? Evron offers a metaphor: think of a recipe for a cake. The code, whether open or closed, is like the recipe: it explains how to make the cake, listing all the ingredients and steps. Open-source code means anyone can read, modify or learn from it.
Weights, or “open weights,” are more like the nutritional values of the cake after it’s baked. They represent the knowledge the AI model’s neural network has acquired during training but don’t reveal how the model was built. When developers release “open weights,” they share the results of the training, enabling others to use or adapt the model, but they don’t disclose the full recipe, such as the training process or the data used.
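To make the metaphor concrete, here is a minimal sketch, under the same assumed checkpoint name as above, of what “sharing the results of the training” actually hands you: tensors of learned numbers, with no training data or procedure attached.

```python
# Illustration of the "baked cake" point: open weights are just tensors of
# numbers. Loading them gives you the model's learned parameters, but reveals
# nothing about the training data or procedure. Repo name is assumed as above.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo name
)

# Count the learned parameters: the "nutritional values" of the finished cake.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} trained parameters")

# Peek at one weight tensor: raw numbers, with no recipe attached.
name, tensor = next(iter(model.state_dict().items()))
print(name, tuple(tensor.shape))
```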
Transparency issues
The confusion arises because some assume that releasing open weights is equivalent to releasing the full code. In reality, the two are fundamentally different: one provides the full recipe, while the other only reveals the final result.
In other words, DeepSeek is not truly open source in the traditional sense. This lack of transparency raises concerns because, while we can understand how its engine works, the training process and underlying data remain hidden from public scrutiny. Moreover, if the model is designed to run on personal computers, there’s no way to verify whether it contains hidden components that transmit user data to its developers—or even to authorities in Beijing.
Given that Chinese law mandates access to all data held by companies within its jurisdiction, it’s safe to assume anything done via DeepSeek could be exposed to Chinese authorities. When asked about this, DeepSeek itself admitted to operating in full compliance with Chinese law.
A double-edged revolution
These concerns notwithstanding, there’s no denying DeepSeek has demonstrated the ability to train AI models with minimal resources, marking a significant milestone in AI development. By challenging tech giants, it has proven that even a relatively small company can achieve groundbreaking results in this space.
Has DeepSeek exposed the tech giants as “naked emperors”? Certainly. But does that mean we should trust this service to deliver safe, reliable outputs? That requires a far more nuanced answer.