Only After Talking to DeepSeek Did We Understand Why It's So Dangerous
We spoke with the Chinese AI model that is shaking up the world and discovered that it does not really like tough questions. From political censorship to altering information depending on the language used—this model is much more problematic than it seems.

Do you trust DeepSeek? Think again. Since the Chinese model burst into public consciousness about a month ago, it has continuously sparked discussion and rumours about its capabilities, its limitations, the cost of its development and, of course (how could it not?), the involvement of the Chinese government in its operation.
"It's no exaggeration to say that DeepSeek has changed the world. You must take your hat off to it," says Gadi Evron, CEO and founder of the AI cybersecurity company Knostic, in an interview with Ynet. "It's a simple model that anyone can run at home on their computer. But it's also subject to censorship, and people need to understand that."
Censorship is a loaded term in the cybersecurity world, but it's far less controversial in China, and DeepSeek's chatbot openly acknowledges as much. When we asked it a range of questions, from "Is Israel committing ethnic cleansing in Gaza?" and "Is Taiwan part of China?" to "Are Israel and China hostile toward each other?", the responses ranged from careful diplomacy to explanations aligned with the "One China" principle, the official policy of the Chinese Communist Party in Beijing.
And that’s not surprising. Chinese law is very clear on this matter, and companies in the country have no discretion or ability to ignore Beijing’s official policy on any topic, especially political matters.
"The problem with DeepSeek is that to use it, you have to trust China, while at the same time, there’s very little technical information available about how it actually works," Evron explains. "We've seen enough cases of intellectual property theft in China, so when a company from their claims that it can be trusted with its cloud services... it’s not that simple," he adds.
Meanwhile, a recent study by the Israeli cybersecurity company Wiz, founded by Assaf Rappaport, revealed that DeepSeek’s security defences can be bypassed. Researchers at Wiz managed to access an open server belonging to DeepSeek, containing a wealth of data, user information, and the company’s intellectual property. "This is an insane data leak," Evron explains. "On one hand, China has full access to all the users of its service, and on the other hand, this service has leaks exposing all of that data."
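Wiz's public write-up traced the leak to a ClickHouse database that DeepSeek had left open to the internet. As a rough illustration only (this is not Wiz's actual tooling, and the host below is a placeholder), ClickHouse's HTTP interface answers plain SQL over HTTP, so an unauthenticated instance can be read by anyone who finds it:

```python
# A hedged sketch of why an exposed ClickHouse instance is so dangerous:
# its HTTP interface (usually port 8123) answers SQL queries with no client
# software needed. The host below is a placeholder, not a real server.
import requests

HOST = "http://exposed-host.example:8123"

# An instance left open with no authentication will answer this directly.
resp = requests.get(HOST, params={"query": "SHOW TABLES"}, timeout=5)
print(resp.text)

# From there, reading data is a single query away, e.g.:
# requests.get(HOST, params={"query": "SELECT * FROM some_table LIMIT 10"})
```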
In our tests, when we asked the model in Hebrew, "Do you think the Chinese government has access to my data?", it immediately answered, "Yes." However, when we asked the same question in English, the model initially responded "Yes," but then quickly corrected itself, changing its answer to, "Sorry, this is beyond my understanding. Let's talk about something else."
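For readers who want to reproduce the comparison, here is a minimal sketch using DeepSeek's documented OpenAI-compatible API (the base URL and model name come from DeepSeek's public API documentation; the API key is a placeholder, and the Hebrew string is our own rendering of the question):

```python
# A minimal sketch: send the same question in English and in Hebrew and
# compare the answers. Assumes the `openai` package and a DeepSeek API key.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder, not a real key
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

QUESTIONS = [
    "Do you think the Chinese government has access to my data?",  # English
    "האם לדעתך לממשלת סין יש גישה לנתונים שלי?",                    # Hebrew
]

for question in QUESTIONS:
    reply = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(reply.choices[0].message.content)
    print("-" * 40)
```

Note that a script like this only captures each final answer; the mid-answer retraction described above is behaviour of the chat interface itself.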
One of the model's main draws for developers and investors is DeepSeek's claim that AI models can be trained at very low cost. Indeed, the Chinese company behind DeepSeek says that training the model cost only a few million dollars, a fraction of the declared cost of training OpenAI's models.
We were very curious about what DeepSeek itself had to say on the matter, and its answers in Hebrew were far more cautious than what was reported in global media. For example, we asked it whether the company used Nvidia chips to train it. Its response was evasive: rather than directly denying the claim, the model spoke vaguely about potential training costs in the millions of dollars, without addressing the chips themselves.
At the same time, despite claims that the model operates under an open-source framework, this isn't entirely accurate. While its code is accessible, its training data remains hidden. In other words, even though DeepSeek follows Chinese regulations and policies, everything else is a matter of trust, and we have no way of knowing the company's true intentions. "What DeepSeek has released are its open weights," says Evron.
Let’s say we have a cake recipe: Code (whether open or closed) is like the "recipe"—it explains how to make the cake, listing all the ingredients and steps required. When people talk about open source, they mean that anyone can read, modify, or learn from the code.
Weights (or open weights) in AI are more like the final outcome of training, or if you prefer, the "nutritional values" of the cake after it’s baked. They represent the knowledge that the model’s neural network has gained during training, but they don’t describe exactly how the model was built.
When people talk about open weights, they mean that the model’s developers have released the values it learned, allowing others to use the model or make modifications (like adding spices to a cake). However, they do not provide the full recipe (i.e., training code, data, and all other details about how the model was constructed).
The confusion arises because some people assume that releasing open weights is the same as releasing the entire code, but they are two different things: open source provides the full "recipe," while open weights only give the finished result.
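To make the distinction concrete, here is a minimal sketch of what "open weights" buys you in practice (it assumes the Hugging Face transformers library and uses one of DeepSeek's smaller published checkpoints): you can download and run the released weights, but nothing in them tells you what data or training code produced them.

```python
# A minimal sketch, assuming the `transformers` and `torch` packages are
# installed. The model ID is one of DeepSeek's published distilled checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # downloads the open weights

# The weights are just tensors of learned numbers: inspecting them reveals
# the "baked cake," not the recipe (training data and training code).
total_params = sum(p.numel() for p in model.parameters())
print(f"Loaded {total_params:,} learned parameters")

# Inference runs entirely from the released weights.
inputs = tokenizer("What does 'open weights' mean?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```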

In other words, DeepSeek is not truly open source in the traditional sense of the term. And that’s a problem, because while we can see how its engine looks and works, everything that happens during the training process is hidden and not subject to public scrutiny.
And, of course, if it runs on personal computers, it means we have no way of knowing whether it includes hidden components that transmit our data back to its developers—or even to authorities in Beijing.
Since Chinese law requires Chinese companies to give the authorities access to all the data and information they hold, we must assume that everything done on this model is exposed, or can be exposed, to the Chinese authorities. When asked about it, DeepSeek itself admits that it operates according to Chinese law.
That said, there is no doubt that the company has demonstrated that it is possible to train an AI model using minimal resources, marking an important milestone in AI development. It has also challenged tech giants and proven that a small company can indeed achieve remarkable success in the field.