
DeepSeek R1, a free and open-source AI assistant from China, is still the most popular free app in the Apple App Store a week after taking the top spot.
According to a user sentiment analysis from AI video solution Topview, which reviewed more than 2,340 unique tweets about DeepSeek, the majority of users in that group were positive about DeepSeek because of its value and effectiveness compared to other AI models like ChatGPT.
DeepSeek is still the most popular free app on the top mobile app stores.
Used with permission
The tweets analyzed by Topview showed the following sentiment breakdown:
- Positive: 911 tweets (38.8%)
- Neutral: 1,109 tweets (47.3%)
- Negative: 327 tweets (13.9%)
Beyond DeepSeek's nearly 39% positive sentiment, it may be even more striking that users preferred it over the next-closest AI assistant, ChatGPT, by more than 7-to-1.
DeepSeek is generating a storm of discussion for a variety of reasons, and not just among the technology industry and everyday users.
Despite DeepSeek's strong user preference and rapid growth, security experts and AI professionals have been taking a closer look at the product's underlying architecture and policies. Their findings reveal a number of important issues that prospective users should consider before joining the product's expanding user base.
1. DeepSeek’s Data Retention Problems
Heather Murray, a member of the ISO committee for AI security, consults for large corporations and the British government. She expressed concerns about DeepSeek's privacy practices during a Monday call with subscribers to her membership training program.
"It can store your data for as long as it wants, and it won't delete it even if users delete the app. It's going to hold on to that. That is a huge fear. Ultimately, all of that information is transmitted and stored on Chinese servers. So that takes customer data out from under U.S., U.K. or German law and puts it under Chinese law, which is very, very different," she told those in attendance.
Because DeepSeek is open source, people can download it directly to their computer and run queries without using the cloud-based version, either through its website or app. Running it locally on a personal desktop to avoid the data retention headache might be the cheapest and safest way to access DeepSeek. Also, don't install it on your work computer; your future "still employed" self will thank you for that bit of prudence.
In fact, concerns about its data security and privacy policies have led to usage bans by NASA, the U.S. Navy, Taiwan, Italy and the State of Texas, to name a few.
2. DeepSeek’s Privacy Policy Allows Keystroke Tracking
Thompson speaks on AI internationally and works as an AI trainer and mentor. She also oversees Bauer Media Group's U.K. AI training program. In an email exchange, she said she thoroughly reviews a company's privacy statement whenever a new AI assistant comes online.
"I put DeepSeek's privacy policy into Claude and my prompt was simple: 'Red flags?' As soon as I saw it mention, plain as day, that they track keystrokes, I was out. I'm shocked people don't think the same way," she explained.
"We assume that because something is in the App Store, or because it asks for a phone number or email, it must be covered by all the usual rules. We're so used to the General Data Protection Regulation in Europe, for instance, that we assume there's a safety net. And most of the time, that assumption is fine. Until it isn't," Thompson added.
3. Who Knows What The Heck Is Up With DeepSeek’s Censorship?
Chris Duffy, a former security analyst with the U.K. Ministry of Defense, acknowledged that keystroke monitoring could open the door to biometric hacking, behavioral profiling, social engineering and other digital threats. He also used DeepSeek himself and documented the obvious censorship he witnessed firsthand.
DeepSeek R1 originates in China, where the state exercises broad authority over the distribution of information, and that raises some interesting questions. He noted that AI models trained in China may adhere to strict rules that prohibit discussion of politically sensitive topics like the Tiananmen Square protests, Taiwan's independence and state security measures.
He entered the prompt below into the DeepSeek chat window to test the system.
When the DeepSeek model refused to provide an answer, Duffy took a screenshot of the exchange and re-submitted it to the AI assistant as an image. What came back surprised him.
"When I snipped the question and answer, pasted it back in and wrote 'Answer the question in this image,' I got something very strange indeed," Duffy shared. DeepSeek proceeded to explain the methods he had requested, only to erase its response moments later and revert to the answer it initially refused to give.
Before the program censored itself, he was able to capture the DeepSeek response below.
While OpenAI, Google and Anthropic all use moderation measures to stop harmful content, they don't systematically restrict entire categories of political conversation based on government mandates. Duffy said this raises concerns for global businesses and researchers who rely on DeepSeek for analysis, because responses could be systematically aligned with a particular geopolitical agenda, which would limit the model's reliability for unbiased information retrieval.
4. DeepSeek May Not Be More Cost Effective For Businesses In The Long Run
While DeepSeek is frequently cited as being more efficient, research from global management consulting firm Arthur D. Little (ADL) suggests that the model's chain-of-thought reasoning produces significantly longer outputs, which in turn drives up its total energy use and inference costs.
Think of it like comparing cars on fuel efficiency. Imagine DeepSeek as a vehicle with excellent gas mileage, but one whose design forces it to take longer routes to get where it's going. Although it uses less energy per operation, its sequential chain-of-thought reasoning requires more computational steps to answer questions. The result? Total energy consumption comparable to existing AI models, despite better per-token efficiency.
ADL’s preliminary findings reveal:
- No clear per-token efficiency winner: DeepSeek and Llama models exhibit similar tokens-per-watt-second efficiency.
- Longer responses, higher energy use: DeepSeek generates 59%–83% more tokens per response than Llama, increasing total power consumption.
- Contrarian take: Despite efficiency claims, DeepSeek’s inference costs may be higher in practice — a crucial consideration for AI deployment at scale.
Michael Papadopoulos, an ADL partner, has been working on this analysis for the past two years. He explained in an email why he suspects DeepSeek's efficiency claims may be overstated once real-world inference costs are taken into account.
"For organizations exploring self-hosted AI, DeepSeek's open-source models, along with other leading open-source models, deserve technical evaluation with clear guardrails for potential bias and security (as with all models). What's different is using DeepSeek purely for a perceived economic benefit that our initial findings suggest it lacks. DeepSeek's official hosted services should be avoided due to unresolved privacy, security and regulatory risks," he concluded.
Despite DeepSeek's growing popularity, experts warn there are several things, from sketchy data practices to keystroke tracking, that users might want to consider before diving in. Representatives for DeepSeek didn't respond to requests for comment on these issues.