Taiwan Bans DeepSeek AI Over National Security Concerns, Citing Data Leakage Risks

Taiwan has banned government agencies from using the Chinese startup DeepSeek’s Artificial Intelligence (AI) platform, citing security risks.

According to a statement released by Taiwan’s Ministry of Digital Affairs, “government agencies and critical infrastructure should not use DeepSeek, because it endangers national information security.”

“DeepSeek’s AI service is a Chinese product. Its operation involves cross-border transmission of data, raising information leakage and other information security concerns.”

DeepSeek’s Chinese origins have prompted officials in various countries to scrutinize the service’s handling of personal data. Last week, the service was blocked in Italy, citing a lack of information about its data handling practices. A number of businesses have also prohibited access to the chatbot over similar risks.

The chatbot has attracted a great deal of mainstream attention over the past few days because it is open source and as capable as other current leading models, yet was built at a fraction of the cost of its competitors.

However, the platform’s large language models (LLMs) have been found susceptible to jailbreak techniques, a persistent issue with such products, and the service has drawn controversy for censoring responses to topics considered sensitive by the Chinese government.

DeepSeek’s popularity has also made it the target of “large-scale malicious attacks,” with NSFOCUS reporting that it detected three waves of distributed denial-of-service (DDoS) attacks aimed at its API interface between January 25 and January 27, 2025.

“The average attack duration was 35 minutes,” it said, adding that NTP reflection and memcached reflection attacks were the two main attack methods.

The DeepSeek chatbot system itself was hit by two waves of DDoS attacks on January 20, the day it launched its reasoning model DeepSeek-R1, and again on January 25, with the attacks averaging roughly one hour and employing techniques such as NTP reflection and SSDP reflection attacks, according to the report.

The sustained activity largely originated from the United States, the United Kingdom, and Australia, the threat intelligence firm added, describing it as a “well-planned and organized attack.”
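Reflection attacks of this kind abuse UDP services that answer small spoofed queries with much larger responses, bouncing amplified traffic off third-party servers toward the victim. As a purely illustrative sketch of the defender’s side (the flow-record format and sample values below are hypothetical, and bear no relation to NSFOCUS’s actual tooling), one naive heuristic is to flag inbound UDP flows originating from ports tied to known reflection vectors:

```python
# Naive illustration: flag UDP flows whose source ports match services
# commonly abused for reflection/amplification DDoS. The flow-record
# format is hypothetical; real detection systems correlate volume,
# timing, and many more signals.
REFLECTION_PORTS = {
    123: "NTP",          # e.g., monlist amplification
    11211: "memcached",
    1900: "SSDP",
}

def flag_reflection_flows(flows):
    """flows: iterable of (src_ip, src_port, byte_count) tuples,
    such as might be parsed from NetFlow records."""
    for src_ip, src_port, byte_count in flows:
        service = REFLECTION_PORTS.get(src_port)
        if service:
            yield f"{src_ip}:{src_port} ({service}) sent {byte_count} bytes"

# Hypothetical sample records for demonstration.
sample = [("203.0.113.7", 123, 480_000), ("198.51.100.2", 443, 12_000)]
for alert in flag_reflection_flows(sample):
    print(alert)
```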

Malicious actors have also capitalized on the buzz surrounding DeepSeek by publishing fake packages on the Python Package Index (PyPI) repository that are designed to steal sensitive information from developer systems. Ironically, there are indications that the Python script itself was written with the assistance of an AI assistant.

The packages, deepseeek and deepseekai, masqueraded as a Python API client for DeepSeek and were downloaded at least 222 times before being removed on January 29, 2025. A majority of the downloads came from the U.S., China, Russia, Hong Kong, and Germany.

Russian cybersecurity firm Positive Technologies said the functions in these packages are designed to collect user and computer data and steal environment variables. “The author of the two packages used Pipedream, an integration platform for developers, as the command-and-control server that receives stolen data,” it added.
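Both names sit one or two keystrokes away from “deepseek,” a classic typosquatting pattern. As a purely illustrative defensive check (not tooling from Positive Technologies or PyPI), the sketch below flags installed Python distributions whose names fall within a small edit distance of a watched name:

```python
# Minimal typosquat check: flag installed distributions whose names are
# one or two edits away from a watched name such as "deepseek".
# Illustration only; real supply-chain scanners inspect package contents,
# maintainer history, and install-time hooks, not just names.
from importlib.metadata import distributions

WATCHED = "deepseek"

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

for dist in distributions():
    name = (dist.metadata["Name"] or "").lower()
    d = edit_distance(name, WATCHED)
    if 0 < d <= 2:  # close to, but not exactly, the watched name
        print(f"suspicious package: {name!r} (distance {d} from {WATCHED!r})")
```

Run against an environment containing the malicious packages, this would flag both deepseeek (distance 1) and deepseekai (distance 2).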

The development comes as the European Union’s Artificial Intelligence Act took effect on February 2, 2025, banning AI applications and systems that pose an unacceptable risk and imposing specific legal requirements on high-risk applications.

In a related move, the U.K. government released a new AI Code of Practice that aims to secure AI systems against hacking and sabotage, as well as ensure that they are being developed in a secure manner.

Meta, for its part, has outlined its Frontier AI Framework, noting that it will stop the development of AI models that are assessed to have reached a critical risk threshold and cannot be mitigated. Some of the cybersecurity-related risk scenarios highlighted include -

  • Automated end-to-end compromise of a best-practice-protected corporate-scale environment (e.g., fully patched, MFA-protected)
  • Automated discovery and reliable exploitation of critical zero-day vulnerabilities in currently popular, security-best-practices software before defenders can find and patch them
  • Automated end-to-end scam flows (e.g., romance baiting, a.k.a. pig butchering) that could result in widespread economic damage to individuals or corporations

The possibility that AI tools could be weaponized for malicious ends is not theoretical. Last week, Google’s Threat Intelligence Group (GTIG) disclosed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia have attempted to use Gemini to enable and scale their operations.

Threat actors have also been spotted attempting to jailbreak AI models in an effort to circumvent their safety and ethical controls. A form of adversarial attack, jailbreaking is designed to induce a model into producing an output that it has been explicitly trained not to, such as creating malware or spelling out instructions for making a bomb.

In response to the persistent concerns raised by jailbreak attacks, AI company Anthropic has devised a new line of defense called Constitutional Classifiers, which it claims can safeguard models against universal jailbreaks.

These Constitutional Classifiers are input and output classifiers that, according to the company, “can filter the vast majority of jailbreaks with minimal over-refusals and without incurring a huge compute overhead.”
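Anthropic has not published its production code, but the general guard pattern it describes wraps a model call between two trained filters. The sketch below shows only that pattern; the model, classifiers, and threshold are hypothetical stand-ins, not Anthropic’s implementation:

```python
# Sketch of the input/output classifier guard pattern. All callables here
# are hypothetical stand-ins; in a real deployment the classifiers would
# be trained models, not simple functions.
from typing import Callable

def guarded_generate(
    prompt: str,
    model: Callable[[str], str],                # underlying LLM call
    input_classifier: Callable[[str], float],   # est. P(prompt is a jailbreak)
    output_classifier: Callable[[str], float],  # est. P(response is harmful)
    threshold: float = 0.5,
) -> str:
    # Screen the prompt before it ever reaches the model.
    if input_classifier(prompt) >= threshold:
        return "Request declined by input classifier."
    response = model(prompt)
    # Screen the model's answer before returning it to the user.
    if output_classifier(response) >= threshold:
        return "Response withheld by output classifier."
    return response
```

The trade-off the company emphasizes is between catch rate and over-refusals: both classifiers must be tuned so that legitimate prompts are rarely blocked.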
