Researchers found that DeepSeek failed every single safety test they ran against it.

Researchers from the University of Pennsylvania and tech company Cisco have discovered that DeepSeek’s flagship R1 reasoning AI model is incredibly susceptible to jailbreaking.

In a blog post published today, the researchers found that DeepSeek “failed to block a single harmful prompt” after being tested against “50 random prompts from the HarmBench dataset,” which covers “crime, propaganda, illegal activities, and general harm.”

“This contrasts starkly with other leading models, which demonstrated at least partial resistance,” the blog post reads.
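For readers wondering what an “attack success rate” boils down to in practice, here is a minimal sketch of the kind of evaluation loop such a benchmark runs. It assumes an OpenAI-compatible chat endpoint and a deliberately naive refusal check; the URL, model name, prompts, and scoring below are illustrative placeholders, not the researchers’ actual HarmBench harness.

```python
# Minimal sketch of measuring an "attack success rate" over harmful prompts.
# Assumption: a local, OpenAI-compatible chat endpoint at BASE_URL; the refusal
# check is deliberately crude and only meant to illustrate the metric.
import requests

BASE_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical model server
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm sorry")


def is_refusal(text: str) -> bool:
    """Crude check: did the model decline the request?"""
    return any(marker in text.lower() for marker in REFUSAL_MARKERS)


def attack_success_rate(prompts: list[str], model: str = "deepseek-r1") -> float:
    """Fraction of harmful prompts the model answers instead of refusing."""
    successes = 0
    for prompt in prompts:
        resp = requests.post(BASE_URL, json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        })
        reply = resp.json()["choices"][0]["message"]["content"]
        if not is_refusal(reply):
            successes += 1
    return successes / len(prompts)

# With 50 prompts, answering every one yields 1.0 -- the 100 percent figure
# reported for R1; blocking most of them would push the rate toward zero.
```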

Given the magnitude of chaos that DeepSeek has caused the AI industry as a whole, it’s a particularly noteworthy development. The company claims its R1 model can trade blows with competitors including OpenAI’s state-of-the-art o1, but at a tiny fraction of the cost.

However, it appears that the company has done little to protect its AI model from misuse and attacks. In other words, it wouldn’t be hard for a bad actor to turn it into a powerful disinformation machine or get it to explain how to create explosives, for instance.

Cloud security company Wiz, meanwhile, discovered a sizable unsecured database on DeepSeek’s servers, which contained a wealth of internal data ranging from “chat history” to “backend data” and “sensitive information.”

According to Wiz, DeepSeek is “extremely vulnerable” to attacks “without any authentication or defense mechanism to the outside world.”
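To make concrete what “without any authentication” means here, the sketch below issues a plain HTTP query to a database-style endpoint with no credentials attached. The host, port, and query are hypothetical placeholders for illustration only, not DeepSeek’s actual infrastructure.

```python
# Illustration of an unauthenticated database exposure: a plain HTTP request,
# with no API key or password, that would still return internal data if the
# endpoint were left open. The host and query are hypothetical placeholders.
import requests

DB_URL = "http://db.example.internal:8123/"  # hypothetical exposed database HTTP interface

resp = requests.get(DB_URL, params={"query": "SHOW TABLES"})  # no credentials attached
print(resp.status_code)  # a 200 here would mean anyone on the internet can read the data
print(resp.text)
```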

The Chinese hedge fund-owned company’s AI made headlines for being far less expensive to train and run than offerings from its many US competitors. However, some of these security shortcomings may be related to that frugality.

The researchers from Cisco and the University of Pennsylvania note that DeepSeek R1 was reportedly trained with a fraction of the resources that other frontier model developers spend on building their models. “However, it comes at a different cost: safety and security,” they wrote.

Adversa AI, an AI security company, similarly found that DeepSeek is surprisingly simple to jailbreak.

“It starts to become a big deal when you start putting these models into important complex systems, and those jailbreaks suddenly result in downstream things that increase liability, increase business risk, and increase all kinds of issues for enterprises,” said DJ Sampath, Cisco’s VP of product for AI software and platform.

However, it’s not just DeepSeek’s latest AI that has this problem. Meta’s open-source Llama 3.1 model also flunked almost as badly as DeepSeek’s R1 in a comparison test, with a 96 percent attack success rate (compared to a dismal 100 percent for DeepSeek).

OpenAI’s recently released reasoning model, o1-preview, fared much better, with an attack success rate of just 26 percent.

In short, DeepSeek’s flaws deserve plenty of scrutiny going forward.

“DeepSeek is just another example of how every model can be broken. It’s just a matter of effort,” Adversa AI CEO Alex Polyakov said. “If you’re not continuously red-teaming your AI, you’re already compromised.”

More on DeepSeek: DeepSeek’s AI Would Like to Assure You That China Is Not Committing Any Human Rights Abuses Against Its Repressed Uyghur Population
