Cisco analysts identify the threats targeting AI models.

This week, Cisco security researchers released a list of threats they are seeing from bad actors attempting to poison or attack large language models (LLMs), the most prevalent component of today's AI systems.

Security experts are already familiar with many of the techniques spammers use to hide content from anti-spam systems while still conveying their message to the recipient. In a blog post about current and emerging AI threats, Martin Lee, a security architect with Cisco Talos, wrote: "Spammers have used writing tricks to conceal their true message from anti-spam analysis for decades. But we have seen a boost in the use of such techniques during the second quarter of 2024."

Being able to conceal information from machine analysis or human oversight is likely to become a more significant vector of attack against AI systems, according to Lee. "Fortunately, spam detection systems such as Cisco Email Threat Defense have already implemented methods to detect this kind of obfuscation. If anything, the presence of attempts to conceal content in this way makes it clear that a message is harmful, and it can be classified as spam," Lee wrote.
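To make the idea concrete, here is a minimal sketch of how obfuscation itself can serve as a detection signal. This is not Cisco's detection logic; the function names and heuristics (zero-width characters, mixed Latin/Cyrillic scripts) are illustrative assumptions only.

```python
# Hypothetical sketch: treating text obfuscation as a spam/malice signal.
# NOT Cisco's implementation -- it only illustrates the point that attempts
# to hide content (zero-width characters, homoglyph substitution) are
# themselves a strong indicator that a message is malicious.
import unicodedata

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}  # zero-width characters

def obfuscation_signals(text: str) -> dict:
    """Count simple obfuscation indicators in a message."""
    zero_width = sum(ch in ZERO_WIDTH for ch in text)
    # Cyrillic letters mixed into otherwise-Latin text often indicate
    # homoglyph substitution (e.g. Cyrillic 'а' standing in for Latin 'a').
    cyrillic = sum("CYRILLIC" in unicodedata.name(ch, "") for ch in text)
    latin = sum("LATIN" in unicodedata.name(ch, "") for ch in text)
    return {
        "zero_width": zero_width,
        "mixed_script": cyrillic > 0 and latin > 0,
    }

def looks_obfuscated(text: str) -> bool:
    """Flag a message if any obfuscation indicator is present."""
    signals = obfuscation_signals(text)
    return signals["zero_width"] > 0 or signals["mixed_script"]

# A message hiding "free money" behind zero-width spaces gets flagged,
# while plain text passes through.
print(looks_obfuscated("fr\u200bee mo\u200bney"))    # True
print(looks_obfuscated("ordinary newsletter text"))  # False
```

The design choice mirrors Lee's observation: a filter does not need to decode the hidden message, because the mere presence of concealment techniques is enough to classify the message as harmful.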
