Firewalls may soon require an upgrade because traditional AI surveillance equipment fail.

Traditional security systems struggle to keep up with threats that legacy defenses weren’t built to halt, such as LLMs and agentic AI systems. The attack area for AI applications is distinctly strange, from rapid injection to design extraction.

Because they are unable to read, interpret, and point to AI interactions, Avivah Litan, distinguished VP researcher, Gartner, said,” Classic security tools like WAFs and API gateways are generally inadequate for protecting conceptual AI systems.”

Artificial threats could be unheard of.

While incredibly adept at automating company processes and threat detection and response procedures, AI systems and applications also have their own issues, issues that were not previously present. From cross-site programming or to behavioural manipulations, whereby adversaries technique versions into leaking data, bypassing filters, or acting in unpredictably, security threats have developed.

According to Gartner’s Litan, unit extraction threats, which have been around for a long time, are still very new and difficult to combat. Society says and rivals who disobey the rules have been reverse-engineering the most advanced AI models that have been produced for a long time.

Leave a Comment