In order to increase productivity and to do research on potential facilities for attacks or for surveillance on targets, several state-sponsored organizations are experimenting with the AI-powered Gemini assistant from Google.
Google’s Threat Intelligence Group , ( GTIG ) detected government-linked advanced persistent threat ( APT ) groups using Gemini , primarily for productivity gains rather than to develop or conduct novel AI-enabled cyberattacks that can bypass traditional defenses.
Due to the fact that AI tools is at least shorten the amount of time needed to prepare, threat actors have been attempting to use them for their attack strategies with numerous success rates.
Google has identified Gemini action linked to APT organizations from more than 20 nations, but the most well-known ones are those from Iran and China.
Among the most common cases were assistance with coding tasks for developing tools and scripts, research on publicly disclosed vulnerabilities, checking on technologies ( explanations, translation ), finding details on target organizations, and searching for methods to evade detection, escalate privileges, or run internal reconnaissance in a compromised network.
APTs using Gemini
Google says APTs from Iran, China, North Korea, and Russia, have all experimented with Gemini, exploring the product’s potential in helping them learn protection spaces, escape detection, and plan their post-compromise activities. These are summarized as follows:
- Egyptian threat actors were the heaviest users of Gemini, leveraging it for a wide range of activities, including surveillance on defence organizations and international experts, research into publicly known vulnerabilities, growth of phishing campaigns, and content generation for influence operations. Gemini was also used by them to translate and provide technical details about cybersecurity and military technologies, including unmanned aerial vehicles ( UAVs ) and missile defense systems.
- China-backed threat actors primarily utilized Gemini for reconnaissance on U. S. military and government organizations, vulnerability research, scripting for lateral movement and privilege escalation, and post-compromise activities such as evading detection and maintaining persistence in networks. Additionally, they looked into ways to use password hashes and reverse-engineer security tools like Carbon Black EDR to gain access to Microsoft Exchange.
- North Korean APTs used Gemini to support multiple phases of the attack lifecycle, including researching free hosting providers, conducting reconnaissance on target organizations, and assisting with malware development and evasion techniques. A significant portion of their activity focused on North Korea’s clandestine IT worker scheme, using Gemini to draft job applications, cover letters, and proposals to secure employment at Western companies under false identities.
- Russian threat actors had minimal engagement with Gemini, most usage being focused on scripting assistance, translation, and payload crafting. Their tasks included rewriting publicly accessible malware into various programming languages, adding encryption to malicious code, and learning how particular pieces of public malware function. The limited use might indicate that Russian actors favor Russian AI models developed there or are avoiding Western AI platforms for operational security reasons.
Google also mentions having seen instances where threat actors aimed to hack into Gemini using public jailbreaks or changing their prompts to circumvent the platform’s security measures. These attempts were reportedly unsuccessful.
Google’s most recent report confirms the widespread misuse of generative AI tools by threat actors of all levels, as OpenAI, the creator of the well-known AI chatbot ChatGPT, made in October 2024.
Although the majority of AI products have security breaches and jailbreaks, the market is gradually flooded with AI models without adequate protections to prevent abuse. Unfortunately, some of them with restrictions that are trivial to bypass are also enjoying increased popularity.
The details of the lax security measures for , , and, according to the cybersecurity intelligence firm KELA, have recently been released.
Researchers at Unit 42 also demonstrated effective jailbreaking techniques for DeepSeek R1 and V3, demonstrating how easily these models can be abused for nefarious purposes.