AI Research: Threat Actors 'Not Yet' Using LLMs in Novel Ways

Cybersecurity research from AI leaders Microsoft and OpenAI reveals how generative tech is being put to use by known threat actors, including nation-state-affiliated groups from China, Iran, North Korea and Russia.

To help illustrate how AI can potentially be misused in the hands of threat actors, the companies published research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt injection, attempted misuse of large language models (LLMs) and fraud.

One takeaway from the research is that, for now, the generative AI powered by LLMs seems to be just another tool in the bad guys' arsenal, not a groundbreaking game-changer that tilts the scale in their favor.

"Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI," Microsoft said in a post today (Feb. 14).

OpenAI's own post said: "Our findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks." OpenAI's models, of course, power ChatGPT and the flood of AI "Copilot" assistants that Microsoft is sticking into most, if not all, of its software and service offerings.

While criminals haven't leveraged generative AI to the extent one might expect at this stage, the white hats apparently haven't been any more successful in wielding the new tech.

"Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries," Microsoft said.

The research has already produced tangible results, with OpenAI saying it disrupted five malicious actors affiliated with China, Iran, North Korea and Russia, terminating their OpenAI accounts.

Again, there were no novel or unique techniques identified, with OpenAI saying its services were primarily used for querying open-source information, translating, finding coding errors and running basic coding tasks.

Specifically, rather than cutting-edge tactics never seen before, the observed LLM-related activity tended toward the mundane, including:

  • LLM-informed reconnaissance: Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities.
  • LLM-enhanced scripting techniques: Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system, assisting with troubleshooting, and understanding various web technologies.
  • LLM-aided development: Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware.
  • LLM-supported social engineering: Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
  • LLM-assisted vulnerability research: Using LLMs to understand and identify potential vulnerabilities in software and systems, which could be targeted for exploitation.
  • LLM-optimized payload crafting: Using LLMs to assist in creating and refining payloads for deployment in cyberattacks.
  • LLM-enhanced anomaly detection evasion: Leveraging LLMs to develop methods that help malicious activities blend in with normal behavior or traffic to evade detection systems.
  • LLM-directed security feature bypass: Using LLMs to find ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls.
  • LLM-advised resource development: Using LLMs in tool development, tool modifications, and strategic operational planning.

"Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape," Microsoft explained. "Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI. However, Microsoft and our partners continue to study this landscape closely."

Microsoft said that even though the joint research has not identified significant attacks employing the LLMs it monitors closely, the company feels it is important to publish the findings to expose the early-stage, incremental moves that well-known threat actors have been observed attempting, and to share information on blocking and countering them with the defender community.

Microsoft also today announced four principles shaping the company's policy and actions for mitigating the risks associated with the use of its AI tools and APIs by the nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs) and cybercriminal syndicates it tracks:

  • Identification and action against malicious threat actors' use: Upon detection of the use of any Microsoft AI application programming interfaces (APIs), services, or systems by an identified malicious threat actor, including nation-state APT or APM, or the cybercrime syndicates we track, Microsoft will take appropriate action to disrupt their activities, such as disabling the accounts used, terminating services, or limiting access to resources.
  • Notification to other AI service providers: When we detect a threat actor's use of another service provider's AI, AI APIs, services, and/or systems, Microsoft will promptly notify the service provider and share relevant data. This enables the service provider to independently verify our findings and take action in accordance with their own policies.
  • Collaboration with other stakeholders: Microsoft will collaborate with other stakeholders to regularly exchange information about detected threat actors' use of AI. This collaboration aims to promote collective, consistent, and effective responses to ecosystem-wide risks.
  • Transparency: As part of our ongoing efforts to advance responsible use of AI, Microsoft will inform the public and stakeholders about actions taken under these threat actor principles, including the nature and extent of threat actors' use of AI detected within our systems and the measures taken against them, as appropriate.

OpenAI, meanwhile, shared its multi-pronged approach to combating malicious state-affiliated actors' use of its platform:

  • Monitoring and disrupting malicious state-affiliated actors. We invest in technology and teams to identify and disrupt sophisticated threat actors' activities. Our Intelligence and Investigations team -- working in concert with our Safety, Security, and Integrity teams -- investigates malicious actors in a variety of ways, including using our models to pursue leads, analyze how adversaries are interacting with our platform, and assess their broader intentions. Upon detection, OpenAI takes appropriate action to disrupt their activities, such as disabling their accounts, terminating services, or limiting access to resources.
  • Working together with the AI ecosystem. OpenAI collaborates with industry partners and other stakeholders to regularly exchange information about malicious state-affiliated actors' detected use of AI. This collaboration reflects our voluntary commitment to promote the safe, secure and transparent development and use of AI technology, and aims to promote collective responses to ecosystem-wide risks via information sharing.
  • Iterating on safety mitigations. Learning from real-world use (and misuse) is a key component of creating and releasing increasingly safe AI systems over time. We take lessons learned from these actors' abuse and use them to inform our iterative approach to safety. Understanding how the most sophisticated malicious actors seek to use our systems for harm gives us a signal into practices that may become more widespread in the future, and allows us to continuously evolve our safeguards.
  • Public transparency. We have long sought to highlight potential misuses of AI and share what we have learned about safety with the industry and the public. As part of our ongoing efforts to advance responsible use of AI, OpenAI will continue to inform the public and stakeholders about the nature and extent of malicious state-affiliated actors' use of AI detected within our systems and the measures taken against them, when warranted. We believe that sharing and transparency foster greater awareness and preparedness among all stakeholders, leading to stronger collective defense against ever-evolving adversaries.

About the Author

David Ramel is an editor and writer at Converge 360.
