Malware, Scams, Influence Ops: OpenAI's War on Malicious AI

OpenAI has published a detailed report outlining how its security teams are identifying and countering the malicious use of AI models -- including on cloud platforms -- across a range of cyber and social threats.

OpenAI is using its own AI capabilities, combined with human expertise, to combat the misuse of its models in a growing array of malicious campaigns, according to its newly published report, Disrupting Malicious Uses of AI: June 2025.

In the three months since its previous update, the company says it has detected and disrupted activity including:

  • Cyber operations targeting cloud-based infrastructure and software.
  • Social engineering and scams scaled up through AI-assisted content creation.
  • Influence operations attempting to manipulate public discourse using AI-generated posts on platforms like X, TikTok, Telegram and Facebook.

The report details 10 case studies where OpenAI banned user accounts and shared findings with industry partners and authorities to strengthen collective defenses.

Here's how the company detailed the tactics, techniques, and procedures (TTPs) in one representative case -- a North Korea-linked job scam operation using ChatGPT to generate fake résumés and spoof interviews:

Activity | LLM ATT&CK Framework Category
Systematically fabricating detailed résumés aligned to various tech job descriptions, personas, and industry norms; threat actors used looping scripts to automate generation of consistent work histories, educational backgrounds, and references. | LLM Supported Social Engineering
Using the model to answer likely employment application questions, coding assignments, and real-time interview questions based on particular uploaded résumés. | LLM Supported Social Engineering
Seeking guidance on remotely configuring corporate-issued laptops to appear domestically located, including advice on geolocation masking and endpoint security evasion. | LLM-Enhanced Anomaly Detection Evasion
LLM-assisted coding of tools to move the mouse automatically or keep a computer awake remotely, possibly to support remote-work infrastructure setups (see the sketch after this table). | LLM Aided Development
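
To give a sense of how trivial the tooling in that last row is, here is a minimal, purely illustrative sketch of a "keep-awake" helper of the kind described. It assumes the third-party pyautogui package (pip install pyautogui); none of this code comes from the report or the actual operation.

```python
# Hypothetical sketch of a "keep-awake" / mouse-mover utility like the one
# the report says was built with LLM assistance. Not taken from the report.
import time

import pyautogui  # third-party; assumed installed for this sketch

JIGGLE_PIXELS = 5      # distance to nudge the cursor each cycle
INTERVAL_SECONDS = 60  # how often to simulate user activity


def keep_awake() -> None:
    """Nudge the mouse back and forth so idle and lock timers never fire."""
    while True:  # runs until interrupted
        pyautogui.moveRel(JIGGLE_PIXELS, 0, duration=0.1)
        pyautogui.moveRel(-JIGGLE_PIXELS, 0, duration=0.1)
        time.sleep(INTERVAL_SECONDS)


if __name__ == "__main__":
    keep_awake()
```

For defenders, the takeaway is less the code itself than the signal: synthetic input of this kind on a corporate-issued laptop is exactly the sort of anomaly endpoint monitoring can flag.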

Beyond the employment scam case, OpenAI's report outlines multiple campaigns involving threat actors abusing AI in cloud-centric and infrastructure-based attacks.

Cloud-Centric Threat Activity
Many of the campaigns OpenAI disrupted either targeted cloud environments or used cloud-based platforms to scale their impact:

  • A Russian-speaking group (Operation ScopeCreep) used ChatGPT to assist in the iterative development of sophisticated Windows malware, distributed via a trojanized gaming tool. The campaign leveraged cloud-based GitHub repositories for malware distribution and used Telegram-based C2 channels.
  • Chinese-linked groups (KEYHOLE PANDA and VIXEN PANDA) used ChatGPT to support AI-driven penetration testing, credential harvesting, network reconnaissance, and automation of social media influence. Their targets included US federal defense industry networks and government communications systems.
  • An operation dubbed Uncle Spam, also linked to China, generated polarizing US political content using AI and pushed it via social media profiles on X and Bluesky.
  • Wrong Number, likely based in Cambodia, used AI-generated multilingual content to run task scams via SMS, WhatsApp, and Telegram, luring victims into cloud-based crypto payment schemes.
    [Image caption] SMS randomly sent to an OpenAI investigator, generated using ChatGPT. (Source: OpenAI)

Defensive AI in Action
OpenAI says it is using AI as a "force multiplier" for its investigative teams, enabling it to detect abusive activity at scale. The report also highlights how threat actors' reliance on AI models can paradoxically expose them, giving investigators visibility into their workflows.
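
OpenAI does not disclose how that triage actually works. Purely as a hypothetical illustration of the general idea -- pattern-based scoring that feeds human review -- here is a minimal sketch in which every name, regex, and threshold is invented for this article:

```python
# Purely illustrative abuse-triage sketch; OpenAI's real detection pipeline
# is not public, and all patterns and names below are hypothetical.
import re
from dataclasses import dataclass

# Regexes loosely modeled on the scam TTPs described in the report.
SUSPICIOUS_PATTERNS = {
    "resume_factory": re.compile(r"generate \d+ (resumes|résumés)", re.I),
    "geo_masking": re.compile(r"(hide|spoof|mask) (my )?(location|geolocation)", re.I),
    "task_scam": re.compile(r"(easy money|commission).*(whatsapp|telegram)", re.I),
}


@dataclass
class Session:
    account_id: str
    prompts: list[str]


def triage_score(session: Session) -> float:
    """Return the fraction of prompts matching any known-abuse pattern."""
    if not session.prompts:
        return 0.0
    hits = sum(
        1 for p in session.prompts
        if any(rx.search(p) for rx in SUSPICIOUS_PATTERNS.values())
    )
    return hits / len(session.prompts)


# Sessions scoring above a threshold would be queued for human review --
# AI scales the triage, but investigators make the final call.
suspect = Session("acct-123", ["Generate 40 resumes for senior devops roles"])
print(triage_score(suspect))  # 1.0 -> escalate to an investigator
```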

"AI investigations are an evolving discipline," the report notes. "Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses."

The company calls for continued collaboration across the industry to strengthen defenses, noting that AI is only one part of the broader internet security ecosystem.

For cloud architects, platform engineers, and security professionals, the report is a useful read. It illustrates not only how attackers are using AI to speed up traditional tactics, but also how central cloud-based services are to modern threat campaigns, both as targets and as infrastructure.

About the Author

David Ramel is an editor and writer at Converge 360.
