In January 2024, an employee of a multinational corporation joined a Zoom call that appeared to include the company’s CFO and several coworkers. Everyone on the call seemed to be in agreement. The voices sounded right. The requests were reasonable. To complete what he was told was a confidential transaction, he wired $25 million. He was the only person on the call who wasn’t AI-generated. The CFO was a deepfake. The coworkers were deepfakes. The entire meeting had been synthesized to be indistinguishable from reality: the ambient background noise, the natural-sounding pauses, the convincing visual presence of people he may have spoken with before. By the time anyone realized what had happened, the money was gone.
That incident is no longer an exception. It is a preview. According to the Entrust/Onfido Identity Fraud Report, a deepfake-enabled incident occurred somewhere in the world every five minutes in 2024. According to CrowdStrike’s 2025 Global Threat Report, vishing attacks (voice phishing, in which criminals use AI-synthesized audio to impersonate trusted individuals) rose 442 percent in the second half of 2024 alone. What once required a skilled social engineer, weeks of reconnaissance, and a convincing pretext can now be assembled by anyone with a laptop and tools that cost less per attack than a cup of coffee, and it can be done faster, at scale, and in dozens of languages.
| Data point | Detail |
| --- | --- |
| Key Research (NYU Tandon, 2025) | “Ransomware 3.0” / “PromptLock”: LLMs autonomously execute all four ransomware phases; cost per attack ~$0.70 using commercial API pricing |
| Phishing Simulation Result | AI-crafted phishing emails tricked 35% of recipients (Arsen cybersecurity simulation) |
| Malware Variants (Gen AI) | AI can generate 10,000 malware variants with identical functionality; 88% evade detection |
| Daily New Malware Samples | ~450,000/day (AV-TEST Institute) |
| Vishing Attack Increase | +442% in second half of 2024 (CrowdStrike 2025 Global Threat Report) |
| Deepfake Attack Frequency | One incident every 5 minutes in 2024 (Entrust/Onfido Identity Fraud Report) |
| Largest Documented Deepfake Loss | $25 million: finance employee wired funds after a fake Zoom meeting with an AI-generated CFO (Jan 2024) |
| Criminal AI Tools | GhostGPT (no safety restrictions); self-hosted open-source models via Ollama (no provider guardrails) |
| SentinelOne Assessment (Dec 2025) | LLMs are “operational accelerators, not a revolution”: competent crews get faster; novices become more dangerous |
| Reference | NYU Tandon, “LLMs Execute Complete Ransomware Attacks” |
In August 2025, researchers at NYU Tandon School of Engineering published the most detailed technical accounting yet of where this is heading. They built a functional proof-of-concept they named Ransomware 3.0. When the cybersecurity company ESET discovered a sample uploaded to VirusTotal during the team’s testing, it initially believed it had found active criminal malware and publicly identified it as PromptLock. In fact, the system was a contained laboratory prototype: a program with written instructions embedded in it that called open-source AI language models at runtime to generate customized attack scripts.
From identical starting prompts, each execution generated a different piece of code. For defenders, this is the most important detail: traditional security software identifies malware by known signatures or behavioral patterns, and a system that produces new code on every run has no stable signature to match. Each execution yields something the security software has never seen before. A full attack cycle, covering system mapping, file identification, data exfiltration, and ransom note generation, cost about 23,000 AI tokens, or roughly $0.70 at commercial API pricing. With open-source models, even that expense disappears.
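A minimal sketch makes the signature problem concrete. It assumes the `ollama` Python client with a locally pulled `llama3` model, both stand-ins chosen for illustration; the NYU prototype’s actual models and prompts are not reproduced here, and the prompt below is deliberately benign. Two runs of the same prompt at nonzero temperature produce different text, and therefore different hashes, which is exactly the property that defeats hash- or signature-based detection:

```python
import hashlib

import ollama  # assumes the `ollama` Python client and a running local Ollama server

# Benign stand-in prompt: the point is output variability, not the payload.
PROMPT = "Write a short Python function that returns the current date as a string."

def generate_variant(model: str = "llama3") -> str:
    """Ask a locally hosted model for code; nonzero temperature makes each run differ."""
    resp = ollama.generate(model=model, prompt=PROMPT, options={"temperature": 0.8})
    return resp["response"]

# Two executions of the *same* prompt...
variant_a = generate_variant()
variant_b = generate_variant()

# ...yield different artifacts, so a hash- or signature-based scanner that
# matched the first run will never match the second.
print(hashlib.sha256(variant_a.encode()).hexdigest())
print(hashlib.sha256(variant_b.encode()).hexdigest())

# Sanity check on the article's cost figure: ~23,000 tokens per full attack
# cycle at an assumed ~$30 per million tokens of commercial API pricing gives
# 23_000 / 1_000_000 * 30 = $0.69, consistent with the reported ~$0.70.
```

The per-token price in the closing comment is an assumption used to show the arithmetic behind the article’s ~$0.70 figure, not a quoted rate.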
GhostGPT, an AI tool built for criminals, has been circulating in underground forums since at least late 2024. Unlike ChatGPT, Claude, or Gemini, GhostGPT has no safety restrictions: no refusal to write malware, no barrier against producing phishing content, no reluctance to help identify software vulnerabilities. In February 2025, the FBI and CISA released a joint advisory about it. Meanwhile, cybercriminals are increasingly hosting their own AI models locally using open-source frameworks like Ollama, which carry no provider telemetry, abuse monitoring, or content filters. SentinelOne’s research team, which has been closely tracking LLM adoption among ransomware operators, observes that the more sophisticated criminal crews are already moving toward locally hosted, self-modified open-source models because they know that using commercial APIs leaves traces. These are not opportunists stumbling onto new tools; they are actively engineering around the guardrails.
What SentinelOne calls an “operational accelerator rather than a revolution” is a crucial distinction, and one worth taking seriously. The core finding is that LLMs are not producing fundamentally new attack categories; they are compressing time and democratizing capability. A Russian-speaking operator once had no way to recognize that a file named “Fatura” contained a Turkish invoice, or that “Rechnung” is the German word for one; an LLM can now triage all of a company’s stolen data in any language and surface the most financially sensitive documents for extortion. A crew that once needed a professional writer to produce convincing ransom notes can now generate them in the victim company’s native language, matching tone and register to make them more coercive. A low-skilled actor who lacks the expertise to assemble ransomware-as-a-service infrastructure can now break the work into seemingly harmless prompts, spread them across several AI sessions, and assemble the pieces offline. Skilled criminals get faster. Novices become dangerous. The AV-TEST Institute already registers about 450,000 new malware samples every day.
The criminal economy built around these tools has also become more dispersed and harder to disrupt. Under persistent law enforcement pressure, the era of large, branded ransomware cartels such as LockBit, Conti, and REvil, which operated with near-corporate visibility, has given way to a proliferation of small, transient crews with names like Termite, Punisher, The Gentlemen, and Obscura. They emerge, attack, dissolve, and reappear. Attribution has become genuinely difficult. State-aligned actors increasingly working inside criminal affiliate ecosystems add still more noise. Security researchers are left mapping an organizational picture that is fast-moving and fractured, and that suits AI-assisted attack automation well: smaller crews with lower overhead can sustain more attacks across more targets with fewer personnel.
The growing asymmetry between offense and defense is hard to ignore. Defenders must protect everything; attackers only need to find one opening. AI gives attackers more opportunities to probe, more languages to operate in, and faster tools for deciding which opportunities are worth pursuing. Defenses are catching up (behavioral analytics, anomaly detection, and AI-driven threat detection are all improving), but the gap remains, and it is unclear when it will close. Peter Granlund, CISO of the insurance company If, has said publicly that within a few years he expects AI-powered attack bots capable of breaching organizational defenses in minutes, rather than the hours or days most existing incident response frameworks assume. Whether the security industry has built the infrastructure to react that quickly is, at best, an open question.
