In early 2024, a finance employee at multinational engineering firm Arup joined what appeared to be a routine video call with the company’s CFO and several senior colleagues. Every person on the screen looked real. Every voice sounded authentic. The employee approved a series of transfers totalling $25.6 million. Every face on that call was a deepfake. Every voice was AI-generated. Not a single real person was present.
This was not a failure of technology. The company’s systems worked exactly as designed. The failure was that the controls surrounding those systems were built for a world where seeing and hearing a person meant that person was real. That world no longer exists.
The AI fraud paradox is this: the same technology that businesses are deploying to detect fraud is simultaneously being used by criminals to commit fraud that is orders of magnitude more sophisticated than anything that came before. And the uncomfortable conclusion is that AI fraud prevention cannot rely on AI detection alone. It requires stronger, more rigid internal controls at the structural level, not more flexible ones.
How AI Has Changed the Fraud Landscape
The fraud landscape of 2026 looks nothing like it did three years ago. AI-related fraud complaints accounted for more than $893 million in losses in 2025 alone, and that figure captures only incidents that were reported. Deloitte’s Center for Financial Services projects that AI-enabled fraud losses could reach $40 billion annually by 2027.
The change is not incremental. It is categorical. Three developments have converged to create an entirely new threat environment.
Voice cloning now requires as little as three seconds of audio to produce a convincing replica of any individual’s voice. A podcast appearance, a conference recording, or a LinkedIn video provides enough material. Deepfake-enabled voice attacks rose 680% in a single year, and the tools to produce them are freely available, requiring no technical expertise.
Video deepfakes have moved from detectable curiosities to real-time, interactive fakes that can sustain a full video call. The Arup case demonstrated that multi-person deepfake video calls are not theoretical. They are operational and they work.
AI-generated business email compromise (BEC) has eliminated the traditional red flags. The grammatical errors, generic phrasing, and clumsy impersonation that trained employees to spot phishing emails no longer exist. Large language models produce emails that match the vocabulary, sentence rhythm, and communication style of the person being impersonated. They reference real project names, real invoice numbers, and real internal terminology.
Why AI Detection Alone Is Not Enough
The dominant narrative in AI fraud prevention is that AI is the answer to AI-enabled fraud. Deploy smarter detection models. Train machine learning systems on more data. Build AI that can spot the AI. This narrative is not wrong, but it is dangerously incomplete.
AI detection operates in a perpetual arms race. Every improvement in detection capability is met by an improvement in generation capability. The model that detects a deepfake today will fail against the deepfake generated tomorrow. Detection tools are essential, but they are reactive by nature. They identify threats after they have been created. They do not prevent the underlying business process from being exploited.
The structural weakness that AI-enabled fraud exploits is not a technology gap. It is a controls gap. The Arup attack succeeded not because the company lacked AI detection tools, but because its approval process allowed a multi-million-dollar transfer to be authorised on the basis of a video call. The attack vector was the business process itself: a human being made an approval decision based on sensory input that had been fabricated.
This is the paradox. Businesses are investing heavily in AI systems to detect fraud while leaving the approval processes that fraud exploits largely unchanged. The front door has a state-of-the-art alarm system. The back door is still unlocked.
The Case for More Rigid Internal Controls
The counterintuitive response to AI-enabled fraud is not more flexibility. It is more rigidity. When the inputs to a decision (the email, the phone call, the video conference) can no longer be trusted at face value, the only reliable defence is a process that does not depend on those inputs.
This means structured, rule-based approval workflows that enforce controls automatically, regardless of how convincing the request appears. An invoice above a certain threshold cannot be approved by a single person, no matter how urgent the request or how senior the voice on the phone. A change to a supplier’s bank details triggers a mandatory verification step that cannot be bypassed by a persuasive email. A payment to a new vendor requires multi-step approval that no phone call, video call, or email can circumvent.
These controls work precisely because they are rigid. A deepfake CFO on a video call cannot override an approval rule embedded in a system. An AI-generated email that perfectly mimics a finance director’s writing style cannot skip a step in an automated workflow. The system does not care how convincing the impersonation is. It enforces the rules regardless.
This is fundamentally different from training employees to spot fakes. Employee training was a reasonable defence when fakes were detectable. In 2026, they frequently are not. The defence must be structural, not perceptual.
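To make “structural” concrete, here is a minimal sketch of such a rule in Python. Everything in it is illustrative rather than a prescription: the threshold, the field names, and the function are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative policy value; a real organisation would set this in configuration.
DUAL_APPROVAL_THRESHOLD = 10_000

@dataclass
class PaymentRequest:
    vendor_id: str
    amount: float
    approver_ids: set = field(default_factory=set)  # system-verified approvers only

def can_release(request: PaymentRequest) -> bool:
    # The rule takes no "urgency" argument and exposes no override path.
    # A convincing voice on the phone cannot change what this returns.
    required = 2 if request.amount >= DUAL_APPROVAL_THRESHOLD else 1
    return len(request.approver_ids) >= required
```

The point of the sketch is the absence of an escape hatch: there is no parameter through which persuasion can enter.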
What Structural AI Fraud Prevention Looks Like
A business with structural AI fraud prevention does not rely on any single person’s ability to verify that a request is genuine. It embeds the verification into the process itself.
Segregation of duties is the first layer. No single individual can create a vendor, approve an invoice from that vendor, and initiate payment. These functions are separated across different people and enforced by the system. An attacker who successfully impersonates one person cannot complete the fraud because the process requires actions from multiple individuals who are independently verified by the system, not by each other.
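In code terms, the check might look like the following sketch (the duty names and data shapes are invented for illustration):

```python
# The three duties that must never rest with one person for the same vendor.
INCOMPATIBLE_DUTIES = {"create_vendor", "approve_invoice", "initiate_payment"}

def violates_segregation(history: list[tuple[str, str, str]],
                         user_id: str, action: str, vendor_id: str) -> bool:
    """history holds (user_id, action, vendor_id) records already performed.
    Reject any action that would give one user a second incompatible duty
    for the same vendor, no matter who requested it or how urgently."""
    prior_duties = {a for (u, a, v) in history
                    if u == user_id and v == vendor_id}
    return action in INCOMPATIBLE_DUTIES and bool(
        prior_duties & (INCOMPATIBLE_DUTIES - {action}))
```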
Automated threshold routing is the second layer. Every financial document is routed based on rules, not requests. An invoice for $50,000 does not go to whoever the email says it should go to. It routes to the approver designated by the system based on the amount, the vendor category, and the cost centre. The routing cannot be overridden by a phone call or an email.
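Expressed as a sketch, routing is a lookup against a rule table rather than a field in the request. The roles and thresholds below are invented for illustration:

```python
# Illustrative rule table: (amount ceiling, vendor category) -> approver role.
ROUTING_RULES = [
    (10_000, "standard", "team_lead"),
    (100_000, "standard", "finance_director"),
    (float("inf"), "standard", "cfo"),
]

def route_invoice(amount: float, category: str = "standard") -> str:
    # The approver comes from the table, never from the invoice
    # or the email that delivered it.
    for ceiling, cat, approver_role in ROUTING_RULES:
        if category == cat and amount <= ceiling:
            return approver_role
    # No matching rule: the invoice is held for review, not approved.
    raise LookupError(f"no routing rule for {category} invoice of {amount}")
```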
Bank detail change verification is the third layer. When a supplier’s payment details are modified, the system flags the change and requires verification through a separate channel before any payment is processed. This is the specific control that prevents payment diversion fraud, the fastest-growing category of AI-enabled financial crime.
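A minimal sketch of the mechanism, assuming a stored supplier record and an out-of-band confirmation step (both hypothetical):

```python
from dataclasses import dataclass

@dataclass
class SupplierRecord:
    supplier_id: str
    iban: str
    payment_blocked: bool = False

def change_bank_details(record: SupplierRecord, new_iban: str) -> None:
    # The change is recorded, but all payments stop immediately.
    record.iban = new_iban
    record.payment_blocked = True

def confirm_via_registered_channel(record: SupplierRecord) -> None:
    # Called only after verification through contact details already on file,
    # never through a number or address supplied in the change request itself.
    record.payment_blocked = False

def can_pay(record: SupplierRecord) -> bool:
    return not record.payment_blocked
```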
Immutable audit trails are the fourth layer. Every approval decision, every routing step, every exception is logged permanently. The trail cannot be edited or deleted by anyone, including system administrators. This means that even if a fraud succeeds, the forensic evidence exists to trace exactly what happened, who was involved, and where the controls failed.
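One well-established way to make a log tamper-evident is to chain entries together with hashes, so that altering any entry invalidates everything after it. The sketch below shows the idea; a production system would add write-once storage and external anchoring on top.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log. Each entry embeds the hash of the previous one,
    so editing or deleting any entry breaks the chain."""

    def __init__(self):
        self._entries = []

    def record(self, event: dict) -> None:
        prev_hash = self._entries[-1]["hash"] if self._entries else "genesis"
        entry = {"ts": time.time(), "event": event, "prev": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._entries.append(entry)

    def verify(self) -> bool:
        prev = "genesis"
        for entry in self._entries:
            body = dict(entry)
            claimed = body.pop("hash")
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if claimed != recomputed or body["prev"] != prev:
                return False
            prev = claimed
        return True
```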
Duplicate detection is the fifth layer. AI-generated BEC attacks frequently use invoice data from previous legitimate transactions to create convincing fakes. Automated systems that flag invoices matching existing entries by vendor, amount, or reference number catch these duplicates at the approval stage, before any payment is initiated. In a manual environment, the duplicate enters the system and is only discovered during reconciliation, if it is discovered at all.
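The matching logic itself can be simple. A sketch of the kind of check involved, with hypothetical field names:

```python
def is_duplicate(invoice: dict, existing: list[dict]) -> bool:
    # Flag an invoice that matches a prior entry on vendor plus either
    # amount or reference number: the pattern AI-generated BEC reuses.
    for prior in existing:
        if invoice["vendor_id"] == prior["vendor_id"] and (
            invoice["amount"] == prior["amount"]
            or invoice["reference"] == prior["reference"]
        ):
            return True
    return False
```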
The Deepfake-Proof Approval Chain
Consider how the Arup attack would have played out under a rigid, automated control environment. The deepfake CFO on the video call requests an urgent transfer. In a manual process, the finance employee approves it because the request came from the CFO. In an automated process, the request cannot proceed because it does not match the conditions defined in the approval workflow.
A transfer of $25.6 million would require multi-step approval from designated individuals who are identified by the system, not by their face or voice on a screen. The request would need to reference an approved purchase order or contract. The receiving bank details would be verified against the stored supplier record. Any deviation from stored details would trigger an additional verification step through a separate, pre-registered channel.
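Compressed into a sketch, the whole chain reduces to checks against stored records. Every name and structure below is invented for illustration:

```python
def validate_transfer(request: dict, purchase_orders: dict,
                      suppliers: dict, approvals: set) -> list[str]:
    """Every check compares structured data against stored records.
    Nothing in this function accepts a face, a voice, or an email
    as evidence of anything."""
    errors = []
    po = purchase_orders.get(request.get("po_number"))
    if po is None or po["amount"] < request["amount"]:
        errors.append("no approved purchase order covers this amount")
    supplier = suppliers.get(request.get("supplier_id"))
    if supplier is None or supplier["iban"] != request["iban"]:
        errors.append("bank details deviate from the stored supplier record")
    if len(approvals) < 2:
        errors.append("multi-step approval incomplete")
    return errors  # an empty list is the only path to payment
```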
The deepfake is irrelevant in this scenario. The attacker cannot manipulate the system because the system does not accept verbal or visual instructions. It accepts structured data that passes through predefined rules. The sophistication of the impersonation does not matter because the impersonation is not the mechanism of approval. For organisations building financial controls designed to withstand this new threat environment, the principle is clear: remove human perception from the approval chain wherever possible and replace it with system-enforced logic.
Why Businesses Resist Rigid Controls (and Why They Shouldn’t)
The most common objection to rigid automated controls is that they slow things down. Executives want the ability to approve an urgent payment with a phone call. Finance teams want the flexibility to process an exception without going through a multi-step workflow. This desire for speed and flexibility is understandable. It is also exactly what AI-enabled fraud exploits.
Every exception process, every manual override, every shortcut that bypasses the automated workflow is a potential entry point for an attacker. The urgency that the deepfake CFO creates on the video call is designed to trigger exactly these shortcuts. The attacker knows that the target organisation has controls. The attack is designed to make the human bypass them.
Rigid controls do not slow down legitimate transactions. A well-designed automated workflow processes a standard invoice approval in minutes. What it does slow down is the fraudulent transaction that depends on human judgment being manipulated. That is precisely the point.
The New Baseline for Financial Security
The AI fraud landscape has fundamentally changed what constitutes adequate financial controls. The controls that were reasonable in 2020 (email approvals, verbal authorisations, single-person sign-offs) are now demonstrably inadequate. Auditors, insurers, and regulators are beginning to reflect this reality in their expectations.
The new baseline is automated, system-enforced controls that do not depend on human sensory verification. It is segregation of duties that is structural, not policy-based. It is approval routing that follows rules, not requests. It is bank detail verification that operates through independent channels. And it is an audit trail that is immutable, comprehensive, and generated automatically.
Cyber insurers are already adjusting their expectations. Policies that previously covered social engineering losses are now requiring evidence that the insured organisation had automated approval controls, segregation of duties, and bank detail verification in place at the time of the incident. Businesses that still rely on manual approvals and verbal authorisations are finding that their claims are being challenged or denied on the grounds that their controls were inadequate for the known threat environment. The legal and insurance landscape is catching up with the technology, and the businesses that have not upgraded their controls accordingly are exposed on multiple fronts.
AI fraud prevention in 2026 is not about building better AI to catch the criminals. It is about building business processes that the criminals cannot exploit, regardless of how sophisticated their tools become. The paradox is real: the smarter the technology gets, the more rigid the human controls around it need to be. The businesses that understand this will be the ones that survive the next generation of financial crime. The ones that rely on detection alone are placing a bet that their AI will always be smarter than the attacker’s. History suggests that is not a bet worth taking.
