Faking the Figures (and the Faces): Deepfake Financial Frauds

The proliferation of deepfake creation software on the Dark Web is fueling a surge in AI-assisted financial fraud, raising urgent questions about how to combat this growing threat.

Recently, a chilling example unfolded when a Hong Kong-based employee in a multinational corporation’s finance department fell victim to a sophisticated deepfake scam. The employee received what appeared to be a legitimate request from the company’s UK-based CFO to execute a transaction. Despite initial doubts, the employee was swayed when a video conference call appeared to confirm the CFO’s identity.

What followed was a series of transactions totaling a staggering $25.5 million, orchestrated by scammers who used deepfake technology to convincingly impersonate company executives.

While high-quality deepfakes were once the domain of skilled professionals, they are now accessible to a much broader audience, with simple tools available for download at low cost.

Face swapping, for instance, has become commonplace, with over 100 tools on the market for creating basic swaps. Additionally, Dark Web services like OnlyFake offer realistic fake IDs for a mere $15 each, signaling a significant shift in document forgery methods.

Yet, it’s not just the underground market capitalizing on deepfake advancements. Legitimate companies also leverage this technology for various applications, including multimedia production and entertainment.

The escalating sophistication of deepfake technology poses a significant challenge to detection efforts. Traditional methods of identifying imperfections in fake images or voices are becoming less reliable as technology improves rapidly.

Detection technology struggles to keep pace with deepfake advancements, but in the meantime, basic metadata checks remain a pragmatic line of defense. While deepfake attackers excel at creating convincing illusions, they often leave behind telltale signs that careful scrutiny can expose.
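As a minimal sketch of what such a metadata check might look like, the snippet below uses the Pillow library to read EXIF data from an incoming image and flags traits common to generated or heavily edited files, such as missing camera information or an editing tool recorded in the Software tag. The suspicious-software keywords and the file name are assumptions for illustration; absent metadata is a signal to investigate, not proof of fakery.

```python
# Minimal sketch: inspect EXIF metadata of an incoming image and flag
# characteristics common to synthetic or edited files. Illustrative only;
# a missing tag does not prove an image is fake, and real pipelines would
# combine this with stronger provenance checks.
from PIL import Image, ExifTags

# Software strings treated as suspicious (assumed list for illustration).
SUSPICIOUS_SOFTWARE_HINTS = ("photoshop", "gimp", "faceswap", "diffusion")

def inspect_image_metadata(path: str) -> list[str]:
    """Return a list of human-readable warnings for the given image file."""
    warnings = []
    with Image.open(path) as img:
        exif = img.getexif()

    if not exif:
        warnings.append("No EXIF metadata at all (common for generated or scrubbed images).")
        return warnings

    # Map numeric EXIF tag IDs to their names for readability.
    named = {ExifTags.TAGS.get(tag_id, str(tag_id)): value
             for tag_id, value in exif.items()}

    if "Make" not in named and "Model" not in named:
        warnings.append("No camera make/model recorded.")

    software = str(named.get("Software", "")).lower()
    if any(hint in software for hint in SUSPICIOUS_SOFTWARE_HINTS):
        warnings.append(f"Editing/generation software recorded: {named['Software']}")

    if "DateTime" not in named:
        warnings.append("No capture/modification timestamp recorded.")

    return warnings

if __name__ == "__main__":
    for warning in inspect_image_metadata("incoming_id_photo.jpg"):
        print("WARNING:", warning)
```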

Also, instead of relying solely on detection, companies should focus on preventing synthetic content from reaching employees in the first place. Verifying caller identities through independent channels and checking the geolocation of transaction requests can help flag suspicious activity before it’s too late.
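To make that concrete, here is a small sketch of a pre-transaction screening rule. The field names, expected regions, and callback threshold are hypothetical assumptions, not a prescribed policy: the idea is simply that a large request is held unless it originates from an expected region and has been confirmed via an out-of-band callback to a known number rather than the inbound call itself.

```python
# Minimal sketch of a pre-transaction screening rule, using hypothetical
# fields on a payment request (requester, amount, origin country, and whether
# an out-of-band callback to a known number has confirmed the request).
# Thresholds, regions, and field names are illustrative only.
from dataclasses import dataclass

EXPECTED_ORIGIN_COUNTRIES = {"GB", "HK"}   # regions the CFO normally works from (assumption)
CALLBACK_REQUIRED_ABOVE = 50_000           # USD threshold for mandatory voice callback (assumption)

@dataclass
class PaymentRequest:
    requester: str
    amount_usd: float
    origin_country: str        # derived from request IP / device geolocation
    callback_verified: bool    # confirmed via a known phone number, not the inbound call

def screen_request(req: PaymentRequest) -> list[str]:
    """Return reasons to hold the request; an empty list means it may proceed."""
    holds = []
    if req.origin_country not in EXPECTED_ORIGIN_COUNTRIES:
        holds.append(f"Request originates from unexpected region: {req.origin_country}")
    if req.amount_usd > CALLBACK_REQUIRED_ABOVE and not req.callback_verified:
        holds.append("Amount exceeds threshold and no out-of-band callback on record")
    return holds

if __name__ == "__main__":
    request = PaymentRequest("cfo@example.com", 25_500_000, "RU", callback_verified=False)
    for reason in screen_request(request):
        print("HOLD:", reason)
```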
