The advancement of deepfake technology has created serious challenges for digital security. In Brazil, this type of fraud has spread rapidly: in October 2024, the Civil Police of the Federal District launched Operation “DeGenerative AI,” aimed at dismantling a gang that specialized in hijacking bank accounts with the help of artificial intelligence applications.
The group under investigation made more than 550 attempts to break into the accounts of digital-bank customers through coordinated attacks, the use of third-party data, and deepfakes, with which they reproduced customers’ likenesses to pass account-opening and new-device-activation checks. The gang moved approximately R$ 110 million through personal and corporate accounts in activity suggestive of money laundering; the damage would have been worse were it not for the banks’ fraud-prevention audits, which blocked most of the attempts.
Deepfake technology is constantly evolving and is likely to spread even further: according to Deloitte research, fraud software can be found on the deep web at prices ranging from $20 to thousands of dollars. This illustrates the reach of the “global fraud economy,” a term used by Javelin Strategy & Research to describe criminal activity, including many types of fraud, conducted on a global scale.
According to idwall’s Financial Fraud Report, highly complex frauds grew 16% between the first quarter of 2023 and the first quarter of 2024. But which high-complexity frauds should companies be alert to?
Two types are most frequent. The first is the creation of users and documents with synthetic data, in which fraudsters generate fake documents and faces from real data, making the fraud more convincing and harder to detect. The second is selfie manipulation, in which a legitimate document is combined with a deepfake-generated photo to bypass facial recognition systems. These frauds can occur at several stages of the digital journey, such as new-customer registration, device or password changes, and requests for new products or credit.
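To make this journey-stage coverage more concrete, here is a minimal sketch of how a risk engine might map each stage to the validations it triggers. The stage names and the `REQUIRED_CHECKS` mapping are illustrative assumptions, not part of any specific product mentioned in this article.

```python
from enum import Enum, auto

class JourneyStage(Enum):
    """Stages of the digital journey where deepfake-driven fraud can appear."""
    NEW_CUSTOMER_REGISTRATION = auto()
    DEVICE_CHANGE = auto()
    PASSWORD_CHANGE = auto()
    NEW_PRODUCT_OR_CREDIT_REQUEST = auto()

# Hypothetical mapping: which validations a risk engine would trigger at each stage.
# The check names mirror the layers discussed later in this article.
REQUIRED_CHECKS = {
    JourneyStage.NEW_CUSTOMER_REGISTRATION: ["document_forensics", "ocr", "liveness", "background_check"],
    JourneyStage.DEVICE_CHANGE: ["liveness", "face_match"],
    JourneyStage.PASSWORD_CHANGE: ["liveness"],
    JourneyStage.NEW_PRODUCT_OR_CREDIT_REQUEST: ["face_match", "background_check"],
}

def checks_for(stage: JourneyStage) -> list[str]:
    """Return the validations to run at a given stage of the journey."""
    return REQUIRED_CHECKS[stage]

if __name__ == "__main__":
    print(checks_for(JourneyStage.DEVICE_CHANGE))  # ['liveness', 'face_match']
```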
Creating effective digital security solutions is as complex as the fraud they seek to prevent, especially because the Brazilian market has unique characteristics: a wide variety of handset models and operating systems, many older mobile devices still in use, and a portion of the population with limited internet access, all of which make advanced security technologies harder to implement.
Even amid these adversities, it is essential to maintain a high level of protection against fraudsters who constantly refine their techniques. For this reason, many companies have begun testing their tools with some of the methods fraudsters themselves use, such as 2D and 3D masks, to simulate faces and try to bypass authentication systems. In addition, requiring certifications that attest the biometric validation in use can detect deepfakes, such as the iBeta Level 2 seal, is crucial for companies adopting reliable and secure technology.
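As an illustration of how this kind of testing can be organized, the sketch below computes the two error rates commonly reported in presentation attack detection evaluations: APCER (the share of attack presentations, such as masks or deepfake replays, accepted as genuine) and BPCER (the share of genuine users wrongly rejected), metrics defined in the ISO/IEC 30107-3 framework that iBeta testing follows. The `Presentation` structure and the `detector` callable are hypothetical placeholders for whatever liveness or deepfake-detection model a company actually uses.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Presentation:
    """A single test presentation against the face-authentication system."""
    image_path: str
    is_attack: bool          # True for 2D/3D mask or deepfake replay attempts
    attack_type: str = ""    # e.g. "2d_mask", "3d_mask", "deepfake_replay"

def evaluate_pad(detector: Callable[[str], bool],
                 presentations: Iterable[Presentation]) -> dict:
    """Compute ISO/IEC 30107-3 style error rates for a liveness detector.

    `detector(image_path)` is assumed to return True when it judges the
    presentation to be a live, genuine face.
    APCER: attack presentations wrongly accepted as genuine.
    BPCER: genuine presentations wrongly rejected as attacks.
    """
    attacks_accepted = attacks_total = 0
    bona_fide_rejected = bona_fide_total = 0
    for p in presentations:
        accepted = detector(p.image_path)
        if p.is_attack:
            attacks_total += 1
            attacks_accepted += accepted
        else:
            bona_fide_total += 1
            bona_fide_rejected += not accepted
    return {
        "APCER": attacks_accepted / attacks_total if attacks_total else 0.0,
        "BPCER": bona_fide_rejected / bona_fide_total if bona_fide_total else 0.0,
    }

if __name__ == "__main__":
    # Toy detector for demonstration: rejects anything whose filename mentions a mask.
    toy_detector = lambda path: "mask" not in path
    samples = [
        Presentation("genuine_selfie_01.jpg", is_attack=False),
        Presentation("2d_mask_attempt_01.jpg", is_attack=True, attack_type="2d_mask"),
        Presentation("3d_mask_attempt_01.jpg", is_attack=True, attack_type="3d_mask"),
    ]
    print(evaluate_pad(toy_detector, samples))
```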
Biometric verification alone, however, is not enough to detect deepfakes: a multi-layered approach is necessary. To confirm user data with greater precision, biometrics must be combined with other tools, such as document forensics, OCR (optical character recognition), and background checks. Integrating these validations can prevent, for example, a user from being approved in the company’s onboarding process with fake data or someone else’s documents.
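Below is a minimal sketch of what such a multi-layered onboarding flow could look like, assuming each layer is provided by a separate service. Every helper function is a placeholder stub invented for this example, not a real API; a single failing layer is enough to block the applicant for manual review.

```python
from dataclasses import dataclass, field

# --- Placeholder layers: in production each would call a real service. ---

def run_document_forensics(document_image: str) -> bool:
    """Stub for tamper/forgery analysis of the submitted document image."""
    return True

def extract_with_ocr(document_image: str) -> dict:
    """Stub for OCR extraction of name, document number, dates, etc."""
    return {"name": "FULANO DE TAL", "document_number": "123456789"}

def liveness_and_face_match(selfie_image: str, document_image: str) -> bool:
    """Stub for liveness/deepfake detection plus face-to-document matching."""
    return True

def background_check(extracted: dict) -> bool:
    """Stub for validating extracted data against external data sources."""
    return True

@dataclass
class OnboardingResult:
    approved: bool
    reasons: list[str] = field(default_factory=list)

def onboard(document_image: str, selfie_image: str, declared: dict) -> OnboardingResult:
    """Run every validation layer; any failure blocks automatic approval."""
    reasons = []
    if not run_document_forensics(document_image):
        reasons.append("document failed forensic analysis")
    extracted = extract_with_ocr(document_image)
    if extracted.get("name") != declared.get("name"):
        reasons.append("OCR-extracted data does not match declared data")
    if not liveness_and_face_match(selfie_image, document_image):
        reasons.append("selfie failed liveness / face-match check")
    if not background_check(extracted):
        reasons.append("background check flagged the applicant")
    return OnboardingResult(approved=not reasons, reasons=reasons)

if __name__ == "__main__":
    print(onboard("doc.jpg", "selfie.jpg", {"name": "FULANO DE TAL"}))
```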
With the advancement of generative AI tools and of sophisticated techniques that make fraud easier and cheaper to commit, deepfake-derived fraud is likely to escalate further, moving from a niche criminal activity to “retail” scale. In this scenario, companies need to invest as soon as possible in solutions that combine technology, automation, and intelligence, opting for centralized platforms that bring together all of a user’s registration, document, and biometric data in a single environment.
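As a rough illustration of the “single environment” idea, the sketch below consolidates registration, document, biometric, and background-check results into one record that analysts and risk engines can query. All field names are assumptions made for this example, not a reference to any particular platform.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VerificationProfile:
    """A centralized record tying together everything known about a user."""
    user_id: str
    registration_data: dict   # declared name, CPF, address, contact details
    document_checks: dict     # forensic and OCR results per submitted document
    biometric_checks: dict    # liveness, face-match, and deepfake scores
    background_checks: dict   # validations against external data sources
    last_updated: datetime

    def risk_flags(self) -> list[str]:
        """Collect failed checks from every layer into a single list."""
        flags = []
        for layer in (self.document_checks, self.biometric_checks, self.background_checks):
            flags.extend(name for name, passed in layer.items() if not passed)
        return flags
```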