The adoption of biometrics has exploded in Brazil in recent years: 82% of Brazilians already use some biometric technology for authentication, driven by convenience and the pursuit of greater security in digital services. Whether accessing banks via facial recognition or using fingerprints to authorize payments, biometrics has become the ‘new CPF’ (Brazil’s individual taxpayer ID) for personal identification, making processes faster and more intuitive.
However, a growing wave of fraud has exposed the limits of this solution: in January 2025 alone, 1.24 million fraud attempts were recorded in Brazil, a 41.6% increase compared to the previous year – equivalent to one scam attempt every 2.2 seconds. A large portion of these attacks specifically target digital authentication systems. Data from Serasa Experian shows that in 2024, fraud attempts against banks and cards grew by 10.4% compared to 2023, accounting for 53.4% of all frauds recorded that year.
If not prevented, these frauds could have caused an estimated loss of R$ 51.6 billion. This increase reflects a shift in the landscape: scammers are evolving their tactics faster than ever. According to a Serasa survey, half of Brazilians (50.7%) were victims of digital fraud in 2024, a jump of 9 percentage points from the previous year, and 54.2% of these victims suffered direct financial losses.
Another analysis points to a 45% increase in digital crimes in 2024 in the country, with half of the victims effectively deceived by scams. Faced with these numbers, the security community questions: if biometrics promised to protect users and institutions, why do fraudsters always seem to be one step ahead?
Scams bypass facial and fingerprint recognition
Part of the answer lies in the creativity with which digital gangs circumvent biometric mechanisms, and recent months have produced emblematic cases. In Santa Catarina, a fraud ring defrauded at least 50 people by clandestinely harvesting their facial biometric data: a telecommunications employee simulated phone line sales to capture selfies and documents, later using that data to open bank accounts and take out loans in the victims’ names.
In Minas Gerais, criminals went further: they posed as postal workers to collect fingerprints and photos of residents, with the express purpose of bypassing bank security. In other words, scammers not only attack the technology itself but also exploit social engineering – tricking people into handing over their own biometric data without realizing it. Experts warn that even systems considered robust can be deceived.
The problem is that the popularization of biometrics has created a false sense of security: users assume that, because it’s biometric, authentication is foolproof.
In institutions with less rigorous barriers, scammers succeed using relatively simple means, such as photos or molds to mimic physical characteristics. The so-called ‘silicone finger scam,’ for example, became well-known: criminals place transparent films on ATM fingerprint readers to steal customers’ prints and then create a fake silicone finger with that print, making unauthorized withdrawals and transfers. Banks claim they already employ countermeasures – sensors capable of detecting heat, pulse, and other characteristics of a live finger, rendering artificial molds useless.
Still, isolated cases of this scam show that no biometric barrier is entirely safe from attempts to bypass it. Another concerning vector is the use of social engineering tricks to obtain selfies or facial scans from customers themselves. The Brazilian Federation of Banks (Febraban) sounded the alarm about a new type of fraud where scammers request ‘confirmation selfies’ from victims under false pretenses. For example, pretending to be bank or INSS employees, they ask for a face photo ‘to update records’ or release a nonexistent benefit – in reality, they use this selfie to impersonate the customer in facial verification systems.
A simple oversight – like taking a photo at the request of a supposed delivery person or health agent – can provide criminals with the biometric ‘key’ to access others’ accounts.
Deepfakes and AI: the new frontier of scams
Deceiving people is already a widespread strategy, but more advanced criminals are also deceiving machines. This is where deepfakes (advanced AI manipulation of voice and image) and other digital forgery techniques come into play; between 2023 and 2025, these techniques took a leap in sophistication.
Last May, for example, the Federal Police launched Operation ‘Face Off’ after identifying a scheme that defrauded around 3,000 accounts on the Gov.br portal, which centralizes access to thousands of public digital services, using fake facial biometrics. The criminal group applied highly sophisticated techniques to impersonate legitimate users on the platform.
Investigators revealed that the scammers used a combination of manipulated videos, AI-altered images, and even hyper-realistic 3D masks to fool the facial recognition mechanism. In other words, they simulated the facial features of third parties, including deceased individuals, to assume identities and access financial benefits linked to those accounts. By perfectly synchronizing artificial movements such as blinking, smiling, and head turns, they even bypassed liveness detection, the functionality developed precisely to verify that a real person is in front of the camera.
The result was unauthorized access to funds that should only be redeemed by the rightful beneficiaries, as well as the illicit approval of payroll loans in the Meu INSS app using these fake identities. This case starkly exposed that yes, it is possible to bypass facial biometrics – even in large and theoretically secure systems – when the right tools are available.
In the private sector, the situation is no different. In October 2024, the Civil Police of the Federal District conducted Operation ‘DeGenerative AI’, dismantling a gang that specialized in breaking into digital bank accounts with the help of AI applications. The criminals attempted more than 550 account intrusions, using leaked personal data and deepfake techniques to reproduce customers’ images, thereby passing the verification steps to open new accounts in victims’ names and to enroll mobile devices as if they were the victims’ own.
It is estimated that the group managed to move around R$ 110 million in personal and corporate accounts, laundering money from various sources, before most frauds were blocked by banks’ internal audits.
Beyond biometrics
For the Brazilian banking sector, the escalation of these high-tech scams raises a red flag. Banks invested heavily over the last decade to migrate customers to secure digital channels, adopting facial and fingerprint biometrics as barriers against fraud.
However, the recent wave of scams suggests that relying solely on biometrics may not be enough. Scammers exploit human flaws and technological gaps to impersonate consumers, and this demands that security be designed with multiple levels and authentication factors, no longer a single ‘magic’ factor.
Faced with this complex scenario, experts converge on one recommendation: adopt multi-factor authentication and multi-layered security approaches. This means combining different technologies and verification methods so that if one factor fails or is compromised, others prevent fraud. Biometrics itself remains an important piece – after all, when well-implemented with liveness verification and encryption, it significantly hinders opportunistic attacks.
However, it must work alongside other controls: one-time passwords or PINs sent to mobile phones, user behavior analysis – so-called behavioral biometrics, which identifies typing patterns, device usage, and can sound the alarm when noticing a customer ‘acting differently than usual’ – and intelligent transaction monitoring.
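The layered approach described above can be sketched as a simple decision policy. The sketch below is a minimal illustration, not any bank’s actual logic: the factor names, thresholds, and the ‘two of three layers’ rule are all hypothetical assumptions chosen to show why a stolen selfie or deepfake alone should not be enough.

```python
from dataclasses import dataclass


@dataclass
class AuthSignals:
    """Signals collected during one login attempt (all names are hypothetical)."""
    face_match_score: float   # 0.0-1.0 from the facial recognition engine
    liveness_passed: bool     # result of the liveness check
    otp_valid: bool           # one-time password sent to the registered phone
    behavior_score: float     # 0.0-1.0 similarity to the user's usual behavior


def authenticate(s: AuthSignals) -> bool:
    """Approve only when several independent factors agree.

    A strong face match alone is not enough: a stolen selfie or deepfake
    that fools the face matcher still fails the possession (OTP) or
    behavioral layer. Thresholds (0.90, 0.50) are illustrative.
    """
    factors = [
        s.face_match_score >= 0.90 and s.liveness_passed,  # biometric layer
        s.otp_valid,                                       # possession layer
        s.behavior_score >= 0.50,                          # behavioral layer
    ]
    # Require at least two of the three independent layers to pass.
    return sum(factors) >= 2


# A convincing deepfake selfie without the victim's phone or habits is rejected:
stolen_selfie = AuthSignals(face_match_score=0.99, liveness_passed=True,
                            otp_valid=False, behavior_score=0.10)
print(authenticate(stolen_selfie))  # False
```

The design point is independence: each layer fails for a different reason (biometric forgery, device theft, unfamiliar behavior), so compromising one does not compromise the whole.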
AI tools are also being used in favor of banks, identifying subtle signs of deepfake in videos or voices – for example, analyzing audio frequencies to detect synthetic voices or looking for visual distortions in selfies.
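As a toy illustration of the frequency-domain analysis mentioned above, the sketch below computes spectral flatness, one simple audio statistic that forensics pipelines use among many features. This is an assumption-laden simplification: real deepfake detectors are trained models, and no single statistic reliably flags synthetic voices.

```python
import cmath
import math
import random


def dft_power(signal):
    """Power spectrum via a naive DFT (fine for short illustrative signals)."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2):  # keep only the non-redundant bins
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        spectrum.append(abs(s) ** 2 + 1e-12)  # small offset avoids log(0)
    return spectrum


def spectral_flatness(signal):
    """Geometric mean / arithmetic mean of the power spectrum.

    Close to 1.0 for noise-like spectra; close to 0.0 when the energy is
    concentrated in a few bins (e.g. a pure tone). Audio-forensics tools
    compute many such features per frame and feed them to a classifier.
    """
    p = dft_power(signal)
    log_mean = sum(math.log(x) for x in p) / len(p)
    return math.exp(log_mean) / (sum(p) / len(p))


rng = random.Random(42)
tone = [math.sin(2 * math.pi * 5 * t / 128) for t in range(128)]  # pure tone
noise = [rng.uniform(-1, 1) for _ in range(128)]                  # broadband

print(spectral_flatness(tone) < spectral_flatness(noise))  # True
```

A tone concentrates its energy in one frequency bin, so its flatness is near zero, while broadband noise spreads energy across bins; detectors look for exactly this kind of statistical mismatch between genuine and synthesized audio.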
Ultimately, the message for bank managers and information security professionals is clear: there is no silver bullet. Biometrics brought a higher level of security compared to traditional passwords – so much so that scams largely shifted to deceiving people rather than breaking algorithms.
However, fraudsters are exploiting every loophole, whether human or technological, to thwart biometric systems. The appropriate response involves cutting-edge technology in constant updates and proactive monitoring. Only those who can evolve their defenses at the same pace as new scams emerge will be able to fully protect their customers in the era of malicious artificial intelligence.
By Sylvio Sobreira Vieira, CEO & Head Consulting at SVX Consulting.