
Biometrics Are Not Enough: How Advanced Fraud Is Challenging Banks

The adoption of biometrics has exploded in Brazil in recent years – 82% of Brazilians already use some form of biometric technology for authentication, driven by convenience and the search for greater security in digital services. Whether accessing banks via facial recognition or using fingerprints to authorize payments, biometrics have become the “new CPF” of personal identification, making processes faster and more intuitive.

However, a growing wave of fraud has exposed the limits of this solution: in January 2025 alone, 1.24 million fraud attempts were recorded in Brazil, a 41.6% increase over the previous year—equivalent to one attempted scam every 2.2 seconds. Many of these attacks target digital authentication systems. Data from Serasa Experian shows that in 2024, attempted fraud against banks and cards increased 10.4% compared to 2023, representing 53.4% of all frauds recorded that year.

Had they not been prevented, these frauds could have caused an estimated loss of R$51.6 billion. This increase reflects a changing landscape: scammers are evolving their tactics faster than ever. According to a Serasa survey, half of Brazilians (50.7%) were victims of digital fraud in 2024, a 9 percentage point jump compared to the previous year, and 54.2% of these victims suffered direct financial loss.

Another analysis points to a 45% increase in digital crimes in Brazil in 2024, with half of all victims effectively being duped by these scams. Given these figures, the security community asks: if biometrics promised to protect users and institutions, why do fraudsters always seem to be one step ahead?

Scams bypass facial and fingerprint recognition

Part of the answer lies in the creativity with which digital gangs circumvent biometric mechanisms. In recent months, emblematic cases have emerged. In Santa Catarina, a fraudulent group defrauded at least 50 people by clandestinely obtaining customers’ facial biometric data. A telecommunications employee simulated the sale of phone lines to capture customers’ selfies and documents, then used this data to open bank accounts and take out loans in the victims’ names.

In Minas Gerais, criminals went further: they pretended to be postal delivery people to collect fingerprints and photos of residents, with the express goal of circumventing bank security. In other words, the scammers not only attack the technology itself, but also exploit social engineering—inducing people to hand over their own biometric data without realizing it. Experts warn that even systems considered robust can be fooled.

The problem is that the popularization of biometrics has created a false sense of security: users assume that, because it is biometric, authentication is infallible.

In institutions with less stringent barriers, scammers succeed using relatively simple means, such as photos or molds to imitate physical characteristics. The so-called “silicone finger scam,” for example, has become well-known: criminals stick transparent films on ATM fingerprint readers to steal the customer’s print and then create a fake silicone finger with that fingerprint, making unauthorized withdrawals and transfers. Banks claim to already employ countermeasures—sensors capable of detecting heat, pulse, and other characteristics of a living finger, rendering artificial molds useless.

Even so, isolated cases of this scam highlight that no biometric barrier is completely safe from attempts to circumvent it. Another worrying factor is the use of social engineering schemes to obtain selfies or facial scans from customers themselves. The Brazilian Federation of Banks (Febraban) has sounded the alarm about a new type of fraud in which scammers request “confirmation selfies” from victims under false pretenses. For example, pretending to be bank or INSS employees, they ask for a facial photo “to update registration” or release a nonexistent benefit – in reality, they use this selfie to impersonate the customer in facial verification systems.

A simple oversight – such as taking a photo while responding to a request from a supposed delivery person or health worker – can provide criminals with the biometric “key” to access other people’s accounts.

Deepfakes and AI: the new frontier of scams

While deceiving people is already a widely used strategy, the most advanced criminals are also deceiving machines. This includes deepfakes—advanced voice and image manipulation using artificial intelligence—and other digital spoofing techniques that have seen a leap in sophistication from 2023 to 2025.

Last May, for example, the Federal Police launched Operation “Face Off” after identifying a scheme that defrauded approximately 3,000 Gov.br accounts using fake facial biometrics. The criminal group used highly sophisticated techniques to impersonate legitimate users on gov.br, the platform that concentrates access to thousands of digital public services.

Investigators revealed that the scammers used a combination of manipulated videos, AI-altered images, and even hyper-realistic 3D masks to fool facial recognition systems. In other words, they simulated the facial features of others—including deceased individuals—to assume identities and access financial benefits linked to those accounts. Using perfectly synchronized artificial movements like blinking, smiling, or turning the head, they were even able to bypass the liveness detection feature, which was developed specifically to detect whether a real person was in front of the camera.

The result was improper access to funds that should only be claimed by the true beneficiaries, in addition to the illicit approval of loans granted through the Meu INSS app using these fake identities. This case clearly demonstrated that, yes, it is possible to bypass facial biometrics—even in large, theoretically secure systems—when equipped with the right tools.

In the private sector, the situation is no different. In October 2024, the Civil Police of the Federal District conducted Operation “DeGenerative AI,” dismantling a gang specializing in hacking digital bank accounts using artificial intelligence apps. The criminals made more than 550 attempts to break into customers’ bank accounts, using leaked personal data and deepfake techniques to impersonate account holders, validate account-opening procedures in the victims’ names, and register their own mobile devices as if they belonged to the account holders.

It is estimated that the group managed to move around R$110 million in personal and corporate accounts, laundering money from various sources, before most of the frauds were stopped by internal bank audits.

Beyond biometrics

For the Brazilian banking sector, the rise of these high-tech scams is a warning sign. Banks have invested heavily over the last decade to migrate customers to secure digital channels, adopting facial and fingerprint biometrics as barriers against fraud.

However, the recent wave of scams suggests that relying solely on biometrics may not be enough. Scammers exploit human error and technological loopholes to impersonate consumers, and this requires security to be considered across multiple layers and authentication factors, rather than a single “magic” factor.

Faced with this complex scenario, experts agree on a recommendation: adopt multifactor authentication and multilayered security approaches. This means combining different technologies and verification methods so that if one factor fails or is compromised, others prevent fraud. Biometrics itself remains an important component – after all, when well implemented with liveness verification and encryption, it significantly hinders opportunistic attacks.
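To make the "multiple factors" idea concrete, the sketch below combines a biometric check with a standard time-based one-time password (TOTP, per RFCs 4226 and 6238). It is a minimal illustration in Python, not any bank's actual implementation; the function names and authorization logic are ours.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def totp(secret: bytes, step: int = 30, at=None) -> str:
    """Time-based OTP (RFC 6238): HOTP over the current 30-second window."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step))


def authorize(biometric_ok: bool, submitted_otp: str, secret: bytes, at: float) -> bool:
    # Layered check: the biometric result alone never authorizes; an
    # independent possession factor (the device generating the OTP)
    # must also match, so a stolen selfie is not enough by itself.
    return biometric_ok and hmac.compare_digest(submitted_otp, totp(secret, at=at))
```

The point of the design is that compromising one factor (a captured selfie or a leaked password) leaves the attacker still blocked by the other.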

However, it must work in conjunction with other controls: one-time passwords or PINs sent to the cell phone, user behavior analysis – so-called behavioral biometrics, which identifies typing patterns, device usage and can sound the alarm when noticing a customer “acting differently than normal” – and intelligent transaction monitoring.
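At its simplest, behavioral biometrics means comparing a session's keystroke timing against a stored per-user profile. The toy Python sketch below flags sessions whose inter-keystroke intervals deviate strongly from the user's history; all names and the threshold are illustrative, and production systems use far richer features and trained models.

```python
from statistics import mean, stdev


def typing_profile(history_ms):
    """Per-user profile built from historical inter-keystroke intervals (ms)."""
    return {"mean": mean(history_ms), "stdev": stdev(history_ms)}


def anomaly_score(profile, session_ms):
    """Mean absolute z-score of a new session against the stored profile."""
    m, s = profile["mean"], profile["stdev"]
    return mean(abs(x - m) / s for x in session_ms)


def flag_session(profile, session_ms, threshold=3.0):
    # A high score means the customer is "typing differently than normal";
    # the appropriate response is step-up authentication (ask for another
    # factor), not an outright block.
    return anomaly_score(profile, session_ms) > threshold
```

A fraudster who passes the facial check but types at a completely different rhythm would still trip this layer.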

AI tools are also being turned to banks’ advantage, identifying subtle signs of deepfakes in videos or voices – for example, analyzing audio frequencies to detect synthetic voices or looking for visual distortions in selfies.
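Real deepfake-voice detectors are trained models, but one underlying intuition, that crude synthetic audio can lack the natural energy burstiness of live speech (pauses, plosives, breaths), can be shown with a toy frame-energy heuristic in Python. This is purely illustrative, not a usable detector:

```python
import math


def frame_rms(samples, frame=256):
    """Root-mean-square energy of consecutive fixed-size frames."""
    return [math.sqrt(sum(s * s for s in samples[i:i + frame]) / frame)
            for i in range(0, len(samples) - frame + 1, frame)]


def energy_variation(samples, frame=256):
    """Coefficient of variation of frame energy: natural speech is bursty,
    so its frame energy varies far more than a steady synthetic tone's."""
    energies = frame_rms(samples, frame)
    m = sum(energies) / len(energies)
    var = sum((e - m) ** 2 for e in energies) / len(energies)
    return math.sqrt(var) / m if m else 0.0
```

A constant synthetic tone scores near zero on this metric, while a signal with speech-like on/off bursts scores much higher; production detectors replace this hand-made feature with spectral features fed to a classifier.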

Ultimately, the message for bank managers and information security professionals is clear: there’s no silver bullet. Biometrics have brought a higher level of security compared to traditional passwords—so much so that scams have largely shifted to deceiving people rather than breaking algorithms.

However, fraudsters are exploiting every loophole, whether human or technological, to thwart biometric systems. The appropriate response involves constantly updated cutting-edge technology and proactive monitoring. Only those who can evolve their defenses as quickly as new scams emerge will be able to fully protect their customers in the age of malicious artificial intelligence.

By Sylvio Sobreira Vieira, CEO & Head Consulting at SVX Consultoria.
