
Biometrics isn't enough: how advanced fraud is challenging banks

The adoption of biometrics has exploded in Brazil in recent years – 82% of Brazilians already use some form of biometric technology for authentication, driven by convenience and the search for greater security in digital services. Whether accessing banks via facial recognition or using fingerprints to authorize payments, biometrics has become the "new CPF" (Brazilian taxpayer ID) in terms of personal identification, making processes faster and more intuitive.  

However, a growing wave of fraud has exposed the limitations of this solution: in January 2025 alone, 1.24 million attempted frauds were recorded in Brazil, a 41.6% increase compared to the previous year – equivalent to one attempted scam every 2.2 seconds. A large portion of these attacks specifically target digital authentication systems. Data from Serasa Experian shows that in 2024, attempted frauds against banks and credit cards grew by 10.4% compared to 2023, representing 53.4% of all frauds recorded that year.

If these frauds had not been prevented, they could have caused an estimated loss of R$ 51.6 billion. This increase reflects a changing landscape: fraudsters are evolving their tactics faster than ever before. According to a survey by Serasa, half of Brazilians (50.7%) were victims of digital fraud in 2024, a jump of 9 percentage points compared to the previous year, and 54.2% of these victims suffered direct financial loss.  

Another analysis points to a 45% increase in digital crimes in the country in 2024, with half of the victims actually being deceived by the scams. Given these numbers, the security community is questioning: if biometrics promised to protect users and institutions, why do fraudsters always seem to be one step ahead?

Scams circumvent facial and fingerprint recognition

Part of the answer lies in the creativity with which digital gangs circumvent biometric mechanisms. In recent months, emblematic cases have emerged. In Santa Catarina, a fraudulent group defrauded at least 50 people by clandestinely obtaining facial biometric data from clients – a telecommunications employee simulated sales of telephone lines to capture selfies and documents from clients, later using this data to open bank accounts and take out loans in the victims' names.  

In Minas Gerais, criminals went even further: they pretended to be postal delivery workers to collect fingerprints and photos from residents, with the express purpose of bypassing bank security. In other words, the scammers not only attack the technology itself, but also exploit social engineering – inducing people to hand over their own biometric data without realizing it. Experts warn that even systems considered robust can be fooled.  

The problem is that the popularization of biometrics has created a false sense of security: users assume that, because it is biometric, the authentication is infallible.  

In institutions with less stringent security measures, fraudsters succeed using relatively simple methods, such as photos or molds to mimic physical characteristics. The so-called "silicone finger scam," for example, has become well-known: criminals attach transparent films to fingerprint readers on ATMs to steal the customer's fingerprint and then create a fake silicone finger with that fingerprint, making unauthorized withdrawals and transfers. Banks claim to already employ countermeasures – sensors capable of detecting heat, pulse, and other characteristics of a living finger, rendering artificial molds useless.  

Still, isolated cases of this scam demonstrate that no biometric barrier is entirely safe from attempts to circumvent it. Another worrying factor is the use of social engineering tricks to obtain selfies or facial scans from customers themselves. The Brazilian Federation of Banks (Febraban) has sounded the alarm about a new type of fraud in which scammers request "confirmation selfies" from victims under false pretenses. For example, pretending to be bank or INSS (Brazilian Social Security Institute) employees, they ask for a photo of the face "to update registration" or release a non-existent benefit – in reality, they use this selfie to impersonate the customer in facial verification systems.  

A simple oversight – such as taking a photo at the request of a supposed delivery person or health worker – can provide criminals with the biometric "key" to access other people's accounts.  

Deepfakes and AI: the new frontier of scams

While deceiving people is already a widely used strategy, more sophisticated criminals are now also deceiving machines. This is where deepfakes – advanced manipulation of voice and image by artificial intelligence – and other digital forgery techniques come in; both have seen a leap in sophistication from 2023 to 2025.

Last May, for example, the Federal Police launched Operation "Face Off" after identifying a scheme that defrauded approximately 3,000 accounts on the Gov.br portal using fake facial biometrics. The criminal group employed highly sophisticated techniques to impersonate legitimate users on the gov.br platform, which centralizes access to thousands of digital public services.

Investigators revealed that the scammers used a combination of manipulated videos, AI-altered images, and even hyper-realistic 3D masks to deceive the facial recognition mechanism. In other words, they simulated the facial features of third parties – including deceased individuals – to assume identities and access financial benefits linked to those accounts. With perfectly synchronized artificial movements of blinking, smiling, or turning their heads, they even managed to circumvent the liveness detection functionality, which was developed precisely to detect if there is a real person in front of the camera.  

The result was unauthorized access to funds that should only be redeemed by the rightful beneficiaries, as well as the illicit approval of payroll loans on the Meu INSS app using these false identities. This case forcefully demonstrated that yes, it is possible to bypass facial biometrics – even in large and theoretically secure systems – when the right tools are available.  

In the private sector, the situation is no different. In October 2024, the Civil Police of the Federal District conducted Operation "DeGenerative AI," dismantling a gang specializing in breaking into digital bank accounts with the help of artificial intelligence apps. The criminals made more than 550 attempts to invade customers' accounts, using leaked personal data and deepfake techniques to reproduce the account holders' faces, and then used those fake images to pass the verification steps for opening new accounts in the victims' names and for activating mobile devices as if they belonged to the legitimate holders.

It is estimated that the group managed to move around R$ 110 million through accounts belonging to individuals and legal entities, laundering money from various sources, before most of the fraud was stopped by internal bank audits.  

Beyond biometrics

For the Brazilian banking sector, the escalation of these high-tech scams raises a red flag. Banks have invested heavily in the last decade to migrate customers to secure digital channels, adopting facial and fingerprint biometrics as barriers against fraud.  

However, the recent wave of scams suggests that relying solely on biometrics may not be enough. Scammers exploit human error and technological loopholes to impersonate consumers, and this demands that security be designed with multiple levels and authentication factors, no longer relying on a single "magic" factor.

Given this complex scenario, experts agree on one recommendation: adopt multi-factor authentication and multi-layered security approaches. This means combining different technologies and verification methods so that if one factor fails or is compromised, others prevent fraud. Biometrics itself remains an important element – after all, when well implemented with liveness verification and encryption, it greatly hinders opportunistic attacks.

However, it must work in conjunction with other controls: one-time passwords or PINs sent to the mobile phone, analysis of user behavior – so-called behavioral biometrics, which identifies typing patterns, device usage, and can sound an alarm when it notices a customer "acting differently than normal" – and intelligent transaction monitoring.  
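The layered logic described above can be sketched as a simple decision function. Everything below is illustrative: the factor names, scores, and thresholds are assumptions for the sake of the example, not any bank's real policy.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; real systems
# calibrate these against observed fraud data.
FACE_MATCH_THRESHOLD = 0.90
BEHAVIOR_ANOMALY_THRESHOLD = 0.70


@dataclass
class AuthContext:
    face_match_score: float  # 0.0-1.0, from the biometric engine
    liveness_passed: bool    # result of the liveness check
    otp_valid: bool          # one-time password sent to the phone
    behavior_anomaly: float  # 0.0 (normal) to 1.0 (very unusual)


def authorize(ctx: AuthContext) -> str:
    """Layered decision: no single factor is trusted on its own."""
    if not ctx.liveness_passed:
        return "deny"  # spoofed selfie or mask suspected
    if ctx.face_match_score < FACE_MATCH_THRESHOLD:
        return "deny"  # biometric mismatch
    if ctx.behavior_anomaly > BEHAVIOR_ANOMALY_THRESHOLD:
        # Biometrics passed, but the session "acts differently than
        # normal": escalate rather than trusting a single factor.
        return "step_up"  # request OTP again or route to manual review
    if not ctx.otp_valid:
        return "step_up"
    return "allow"
```

The point of the sketch is the fail-closed ordering: a passing biometric score never overrides a failed liveness check or an anomalous behavioral signal.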

AI tools are also being used to help banks, identifying subtle signs of deepfake in videos or voices – for example, analyzing audio frequencies to detect synthetic voices or looking for visual distortions in selfies.  
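As a toy illustration of the frequency-analysis idea mentioned above (not a production deepfake detector), one weak signal is how audio energy is distributed across spectral bands: some synthetic-voice pipelines band-limit their output. The cutoff frequency below is an assumption chosen for the example.

```python
import numpy as np


def high_band_energy_ratio(samples: np.ndarray, sample_rate: int,
                           cutoff_hz: float = 7000.0) -> float:
    """Share of the signal's spectral energy above `cutoff_hz`.

    An unusually low or unusually flat high-band share can serve as
    one weak signal among many in audio-forgery screening; a real
    detector would combine many such features with a trained model.
    """
    # Power spectrum of the real-valued signal
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return float(spectrum[freqs >= cutoff_hz].sum() / total)
```

For example, a pure 440 Hz tone yields a ratio near zero, while broadband noise spreads energy across all bands; a screening pipeline would flag recordings whose distribution deviates from what natural speech produces.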

Ultimately, the message for bank managers and information security professionals is clear: there is no silver bullet. Biometrics has brought a superior level of security compared to traditional passwords – so much so that scams have largely shifted to deceiving people, rather than breaking algorithms.  

However, fraudsters are exploiting every loophole, whether human or technological, to thwart biometric systems. The appropriate response involves constantly updated cutting-edge technology and proactive monitoring. Only those who can evolve their defenses at the same speed as new scams emerge will be able to fully protect their clients in the age of malicious artificial intelligence.

By Sylvio Sobreira Vieira, CEO & Head of Consulting at SVX Consultoria.

E-Commerce Update (https://www.ecommerceupdate.org) is a leading company in the Brazilian market, specializing in producing and disseminating high-quality content about the e-commerce sector.