Biometrics is not enough: how advanced frauds are challenging banks

Biometric adoption has exploded in Brazil in recent years – 82% of Brazilians already use some form of biometric technology for authentication, driven by convenience and the pursuit of greater security in digital services. Whether accessing bank accounts via facial recognition or authorizing payments with a fingerprint, biometrics has become the "new CPF" of personal identification, making processes quicker and more intuitive.

However, a growing wave of fraud has exposed the limits of this solution: in January 2025 alone, 1.24 million fraud attempts were recorded in Brazil, a 41.6% increase compared to the previous year – equivalent to a scam attempt every 2.2 seconds. A significant portion of these attacks targets digital authentication systems. Data from Serasa Experian shows that in 2024, fraud attempts against banks and cards increased by 10.4% compared to 2023, representing 53.4% of all frauds reported in the year.  

If these frauds had not been prevented, they could have caused an estimated loss of R$ 51.6 billion. This increase reflects a changing landscape: fraudsters are evolving their tactics faster than ever before. According to a Serasa survey, half of Brazilians (50.7%) fell victim to digital fraud in 2024, a 9-percentage-point jump from the previous year, with 54.2% of these victims experiencing direct financial losses.  

Another analysis shows a 45% increase in digital crimes in the country in 2024, with half of the victims actually falling for the scams. Faced with these numbers, the security community asks: if biometrics promised to protect users and institutions, why do fraudsters always seem to be one step ahead?

Attacks bypass facial and fingerprint recognition

Part of the answer lies in the creativity with which digital gangs bypass biometric mechanisms. In recent months, emblematic cases have emerged. In Santa Catarina, a fraudulent group harmed at least 50 people by clandestinely obtaining their facial biometric data – a telecommunications employee simulated phone-line sales to capture customers' selfies and documents, later using this data to open bank accounts and take out loans in the victims' names.

In Minas Gerais, criminals went further: they pretended to be Post Office couriers to collect fingerprints and photos of residents with the express purpose of circumventing bank security. In other words, scammers not only attack the technology itself but also exploit social engineering – inducing people to unwittingly provide their own biometric data. Experts warn that even systems considered robust can be deceived.

The problem is that the popularization of biometrics has created a false sense of security: users assume that, because it is biometric, authentication is infallible.

In institutions with less stringent barriers, scammers succeed using relatively simple means, such as photos or molds that mimic physical features. The so-called 'silicone finger' scam is a known example: criminals place transparent films over ATM fingerprint readers to capture a customer's print and then mold a fake silicone finger from it, making unauthorized withdrawals and transfers. Banks claim to already employ countermeasures – sensors capable of detecting heat, pulse, and other characteristics of a live finger, rendering artificial molds useless.

Nevertheless, isolated cases of this scam show that no biometric barrier is completely safe from attempted circumvention. Another worrisome vector is the use of social engineering tricks to obtain selfies or facial scans from clients themselves. The Brazilian Federation of Banks (Febraban) sounded the alarm about a new type of fraud in which scammers request 'confirmation selfies' from victims under false pretenses. Pretending to be bank or INSS employees, for instance, they ask for a photo of the face 'to update records' or to release a nonexistent benefit – in reality, they use this selfie to impersonate the client in facial verification systems.

A simple oversight – like taking a photo in response to a supposed delivery person or health agent’s request – can provide criminals with the ‘biometric key’ to access other people’s accounts.

Deepfakes and AI: the new frontier of scams

If fooling people is already a common strategy, more advanced criminals are also deceiving machines. This is where deepfakes come in – advanced AI manipulation of voice and image – along with other digital forgery techniques that leapt in sophistication between 2023 and 2025.

Last May, for instance, the Federal Police launched Operation ‘Face Off’ after identifying a scheme that defrauded around 3,000 accounts on the Gov.br portal using fake facial biometrics. The criminal group applied highly sophisticated techniques to impersonate legitimate users on the platform gov.br, which provides access to thousands of digital public services.

Researchers revealed that scammers used a combination of manipulated videos, AI-altered images, and even hyper-realistic 3D masks to deceive the facial recognition mechanism. In other words, they simulated the facial features of third parties – including deceased individuals – to assume identities and access financial benefits linked to those accounts. With perfectly synchronized artificial blinking, smiling, and head-turning movements, they were even able to bypass the liveness detection feature, designed to verify that a real person is in front of the camera.

The result was improper access to funds that should have been claimed only by the true beneficiaries, as well as the unauthorized approval of payroll-deducted (consignado) loans in the Meu INSS app using these false identities. The case starkly exposed that facial biometrics can indeed be circumvented – even in large and theoretically secure systems – given the right tools.

In the private sector, the situation is no different. In October 2024, the Civil Police of the Federal District led the 'DeGenerative AI' operation, dismantling a gang specialized in hacking digital bank accounts through artificial intelligence apps. The criminals made over 550 attempts to breach customers' bank accounts, using leaked personal data and deepfake techniques to replicate account holders' images and thus validate procedures for opening new accounts in the victims' names and registering mobile devices as if they belonged to them.

It is estimated that the group managed to move around R$ 110 million in accounts of individuals and legal entities, laundering money from various sources before most of the frauds were stopped by internal bank audits.  

Beyond biometrics

For the Brazilian banking sector, the escalation of these high-tech scams raises a red flag. Banks have invested heavily over the last decade to migrate customers to secure digital channels, adopting facial and fingerprint biometrics as barriers against fraud.

However, the recent wave of scams suggests that relying solely on biometrics may not be enough. Scammers exploit human flaws and technological loopholes to impersonate consumers, which demands that security be designed around multiple layers and authentication factors, no longer a single 'magic' factor.

Faced with this complex scenario, experts converge on a recommendation: adopting multi-factor authentication and multi-layered security approaches. This means combining different technologies and verification methods, so that if one factor fails or is compromised, others prevent fraud. Biometrics itself remains an important piece – after all, when well implemented with liveness verification and encryption, it greatly hinders opportunistic attacks.  

However, it should work together with other controls: one-time passwords or PINs sent to the cellphone; user behavior analysis – known as behavioral biometrics, which identifies typing patterns and device-usage habits and can raise an alarm when a customer is 'acting differently than usual' – and intelligent transaction monitoring.
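A minimal sketch of the layered decision described above (all names, weights, and thresholds are illustrative assumptions, not any bank's real logic): the biometric check must pass, a one-time password must pass, and a crude behavioral-biometrics signal – here, a z-score on the user's inter-keystroke timing – must look normal.

```python
import statistics

def keystroke_anomaly_score(baseline_intervals, session_intervals):
    """Compare a session's inter-keystroke intervals (ms) against the
    user's historical baseline; a high score suggests someone else
    (or a script/replay) is typing. Toy model for illustration."""
    mu = statistics.mean(baseline_intervals)
    sigma = statistics.stdev(baseline_intervals)
    session_mu = statistics.mean(session_intervals)
    return abs(session_mu - mu) / sigma  # z-score of the session mean

def authenticate(biometric_ok, otp_ok, anomaly_score, threshold=3.0):
    """Layered decision: no single factor is sufficient on its own.
    Biometrics AND the OTP must pass, AND behavior must look normal."""
    return biometric_ok and otp_ok and anomaly_score < threshold

# Hypothetical usage: a normal session passes; a much faster,
# machine-like typing cadence trips the behavioral layer even
# though the biometric and OTP factors were satisfied.
baseline = [180, 190, 200, 210, 185, 195]   # historical intervals (ms)
normal_session = [188, 192, 205]
scripted_session = [60, 55, 70]             # suspiciously fast input
```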

AI tools are also being used in banks' favor, identifying subtle signs of deepfakes in videos or voices – for example, analyzing audio frequencies to detect synthetic voices or looking for visual distortions in selfies.
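As a rough illustration of the frequency-analysis idea (a toy heuristic, not a production deepfake detector – real systems use optimized FFTs and trained models, and every cutoff and threshold here is an assumption): some low-bandwidth synthetic voices carry almost no energy above a few kilohertz, so one crude spectral check flags clips whose high-frequency share of total energy is abnormally small.

```python
import math

def band_energy(samples, sample_rate, f_lo, f_hi):
    """Naive DFT energy in the band [f_lo, f_hi) Hz.
    Fine for short illustrative clips; O(n^2), so not for real audio."""
    n = len(samples)
    energy = 0.0
    for k in range(n // 2):
        freq = k * sample_rate / n
        if f_lo <= freq < f_hi:
            re = sum(s * math.cos(2 * math.pi * k * t / n)
                     for t, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * t / n)
                     for t, s in enumerate(samples))
            energy += re * re + im * im
    return energy

def looks_synthetic(samples, sample_rate, cutoff_hz=4000, min_ratio=0.01):
    """Heuristic: flag a clip as possibly synthetic when the share of
    spectral energy above cutoff_hz is tiny (assumed threshold)."""
    high = band_energy(samples, sample_rate, cutoff_hz, sample_rate / 2)
    total = band_energy(samples, sample_rate, 0, sample_rate / 2)
    return total > 0 and high / total < min_ratio
```

A hypothetical check on synthetic signals: a pure 440 Hz tone at 16 kHz has essentially no energy above 4 kHz and gets flagged, while the same tone mixed with a 6 kHz component does not.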

Ultimately, the message for bank managers and information security professionals is clear: there is no silver bullet. Biometrics has brought a higher level of security compared to traditional passwords – to the extent that scams have largely shifted to deceiving people rather than breaking algorithms.  

However, fraudsters are exploiting every loophole, whether human or technological, to defeat biometric systems. The appropriate response involves cutting-edge technology that is constantly updated and proactively monitored. Only those who can evolve their defenses at the same speed as new scams emerge will be able to fully protect their customers in the era of malicious artificial intelligence.

By Sylvio Sobreira Vieira, CEO & Head Consulting at SVX Consultoria.
