
Fear of cyberattacks holds back AI, but platforms like Google’s Gemini offer a safe path, L8 Group experts point out

According to a study by TOTVS, 36% of Brazilian companies still hesitate to invest in Artificial Intelligence for fear of cyberattacks and data leaks. However, the advance of inherently secure platforms such as Google’s Gemini, hosted on one of the world’s most robust cloud infrastructures, is changing this paradigm. For the experts at L8 Group, a leading name in technology and cybersecurity, the key to secure innovation lies in knowing how to use the protection features these platforms already offer.

“The fear of exposing sensitive data and opening new doors to threats is the main barrier to AI adoption, but the choice of technology platform is a decisive factor in mitigating these risks. Companies’ fear is understandable, yet it stems from a view of AI as a vulnerable black box. That is not true of models like Gemini: it is not an isolated tool, but operates within the Google Cloud ecosystem, which already has world-class security layers,” explains Guilherme Franco, CTO of L8.

In practice, this means customer data is protected by advanced encryption, strict privacy policies that prevent its use for training public models, and an arsenal of control tools. According to Franco, security is not an add-on but the foundation, and it can be customized further when a company already uses Google Workspace, for example by integrating with data retention tools such as Google Vault.

For companies looking to invest in AI securely using Gemini, L8 Group emphasizes that success depends on correct configuration and maximizing the security resources available on the Google Cloud platform. Check out some points raised by cybersecurity expert Guilherme Franco:

  1. Secure Infrastructure by Default: Gemini benefits from the same infrastructure that protects Gmail, Search, and YouTube, including protection against distributed denial-of-service (DDoS) attacks, intrusion detection, and a private, encrypted global network.
  2. Data and Access Control (IAM and VPC-SC): Google Cloud Identity and Access Management (IAM) makes it possible to define precisely who can access AI models and data. Additionally, with VPC Service Controls, companies can create a virtual security perimeter that prevents data exfiltration, ensuring sensitive information never leaves the controlled environment; a minimal IAM sketch follows this list.
    1. For Google Workspace users, Gemini respects the access levels already defined for company content, such as Google Drive files, without requiring extra configuration.
    2. The same controls can be extended to users on platforms other than Google Workspace, such as Microsoft’s, through Google Agentspace with advanced IAM.
  3. Privacy and Confidentiality: Google guarantees, by contract, that corporate data entered into Gemini via Google Cloud is not used to train general-access models. Control and ownership of the data remain entirely with the client company.
  4. Security and Responsible AI Filters: The Gemini platform has built-in safety filters to mitigate the generation of inappropriate, dangerous, or biased content, protecting not only data but also the brand’s reputation; a configuration sketch appears after this list.
  5. “Local” Data: Tools such as NotebookLM infer content only from files the user has chosen, without consulting an external research base such as the internet, which reduces hallucinations and ensures greater privacy; see the grounding sketch below.
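
To make point 2 concrete, the sketch below shows one way an administrator might grant a single account least-privilege access to Gemini models on Vertex AI. It is a minimal sketch using the google-cloud-resource-manager Python library; the project ID and the user’s e-mail address are hypothetical placeholders, and a VPC Service Controls perimeter would be configured separately through Access Context Manager.

```python
# Minimal sketch: least-privilege access to Gemini on Vertex AI via IAM.
# Assumes google-cloud-resource-manager is installed and the caller is
# allowed to change the project's IAM policy. "my-project" and
# "analyst@example.com" are hypothetical placeholders.
from google.cloud import resourcemanager_v3
from google.iam.v1 import iam_policy_pb2, policy_pb2

PROJECT = "projects/my-project"

client = resourcemanager_v3.ProjectsClient()

# Read the project's current IAM policy.
policy = client.get_iam_policy(
    request=iam_policy_pb2.GetIamPolicyRequest(resource=PROJECT)
)

# The "Vertex AI User" role lets the account call Gemini models
# without granting any broader project permissions.
policy.bindings.append(
    policy_pb2.Binding(
        role="roles/aiplatform.user",
        members=["user:analyst@example.com"],
    )
)

# Write the updated policy back.
client.set_iam_policy(
    request=iam_policy_pb2.SetIamPolicyRequest(resource=PROJECT, policy=policy)
)
```

In day-to-day operations such bindings are more often managed through gcloud or Terraform, but the principle is the same: access to the models is an explicit, auditable grant.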
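
For point 4, the safety filters are adjustable per application. The sketch below uses the Vertex AI Python SDK (google-cloud-aiplatform); the project ID is a hypothetical placeholder, and model names and default thresholds vary by SDK version.

```python
# Minimal sketch: tightening Gemini's built-in safety filters with the
# Vertex AI Python SDK. "my-project" is a hypothetical placeholder.
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

vertexai.init(project="my-project", location="us-central1")

# Block anything rated medium-or-higher risk in these categories,
# rather than relying on the default thresholds.
safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    ),
]

model = GenerativeModel("gemini-1.5-pro", safety_settings=safety_settings)
response = model.generate_content("Summarize our incident-response policy.")
print(response.text)
```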
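
NotebookLM itself exposes no public API, but the “answer only from my files” pattern in point 5 can be approximated by handing Gemini a user-chosen document as its only context. A minimal sketch, reusing the model object from the previous example; “policy.txt” is a hypothetical local file.

```python
# Minimal sketch: NotebookLM-style "local" grounding, approximated by
# giving Gemini only a user-chosen file as context. "policy.txt" is a
# hypothetical local file; `model` comes from the previous sketch.
from pathlib import Path

document = Path("policy.txt").read_text(encoding="utf-8")

prompt = (
    "Answer using ONLY the document below. If the answer is not in the "
    "document, say you do not know.\n\n"
    f"--- DOCUMENT ---\n{document}\n--- END DOCUMENT ---\n\n"
    "Question: What is our data retention period?"
)

response = model.generate_content(prompt)
print(response.text)
```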

To conclude, the expert warns: “The question is no longer ‘if’ we will adopt AI, but ‘how’ we will do it securely and scalably. Platforms like Gemini solve much of the security complexity at the base. Our work at L8, for example, is to act as the strategic partner that customizes and implements these protection layers (IAM, VPC Service Controls, data governance) according to the reality and needs of each business. We transform the raw power of AI into a secure, future-ready competitive advantage. More importantly, we build projects that actually work: a recent MIT study found that 95% of AI projects fail,” adds Franco.

He also warns that, in cybersecurity, alongside the well-known problem of Shadow IT there is now Shadow AI, in which users adopt unapproved and insecure AI tools. “Other platforms train their AIs on what users type, including confidential data, in violation of data protection laws such as the GDPR. Look at the recent case of Grok, which leaked over 370,000 private conversations. To help discover and block Shadow IT and Shadow AI, L8 Group offers solutions that provide visibility and control over what is being accessed, in line with each company’s cybersecurity policies,” he concludes.
