According to a study by TOTVS, 36% of Brazilian companies still hesitate to invest in artificial intelligence for fear of cyberattacks and data leaks. However, the advancement of inherently secure platforms such as Google's Gemini, hosted on one of the world's most robust cloud infrastructures, is changing this paradigm. For the experts at L8 Group, a leading technology and cybersecurity company, the key to secure innovation lies in knowing how to use the security features these platforms already offer.
“The fear of exposing sensitive data and opening new avenues for threats is the main barrier to AI adoption, but the choice of technology platform is a decisive factor in mitigating those risks. Companies' fear is understandable, yet it stems from a perception of AI as a vulnerable black box. That is not true of models like Gemini: it is not a standalone tool; it operates within the Google Cloud ecosystem, which already has world-class security layers,” explains Guilherme Franco, CTO of L8.
In practice, this means customer data is protected by advanced encryption, strict privacy policies that prevent its use for training public models, and an arsenal of control tools. According to Franco, security is not an add-on but the foundation, and it can be customized further when companies already use Google Workspace, for example by integrating with Google Vault's data retention policies.
For companies that want to invest in AI securely using Gemini, L8 Group stresses that success depends on proper configuration and on making the most of the security features available on the Google Cloud platform. Here are some points raised by cybersecurity specialist Guilherme Franco:
- Secure Infrastructure by Default: Gemini benefits from the same infrastructure that protects Gmail, Search, and YouTube, including protection against distributed denial-of-service (DDoS) attacks, intrusion detection, and a private, encrypted global network.
- Data and Access Control (IAM and VPC-SC): Google Cloud Identity and Access Management (IAM) lets companies define precisely who can access AI models and data. With VPC Service Controls, they can also draw a virtual security perimeter that prevents data exfiltration, ensuring sensitive information never leaves the controlled environment (see the IAM sketch after this list).
- For Google Workspace users, Gemini respects the access levels already defined for company content, such as files in Google Drive, with no extra configuration required.
- The same controls can be extended to users on platforms other than Google Workspace, such as Microsoft, by using Google Agentspace with advanced IAM.
- Privacy and Confidentiality: Google contractually guarantees that corporate data entered into Gemini via Google Cloud is not used to train publicly available models. Control and ownership of the data remain entirely with the client company.
- Safety and Responsible AI Filters: Gemini ships with built-in safety filters that mitigate the generation of inappropriate, harmful, or biased content, protecting not only data but also brand reputation (a configuration sketch follows this list).
- “Local” data: Tools such as NotebookLM answer questions solely from the files a user chooses, without drawing on external sources such as the open web, which reduces hallucinations and ensures greater privacy (see the document-grounded sketch below).
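To illustrate the access-control point, here is a minimal sketch, assuming Google's Resource Manager v1 API and placeholder identifiers (project and user account are hypothetical): it appends an IAM binding that grants one user the Vertex AI User role, enough to call Gemini models but nothing more. A VPC Service Controls perimeter would normally be configured separately, for example via the `gcloud access-context-manager` commands or Terraform.

```python
# Hedged sketch: grant least-privilege access to Vertex AI (which hosts Gemini)
# by appending an IAM binding at the project level. The member is hypothetical;
# the caller needs permission to set IAM policy on the project.
import google.auth
from googleapiclient import discovery

credentials, project_id = google.auth.default()
crm = discovery.build("cloudresourcemanager", "v1", credentials=credentials)

# Read-modify-write: fetch the current policy, append a binding, write it back.
policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
policy.setdefault("bindings", []).append({
    "role": "roles/aiplatform.user",          # may call models; no admin rights
    "members": ["user:analyst@example.com"],  # hypothetical account
})
crm.projects().setIamPolicy(resource=project_id, body={"policy": policy}).execute()
```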
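The safety-filter point can likewise be made concrete. The sketch below uses the Vertex AI Python SDK; the project, region, and model name are illustrative, and the threshold shown (block low-probability content and above) is the strictest of the documented options.

```python
# Hedged sketch: tighten Gemini's built-in safety filters via the Vertex AI SDK.
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
)

vertexai.init(project="my-project", location="us-central1")  # placeholders
model = GenerativeModel("gemini-1.5-pro")

response = model.generate_content(
    "Draft a reply to this customer complaint: ...",
    # Block even low-probability harmful output in the four built-in categories.
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    },
)
print(response.text)
```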
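Finally, the source-only behavior that NotebookLM offers as a product can be approximated with the same SDK. In this hedged sketch, the bucket path and model name are placeholders: a single user-chosen document is attached to the request, and the prompt instructs the model to answer from that document alone rather than from general knowledge.

```python
# Hedged sketch: answer strictly from a supplied document, analogous to
# NotebookLM's source-only behavior. The GCS path is hypothetical.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")  # placeholders
model = GenerativeModel("gemini-1.5-pro")

contract = Part.from_uri(
    "gs://my-bucket/contracts/supplier-agreement.pdf",  # hypothetical file
    mime_type="application/pdf",
)
response = model.generate_content([
    contract,
    "Using ONLY the attached document, summarize its termination clauses. "
    "If the document does not cover a point, say so instead of guessing.",
])
print(response.text)
```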
To conclude, the specialist warns: “The question is no longer ‘if’ we will adopt AI, but ‘how’ we will do it securely and at scale. Platforms like Gemini solve much of the underlying security complexity. Our work at L8, for example, is to act as the strategic partner that customizes and implements these layers of protection (IAM, VPC, data governance) according to the reality and needs of each business. We turn the raw power of AI into a secure, future-proof competitive advantage. Most importantly, it is about building projects that actually work: a recent MIT study found that 95% of AI projects fail,” Franco adds.
He also warns that, alongside the well-known problem of shadow IT, cybersecurity teams now face shadow AI, in which users adopt unapproved and insecure AI tools. “Other platforms train their AIs on what users type, including confidential data, in violation of the LGPD, Brazil's data protection law. Take the recent case of Grok, which leaked more than 370,000 private conversations. To help discover and block shadow IT and shadow AI, L8 Group offers solutions that provide visibility and control over what is being accessed, in line with each company's cybersecurity policies,” he concludes.

