36% of Brazilian companies are still hesitant to invest in Artificial Intelligence for fear of cyberattacks and data leakage, according to a study by TOTVS. However, the advance of platforms that are secure by design, such as Google's Gemini, hosted on one of the most robust cloud infrastructures in the world, is changing this paradigm. For the experts at L8 Group, a leading technology and cybersecurity company, the key to secure innovation is knowing how to use the protection features these platforms already offer.
“The fear of exposing sensitive data and opening new doors to threats is the main barrier to AI adoption, but the choice of technology platform is a decisive factor in mitigating these risks. Companies' fear is understandable, but it stems from a view of AI as a vulnerable black box. That is not true of models like Gemini. It is not an isolated tool; it operates within the Google Cloud ecosystem, which already has layers of world-class security,” explains Guilherme Franco, CTO of L8.
This means that customer data is protected by advanced encryption, strict privacy policies that prevent it from being used to train public models, and an arsenal of control tools. According to Franco, security is not an add-on but the foundation, and it can be customized further when companies already use Google Workspace, for example by integrating with Vault data retention policies.
For companies that want to invest in AI safely using Gemini, L8 Group points out that success depends on correctly configuring and making full use of the security features available on the Google Cloud platform. Here are some of the points cybersecurity expert Guilherme Franco highlights:
- Secure infrastructure by default: Gemini benefits from the same infrastructure that protects Gmail, Search, and YouTube. This includes protection against denial-of-service (DDoS) attacks, intrusion detection, and a private, encrypted global network.
- Data and access control (IAM and VPC-SC): You can precisely define who can access AI models and data through Google Cloud Identity and Access Management (IAM). Additionally, with VPC Service Controls, companies can create a virtual security perimeter to prevent data leakage, ensuring sensitive information does not leave the controlled environment (a minimal IAM sketch follows this list).
- For Google Workspace users, Gemini respects the access levels already defined for company content, such as Google Drive files, with no extra configuration required.
- The same can be extended to users on platforms other than Google Workspace, such as Microsoft, by using Google Agentspace with advanced IAM.
- Privacy and confidentiality: Google contractually guarantees that corporate data entered into Gemini via Google Cloud is not used to train publicly available models. Control and ownership of the data remain fully with the client company.
- Security and responsible AI filters: The Gemini platform has built-in safety filters to mitigate the generation of inappropriate, dangerous, or biased content, protecting not only data but also brand reputation (see the safety-filter sketch after this list).
- “Local” data: Tools such as NotebookLM can answer questions based only on the files the user chooses, without consulting an external source such as the internet, which reduces hallucinations and ensures greater privacy (sketched below).
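To make the IAM point concrete, here is a minimal Python sketch of the kind of least-privilege binding described above, using the Google Cloud Resource Manager client. The project ID, group address, and role are illustrative assumptions, not a prescribed configuration:

```python
from google.cloud import resourcemanager_v3
from google.iam.v1 import iam_policy_pb2, policy_pb2

PROJECT_ID = "my-company-project"  # assumption: replace with the real project

client = resourcemanager_v3.ProjectsClient()

# Read the project's current IAM policy.
policy = client.get_iam_policy(
    request=iam_policy_pb2.GetIamPolicyRequest(resource=f"projects/{PROJECT_ID}")
)

# Grant the data-science group the narrow Vertex AI user role,
# rather than a broad project-wide role such as editor.
policy.bindings.append(
    policy_pb2.Binding(
        role="roles/aiplatform.user",
        members=["group:data-science@example.com"],  # assumption
    )
)

client.set_iam_policy(
    request=iam_policy_pb2.SetIamPolicyRequest(
        resource=f"projects/{PROJECT_ID}", policy=policy
    )
)
```

The VPC Service Controls perimeter mentioned alongside IAM is typically defined at the organization level with Access Context Manager rather than in per-project code.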
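Gemini's built-in safety filters are exposed directly in the Vertex AI Python SDK, so companies can tighten the defaults rather than build their own moderation. A minimal sketch, assuming an illustrative project, location, and model name:

```python
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmCategory,
    HarmBlockThreshold,
)

# Assumptions: replace project and location with your own.
vertexai.init(project="my-company-project", location="us-central1")

model = GenerativeModel("gemini-1.5-pro")

# Tighten the defaults: block content even at low probability of harm
# for these categories.
safety_settings = {
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
}

response = model.generate_content(
    "Draft a public statement about our new product.",
    safety_settings=safety_settings,
)
print(response.text)
```

When a filter is triggered, the response is blocked rather than returned, which is what protects brand reputation in practice.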
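NotebookLM itself is an end-user product rather than an API, but the same "answer only from the files I chose" pattern can be approximated with the Vertex AI SDK by making the user's document the model's only context. The file name and prompt below are assumptions:

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Part

# Assumptions: replace project and location with your own.
vertexai.init(project="my-company-project", location="us-central1")

model = GenerativeModel("gemini-1.5-pro")

# Load the user's chosen document as inline context; no external
# search or retrieval is involved.
with open("internal_policy.pdf", "rb") as f:
    document = Part.from_data(f.read(), mime_type="application/pdf")

response = model.generate_content([
    document,
    "Answer using only the attached document. "
    "If the answer is not in it, say so instead of guessing.",
])
print(response.text)
```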
Finally, the expert warns: “The question is no longer IF we will adopt AI, but HOW we will do it in a safe and scalable way. Platforms like Gemini solve much of the security complexity at the foundation. Our work at L8, for example, is to act as the strategic partner that customizes and implements these layers of protection, IAM, VPC, data governance, according to the reality and needs of each business. We turn the raw power of AI into a secure, future-ready competitive advantage. More importantly, we deliver projects that are genuinely functional, which matters when an MIT study finds that 95% of AI projects fail.”
He also warns that, on the cybersecurity front, beyond the already familiar term Shadow IT there is now Shadow AI, in which users adopt unapproved and insecure AI tools. “Other platforms train their AIs on what users type, including sensitive data, violating the LGPD. See the recent case of Grok, which leaked more than 370,000 private conversations. To help discover and stop the use of Shadow IT and Shadow AI, L8 Group offers solutions that give visibility and control over what is being accessed, in accordance with each company's policies,” he concludes.
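As a generic illustration of how Shadow AI discovery can work (a sketch of the concept, not L8's product), a common first step is scanning web-proxy logs for traffic to AI tools outside the approved list. The log format and domain lists here are assumptions:

```python
import csv

# Assumptions: example domain lists; real deployments maintain curated feeds.
UNAPPROVED_AI_DOMAINS = {"chat.openai.com", "grok.com", "claude.ai"}
APPROVED_AI_DOMAINS = {"gemini.google.com"}

def flag_shadow_ai(log_path: str) -> list[dict]:
    """Return proxy-log rows whose destination is an unapproved AI tool."""
    flagged = []
    with open(log_path, newline="") as f:
        # Assumes CSV columns: user, domain, timestamp.
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in UNAPPROVED_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
                flagged.append(row)
    return flagged

for hit in flag_shadow_ai("proxy_logs.csv"):
    print(f'{hit["timestamp"]}: {hit["user"]} accessed {hit["domain"]}')
```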

