According to a study by TOTVS, 36% of Brazilian companies still hesitate to invest in Artificial Intelligence for fear of cyberattacks and data leaks. However, the advance of inherently secure platforms such as Google’s Gemini, hosted on one of the most robust cloud infrastructures in the world, is changing this paradigm. According to experts at L8 Group, a leading technology and cybersecurity company, the key to secure innovation lies in knowing how to use the protection features these platforms already offer.
“The fear of exposing sensitive data and opening new doors to threats is the main barrier to AI adoption. However, the choice of technology platform is a decisive factor in mitigating these risks. Companies’ concern is understandable, but it stems from a perception of AI as a vulnerable black box. That is not true of models like Gemini: it is not a standalone tool; it operates within the Google Cloud ecosystem, which already has world-class security layers,” explains Guilherme Franco, CTO of L8.
This means customer data is protected by advanced encryption, strict privacy policies that prevent its use for training public models, and an arsenal of control tools. According to Franco, security is not an add-on; it is the foundation, and it can be customized further when companies already use Google Workspace, for example by integrating with Vault data retention policies.
For companies wishing to invest in AI securely using Gemini, L8 Group emphasizes that success depends on correct configuration and on making full use of the security features available on the Google Cloud platform. Here are some points raised by cybersecurity expert Guilherme Franco:
- Secure Infrastructure by Default: Gemini benefits from the same infrastructure that protects Gmail, Search, and YouTube. This includes protection against Distributed Denial of Service (DDoS) attacks, intrusion detection, and a private, encrypted global network.
- Data and Access Control (IAM and VPC-SC): It is possible to define precisely who can access AI models and data through Google Cloud Identity and Access Management (IAM). Additionally, with VPC Service Controls, companies can create a virtual security perimeter to prevent data leakage, ensuring that sensitive information never leaves the controlled environment (a minimal IAM sketch follows this list).
- For Google Workspace users, Gemini respects the access levels already defined for company content, such as in Google Drive, with no additional configuration required.
- This can be extended to users on platforms other than Google Workspace, such as Microsoft, by using Google Agentspace with advanced IAM.
- Privacy and Confidentiality: Google contractually guarantees that corporate data entered into Gemini via Google Cloud is not used to train publicly available models. Control and ownership of the data remain entirely with the client company.
- Security and Responsible AI Filters: The Gemini platform itself has integrated security filters (safety filters) that mitigate the generation of inappropriate, dangerous, or biased content, protecting not only data but also the brand’s reputation (see the safety-filter sketch below).
- Local data: Tools such as NotebookLM generate answers based only on the files the user selects, without consulting external sources such as the open web, which reduces hallucinations and ensures greater privacy (illustrated in the final sketch below).
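To make the access-control point concrete, here is a minimal sketch of the IAM step using the google-cloud-resource-manager Python client. The project ID, group address, and role choice are illustrative assumptions, not values recommended by Google or L8; a VPC Service Controls perimeter would be configured separately through Access Context Manager.

```python
# Minimal sketch: grant a single group least-privilege access to Vertex AI,
# the Google Cloud service through which Gemini models are consumed.
# Assumes the google-cloud-resource-manager package and IAM edit permissions.
from google.cloud import resourcemanager_v3
from google.iam.v1 import policy_pb2

client = resourcemanager_v3.ProjectsClient()
resource = "projects/example-ai-project"  # hypothetical project ID

# Read-modify-write: fetch the current policy, append a binding, write it back.
policy = client.get_iam_policy(request={"resource": resource})
policy.bindings.append(
    policy_pb2.Binding(
        role="roles/aiplatform.user",  # may call models, cannot administer them
        members=["group:data-science@example.com"],  # hypothetical group
    )
)
client.set_iam_policy(request={"resource": resource, "policy": policy})
```

Binding access to a group rather than to individual users keeps the policy auditable: when staff rotate, group membership is the only thing that changes.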
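The safety filters mentioned above can also be tightened per request. Below is a minimal sketch using the Vertex AI Python SDK (google-cloud-aiplatform); the project, region, model name, and thresholds are assumptions for illustration and should follow each company’s own content policy.

```python
# Minimal sketch: raise Gemini's built-in safety filters on Vertex AI.
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

vertexai.init(project="example-ai-project", location="us-central1")  # assumed values
model = GenerativeModel("gemini-1.5-pro")  # assumed model name

response = model.generate_content(
    "Draft a customer-facing summary of our new credit product.",
    safety_settings=[
        # Block content in these categories even at low probability.
        SafetySetting(
            category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
            threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        ),
        SafetySetting(
            category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
            threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        ),
    ],
)
print(response.text)
```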
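NotebookLM itself does not expose a public API, but the same local-data principle (answering only from files the user supplies) can be sketched with Gemini. The file name, prompt wording, and project details below are assumptions for illustration.

```python
# Minimal sketch of source-grounded answering: the model receives only the
# user's chosen document and is instructed to answer from it alone.
from pathlib import Path

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="example-ai-project", location="us-central1")  # assumed values
model = GenerativeModel("gemini-1.5-pro")  # assumed model name

document = Path("contract.txt").read_text(encoding="utf-8")  # hypothetical file

prompt = (
    "Answer using ONLY the document below. If the answer is not in the "
    "document, say you do not know.\n\n"
    f"--- DOCUMENT ---\n{document}\n--- END DOCUMENT ---\n\n"
    "Question: What are the termination clauses?"
)
print(model.generate_content(prompt).text)
```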
To conclude, the specialist warns: “The issue has shifted from whether we will adopt AI to how we will do it securely and at scale. Platforms like Gemini solve much of the security complexity at the base. Our work at L8, for example, is to act as the strategic partner that customizes and implements these protection layers (IAM, VPC, data governance) according to the reality and needs of each business. We turn the raw power of AI into a secure, future-ready competitive advantage. More importantly, we build projects that actually work, because a recent MIT study revealed that 95% of AI projects fail,” adds Franco.
He also cautions that, in cybersecurity, alongside the well-known term Shadow IT there is now Shadow AI, in which users turn to unapproved, insecure AI tools. “Other platforms train their AIs on what users type, including confidential data, in violation of the LGPD, Brazil’s data protection law. Look at the recent case of Grok, which leaked more than 370,000 private conversations. To help discover and stop the use of Shadow IT and Shadow AI, L8 Group offers solutions that provide visibility and control over what is being accessed, in accordance with cybersecurity policies,” he concludes.