According to a study by TOTVS, 36% of Brazilian companies still hesitate to invest in Artificial Intelligence for fear of cyberattacks and data leaks. However, the advance of inherently secure platforms such as Google's Gemini, hosted on one of the most robust cloud infrastructures in the world, is changing that paradigm. For the experts at L8 Group, a leading technology and cybersecurity firm, the key to secure innovation lies in knowing how to use the protection features these platforms already offer.
“The fear of exposing sensitive data and opening new doors for threats is the main barrier to AI adoption. However, the choice of technological platform is a decisive factor in mitigating these risks. Companies’ fear is understandable, but it stems from a view that AI is a vulnerable black box. This is not true when we talk about models like Gemini. It is not an isolated tool; it operates within the Google Cloud ecosystem, which already has world-class security layers,” explains Guilherme Franco, CTO at L8.
In practice, this means customer data is protected by advanced encryption, strict privacy policies that prevent its use for training public models, and an arsenal of control tools. According to Franco, security is not an add-on but the foundation, and it can be customized further when a company already uses Google Workspace, for example by integrating with data-retention tools such as Google Vault.
For companies that want to invest in AI securely using Gemini, L8 Group stresses that success depends on proper configuration and on making full use of the security features Google Cloud already offers. Check out some of the points raised by cybersecurity expert Guilherme Franco:
- Secure Infrastructure by Default: Gemini benefits from the same infrastructure that protects Gmail, Search, and YouTube. This includes protection against DDoS attacks, intrusion detection, and a private, encrypted global network.
- Data and Access Control (IAM and VPC-SC): Companies can define precisely who may access AI models and data through Google Cloud Identity and Access Management (IAM). In addition, with VPC Service Controls they can draw a virtual security perimeter that blocks data exfiltration, ensuring sensitive information never leaves the controlled environment (see the IAM sketch after this list).
- For Google Workspace users, Gemini respects the access levels already defined for corporate content such as Google Drive, with no extra configuration required.
- These controls can also be extended to organizations on non-Google platforms, such as Microsoft 365, through Google Agentspace combined with advanced IAM.
- Privacy and Confidentiality: Google contractually guarantees that corporate data entered into Gemini via Google Cloud is not used to train general-access models. Control and ownership of the data remain entirely with the client company.
- Security and Responsible AI Filters: Gemini ships with built-in safety filters that mitigate the generation of inappropriate, dangerous, or biased content, protecting not only data but also the brand's reputation (a filter-configuration sketch follows this list).
- “Local” Data: Tools such as NotebookLM answer only from files the user has chosen, without consulting external sources such as the open internet, which reduces hallucinations and ensures greater privacy (see the grounding sketch after this list).
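To make the IAM point concrete, here is a minimal Python sketch of the kind of least-privilege binding Franco describes: a single group is granted only the role needed to call Vertex AI (Gemini) models in a project, and nothing more. The project ID and group address are hypothetical placeholders, and the snippet assumes Application Default Credentials are already configured; a VPC Service Controls perimeter would be layered on separately, typically via gcloud or Terraform rather than this API.

```python
# A minimal sketch, not a production script: grant one group permission to
# call Vertex AI (Gemini) models in a project, and nothing more.
# Assumes Application Default Credentials; all IDs below are hypothetical.
from googleapiclient import discovery

PROJECT_ID = "acme-ai-prod"              # hypothetical project
MEMBER = "group:ai-users@acme.example"   # hypothetical Workspace group
ROLE = "roles/aiplatform.user"           # can invoke models, cannot administer them

crm = discovery.build("cloudresourcemanager", "v1")

# Standard read-modify-write cycle on the project's IAM policy.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
bindings = policy.setdefault("bindings", [])
binding = next((b for b in bindings if b["role"] == ROLE), None)
if binding is None:
    bindings.append({"role": ROLE, "members": [MEMBER]})
elif MEMBER not in binding["members"]:
    binding["members"].append(MEMBER)

crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
```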
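The safety filters mentioned above are configurable per request. The sketch below, using the Vertex AI Python SDK, tightens Gemini's built-in filters so that even low-probability harmful content is blocked; the project, location, and prompt are illustrative, and the right thresholds depend on each company's own content policy.

```python
# A short sketch of Gemini's built-in safety filters on Vertex AI, tightened
# to block even low-probability harmful output. Project, location, and prompt
# are hypothetical examples.
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

vertexai.init(project="acme-ai-prod", location="us-central1")  # hypothetical

model = GenerativeModel("gemini-1.5-pro")

safety_settings = [
    SafetySetting(category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
                  threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE),
    SafetySetting(category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
                  threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE),
    SafetySetting(category=HarmCategory.HARM_CATEGORY_HARASSMENT,
                  threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE),
]

response = model.generate_content(
    "Draft a customer-facing summary of our security policy.",
    safety_settings=safety_settings,
)
print(response.text)
```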
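NotebookLM itself is an end-user product without a public API, so as an illustration of the same “local data” idea, this hedged sketch uses the Vertex AI SDK to constrain Gemini to answer only from a file the user supplies, rather than from outside knowledge; the file name and question are hypothetical.

```python
# Illustration of the "local data" pattern (NotebookLM has no public API):
# the model is instructed to answer strictly from a user-chosen file.
# File name and question are hypothetical.
from pathlib import Path

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="acme-ai-prod", location="us-central1")  # hypothetical

source_text = Path("security_policy.txt").read_text()  # user-chosen file

model = GenerativeModel(
    "gemini-1.5-flash",
    system_instruction=(
        "Answer only from the document provided. If the answer is not in "
        "the document, say so instead of guessing."
    ),
)

response = model.generate_content(
    [f"DOCUMENT:\n{source_text}", "QUESTION: What is our data retention period?"]
)
print(response.text)
```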
Finally, the expert warns: “The question is no longer ‘if’ we will adopt AI, but ‘how’ we will do it securely and scalably. Platforms like Gemini solve much of the security complexity at the foundation. Our work at L8, for example, is to act as the strategic partner that customizes and implements these protection layers, such as IAM, VPC, and data governance, according to the reality and needs of each business. We turn raw AI power into a secure, future-ready competitive advantage. More importantly, we build projects that actually work, as a recent MIT study showed that 95% of AI projects fail,” adds Franco.
He further warns that, alongside the well-known problem of shadow IT, cybersecurity teams now face shadow AI, where users turn to unapproved and insecure AI tools. “Other platforms train their AIs on user input, including confidential data, in violation of the GDPR. Look at the recent Grok case, in which more than 370,000 private conversations were exposed. To help detect and block shadow IT and shadow AI, L8 Group offers solutions that give visibility and control over what is being accessed, in line with each company's cybersecurity policies,” he concludes.