It's a fact: companies in Brazil have incorporated Artificial Intelligence into their business strategies—at least 98% of them, according to research conducted at the end of 2024. The problem, however, is that only 25% of organizations declared themselves prepared to implement AI. The rest struggle with limitations in infrastructure and data management, along with a shortage of specialized talent. But this doesn't mean the remaining 75% are waiting for ideal conditions to advance their projects: on the contrary, these companies are continuing to implement the technology.
The problem is that only one in five companies is able to integrate AI into its business, according to a recently released global report prepared by Qlik in partnership with ESG. Furthermore, only 47% of companies reported implementing data governance policies. These figures are global—and it would not be surprising if the Brazilian picture were even more concerning. And even though AI is currently applied in silos, and the technology's "entry point" is usually customer service, financial, regulatory, and reputational risks remain.
Companies that choose to implement AI without proper preparation face many obstacles. Case studies have shown that poorly managed algorithms can perpetuate biases or compromise privacy, resulting in reputational and financial damage. AI governance is not just a technological issue, but also one of execution and due diligence: without a well-defined strategy, risks grow in line with opportunities—from privacy breaches and data misuse to opaque or biased automated decisions that generate distrust.
Regulatory Pressure and Compliance: Foundations of AI Governance
The need to establish AI governance didn't just arise from the business front: new regulations are emerging, and progress has been rapid, including in Brazil.
In December 2024, the Federal Senate approved Bill 2338/2023, which proposes a regulatory framework for AI with guidelines for responsible use. The bill adopts a risk-based approach, similar to the European Union's, classifying AI systems according to their potential to harm fundamental rights. Applications posing excessive risk, such as autonomous weapons algorithms or mass surveillance tools, will be prohibited, while generative and general-purpose AI systems will be required to undergo risk assessments before reaching the market.
There are also transparency requirements, for example, requiring developers to disclose whether they used copyrighted content when training models. At the same time, there are discussions about assigning the National Data Protection Authority (ANPD) a central role in coordinating AI governance in the country, leveraging the existing data protection framework. These legislative initiatives signal that companies will soon have clear obligations regarding the development and use of AI—from reporting practices and mitigating risks to accounting for algorithmic impacts.
In the United States and Europe, regulators have increased scrutiny of algorithms, particularly since the popularization of generative AI tools sparked public debate. The AI Act has already entered into force in the EU, and its phased implementation runs until August 2, 2026, when most of the regulation's obligations become applicable, including requirements for high-risk AI systems and general-purpose AI models.
Transparency, ethics and algorithmic accountability
Beyond the legal aspect, AI governance rests on ethical and accountability principles that go further than simple compliance with the law. Companies are realizing that, to gain the trust of customers, investors, and society as a whole, transparency about how AI is used is essential. This entails adopting a series of internal practices, such as prior assessment of algorithmic impact, rigorous data quality management, and independent model auditing.
It is also critical to implement data governance policies that carefully filter and select training data, avoiding discriminatory biases that may be embedded in the collected information.
Once an AI model is operational, the company must conduct periodic testing, validation, and audits of its algorithms, documenting decisions and criteria used. This record has two benefits: it helps explain how the system works and enables accountability in the event of a failure or improper outcome.
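As a concrete illustration of what one such periodic audit check might look like, the sketch below computes a demographic parity gap: the difference in positive-prediction rates between demographic groups. The data, function names, and threshold are assumptions for illustration only, not part of any standard or of the practices the article describes.

```python
# Minimal sketch of one algorithmic audit check: demographic parity gap.
# All data and the 0.10 threshold below are synthetic/illustrative assumptions.

def positive_rate(preds, groups, group):
    """Share of positive predictions (1s) for one demographic group."""
    selected = [p for p, g in zip(preds, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(preds, groups):
    """Largest gap in positive-prediction rates across all groups."""
    rates = [positive_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Synthetic predictions (1 = e.g. credit approved) and group labels
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # hypothetical policy threshold set by the governance team
    print("Gap exceeds threshold: escalate for review and document the finding.")
```

In practice, a check like this would run on a schedule, and its result (the metric, the threshold, and the decision taken) would be logged, feeding exactly the kind of documented record the paragraph above describes.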
Governance: innovation with competitive value
A common misconception is that AI governance limits innovation. On the contrary, a good governance strategy enables safe innovation, unlocking AI's full potential responsibly. Companies that structure their governance frameworks early can mitigate risks before they become problems, avoiding rework or scandals that would delay projects.
As a result, these organizations reap greater value faster from their initiatives. Market evidence reinforces this correlation: a global survey found that companies with active leadership oversight of AI governance report superior financial impacts from the use of advanced AI.
Furthermore, we are at a time when consumers and investors are increasingly aware of the ethical use of technology – and demonstrating this commitment to governance can differentiate a company from the competition.
In practical terms, organizations with mature governance report improvements not only in security but also in development efficiency – executives point to reductions in AI project cycle time thanks to clear standards from the outset. That is, when privacy, explainability, and quality requirements are considered early on in the design phase, costly corrections are avoided later.
Governance, then, acts as a guide for sustainable innovation, guiding where to invest and how to scale solutions responsibly. And by aligning AI initiatives with the company's corporate strategy and values, governance ensures that innovation always serves the larger business and reputational objectives, rather than following an isolated or potentially harmful path.
Developing an AI governance strategy is, above all, a strategic move for competitive positioning. In today's ecosystem, where countries and companies are locked in a technological race, those who innovate with confidence and credibility lead the way. Large companies that establish efficient governance systems are able to balance risk mitigation with maximizing AI's benefits, rather than sacrificing one for the other.
Finally, AI governance is no longer optional but a strategic imperative. For large companies, creating a governance strategy now means defining the standards, controls, and values that will guide the use of artificial intelligence in the coming years. This involves everything from complying with emerging regulations to creating internal ethics and transparency mechanisms, aiming to minimize risk and maximize value in a balanced manner. Those who act promptly will reap the rewards in consistent innovation and a solid reputation, positioning themselves ahead in an increasingly AI-driven market.