The advancement of AI requires a governance strategy.

It is a fact: companies in Brazil have incorporated artificial intelligence into their business strategy – at least 98% of them, according to surveys conducted at the end of 2024. The problem, however, is that only 25% of organizations say they are prepared to implement AI. The rest struggle with infrastructure limitations, poor data management, and a shortage of specialized talent. But that does not mean the other 75% are waiting for ideal conditions before advancing their projects: on the contrary, these companies continue to deploy the technology.

The problem is that only one in five companies manages to integrate AI into the business, according to a recently released global report prepared by Qlik in partnership with ESG. Additionally, only 47% of companies stated that they implement data governance policies. These numbers are worldwide, and it would be no surprise if the Brazilian figures were even worse. And even though AI is still applied in silos today, with customer service as the usual entry point, financial, regulatory, and reputational risks remain.

There are quite a few obstacles faced by companies that choose to implement AI without proper preparation. Case studies have already shown that poorly governed algorithms can perpetuate biases or compromise privacy, resulting in reputational and financial damage. AI governance is not only a technological issue, but also a matter of execution and due diligence: without a clearly defined strategy, risks grow in direct proportion to the opportunities, ranging from privacy violations and data misuse to opaque or biased automated decisions that breed distrust.

Regulatory pressure and compliance: foundations of AI governance

The need to establish AI governance has not arisen solely from the business side: new regulations are emerging, and the pace of progress has been rapid, including in Brazil.

In December 2024, the Federal Senate approved Bill 2338/2023, which proposes a regulatory framework for AI with guidelines for responsible use. The text adopts a risk-based approach, similar to that of the European Union, classifying AI systems according to their potential harm to fundamental rights. High-risk applications, such as autonomous weapon algorithms or mass surveillance tools, will be prohibited, while generative AI and general-purpose systems must undergo prior risk assessments before reaching the market.

There are also transparency requirements, for example, requiring developers to disclose whether they used copyrighted content in the training of models. In parallel, there is discussion of assigning the National Data Protection Authority (ANPD) a central role in coordinating AI governance in the country, leveraging the existing data protection framework. These legislative initiatives signal that companies will soon have clear obligations regarding the development and use of AI – from reporting on practices and mitigating risks, to being accountable for algorithmic impacts.

In the United States and Europe, regulatory bodies have heightened scrutiny of algorithms, especially after the popularization of generative AI tools, which have sparked public debates. The AI Act has already entered into force in the EU – and its implementation is due to be completed by August 2, 2026, when the majority of the regulation's obligations become applicable, including requirements for high-risk AI systems and general-purpose AI models.  

Transparency, ethics and algorithmic accountability

Beyond the legal aspect, AI governance encompasses ethical principles and accountability that go beyond "complying with the law". Companies are realizing that, to gain the trust of customers, investors, and society, transparency about how AI is used is necessary. This implies adopting a series of internal practices, such as the preliminary assessment of algorithmic impact, rigorous data quality management, and independent auditing of models.

It is also critical to implement data governance policies that filter and carefully select the training data, avoiding discriminatory biases that may be embedded in the information collected.  
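As an illustration, a minimal pre-training check of this kind might measure how each group of a sensitive attribute is represented in the data before a model is trained. The sketch below is only a hypothetical example: the `region` attribute and the 30% threshold are assumptions, and a real policy would define its own attributes and limits.

```python
from collections import Counter

def representation_ratios(records, sensitive_key):
    """Compute the share of each group of a sensitive attribute in the data."""
    counts = Counter(r[sensitive_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(ratios, threshold):
    """Return the groups whose share falls below a policy-defined threshold."""
    return [group for group, share in ratios.items() if share < threshold]

# Hypothetical sample of training records
data = [
    {"region": "southeast"}, {"region": "southeast"},
    {"region": "southeast"}, {"region": "north"},
]
ratios = representation_ratios(data, "region")
print(flag_underrepresented(ratios, threshold=0.3))  # → ['north']
```

A check like this would run as a gate in the data pipeline: if any group falls below the threshold, the dataset is sent back for review before training proceeds.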

From the moment an AI model is in operation, the company must conduct periodic tests, validations, and audits of its algorithms, documenting the decisions and criteria used. This record brings two benefits: it helps explain how the system operates, and enables attribution of responsibility in the event of any failure or improper result.
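One lightweight way to keep such a record is to log every automated decision together with its inputs, model version, and the criteria applied. The sketch below is a hypothetical structure under assumed field names (the `credit_scoring` model and its threshold are invented for illustration), not a specific tool or standard.

```python
import json
from datetime import datetime, timezone

def audit_record(model_name, model_version, inputs, output, criteria):
    """Build a structured, serializable record of one automated decision."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "inputs": inputs,
        "output": output,
        "criteria": criteria,  # e.g., the thresholds or rules applied
    }

record = audit_record(
    model_name="credit_scoring",  # hypothetical model
    model_version="2.3.1",
    inputs={"income": 4500, "history_months": 36},
    output={"approved": True, "score": 0.82},
    criteria="score >= 0.75 approval threshold",
)
print(json.dumps(record))  # append each record to a write-once audit log
```

Records of this shape serve both purposes mentioned above: they explain how the system reached a given output, and they make it possible to attribute responsibility when a decision is later contested.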

Governance: innovation with competitive value

A common misconception is that AI governance limits innovation. On the contrary, a good governance strategy enables safe innovation, unleashing the full potential of AI in a responsible manner. Companies that establish their governance frameworks early can mitigate risks before they become problems, avoiding rework or scandals that would delay projects.

As a result, these organizations reap more value from their initiatives and do so faster. Market evidence reinforces this correlation: a global survey identified that companies with active leadership oversight in AI governance report higher financial impacts from the use of advanced AI.

In addition, we are at a moment when consumers and investors are increasingly attentive to the ethical use of technology – and demonstrating this commitment to governance can differentiate a company from its competitors.  

In practical terms, organizations with mature governance report improvements not only in security, but also in development efficiency — executives point to reductions in AI project cycle times thanks to clear standards from the outset. That is, when privacy, explainability, and quality requirements are already taken into account at the design phase, costly corrections are avoided later.

Governance, then, acts as a guide for sustainable innovation, directing where to invest and how to scale solutions in a responsible way. And by aligning AI initiatives with the corporate strategy and the company's values, governance ensures that innovation is always serving the broader business and reputation objectives, and not following an isolated or potentially harmful path.  

Developing an AI governance strategy is, above all, a strategic move for competitive positioning. In the current ecosystem, where countries and companies are locked in a technological race, those who innovate with confidence and credibility lead. Large companies that establish efficient governance systems are able to balance risk mitigation with maximizing the benefits of AI, rather than sacrificing one for the other.  

Finally, AI governance is no longer optional and has become a strategic imperative. For large companies, creating a governance strategy means defining now the standards, controls, and values that will guide the use of artificial intelligence in the coming years. This involves everything from complying with emerging regulations to creating internal ethics and transparency mechanisms, with the aim of minimizing risks and maximizing value in a balanced way. Those who act promptly will reap rewards from consistent innovation and a solid reputation, positioning themselves ahead in a market increasingly driven by AI.

Claudio Costa
Claudio Costa is Head of the Business Consulting Business Unit at Selbetti.