The year 2023 was, in many ways, the year of Artificial Intelligence (AI) regulation. In May, the G7 Summit highlighted the importance of promoting guardrails for advanced AI systems on a global basis.
In August, China enacted rules specifically targeting generative AI, aiming to mitigate serious harm to individuals, maintain social stability, and secure its long-term international regulatory leadership.
In the wake of this process, it fell to the US, through then-President Biden, to issue an executive order guiding the use of AI toward reliability, security, and the protection of fundamental elements of American sovereignty.
The icing on the cake, however, was the EU AI Act, provisionally agreed in December 2023 and formally approved in early 2024. Deeply debated and quite comprehensive, the Act stands as an internationally oriented regulation designed as a legal framework for the development and application of AI systems across the bloc's member states.
In Brazil, Law 2,338 on Artificial Intelligence marks an inflection point in the country's regulation of emerging technologies. Broadly speaking, the law has positive aspects, but it also reveals a certain fragility in areas that are strategic for building the country's leadership in AI.
At the center of the Brazilian regulation are provisions aligned with the General Data Protection Law (LGPD), emphasizing the protection of personal data with a focus on privacy. The law thus intends to ensure that AI does not compromise individual rights. It also seeks to encourage innovation, offering tax incentives and subsidies for companies that invest in AI research and development, an aspect intended to position Brazil as a hub of technological innovation, stimulating competitiveness and the creation of startups in the AI sector. Regarding social impacts, the law contemplates digital inclusion and the ethical use of AI to reduce inequalities, promoting educational programs and equitable job creation for vulnerable populations.
There are, however, negative points to highlight. The first is excessive bureaucracy: the requirement for multiple evaluations and certifications may overload companies, especially startups and small businesses, with additional costs and lengthy processes. This bureaucratic burden may discourage innovation and the adoption of new technologies. Although the law has worthy intentions, critics point to ambiguity in certain provisions, which opens the door to conflicting interpretations and legal uncertainty. The lack of clarity regarding specific responsibilities and penalties could hinder its practical application. There are also concerns about the potential use of AI regulation for purposes of state control.
We are, in any case, facing an important milestone in the regulation of AI. Such a regulatory framework is necessary to balance the protection of rights, the encouragement of innovation, and the promotion of social inclusion. However, the effectiveness of the law will depend on its practical implementation and on the ability to mitigate the associated risks. Transparency, regulatory clarity, and constant vigilance by civil society will be essential to ensure that the benefits outweigh the challenges.

