In many ways, 2023 was the year of Artificial Intelligence (AI) regulation. As early as May, the G7 Summit highlighted the importance of promoting global guardrails for advanced AI systems.
In August, it was China's turn to enact a law specifically addressing generative AI, with the aim of mitigating harms to individuals, maintaining social stability, and securing its long-term international regulatory leadership.
In the wake of this process, it fell to the United States, through then-president Biden, to issue an executive order guiding the application of AI with respect to reliability, security, and the protection of fundamental elements of American sovereignty.
The cherry on the cake, however, was the EU AI Act, provisionally approved in December 2023 and formally adopted in early 2024. Deeply debated and highly comprehensive, the Act holds the status of a regulation with international reach, designed as a legal framework for the development and application of AI systems across the bloc's member states.
In Brazil, Law No. 2,338 on Artificial Intelligence marks a turning point in the country's regulation of emerging technologies. On the whole, the law has positive aspects, but it also reveals a certain fragility in areas strategic to the development of Brazil's leadership in AI.
At the center of the Brazilian regulation are provisions drawn from the General Data Protection Law (LGPD), emphasizing the protection of personal data with a focus on privacy. The law thereby seeks to ensure that AI does not compromise individual rights. It also aims to encourage innovation, offering tax incentives and subsidies for companies that invest in AI research and development, with a view to positioning Brazil as a hub of technological innovation and stimulating competitiveness and the creation of startups in the AI sector. Regarding social impacts, digital inclusion and the ethical use of AI to reduce inequality are addressed through educational and training programs for vulnerable populations, preparing the workforce for the era of artificial intelligence. The goal is to mitigate the negative social effects of automation and promote a more equitable transition.
There are, however, negative points to highlight. The first is excessive bureaucracy: the requirement for multiple evaluations and certifications may burden companies, especially startups and small businesses, with additional costs and lengthy processes, discouraging innovation and the adoption of new technologies. Although the law's intentions are sound, critics point to ambiguity in certain provisions, which opens the door to conflicting interpretations and legal uncertainty. The lack of clarity regarding specific responsibilities and penalties will hinder its practical application. There are also concerns about the potential use of AI regulation for purposes of state control, which raises questions about the protection of civil liberties and the limits of state intervention.
We are, in any case, facing an important milestone in AI regulation. This regulatory component is necessary to strike a balance between the protection of rights, incentives for innovation, and the promotion of social inclusion. The law's effectiveness, however, will depend on its practical implementation and on its ability to mitigate the associated risks. Transparency, regulatory clarity, and constant oversight by civil society will be essential to ensure that the benefits outweigh the challenges.