The public debate about artificial intelligence (AI) often gets lost in extremes: the euphoria of total automation on one side, the fear of professionals being replaced on the other. The real urgency, however, lies in human supervision. Probabilistic AI models carry inherent margins of error, yet they are increasingly deployed in critical contexts, from finance to health, without proper curation. This practice is not only risky but technically unsound. Without rigorous validation, blind trust in AI can lead to serious failures with ethical, legal, and operational impacts. Human supervision is not an accessory: it is the foundation of responsible and sustainable use of the technology.
The limits of AI are evident in practical applications. A study by Stanford University and GitHub Copilot (2023) found that 45% of AI-generated code contains vulnerabilities or violates development best practices. Even when AI appears to work, problems persist: the solution may not be secure, may not comply with regulatory standards, and may not align with business objectives. Without rigorous testing and continuous validation, any claim about an output's correctness is mere speculation.
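To make that concrete, the sketch below shows one way such validation might be wired into a pipeline: AI-generated code is treated as untrusted until it passes automated checks, and even then it only qualifies for human review rather than replacing it. The directory path and the choice of tools (pytest for tests, bandit for security linting, both assumed to be installed) are illustrative assumptions, not a prescription.

```python
# A minimal validation gate for AI-generated code: run the test suite and a
# security linter before a human reviewer ever sees the change.
# The target path and tool choices are illustrative assumptions.
import subprocess
import sys

def run_check(name: str, cmd: list[str]) -> bool:
    """Run one external check and report whether it passed."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    passed = result.returncode == 0
    print(f"[{name}] {'PASS' if passed else 'FAIL'}")
    if not passed:
        print(result.stdout or result.stderr)
    return passed

def validate_generated_code(target_dir: str) -> bool:
    """AI output is untrusted until every check passes; passing checks only
    qualifies the code for human review, it does not replace that review."""
    checks = [
        ("unit tests", ["pytest", target_dir, "-q"]),
        ("security lint", ["bandit", "-r", target_dir, "-q"]),
    ]
    return all(run_check(name, cmd) for name, cmd in checks)

if __name__ == "__main__":
    # Exit non-zero so a CI pipeline blocks the merge automatically.
    sys.exit(0 if validate_generated_code("src/") else 1)
```

The point of the design is the order of operations: automated checks filter out the obvious failures cheaply, so the scarce resource, human attention, is spent only on code that has already cleared the baseline.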
The belief in AI infallibility is fueled by commercial discourse and unrealistic expectations, but it ignores a fundamental truth: the technology depends on humans to interpret, adjust, and correct its outputs. In regulated sectors, such as the legal field, the lack of oversight can violate laws like Brazil's General Data Protection Law (LGPD), which requires transparency in automated decisions. According to a McKinsey report (2023), few companies appear fully prepared for the widespread use of GenAI, or, more precisely, for the risks these tools may pose to their businesses: only 21% of respondents who reported adopting artificial intelligence say their organizations have guidelines governing how teams use these tools. In health, the World Health Organization (WHO, 2023) warns that AI systems without human supervision can generate incorrect guidance, expose personal data, and spread misinformation.
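As one illustration of what transparency in automated decisions can look like in practice, the hedged sketch below appends an auditable record for every decision a model makes, so a person can later inspect it and sign off. The field names, the JSONL destination, and the hashing scheme are illustrative assumptions, not an LGPD compliance recipe.

```python
# A sketch of an audit trail for automated decisions, in the spirit of the
# transparency that laws like the LGPD demand. All field names and the
# example model name are hypothetical.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str       # which model produced the decision
    input_digest: str        # hash of the input, so raw data is not stored
    output: str              # the automated decision itself
    confidence: float        # model-reported confidence, if available
    reviewed_by: str | None  # human reviewer, filled in when oversight happens
    timestamp: str

def log_decision(model_version: str, raw_input: str, output: str,
                 confidence: float, path: str = "decisions.jsonl") -> None:
    """Append one decision to an audit log a human can later inspect."""
    record = DecisionRecord(
        model_version=model_version,
        input_digest=hashlib.sha256(raw_input.encode()).hexdigest(),
        output=output,
        confidence=confidence,
        reviewed_by=None,  # set when a person signs off on the decision
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage: record a credit decision for later human review.
log_decision("credit-scorer-v2", "applicant #1042 payload", "approved", 0.87)
```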
Supervision, however, faces significant challenges, and the shortage of qualified professionals is among the most critical. In a recent survey by Bain & Company in Brazil, 39% of executives cited the lack of internal expertise as the main barrier to accelerating the implementation of generative AI, surpassing even concerns about data security.
This is not about denying the advances of the technology, which are substantial, but about recognizing that it still depends, and will continue to depend, on professionals capable of interpreting, adjusting, and, when necessary, correcting its outputs. Especially in regulated or high-impact sectors, such as finance, legal, or healthcare, the lack of technical and ethical oversight can lead to serious legal and operational consequences. The Brasscom study underscores the shortage: Brazil trains only 53,000 IT professionals per year, while 797,000 will be needed between 2021 and 2025. At that pace, roughly 265,000 professionals would be trained over the period, barely a third of the projected demand.
Global initiatives point the way to improvement. The UN methodology for the ethical use of AI recommends human supervision throughout the system's lifecycle, from design to operation. Companies like Salesforce illustrate this in practice: their Einstein platform uses ethics committees to audit algorithms. This approach shows that supervision is not only technical but also strategic, requiring transparency, accountability, and investment in training.
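A simple engineering expression of lifecycle supervision is a human-in-the-loop gate at the point of operation: outputs the model is less sure about are escalated to a person rather than applied automatically. The sketch below assumes a model-reported confidence score and a 0.9 threshold, both illustrative; confidence scores are themselves imperfect, so a gate like this complements audits and ethics committees rather than replacing them.

```python
# A minimal human-in-the-loop routing rule: outputs below a confidence
# threshold wait for a person instead of being applied automatically.
# The 0.9 threshold and the example outputs are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    pending: list[tuple[str, float]] = field(default_factory=list)

    def escalate(self, output: str, confidence: float) -> None:
        print(f"escalated for human review (confidence={confidence:.2f})")
        self.pending.append((output, confidence))

def apply_with_oversight(output: str, confidence: float,
                         queue: ReviewQueue, threshold: float = 0.9) -> bool:
    """Apply the model's output only when confidence clears the threshold;
    everything else is queued for a human decision."""
    if confidence >= threshold:
        print(f"auto-applied: {output}")
        return True
    queue.escalate(output, confidence)
    return False

# Hypothetical usage: one confident output passes, one uncertain one waits.
queue = ReviewQueue()
apply_with_oversight("flag transaction #881 as suspicious", 0.97, queue)
apply_with_oversight("deny loan application #1042", 0.62, queue)
```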
AI has the power to transform industries, but without human oversight its potential is overshadowed by ethical, legal, and operational risks. Cases such as financial fraud and potential medical errors show that blind trust in the technology is unsustainable, while examples like Salesforce show that robust governance can maximize benefits and minimize failures. In 2025, the debate on AI should prioritize oversight as a pillar of responsible innovation, addressing challenges such as costs, talent shortages, and cultural resistance. Leaders, companies, and regulators share the responsibility to build systems that combine the power of AI with human judgment, ensuring that the technology amplifies progress, not problems. The future of AI lies not in blind automation but in intelligent collaboration, and it is up to us to shape it with clarity, ethics, and commitment.