The real debate about AI: human oversight is essential

The public debate on artificial intelligence (AI) often gets lost in extremes: euphoria over total automation or fear of professional displacement. The real urgency, however, lies in human oversight. AI models are probabilistic and carry inherent error margins, yet they are increasingly used in critical contexts, from finance to healthcare, without proper curation. This practice is not just risky; it is technically flawed. Without rigorous validation, blind trust in AI can lead to serious failures with ethical, legal, and operational consequences. Human oversight is not an accessory; it is the foundation of responsible and sustainable technology use.
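
To make the point concrete, here is a minimal human-in-the-loop sketch in Python. Everything in it is hypothetical (the classify stand-in, the CONFIDENCE_THRESHOLD value, the review flag are invented for illustration): the idea is simply that high-confidence outputs can be automated, while uncertain ones are routed to a person rather than acted on blindly.

```python
"""Minimal human-in-the-loop sketch (illustrative only; all names hypothetical).
High-confidence predictions are automated; uncertain ones go to a person."""
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed policy value; tune per domain and risk level


@dataclass
class Prediction:
    label: str
    confidence: float  # model's probability for its predicted label


def classify(text: str) -> Prediction:
    # Stand-in for any probabilistic model call (hosted API, local model, etc.).
    # A toy heuristic plays the role of the model here.
    if "refund" in text.lower():
        return Prediction("billing", 0.97)
    return Prediction("other", 0.55)


def handle(text: str) -> str:
    pred = classify(text)
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return pred.label  # automate only the high-confidence cases
    # Uncertain output: escalate to a human instead of acting on a guess.
    return "NEEDS_HUMAN_REVIEW"


if __name__ == "__main__":
    print(handle("Customer asks for a refund"))     # -> billing
    print(handle("Ambiguous multi-topic message"))  # -> NEEDS_HUMAN_REVIEW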

The limits of AI are evident in practical applications. A Stanford University study of GitHub Copilot (2023) found that 45% of AI-generated code contains vulnerabilities or violates good development practices. Even when AI seems to work, issues persist: the solution may not be secure, may not comply with regulatory standards, and may not align with business objectives. Without rigorous testing and continuous validation, any output is mere guesswork.
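
As an illustration of why testing matters, the sketch below mimics a plausible-looking AI suggestion with a subtle bug. The function apply_discount and its test are invented for this example, but they show how even a tiny test suite is what separates working code from guesswork.

```python
"""Illustrative sketch: never accept AI-generated code without tests.
The function below looks reasonable at a glance but is wrong."""

def apply_discount(price: float, percent: float) -> float:
    # Plausible-looking AI suggestion with a subtle bug: it subtracts the
    # raw percentage instead of a fraction of the price.
    return price - percent  # correct would be: price * (1 - percent / 100)


def test_apply_discount():
    # 10% off 200.0 should be 180.0; the buggy version returns 190.0.
    assert apply_discount(200.0, 10.0) == 180.0


if __name__ == "__main__":
    try:
        test_apply_discount()
        print("ok")
    except AssertionError:
        print("caught a defect the generated code would otherwise have shipped")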

The belief in the infallibility of AI is fueled by commercial discourse and unrealistic expectations, but it ignores a fundamental truth: the technology relies on humans to interpret, adjust, and correct its outputs. In regulated sectors such as the legal field, a lack of oversight may violate laws like Brazil's General Data Protection Law (LGPD), which requires transparency in automated decisions. According to a 2023 McKinsey report, few companies are fully prepared for widespread use of generative AI, or, more precisely, for the risks these tools can bring to their businesses: only 21% of respondents who reported adopting AI said their organizations have guidelines governing how teams use these tools. In healthcare, the World Health Organization (WHO, 2023) warns that AI systems without human oversight can produce incorrect guidance, breach personal data, and spread misinformation.
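
One hedged sketch of what transparency in automated decisions can mean in practice is an auditable trail of every decision the system makes. The schema below (DecisionRecord and its fields) is hypothetical and not a compliance recipe, but without something like it an organization cannot even answer who, or what, decided.

```python
"""Minimal sketch of an audit trail for automated decisions (hypothetical
schema; illustrative only, not legal advice). Recording what was decided,
by which model version, and whether a human reviewed it is a precondition
for the kind of transparency laws like the LGPD demand."""
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    input_summary: str    # what the decision was about (avoid raw personal data)
    model_version: str    # which model produced the output
    output: str           # what the system decided
    confidence: float     # the model's reported confidence
    human_reviewed: bool  # was a person in the loop?
    timestamp: str        # when the decision was made (UTC)


def log_decision(record: DecisionRecord) -> None:
    # Append-only JSON lines; in production this would go to a secured store.
    with open("decisions.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    log_decision(DecisionRecord(
        input_summary="credit-limit request #1042",
        model_version="scoring-model-2024-06",
        output="approved",
        confidence=0.93,
        human_reviewed=False,
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))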

Oversight, however, faces significant challenges, and the shortage of qualified professionals is among the most critical. In a recent Bain & Company survey in Brazil, 39% of executives cited the lack of internal expertise as the main barrier to accelerating the implementation of generative AI, surpassing even concerns about data security.

It is not about denying the advances of the technology, which are substantial, but about recognizing that it still depends, and will continue to depend, on professionals capable of interpreting, adjusting, and, when necessary, correcting its outputs. Especially in regulated or high-impact sectors, such as finance, law, or healthcare, the absence of technical and ethical oversight can lead to serious legal and operational consequences. One study highlights this scarcity: Brazil trains only 53 thousand IT professionals per year, while demand between 2021 and 2025 is projected to total 797 thousand.

Global initiatives point the way to improvement. The UN's methodology for the ethical use of AI recommends human oversight throughout the entire life cycle of a system, from design to operation. Salesforce illustrates this in practice: its Einstein platform relies on ethics committees to audit algorithms. This approach shows that oversight is not only technical but also strategic, requiring transparency, accountability, and investment in training.

AI has the power to transform industries, but without human oversight its potential is overshadowed by ethical, legal, and operational risks. Cases like financial fraud and potential medical errors show that blind trust in the technology is unsustainable, while examples like Salesforce demonstrate that robust governance can maximize benefits and minimize failures. By 2025, the AI debate should prioritize oversight as a pillar of responsible innovation, addressing challenges such as costs, talent shortages, and cultural resistance. Leaders, companies, and regulators share the responsibility of building systems that combine the power of AI with human judgment, ensuring that the technology amplifies progress, not problems. The future of AI lies not in blind automation but in intelligent collaboration, and it is up to us to shape it with clarity, ethics, and commitment.