By Rodrigo Cerveira
Think about artificial intelligence, and what's the first image that comes to mind? For many, the answer still evokes a clichéd, futuristic vision: robots, holographic interfaces, and cars that converse in a metallic voice. That was the era of explicit AI, a technology present in the final form of things, a signature of the plastic modernity we once imagined the future to be. That era, however, has come to an end. We are entering the era of ubiquitous AI: a resource woven into daily life, enabling more informed and efficient decision-making at every level.
This is the era in which artificial intelligence ceases to be the final product and becomes a means to an end, an invisible and omnipresent layer that optimizes processes in every domain. It is no longer the autonomous car itself, but the brain that recalculates the route in milliseconds to avoid traffic. In medicine, for example, there are already AI systems that identify bone fractures with greater precision than radiologists, or that detect early signs of diseases such as Alzheimer's years before the first symptoms appear. AI has become the connective tissue of our digital reality: less visible, yet immensely more impactful.
This omnipresence, however, brings with it fundamental questions. The first is about its reliability. As we delegate more tasks to these systems, to what extent can we trust them? The same AI that can save lives by analyzing an imaging exam can, in other contexts, “hallucinate”—a term that describes its tendency to fill in gaps with extremely plausible, but factually incorrect, information. The line between fact and well-constructed fiction has never been so thin.
The examples are as absurd as they are worrying. We saw a major airline's chatbot invent a non-existent refund policy, leading to the company being held legally responsible. Lawyers have already been embarrassed in court by citing legal cases completely fabricated by AI. And, in the realm of the bizarre, a search engine even suggested adding non-toxic glue to pizzas, a "tip" extracted from a satirical comment on the internet. These cases illustrate that AI, for now, lacks discernment or a commitment to truth; it is a pattern-matching machine.
And this brings us to the second question, perhaps the most critical one. With AI becoming such a fluid and integrated facilitator, who is actually checking its conclusions? The convenience of an instant result can lull us into a dangerous complacency, accepting its output without due rigor. Do decisions still pass through a real human filter, or are we slowly becoming mere approvers of algorithmic suggestions?
The answer to navigating this new era lies in redefining our relationship with technology. AI should be used as a brilliant facilitator, a tireless and ingenious intern, but not as the ultimate validator. The responsibility of checking, ensuring the provenance and validity of information before any publication or important decision remains, and always must remain, human. The era of ubiquitous AI is not about replacing human thought, but about augmenting it. It is up to us to use this power wisely and, above all, responsibly.