Artificial intelligence (AI), once a promising trend, has now become a fascinating reality. According to data from McKinsey, AI adoption by companies jumped to 72% in 2024. Applied across various personal use cases, in organizations, and even governments, AI—especially Generative AI (GenAI)—is expected to continue growing rapidly, adding trillions of dollars to the global economy.
Although its benefits are undeniable, gray areas remain. A survey by Deloitte found that many organizations expect new challenges to arise from the expansion of AI pilot projects, unclear regulations on sensitive data, and doubts about using external data (e.g., third-party licensed data). Among the surveyed companies, 55% said they avoid certain AI use cases due to data-related concerns, and an equal proportion are working to improve their data security.
Digital insecurity was a prominent theme at the 2024 World Economic Forum, which ranked it among today's top risks, behind misinformation and disinformation, extreme weather events, and political polarization. The leaders surveyed warned that new technological tools and capabilities, such as those provided by artificial intelligence, could ease the path into cybercrime over this decade.
Prevention remains better than cure
The development of AI poses risks to organizations if not implemented correctly. However, well-designed artificial intelligence can not only prevent vulnerabilities but also become a highly effective tool for combating potential attacks. To achieve this, the first step is to keep in mind that AI adoption should occur in stages.
When protection is prioritized over mere detection, through preventive action, breaches become easier to identify and contain. Companies' primary concern should be the security of their infrastructure. A robust AI platform with established components contributes to innovation, efficiency, and, consequently, a safer environment.
One strategy in this regard is the adoption of open source, now one of the main drivers of artificial intelligence. Open source has been the engine of innovation for decades, and by combining the expertise of developer communities worldwide with the power of AI algorithms, it unlocks enormous potential for secure innovation. Open-source solutions, built on an open hybrid cloud, give organizations the flexibility to run their AI applications and systems in any data environment, whether in public or private clouds or at the edge, ensuring greater security.
More than secure, a trustworthy AI
Several factors must be considered when mitigating risks. From the perspective of transparency and explainability, algorithms should be understandable. Additionally, it is crucial to ensure AI systems do not perpetuate biases. At Red Hat, as a leader in open-source solutions, we promote collaborative and open development models in which the community can audit and improve algorithms, facilitating real-time bias control and mitigation.
Moreover, we are committed to democratizing AI through open source and initiatives around small language models (SLMs), which enable more organizations to leverage AI without technological or cost barriers. A recent report by Databricks showed that over 75% of companies are choosing these smaller, customized open-source models for specific use cases.
Open AI development environments are one example: they provide a flexible framework for data scientists, engineers, and developers to create, deploy, and integrate projects faster and more efficiently. Platforms developed through open source have security built into their design, making it easier for organizations to train and deploy AI models under strict data protection standards.
Aligned with the future
Another concern for companies and society regarding large-scale AI use is sustainability. According to Gartner, AI is driving rapid increases in electricity consumption, with the consultancy predicting that 40% of existing AI data centers will be operationally limited by power availability by 2027.
Optimizing the energy consumption of technology infrastructure is essential to reduce carbon footprints and mitigate the effects of climate change, contributing to the goals of the United Nations (UN) 2030 Agenda. Open-source projects like Kepler (Kubernetes-based Efficient Power Level Exporter) and Climatik, for example, are essential for sustainable innovation.
AI and its complements, such as GenAI and machine learning, can revolutionize essential sectors through innovative solutions, such as automated medical diagnostics or risk analysis in the justice system, and indeed already are. Alongside other technologies like quantum computing, the Internet of Things (IoT), edge computing, 5G, and 6G, AI will be the foundation for developing smart cities, unlocking unprecedented innovations, and writing a new chapter in history. But while all these solutions play a crucial role, we must always remember that it is talented people who develop, implement, and strategically apply them to solve specific problems, aligning technology with business.
Collaboration is therefore essential to mitigate risks and advance more securely toward a sustainable future built on the foundations of AI. Collaboration based on open-source principles promotes transparency, an open culture, and community control, while also driving the development of ethical, inclusive, and responsible AI technology in the short and long term.