The code to empower AI

Artificial intelligence (AI), once a promising trend, has now become a fascinating reality. According to McKinsey, AI adoption by companies jumped to 72% in 2024. Applied in personal use cases, in organizations, and even in governments, AI, especially Generative AI (GenAI), is expected to keep growing rapidly, adding trillions of dollars to the global economy.

Although its benefits are undeniable, some aspects remain nebulous. A Deloitte study found that many organizations believe new issues may arise from the expansion of AI pilot projects, unclear regulations on sensitive data, and doubts about the use of external data (e.g., third-party licensed data). Of the companies surveyed, 55% said they avoid certain AI use cases because of data-related concerns, and an equal proportion is working to strengthen the security of their data.

Digital insecurity was a prominent topic at the 2024 edition of the World Economic Forum, which ranked it among today's main risks, behind misinformation and fake news, extreme weather events, and political polarization. The leaders surveyed noted that new tools and technological capabilities, such as those provided by artificial intelligence, should make the path to cybercrime more difficult over this decade.

Prevention remains better than cure

AI brings risks to organizations if it is not implemented correctly. However, well-designed artificial intelligence can not only prevent vulnerabilities but also become a highly effective tool for combating attacks. The first step is to recognize that AI adoption must occur in stages.

When protection is prioritized over detection, through preventive action, breaches become easier to identify and contain. Companies' main concern should be the security of their infrastructure. A robust AI platform with well-established components contributes to innovation, efficiency, and, consequently, a safer environment.

One strategy in this regard is the adoption of open source, now one of the main drivers of artificial intelligence. Open source code has been the engine of innovation for decades and, by combining the experience of developer communities around the world with the power of AI algorithms, it unleashes enormous potential for secure innovation. Open source solutions built on the open hybrid cloud give organizations the flexibility to run their AI applications and systems in any data environment, whether in public or private clouds or at the edge, with greater security.

More than just safe: trustworthy AI

Several factors must be considered when mitigating risks. From the perspective of transparency and explainability, algorithms should be understandable. It is also essential to ensure that AI systems do not perpetuate biases. As a leader in open-source solutions, we at Red Hat promote collaborative, open development models in which the community can audit and improve algorithms, making it easier to detect and mitigate bias in real time.

Furthermore, we are committed to democratizing the use of AI through open-source code and initiatives such as Small Language Models, which allow more organizations to leverage AI without technological or cost barriers. A recent Databricks report showed that over 75% of companies are choosing these smaller, customized open-source models for specific use cases.

One example is open AI environments, which provide a flexible framework for data scientists, engineers, and developers to create, deploy, and integrate projects more quickly and efficiently. Platforms developed in the open have security built into their design, making it easier for organizations to train and deploy AI models under strict data protection standards.

Aligned with the future

Another concern of companies and society regarding the widespread use of AI relates to sustainability. According to Gartner, AI is driving rapid increases in electricity consumption, with the consultancy predicting that 40% of existing AI data centers will be operationally constrained by energy availability by 2027.

Optimizing the energy consumption of technology infrastructure is essential to reduce the carbon footprint and mitigate the effects of climate change, contributing to the goals of the United Nations 2030 Agenda. Open source projects like Kepler and Climatik are essential to this kind of sustainable innovation.

AI and related fields such as GenAI and machine learning can revolutionize essential sectors, and are already doing so, through innovative solutions such as automated medical diagnostics or risk analysis in the justice system. Along with other technologies such as quantum computing, the Internet of Things (IoT), edge computing, 5G, and 6G, AI will be the foundation for developing smart cities, discovering unprecedented innovations, and writing a new chapter in history. However, while all these solutions play a crucial role, we must always remember that it is the people who develop, implement, and strategically use them to solve specific problems, aligning technology with business.

Collaboration is therefore essential to mitigating risks and moving forward more securely toward a sustainable future built on the foundations of AI. Collaboration based on open-source principles promotes transparency, an open culture, and community control, and it drives the development of ethical, inclusive, and responsible AI technology in both the short and the long term.