Artificial intelligence (AI), once a promising trend, has become a fascinating reality. According to data from McKinsey, AI adoption by companies jumped to 72% in 2024. Applied in personal use cases, in organizations, and even in governments, AI, especially generative AI (GenAI), is expected to keep growing rapidly, adding trillions of dollars to the global economy.
Although its benefits are undeniable, there are still nebulous facets. A Deloitte survey found that many organizations believe new problems may arise from the expansion of AI pilot projects, unclear regulations regarding confidential data, and doubts about the use of external data (for example, third-party data under license). Of the companies surveyed, 55% said they avoid certain AI use cases due to data-related issues, and an equal proportion is working to improve the security of their data.
Digital insecurity was a prominent topic at the 2024 World Economic Forum, which highlighted it as one of the main risks of the present, behind misinformation and fake news, extreme weather events, and political polarization. The leaders interviewed noted that new tools and technological capabilities, such as those provided by artificial intelligence, should make the path to cybercrime more difficult over the course of this decade.
Prevention is still better than cure
The development of AI brings risks to organizations if it is not implemented correctly. However, well-designed artificial intelligence can not only prevent vulnerabilities but also become a highly effective tool to combat potential attacks. To that end, the first step is to keep in mind that AI adoption should occur in stages.
When protection is prioritized over detection, through preventive actions, breaches become much easier to identify and control. Companies' main concern should be the security of their infrastructure. A robust AI platform with established components contributes to innovation, efficiency and, consequently, a safer environment.
One strategy in this regard is the adoption of open source, today one of the main drivers of artificial intelligence. Open source has been the engine of innovation for decades and, by combining the experience of developer communities worldwide with the power of AI algorithms, it unlocks enormous potential for secure innovation. Open source solutions based on the open hybrid cloud give organizations the flexibility to run their AI applications and systems in any environment, whether in public clouds, private clouds, or at the edge, ensuring greater security.
More than safe, a trustworthy AI
Several factors must be considered when mitigating risks. From the perspective of transparency and explainability, algorithms must be understandable. Furthermore, it is essential to ensure that AI systems do not perpetuate biases. As a leader in open source solutions, at Red Hat we promote collaborative and open development models in which the community can audit and improve algorithms, facilitating real-time bias control and mitigation.
Furthermore, we are committed to democratizing the use of AI through open source code and initiatives such as small language models, which enable more organizations to leverage AI without technological or cost barriers. A recent Databricks report showed that more than 75% of companies are choosing these smaller, customized open source models for specific use cases.
One example is open AI environments, which provide a flexible framework for data scientists, engineers, and developers to create, implement, and integrate projects more quickly and efficiently. Platforms developed as open source have security built into their design, making it easier for organizations to train and deploy AI models under strict data protection standards.
Aligned with the future
Another concern of companies and society regarding the large-scale use of AI is sustainability. According to Gartner, AI is driving rapid increases in electricity consumption, with the consultancy predicting that 40% of existing AI data centers will be operationally constrained by power availability by 2027.
Optimizing the energy consumption of technology infrastructure is essential to reduce the carbon footprint and mitigate the effects of climate change, contributing to the goals of the United Nations (UN) 2030 Agenda. Projects such as Kepler and Climatik, for example, are essential for sustainable innovation.
AI and related fields, such as GenAI and machine learning, can revolutionize, and are already revolutionizing, essential sectors through innovative solutions, from automated medical diagnoses to risk analysis in the justice system. Along with other technologies such as quantum computing, the Internet of Things (IoT), edge computing, 5G, and 6G, AI will be the foundation for the development of smart cities, for the discovery of unprecedented innovations, and for writing a new chapter in history. But although all these solutions play a crucial role, we must always remember that it is people who develop, implement, and use them strategically to solve specific problems, aligning technology and business.
Collaboration is therefore fundamental to mitigating risks and moving forward more safely toward a sustainable future built on the foundations of AI. Collaboration based on open source principles promotes transparency, an open culture, and community oversight, in addition to driving the development of AI technology that is ethical, inclusive, and responsible in both the short and long term.