The code to enhance AI

Artificial intelligence (AI), once a promising trend, has become a fascinating reality today. According to data from McKinsey, the adoption of AI by companies jumped to 72% in 2024. Applied across personal use cases, organizations, and even governments, AI, especially generative AI (GenAI), is expected to keep growing rapidly, adding trillions of dollars to the global economy.

Although its benefits are undeniable, there are still murky aspects. A Deloitte survey found that many organizations believe new problems may arise from the expansion of AI pilot projects, unclear regulations regarding confidential data, and doubts about the use of external data (e.g., third-party data under license). Of the companies surveyed, 55% said they avoid certain AI use cases due to data-related issues, and an equal proportion are working to improve their data security.

Digital insecurity was a prominent topic at the 2024 World Economic Forum, which highlighted it as one of the main current risks, behind misinformation and disinformation, extreme weather events, and political polarization. The leaders interviewed noted that new tools and technological capabilities, such as those provided by artificial intelligence, are expected to make cybercrime harder to combat throughout this decade.

Prevention is still better than cure

The development of AI brings risks to organizations if it is not implemented correctly. However, well-designed artificial intelligence can not only prevent vulnerabilities but also become a highly effective tool for combating potential attacks. The first step is to keep in mind that the adoption of AI should occur in stages.

When protection is prioritized over detection, through preventive action, breaches become much easier to identify and control. Companies' main concern should be the security of their infrastructure: a robust AI platform with established components contributes to innovation, efficiency, and, consequently, a safer environment.

One strategy in this regard is the adoption of open source, today one of the main drivers of artificial intelligence. Open source has been the engine of innovation for decades, and by combining the expertise of developer communities worldwide with the power of AI algorithms, it unlocks immense potential for safe innovation. Open source solutions built on the open hybrid cloud give organizations the flexibility to run their AI applications and systems in any data environment, whether in public or private clouds or at the edge, ensuring greater security.

More than secure: a trustworthy AI

Several factors must be considered when mitigating risks. From the perspective of transparency and explainability, algorithms must be understandable. It is also essential to ensure that AI systems do not perpetuate biases. As a leader in open source solutions, at Red Hat we promote collaborative, open development models in which the community can audit and improve algorithms, facilitating the control and mitigation of biases in real time.

We are also committed to democratizing the use of AI through open source code and initiatives such as small language models, which allow more organizations to leverage AI without technological or cost barriers. A recent Databricks report showed that more than 75% of companies are choosing these smaller, customized open source models for specific use cases.

One example is open AI environments, which provide a flexible framework for data scientists, engineers, and developers to create, deploy, and integrate projects more quickly and efficiently. Platforms developed as open source have security built in by design, making it easier for organizations to train and deploy AI models under strict data protection standards.

Aligned with the future

Another concern of companies and society regarding the large-scale use of AI relates to sustainability. According to Gartner, AI is driving rapid increases in electricity consumption, and the consultancy predicts that 40% of existing AI data centers will be operationally limited by energy availability by 2027.

Optimizing the energy consumption of technological infrastructure is essential to reducing the carbon footprint and mitigating the effects of climate change, contributing to the goals of the United Nations (UN) 2030 Agenda. Projects like Kepler and Climatik, for example, are essential for sustainable innovation.

AI and its complements, such as GenAI and machine learning, can revolutionize (and are already revolutionizing) essential sectors through innovative solutions, such as automated medical diagnoses or risk analyses in the justice system. Together with other technologies such as quantum computing, the Internet of Things (IoT), edge computing, 5G, and 6G, AI will be the foundation for the development of smart cities, for unprecedented discoveries, and for writing a new chapter in history. But although all these solutions play a crucial role, we must always remember that it is people who develop, implement, and strategically apply them to solve specific problems, aligning technology and business.

Collaboration is therefore essential to mitigating risks and moving more safely toward a sustainable future built on the foundations of AI. Collaboration based on open source principles promotes transparency, an open culture, and community control, and drives the development of ethical, inclusive, and responsible AI technology in the short and long term.

Thiago Araki
Thiago Araki is Senior Director of Technology for Latin America at Red Hat.