The code to enhance AI

Artificial intelligence (AI), once a promising trend, has become a fascinating reality. According to data from McKinsey, the adoption of AI by companies jumped to 72% in 2024. Applied in a wide range of use cases by individuals, organisations and even governments, AI, and especially generative AI (GenAI), is expected to keep growing rapidly, adding trillions of dollars to the global economy.

Although its benefits are undeniable, there are still hazy facets. A Deloitte survey found that many organisations expect new problems to arise as AI pilot projects expand, regulations around confidential data remain unclear and doubts persist about the use of external data (for example, third-party data under licence). Of the companies surveyed, 55% said they avoid certain AI use cases because of data-related issues, and an equal proportion is working to improve the security of their data.

Digital insecurity was a prominent topic at the 2024 edition of the World Economic Forum, which highlighted it as one of today's main risks, behind misinformation and fake news, extreme weather events and political polarisation. The leaders interviewed said that new tools and technological capabilities, such as those provided by artificial intelligence, should make the path of cybercrime more difficult throughout this decade.

Prevention is still better than cure

The development of AI poses risks for organisations if it is not implemented correctly. However, well-designed artificial intelligence can not only prevent vulnerabilities but also become a highly effective tool against potential attacks. The first step is to keep in mind that the adoption of AI should happen in stages.

When protection is prioritised over detection, with preventive action, breaches become much easier to see and control. Companies' main concern should be the security of their infrastructure. A robust AI platform with established components contributes to innovation and efficiency and, consequently, to a safer environment.

One strategy in this regard is the adoption of open source, today one of the main drivers of artificial intelligence. Open source has been the engine of innovation for decades and, by combining the experience of developer communities worldwide with the power of AI algorithms, it unlocks enormous potential for safe innovation. Open source solutions built on the open hybrid cloud give organisations the flexibility to run their AI applications and systems in any data environment, whether in public clouds, private clouds or at the edge, ensuring greater security.

More secure, more reliable AI

Several factors must be considered when mitigating risks. From the perspective of transparency and explainability, algorithms should be understandable. It is also essential to ensure that AI systems do not perpetuate biases. As a leader in open source solutions, we at Red Hat promote collaborative, open development models in which the community can audit and improve algorithms, facilitating real-time bias control and mitigation. A minimal sketch of what such an audit can look like follows below.
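For illustration only, the Python sketch below shows one of the simplest checks a community audit might run on a model's decisions: comparing positive-outcome rates across two groups (a demographic parity gap). The data, group labels and tolerance threshold are invented for the example, not taken from any real system.

```python
# Minimal bias-audit sketch: demographic parity gap between two groups'
# positive-decision rates. All data and the threshold are illustrative.
from collections import defaultdict

def selection_rates(groups, predictions):
    """Return the share of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit sample: group label and model decision (1 = approved).
groups      = ["A", "A", "A", "B", "B", "B", "B", "A"]
predictions = [1,    1,   0,   0,   1,   0,   0,   1]

rates = selection_rates(groups, predictions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")
if gap > 0.2:  # illustrative tolerance, not an established standard
    print("Warning: selection rates diverge; review the model for bias.")
```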

Furthermore, we are committed to democratising the use of AI through open source code and initiatives such as small language models, which allow more organisations to take advantage of AI without technological or cost barriers. A recent Databricks report showed that over 75% of companies are choosing these smaller, customised open source models for specific use cases.
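As a rough sketch of what adopting a small open model can look like in practice, the snippet below loads a compact open model locally with the Hugging Face transformers library; distilgpt2 is used only as a stand-in for whatever small model an organisation has vetted, and the prompt is invented for the example.

```python
# Sketch: running a small open language model locally with Hugging Face
# transformers. distilgpt2 is a placeholder for any vetted small open model.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Open source AI matters to enterprises because"
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```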

One example is open AI environments that give data scientists, engineers and developers a flexible framework to create, deploy and integrate projects more quickly and efficiently. Platforms developed in the open have security built into their design, making it easier for organisations to train and deploy AI models under strict data protection standards.

Aligned with the future

Another concern for companies and society about the large-scale use of AI is sustainability. According to Gartner, AI is driving rapid increases in electricity consumption, and the consultancy predicts that 40% of existing AI data centres will be operationally constrained by energy availability by 2027.

Optimising the energy consumption of technology infrastructure is essential to reduce the carbon footprint and mitigate the effects of climate change, contributing to the goals of the United Nations 2030 Agenda. Open source projects such as Kepler and Climatik, which help measure and manage the energy use of cloud workloads, are essential to sustainable innovation.
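As a hedged sketch of how such measurements might be consulted, the snippet below queries a Prometheus server for Kepler's per-container energy counters and prints an estimated power figure per namespace. The Prometheus URL is an assumption about the local setup, and the metric and label names may vary between Kepler versions and deployments.

```python
# Sketch: reading Kepler energy metrics from Prometheus. The server URL is an
# assumed local endpoint; metric and label names can vary by Kepler version.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # assumed endpoint
QUERY = 'sum by (container_namespace) (rate(kepler_container_joules_total[5m]))'

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": QUERY},
    timeout=10,
)
resp.raise_for_status()

# Joules per second is watts, so the rate gives an estimated power draw.
for sample in resp.json()["data"]["result"]:
    namespace = sample["metric"].get("container_namespace", "unknown")
    watts = float(sample["value"][1])
    print(f"{namespace}: ~{watts:.1f} W")
```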

AI and its complementary fields, such as generative AI and machine learning, can revolutionise, and are indeed already revolutionising, essential sectors through innovative solutions such as automated medical diagnostics or risk analysis in the justice system. Together with other technologies such as quantum computing, the Internet of Things (IoT), edge computing, 5G and 6G, it will be the foundation for developing smart cities, for unprecedented discoveries and for writing a new chapter in history. But although all these solutions play a crucial role, we must always remember that it is people who develop, implement and use them strategically to solve specific problems, aligning technology and business.

Collaboration is therefore fundamental to mitigating risks and moving forward more safely towards a sustainable future built on the foundations of AI. Collaboration based on open source principles fosters transparency, an open culture and community oversight, as well as the development of AI technology that is ethical, inclusive and responsible in the short and long term.

Thiago Araki
Thiago Araki is Senior Director of Technology for Latin America at Red Hat.