On the eve of Qlik Connect 2025, the Qlik AI Council is aligning around a clear message for the industry: untrustworthy AI will not be scaled, and AI that can't be scaled is just theater. Their perspectives converge on a critical shift in corporate AI: the need to move beyond experimentation toward execution, with transparency, governance, and trustworthy data at its core.
Despite record investments in AI, most companies remain stuck in the lab. According to recent IDC research, while 80% of organizations plan to implement agentic AI workflows, only 12% feel prepared to support autonomous decision-making at scale. Confidence in AI outputs is eroding amid growing concerns about hallucinations, bias, and regulatory pressure. And as models become commoditized, competitive advantage is shifting not to those with the most advanced models, but to those who can operationalize AI with speed, integrity, and trust.
The Qlik AI Council emphasizes that trust must be built in from the start—not added later. Execution is the new differentiator, and it only works when the data, infrastructure, and outputs are verifiable, explainable, and actionable. In today's landscape, the companies that stand out won't be the ones that test the most—but the ones that deliver results.
"AI that operates without transparency and redress is fundamentally impossible to scale," says Dr. Rumman Chowdhury, CEO of Humane Intelligence. "It's not possible to incorporate autonomy into systems without incorporating responsibility. Companies that don't treat governance as core infrastructure will be unable to scale—not due to technological limitations, but due to failures of trust."
"We're entering a crisis of trust in AI," says Nina Schick, founder of Tamang Ventures. "From deepfakes to manipulated content, public trust is collapsing. If companies want to create AI that scales, they first need to build systems people believe in. This requires authenticity, explainability, and a deep understanding of the geopolitical risks of uncontrolled automation."
"The regulatory landscape is changing rapidly and won't wait for companies to catch up," says Kelly Forbes, Executive Director of the AI Asia Pacific Institute. "Executives need to understand that compliance is no longer just a legal safeguard. It's a competitive differentiator. Trust, auditability, and risk governance aren't restrictions—they're what make enterprise-scale AI viable."
"Last year's Nobel Prizes recognized the increasingly prominent role AI plays and will play in scientific discovery, from developing new medicines and materials to proving mathematical theorems," says Dr. Michael Bronstein, Professor of AI at DeepMind in the University of Oxford. "Data is the lifeblood of AI systems, and we not only need new data sources designed specifically with AI models in mind, but also need to ensure we can trust the data on which any AI platform is built."
"The market is lacking execution," says Mike Capone, CEO of Qlik. "Companies aren't falling behind because they lack access to powerful models. They're falling behind because they haven't incorporated reliable AI into the structure of their operations. Therefore, at Qlik, we've created a platform focused on decisive and scalable actions. If your data isn't trustworthy, your AI won't be either. And if your AI isn't trustworthy, it won't be used."
The Qlik AI Council's message is clear: AI is advancing rapidly, but trust comes first. The time to act is not next quarter. It's now. Companies that fail to operationalize trustworthy intelligence will fall behind — not for what they didn't build, but for what they couldn't scale.
To hear more from the Qlik AI Council and industry leaders driving trustworthy and scalable AI, join the Qlik Connect livestream this week.

