Industrial AI isn’t an algorithm problem; it’s a data problem. Discover what Honeywell HUG 2025 revealed about AI trust and data accuracy, as confirmed by Honeywell’s leadership and Adlib’s work with leading manufacturers.
AI Isn’t the Problem. Data Is.
This week, Adlib’s CEO Chris Huff and GM of International Markets Erikjan Franssen joined leaders across manufacturing, energy, and industrial operations at the Honeywell User Group (HUG) EMEA in The Hague.
As an OEM partner to Honeywell and the document and data accuracy layer powering many of its customers’ operational workflows, we had a unique vantage point on what industrial enterprises truly expect from AI in 2025. The consensus:
AI excitement is high, spending is up, but transformation is being bottlenecked by one issue above all others: data accuracy and trust.
This picture isn’t unique to HUG. In a recent IIOT World keynote, "Building Data Accuracy and AI Trust in Smart Manufacturing", Chris Huff highlighted that 95% of enterprise AI initiatives show no measurable ROI, despite significant investment. Meanwhile, Deloitte’s 2025 Smart Manufacturing Survey found that only 29% of manufacturers are using AI/ML at the facility or network level, and 23% are still in pilot mode, even as most report meaningful productivity gains when implementations work.
Gartner tells a similar story: through 2026, 60% of AI projects that aren’t supported by AI-ready data will be abandoned, and 63% of organizations are unsure they have the right data management practices for AI. Everest Group has likewise found that enterprises that rebuild trusted, governed data foundations can accelerate AI time-to-value by as much as 65%.
What HUG made clear is why this is happening and what leading manufacturers are doing about it.
Our conversations highlighted a paradox we’re now seeing in almost every industrial account: AI budgets are growing, but AI remains stuck in proof-of-concept.
From the Deloitte 2025 Smart Manufacturing Survey:
At HUG, leaders echoed exactly what Cesar Bravo, AI Solutions Director at Honeywell, and Erikjan Franssen from Adlib described in their interviews: the difference between experimentation and transformation is focus and discipline, not tools.
Cesar’s view was straightforward: you start with the challenge, then apply technology where it fits, not the other way around. Plants that define clear use cases, cleanse and structure the right data, and test in realistic conditions are the ones moving from pilots to production.
As Cesar put it, models are now more precise and more accessible than ever, but organizations “need good practice and methodology to deal with data, to preprocess the data, [and] clean data pipeline.”
The mindset shift is clear: AI is no longer an experiment. It’s an operational initiative.
And operational initiatives demand rigor: governance, lineage, quality, and interoperability. Not just another dashboard.
Chris’s take on the widespread belief that “industrial data isn’t ready for AI”:
"Much of the data isn’t unusable because it’s small, it’s unusable because it’s inconsistent, unstructured, or trapped in systems and documents never designed for AI."
Across the industrial base, operators see the same patterns:
As Erikjan highlighted, even the most digitized factories still depend heavily on external supplier and vendor documentation. If that content isn’t digitized, normalized, and validated, AI can’t learn from it.
In his recap of HUG conversations, Lucian Fogoros (Co-founder, IIOT World) described this as the difference between data volume and data usability. Manufacturers are swimming in data, but decades of logs, PDFs, manuals, and inspection records remain invisible to AI because they’re stuck in unstructured formats.
This matches current industry research:
At HUG, the message was blunt:
If your documents aren’t accurate, structured, validated, and governed, your AI won’t be either.
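In practice, "accurate, structured, validated, and governed" can be enforced as a gate that documents must pass before they ever reach an AI pipeline. The sketch below is purely illustrative; the field names (`source_system`, `approved_by`, etc.) and checks are assumptions, not Adlib's or Honeywell's actual schema.

```python
# Illustrative "trust gate" for documents entering an AI pipeline.
# Field names and rules here are hypothetical examples.

REQUIRED_FIELDS = {"source_system", "doc_type", "revision", "approved_by"}

def is_ai_ready(doc):
    """Return (ok, reasons): a document passes only if it carries
    governance metadata, extracted text, and a validation sign-off."""
    reasons = []
    missing = REQUIRED_FIELDS - doc.get("metadata", {}).keys()
    if missing:
        reasons.append(f"missing metadata: {sorted(missing)}")
    if not doc.get("text", "").strip():
        reasons.append("no extracted text (scan or unstructured file not processed)")
    if not doc.get("validated", False):
        reasons.append("not validated by a human or rules engine")
    return (not reasons, reasons)

doc = {"metadata": {"source_system": "DCS", "doc_type": "P&ID", "revision": "C"},
       "text": "Pump P-101 inspection record ...",
       "validated": False}
ok, reasons = is_ai_ready(doc)
print(ok, reasons)
```

The point isn’t the specific checks; it’s that rejection reasons are explicit, so every document that does reach the model carries an auditable trail of why it was trusted.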
A dominant theme in discussions with manufacturing and energy leaders: trust, not algorithms, is the biggest barrier to AI adoption on the plant floor.
Teams described scenarios where:
As Claudia Chandra, Honeywell’s CPO, and Chris Huff pointed out, the earliest sign of trouble is often not downtime but data decay: a piling up of “exceptions,” suspicious trends, and dashboard values that no longer match reality.
That’s why the best plants are beginning to treat data health like asset health, using AI to monitor and flag anomalies in data flows themselves, not just in equipment, and implementing what is essentially “predictive maintenance for data.”
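As a minimal sketch of what "predictive maintenance for data" could look like, the example below watches a data feed's own health metric (here, a hypothetical per-batch exception rate) and flags readings that deviate sharply from the recent baseline, the same rolling-anomaly idea plants already apply to equipment. The function name and threshold are illustrative assumptions, not a description of any Honeywell or Adlib product.

```python
# Illustrative "predictive maintenance for data": monitor the health of a
# data feed, not the equipment. A spike in parsing exceptions signals data
# decay before it corrupts downstream dashboards. All names are hypothetical.

from statistics import mean, stdev

def data_health_alerts(exception_rates, z_threshold=3.0, window=10):
    """Flag readings whose exception rate deviates sharply from the
    recent baseline, using a simple rolling z-score."""
    alerts = []
    for i in range(window, len(exception_rates)):
        baseline = exception_rates[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(exception_rates[i] - mu) / sigma > z_threshold:
            alerts.append(i)  # index of the suspicious reading
    return alerts

# A feed that is steady for twelve readings, then suddenly degrades:
rates = [0.01, 0.02, 0.01, 0.02, 0.01, 0.02,
         0.01, 0.02, 0.01, 0.02, 0.01, 0.02, 0.25]
print(data_health_alerts(rates))  # the final reading is flagged
```

A rolling baseline is deliberately simple: it needs no labeled training data, which matters when the whole problem is that the data cannot yet be trusted.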
At the governance level, highly regulated industries are now demanding:
As Chris summarized in multiple sessions: to scale AI, you must deliver both accuracy and usability. If AI outputs show up in someone’s workflow without clear validation, operators will do what they’ve always done... double-check manually or ignore the system entirely.
IIOT World - AI Frontiers 2025 - Keynote: Building Data Accuracy and AI Trust in Smart Manufacturing
SSON x Adlib - IDP Summit - AI Outcomes Start Upstream: Why Cleaning Document Pipelines Beats Tuning Prompts
One of the most compelling threads from HUG wasn’t just about 2025, but rather about where plants are heading by 2028.
In a joint discussion, Claudia Chandra and Chris Huff described a shift from data entry to data fluency:
In Chris’s words, when “the scribbles from an eight-hour shift” become structured, searchable insight, you standardize formats and get to a decision much faster.
By 2028, the plant won’t just raise an alarm. It will explain why something is happening and what to do next, provided the underlying data and document foundation is accurate and governed.
Another clear pattern from HUG: no one wants to build custom data accuracy and document-transformation tooling from scratch.
Across conversations, manufacturers told us they don’t have enough internal data engineers, can’t realistically integrate decades of fragmented data and documents on their own, and are still wrestling with legacy engineering and compliance content that generic tools simply can’t handle.
This aligns with findings Chris referenced from recent MIT and Everest Group research: external partnerships in AI and data are being deployed roughly twice as often as pure internal builds, because collaboration is the faster path to scale.
Examples that surfaced at or around HUG included:
The consistent message:
Everyone wants a plug-in accuracy and trust layer that makes AI successful, not another platform to maintain.
Across industrials, three outcome categories show up repeatedly. AI isn’t being justified on “innovation” alone; it’s being asked to move real, measurable needles.
Manufacturers expect AI to:
Recent Deloitte work suggests these expectations are realistic: respondents reported up to 20% gains in production output and employee productivity, and 15% more capacity, when smart manufacturing and AI are properly implemented on top of solid data foundations.
Honeywell’s customers (especially in energy, life sciences, and industrial manufacturing) lean heavily on documents to prove safety and compliance.
Leaders emphasize that AI must:
One global energy company, for example, has already saved over $1M in engineering hours and avoided compliance issues by automating CAD → PDF transformations and enforcing metadata governance at scale.
More and more enterprises are moving from:
“AI as automation” → to → “AI as advisor.”
They’re targeting:
But every leader we spoke with added the same caveat: none of this works if the historical and real-time data feeding the models is incomplete, inconsistent, or stale.
As Erikjan put it:
“AI is only as good as the documents and data that feed it. If those are wrong, every insight downstream is wrong too.”
Every insight from HUG reinforced the same conclusion:
AI cannot be trusted until the data and documents feeding it are trustworthy.
Adlib’s role is increasingly seen as foundational to the AI outcomes manufacturers want.
In practice, that means:
In Lucian Fogoros’ HUG pieces, Chris and Claudia described the next evolution as a world where plants not only run themselves, but “explain themselves” with AI that is transparent, testable, and rooted in accurate data.
Or, as Chris summarized it:
“AI won’t replace an employee. But an employee using AI will replace one who isn’t.”
That’s only true, though, if the AI they’re using is powered by data they can trust.
If there was one universal message across the recent events Chris attended, it was this:
AI will transform industrial operations, but only when accuracy, trust, and governance are solved first.
Enterprises don’t want more AI tools. They want AI they can rely on, driven by:
For Honeywell customers, and for industrial enterprises globally, the path forward is clear:
Adlib is proud to stand alongside Honeywell as the document and data accuracy layer that helps industrial enterprises move from pilots to production, with AI that not only acts, but explains itself.
Take the next step with Adlib to streamline workflows, reduce risk, and scale with confidence.