From prototypes to production: Building trust in AI

With a month to go until we host a series of exclusive events with Barry O’Reilly, Barry takes over our blog to address AI hesitancy and AI trust as we shift from proofs of concept to production code!

In this two-part blog series, our Chief AI Officer David Low and Barry O’Reilly ruminate on the need for leaders to shift from piloting experiments to empowering their organisations to ship production solutions that capitalise on the potential of AI.

So without further ado, we hand over to Barry to discuss the question of trust, and how leaders can create environments that thrive in ambiguity.

The excitement and the fear

Every boardroom I enter is buzzing with AI. Leaders are excited, inspired, but also a little anxious about what it means for their industry.

Yet, when the moment comes to move beyond flashy prototypes and actually “switch it on” in production, hesitation creeps in.

Why? Because trust—not technology—is the real barrier that leaders are tussling with.

In heavily regulated industries like pensions, banking, energy and healthcare, the fear is real: Can we rely on this system? Will it hold up under scrutiny? Is it explainable? Are we even allowed to use it?

Too often, regulation doesn’t just slow progress—it freezes it. Organisations are either told they can’t try at all, or they’re only allowed to run trivial pilots that make little real impact.

The result: AI ends up locked away in a corner, impressive in demos on one ‘AI innovator’s’ machine but irrelevant in the real world, never reaching the parts of the organisation’s technology estate where it could make a real difference.

But how can leaders break through the barriers and start creating real change?

The prototyping trap

Today, we’re told anyone can build a prototype in minutes with no technical ability to speak of.

The internet and your LinkedIn feed are littered with flashy proofs of concept from the latest Generative AI tools, yet how many do you believe have made it to production?

The hard part remains: building systems that are scalable, evaluated, trusted, governed, and resilient enough to matter.

This dynamic plays out across industries: teams can quickly spin up demos in AI tools or with AI assistants. But put those demos under the stress of 500 real users and systems collapse. The leap from prototype to production remains the bottleneck.

It’s easy to prototype. It’s hard to productionise.

The governance dilemma

On top of the ‘prototyping trap’, we have the larger and more cumbersome challenge of good governance. Governance is meant to protect, but in many cases it also suffocates.

At one end of the spectrum, enterprises block AI tools like GitHub Copilot altogether, fearing what sensitive data the systems might ‘see’. At the other, individuals run unapproved proofs of concept on personal laptops, hidden from governance entirely. Both extremes are bad. Neither builds trust. So what’s the answer?

Getting past this governance gridlock is key to unlocking the innovation that gets trapped in the “tiny footprint”—safe but insignificant experiments that never scale.

The better approach

The path forward isn’t about avoiding risk—it’s about finding smart, low-risk entry points that build confidence and momentum.

Example 1: Pension RFP Analysis

Pension providers bid for massive contracts through highly complex RFP processes—thousands of pages long, governed to the letter. The stakes are enormous. The data is dense. But crucially: it’s not personal. It’s business information.

By applying AI to analyse RFP outcomes, providers can finally see where they win, why they lose, and how to improve. It’s low risk—no GDPR or PII concerns—yet high value.

AI transforms an administrative burden into a decision advantage. Suddenly, it’s not a prototype rotting in the corner; it’s a trusted tool shaping the future.

Example 2: Life Insurance and External Proof-of-Concepts

Some firms are so risk-averse they won’t even run a simple AI-driven marketing campaign in their own environment. One life insurance business is a case in point: their governance wouldn’t allow them to touch generative imagery themselves.

The workaround? Build the campaign externally. The risk is contained, the barriers are lowered, and they get to see the results in action. Once they experience the safety and value, they’re far more open to bringing it in-house.

Waracle has used this approach in other industries too, such as with a learning company last year. By running their proof of concept externally, we helped them test AI responsibly without risking sensitive customer data. That confidence became the bridge to scaling safely.

The lesson leaders are learning

That guided, real-world pilots are not toys. They’re deliberate, low-risk experiments that generate meaningful results.

In turn, they reduce fear, build confidence, and unlock the path from curiosity to capability.

Trust is the real product

David and the team at Waracle are reinforcing something vital: in this new era, trust itself is the product.

  • Trust that the system works under real-world loads.
  • Trust that governance is baked in from the start, not retrofitted later.
  • Trust that partners will deliver responsibly, not oversell capabilities they don’t actually have.

The truth is, most enterprises can’t do this alone. MIT research shows the majority of in-house AI efforts fail. Companies need experienced partners who can help them both imagine and implement. As David put it, the number one question clients ask is simple: “Have you done this in a regulated space?”

That’s why examples matter. They become trust signals. When one firm shows a safe, working prototype in a regulated environment, others take notice. Success creates a ripple effect of confidence.

Principles for building trust at scale

If organisations want to get out of the prototyping trap, they need to design for trust from day one. That means:

  • Transparency – Make it clear how insights are generated. Black boxes breed fear; clarity builds confidence.
  • Accountability – Map results to owners, decisions, and actions. Trust grows when people know who is responsible.
  • Production-readiness – Don’t build toys. Build with governance, compliance, and scale in mind—even at the pilot stage.
  • Repeatability – As David noted, the goal isn’t just doing one project—it’s ensuring you can do it again, faster and better, the next time.

Turning it on

Innovation isn’t just about moving fast—it’s about moving fast in ways people trust.

The companies that win won’t be the ones with the flashiest prototypes.

They’ll be the ones who figure out how to switch AI on in production, safely, responsibly, and at scale.

So ask yourself: what would it take for your organisation to confidently flip the switch?

Because the future won’t wait. And neither should you.

Authors

Barry O’Reilly
Technologist, Strategist, Entrepreneur & Author
