In the first part of this series, our friend and industry leader Barry O’Reilly laid out a fundamental challenge facing leaders today: moving AI from exciting prototypes to production-ready systems. He correctly identified that trust, not technology, is the real barrier. Barry’s closing question was stark: what would it take for your organisation to confidently flip the switch?
I want to pick up that thread from a more personal perspective, starting with a story that brought this question of trust into sharp focus for me.
As part of my own continuous learning, I was recently experimenting with a well-known AI prototyping tool. I fed it my research, crafted detailed prompts, and it generated a beautiful product concept—brand, vision, structure, the works. I was impressed with the speed.
Ready for feedback, I used another AI tool to crawl the app, checking for broken links. But by accident, it crawled a different URL and returned a report on a site that was, to my astonishment, nearly identical to the one I had just “created.” The same branding, the same wording, the same vision. The only material difference was the colour scheme, which I had selected myself.
It was complete, unintentional plagiarism.
My settings for the tool were private. I had explicitly opted out of model training. Yet, the model had seemingly reproduced an idea it had encountered before.
This experience gets to the heart of the trust deficit we all face. It’s not just about internal governance or the risk of employees misusing tools; it’s about the fundamental reliability and integrity of the AI models themselves.
Who can you trust when the black box has a memory you can't inspect? And how might an unshackled AI destroy others' trust in you through unplanned actions or omissions?
This incident highlights why so many organisations are getting stuck. In a previous article, I talked about the two phases of technology adoption: Explore and Execute. The “Explore” phase is all about learning and creating cases for further investment. The “Execute” phase is where you can attribute real tangible value to the work.
A lack of trust is trapping companies in an endless "Explore" cycle, investing without understanding the return or capturing real value. But we're now seeing two distinct patterns of this paralysis.
The first is the one I experienced: using flashy generative tools to spin up demos built on a foundation of sand. The nagging doubt means these prototypes can never make the leap to production.
The second pattern is almost the opposite. Companies with more financial firepower, terrified of the risks, call in the “big boys”—the large, traditional consulting firms. After a lengthy and expensive engagement, they are left with a series of sound governance steps, a perfect theoretical framework, but no actual output. They have a rulebook for a game no one is playing. They’re effectively back to square one, only now with a bloated process that makes it even harder to experiment.
In both cases, the result is the same: an innovation lab full of clever ideas that the business is too afraid to "switch on", as we've seen half a dozen times in the past few months alone.
So, what’s the answer? How do you bridge the gap between a promising idea and a production-ready reality, avoiding both the black-box trap and the governance quagmire?
The path forward is to experiment in a way that methodically builds trust from day one. This is why we’ve developed our “sandbox” approach. It’s a controlled environment designed specifically to deliver speed to value.
Crucially, this isn’t about simply putting a safe wrapper around the same black-box tools we don’t trust. The power of the sandbox is that it allows us to use our own trusted, transparent methods and technologies to rapidly test ideas at arm’s length from your core systems.
Here’s how it works in practice:
1 – We De-Risk the Environment. The work happens on our secure infrastructure, not yours. This immediately removes the primary blocker for internal governance and risk committees.
2 – We Use Your Data, Safely. We work with your real business data, but in a non-sensitive context. For a pension provider, this could be thousands of pages from past RFP documents—high-value, but no PII.
3 – We Prove Real-World Value, Quickly. This isn’t a theoretical demo or a governance document. We solve a tangible business problem, proving the value of the approach and building the business case for moving to the “Execute” phase.
Good governance needn’t be a bloated consulting job. We have built tooling that allows for the rapid inspection of output against governance and regulation. This makes compliance part of the creative process, not a gatekeeper. It transforms the conversation from “What are the risks?” to “What are the possibilities?”
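To make that concrete, here is a deliberately minimal, hypothetical sketch of what an automated output check can look like. This is not our actual tooling, and the patterns and banned phrases are invented for the example; the point is simply that inspecting output against a rule set can be a fast, repeatable step in the workflow rather than a gate at the end.

```python
import re

# Hypothetical rule set: a real governance tool would draw on policy
# documents, regulatory requirements, and an audit trail. These two
# checks exist purely to illustrate the shape of the idea.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK NI number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

# Example of wording a regulator might forbid in customer-facing copy.
BANNED_PHRASES = ["guaranteed returns"]

def inspect_output(text: str) -> list[str]:
    """Return a list of governance findings for a piece of AI output."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(f"possible {label} detected")
    for phrase in BANNED_PHRASES:
        if phrase in text.lower():
            findings.append(f"banned phrase: {phrase!r}")
    return findings

print(inspect_output("Contact jane.doe@example.com for guaranteed returns."))
```

Because a check like this runs in milliseconds, it can sit inside the creative loop: every draft the AI produces is screened as it is generated, and the conversation with the risk team starts from evidence rather than hypotheticals.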
Barry’s principles for building trust—Transparency, Accountability, Production-readiness, and Repeatability—are spot on. The sandbox is how we make them real without sacrificing momentum.
In a sandbox, we lift the lid on the process. It becomes a shared, understandable collaboration. We establish clear ownership and a “human-in-the-loop” from day one. We build the pilot with scale in mind, proving that the solution isn’t just clever, it’s viable.
Most importantly, the first sandbox project creates a playbook. It documents the process and the solutions, becoming your organisation’s guide to doing it again, faster and better. It’s how you build a real, internal capability and, finally, make the confident leap from Exploring to Executing.
The companies that succeed with AI won’t be the ones with the flashiest demos or the thickest governance binders. They will be the ones who build a systematic process for turning curiosity into capability. They will understand that trust isn’t something you have; it’s something you build. One deliberate, low-risk, tangible step at a time.
As for my prototype: whilst it won't see the light of day, good work never goes to waste. The 10-head "agent swarm", which acted more like an experienced pairing team than a trashy vibe-coding session, is ripe for the next challenge. We'd love to introduce you.