Most AI and CX investments look convincing in a roadmap, business case, or vendor demo. The real question is whether they will work inside your actual organisation.
The AI Investment and CX Audit is a short, executive-level review. It identifies where AI and CX initiatives are likely to underperform, stall, or create unintended risk before major investment decisions are locked in.
AI sits on top of journeys, data, ownership, and design that already exist. When those foundations are weak, AI does not fix them. It amplifies them. Most AI failures are not model failures. They are environment failures.
Customers move across teams, channels, and systems that were never designed to work together.
What gets measured drives behaviour. AI optimised against the wrong metric will optimise against the customer.
Bad processes do not become good processes when wrapped in a model. They become faster bad processes.
When AI cannot resolve, the handoff matters more than the model. Most environments are not ready for it.
Architecture and roadmap shaped by what vendors sell, rather than what the operating model needs.
On paper, the solution looks modern. In reality, AI can amplify the cracks.
Failure used to be obvious. A bot that did not understand. A workflow that broke in front of the customer.
That has changed. Modern AI sounds correct, looks correct, and behaves with confidence even when it is wrong. Fluent systems fail more persuasively.
This is the gap between perceived comprehension and real understanding. Customers, agents, and executives all over-trust systems that present well, and the cost of that over-trust compounds quietly until it surfaces as churn, regulatory exposure, or hidden downstream demand.
From the book
"The phrase 'the system decided' is corrosive. It removes responsibility and denies recourse. Autonomous systems act on behalf of organisations. That relationship must be explicit in language, escalation, and recovery."
Designing AI Conversations at Scale · Ben Farrell
Each area is independent. Each is vendor-neutral. Each is designed to improve decision quality before money is committed.
Each area is examined for fit, readiness, and risk. The output is not a framework. It is a decision.
Whether the AI ambition matches the actual business strategy and customer reality.
Whether the journeys AI will sit inside are coherent, designed, and resilient.
What AI will see, what it will miss, and what the system actually knows about the customer.
Whether interactions are designed for cognition and trust, or simply for completion.
Who owns the AI, who governs it, and who is accountable when it behaves unexpectedly.
Where the chosen stack creates lock-in, fragility, or capability ceilings you cannot easily exit.
Whether the case for AI rests on real value, perceived value, or unexamined assumptions.
A short, considered document written for the people making the decision. Not a slide deck. Not a vendor brief.
A frank read on where the current direction is most likely to fail, and where assumptions are doing the work models cannot.
The two or three moves that genuinely change outcomes, separated from the long list of things that merely look like progress.
Sequenced actions that are honest about dependencies, ownership, and what the organisation is actually capable of in that window.
A working session with the executive sponsor or board, on request. Designed to land the decision, not the deck.
The advisory work is not implementation. It is not a vendor relationship. It is not a delivery shop.
It is a small number of senior leaders making large decisions, with a trusted external perspective in the room.
The focus is clarity, risk, and outcomes. The output is better decisions, made earlier, with fewer regrets.
The work is informed by enterprise reality: roles where AI and CX decisions carry real consequence, real budget, and real customers.
Ben Farrell is Head of Growth and Strategic Alliances, Webex CX Solutions APJC at Cisco. The advisory work sits alongside that role and is offered selectively, in a personal capacity, to organisations facing high-stakes AI and CX decisions.
The lens is practical. AI is treated as a behavioural and organisational problem first, and a technology problem second. The interest is in why systems fail in the wild, not why they succeed on the slide.
Indicative ranges. Final scope and pricing are confirmed in conversation, after the work is properly understood.
A short note is enough to start. Most engagements begin with a 30-minute confidential conversation, with no expectation of fit on either side.
What you share stays with Ben. Nothing is forwarded, distributed, or used outside the conversation.
Replies typically within two business days (Sydney time).
Ben will respond personally within two business days. If the matter is time-sensitive, mention it in your message and a faster reply will be arranged.