He is Head of Growth and Strategic Alliances, Webex CX Solutions APJC at Cisco, and has spent twenty years inside the problems that most AI strategies paper over: fragmented journeys, misaligned metrics, operating models that were never designed for what they are now being asked to do.
That work has shaped how millions of Australians interact with their banks, insurers, airlines, and government services. He helped design the conversations behind CommBank's Ceba, NRMA's Arlo and Nomi, and Jetstar's Jess, as well as customer-facing AI for Australia Post, the Department of Home Affairs, and FedEx.
The work is not theoretical. It comes from sitting inside enterprise environments where AI is being deployed at scale, watching where decisions get made, and developing a clear sense of which ones tend to fail and why.
AI does not fail in isolation. It inherits the environment it is deployed into.
That observation, and the body of thinking around it, led to the book, the diagnostic, and the advisory practice. Not as separate products, but as different entry points into the same underlying question: what does it actually take for AI in CX to work in the wild?
The advisory work is offered selectively and in a personal capacity, separate from his role at Cisco. It is vendor-neutral, focused on decision quality rather than delivery, and designed for the moment before major investment decisions are locked in.
The diagnostic grew out of a pattern he kept seeing: organisations describing their AI and CX challenges in very different language but exhibiting the same underlying behaviours. Six behaviours, recurring reliably across industries and geographies. That pattern is now a sixteen-question diagnostic used by leaders across enterprise CX, contact centres, and digital transformation.
The speaking work takes the same ideas to conferences, offsites, and leadership forums. Not as a vendor pitching a methodology, but as someone who has spent years thinking carefully about the gap between how AI is sold and how it actually performs.
The position he holds is a simple one: customer experience is cognitive and emotional work, not information delivery. AI is most dangerous not when it fails obviously, but when it fails persuasively: fluent systems that sound correct, optimise the wrong metrics, and create hidden downstream demand.
That means the questions worth asking are not about model capability. They are about the environment the model has to survive in. The journeys, the data, the operating model, the escalation design, the governance, and the people making decisions about all of the above.
An academic background in psychology and literature shapes a design philosophy that treats language not as an interface element, but as a carrier of power, trust, and institutional intent. Conversational AI does not just answer questions. It exercises authority: over access, over outcomes, over how customers understand what is and is not possible. That responsibility cannot be delegated to a model.
Currently based in Sydney. Speaking globally.