Transformation Playbook

Five things I'd do on day one of a contact center CX transformation

The five leadership decisions that turn an AI roll-out into a transformation.

Most contact center AI initiatives fail to become CX transformations. What gets you there is focus and speed. Focus, because no operation can change everything at once. Speed, because if the change isn't visible in weeks, the operation stops believing it's happening.

1

What benchmark should a CX transformation measure against?

Not the existing contact center. Benchmark against the experience customers actually want.

This is the conversation that has to happen in the first week. Most leaders skip it because it's uncomfortable.

Almost every AI launch measures itself against the current operation. If the AI handles a conversation as well as a human agent would have, it succeeds. This sounds reasonable, but it locks in mediocrity from day one.

The current CSAT, the current handle time: those numbers aren't a standard. They're what the operation learned to live with, and you can only see that from the inside.

Human agent shadowing

An agent toggles between four tabs to find a policy that should take ten seconds. The team lead pretends not to see that her best agent uses a personal Google Doc as her real knowledge base because the company one is too clunky. None of that is in the dashboards.

Make that the benchmark, and you digitize a broken process instead of redesigning it.

The right target is the experience customers would choose to pay for, at a cost the business can sustain. You cannot be excellent at everything. Great service starts by deciding what you'll be deliberately worse at. Everything else becomes a guiding light, not a goal.

This is what unlocks the courage to do the harder work. Killing rules that exist for organizational reasons rather than customer reasons. Some people in the room will hear it as criticism of their work. It isn't. It's a refusal to inherit the wrong frame.

2

How should you scope the first phase of an AI launch?

Volume × policy clarity × cost of being wrong. The workflows that score highest go in phase one.

The instinct on day one is to pick the most ambitious targets. The complex workflows. The edge cases. Six months later you have a partially working system on hard problems and no proof of value. This applies to both AI agents and AI QA.

The Scoping Formula
Volume × Policy clarity × Cost of being wrong
Highest scores → Phase one. Everything else → Next milestones.

Applied to one roll-out, the score qualified 5 of 15 candidate workflows, together covering 40% of volume. The other 10 were deferred. The only dates that matter are now and later, and to get to later, the team has to know exactly what success looks like.
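The scoring formula above can be sketched in a few lines. Everything here is illustrative: the workflow names, the 1–5 ratings, and the cutoff are hypothetical, and "cost of being wrong" is inverted into a `safe_if_wrong` rating so that low-risk workflows score high.

```python
def phase_one(workflows, cutoff=5):
    """Score volume x policy clarity x safety; split into phase one and deferred."""
    scored = sorted(
        workflows,
        key=lambda w: w["volume"] * w["policy_clarity"] * w["safe_if_wrong"],
        reverse=True,
    )
    return scored[:cutoff], scored[cutoff:]

# Hypothetical candidates, each factor rated 1-5
# (safe_if_wrong: 5 = cheap to be wrong, 1 = catastrophic).
candidates = [
    {"name": "order status",    "volume": 5, "policy_clarity": 5, "safe_if_wrong": 4},
    {"name": "password reset",  "volume": 4, "policy_clarity": 5, "safe_if_wrong": 5},
    {"name": "refund disputes", "volume": 3, "policy_clarity": 2, "safe_if_wrong": 1},
]

now, later = phase_one(candidates, cutoff=2)
```

Multiplying, rather than averaging, is the point: one low factor (refund disputes' fuzzy policy and high cost of error) sinks the whole score, which matches the "never trade one for another" rule below.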

5 workflows shipped in 6 weeks beats 15 attempted in 9 months.

Smaller scope keeps failure easy to spot, because with too many variables changing at once, you can't tell what broke. Inside each workflow, don't chase 100% coverage on day one. AI that handles the obvious cases cleanly, with reliable escalation for the rest, beats AI that takes everything and gets the judgment calls wrong.

Each variable kills a different failure mode. Volume protects against shipping AI on workflows that don't move the metric. Policy clarity protects against shipping AI on judgment calls it can't make consistently. Cost of being wrong protects against shipping AI in places where being wrong is catastrophic. Optimize all three. Never trade one for another.

The scope trap

Scope creep is a discipline problem.

Pick the five, and the requests to add a sixth start the next day, with new justifications every week. The sponsor's only job here is to keep saying no, same answer, every time. Without that, five becomes twelve by month two. That's how focus dies.

3

How should the success metric be defined?

In capability terms, not aggregate deflection. Handle the top customer intents above a defined accuracy threshold. Deflection is what happens when capability is real.

“We need 40% deflection by Q4” is the goal that ruins more AI work than any other. It sounds disciplined, but it corrupts the incentives.

A team given a deflection target hits the number in one of two ways. They route more traffic to AI even when the AI can't handle the additional intents. Or they let the AI deflect confidently in cases where it shouldn't. Both work on paper. Both ship bad customer experiences at scale. Both leave the executive sponsor wondering why CSAT is dropping while the deflection dashboard is green.

The right framing on day one: handle the top N customer intents at greater than 90% accuracy, with no-resolution (AI gave up) under 5% and false-resolution (AI was wrong) under 1%. Accuracy here means accuracy against customer intent and business policy. Not whether the AI produced a convincing answer.
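The three thresholds above compose into a single gate per intent. A minimal sketch, with hypothetical labels: `reviewed` is a list of QA verdicts for one intent, labelled by human review against customer intent and business policy, not by the model's own confidence.

```python
def capability_gate(reviewed):
    """Pass if accuracy > 90%, no-resolution < 5%, false-resolution < 1%.

    reviewed: list of QA labels for one intent, each one of
    'correct', 'gave_up' (no-resolution), or 'wrong' (false-resolution).
    """
    n = len(reviewed)
    accuracy = reviewed.count("correct") / n
    gave_up = reviewed.count("gave_up") / n   # AI gave up
    wrong = reviewed.count("wrong") / n       # AI was confidently wrong
    return accuracy > 0.90 and gave_up < 0.05 and wrong < 0.01

# Hypothetical sample of 100 reviewed conversations for one intent.
sample = ["correct"] * 96 + ["gave_up"] * 4
```

Note the asymmetry in the thresholds: giving up is tolerated at 5%, being wrong at only 1%, because escalation is recoverable and a confident wrong answer is not.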

The metric you choose on day one determines what your project becomes.

Capability-framed transformations expand by adding new intents one at a time, after each previous intent stabilizes. Deflection-framed projects expand on volume regardless of quality.

You can hit a deflection target with bad AI. You can't hit an accuracy target with bad AI. Pick the metric that doesn't lie.

4

What's the right relationship between an AI roll-out and existing tools?

Replacement, not addition. Most AI roll-outs add a layer, which compounds the mess. Successful transformations specify, on day one, which tools the AI replaces.

Watch what happens on most roll-outs. The AI gets bolted on. Three platforms become four. Agents get another tab. Six months later the operation is more complex than before the AI arrived.

If the AI roll-out doesn't reduce the agent's tool count, it's not a transformation. It's an addition.

On day one, name the tools the AI roll-out will retire. Make retirement a measurable success criterion, not a hoped-for downstream consequence.

Every tool removed is a context switch eliminated, a training burden reduced, an integration failure mode erased. The deeper benefit is architectural: a platform that consolidates controls its own data flow and can improve continuously. Stack consolidation is not a bonus. It's the work.

The vendor trap

Partnership, not just features.

When picking AI vendors, choose partners for their willingness to roll up their sleeves and help you train and launch. Not the ones with the slickest product, who ship software and disappear.

5

How should the team's relationship with failure be designed?

Build the team's relationship with failure in week one, before there is anything to fail at. The alternative is failing privately, in month six, in large ways that have already done damage.

Most operational cultures punish bad news. Project managers learn to manage perception. Status updates become exercises in framing. “On track” means “I haven't surfaced the problems yet.”

On day one, establish the rule explicitly: if you see something broken, name it. Brutal feedback is welcomed. The team isn't graded on whether things break. It's graded on how fast things that broke get surfaced and fixed.

The Friday cadence

Three sections, every week, public.

Every Friday update has three sections: what worked, what didn't, and what decision is needed from leadership this week. The third section is the one that matters most. Transparency without escalation paths is just visibility into problems no one is empowered to fix.

If the “what didn't work” section is consistently empty, the team is hiding things. If the “decision needed” section is consistently empty, the team has stopped believing leadership will decide.
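The two "consistently empty" signals above can be turned into a mechanical check. A sketch under assumed structure: each weekly update is a dict with the three sections, and the streak length of three weeks is a hypothetical choice, not a rule from the text.

```python
def flag_empty_streaks(updates, streak=3):
    """Flag Friday-update sections that have been empty for `streak` straight weeks.

    updates: one dict per week, oldest first, with keys
    'worked', 'didnt_work', 'decision_needed' (each a list of items).
    """
    flags = []
    recent = updates[-streak:]
    if len(recent) == streak:
        for section, warning in [
            ("didnt_work", "the team may be hiding things"),
            ("decision_needed", "the team may have stopped escalating"),
        ]:
            if all(not week[section] for week in recent):
                flags.append(f"'{section}' empty {streak} weeks running: {warning}")
    return flags

# Hypothetical run: three weeks of green-only updates.
weeks = [{"worked": ["shipped intent #3"], "didnt_work": [], "decision_needed": []}] * 3
flags = flag_empty_streaks(weeks)
```

A single empty week is noise; the check only fires on a streak, which is what "consistently" means here.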

The failures we surface this week are the ones we can fix cheap. The failures we hide this week are the ones that bury us next quarter.

The thing that holds it together: ownership.

A CX transformation needs one accountable owner. Not a steering committee, not a vendor, not a coalition of product, operations, and technology leaders.

One executive owns the customer outcome, the operating metric, the tool retirement plan, and the weekly escalation. Without that, none of the five decisions hold.

Built for CX transformations

Want to make these decisions with someone who's made them before?

Salted helps leaders scope milestones, define the metrics that matter, retire the right tools, and lead through what fails. Phase one and beyond, we've done all of it before.