Separating the Vendor Claims From the Actual Deployments
Every contact centre platform vendor is now describing their product as "AI-powered." That phrase has become so overused it covers everything from a basic keyword-matching chatbot to genuinely sophisticated real-time analysis. The gap between a vendor's AI marketing and what that AI actually does in production is often significant.
This article is based on what we're seeing actually work across contact centre deployments – operations ranging from 15 agents in a regional business to 200-seat outsourced service centres. The useful filter when evaluating AI capability is simple: can a vendor show you live production metrics from real customers showing measurable improvement, or are they showing you a demo and projections? The former is worth pursuing. The latter needs more scrutiny.
There are genuinely useful AI capabilities available right now. There are also capabilities that vendors are selling confidently but that routinely underperform in practice. It's worth being clear about which is which before you commit budget.
What's Working: Transcription and After-Call Summarisation
Real-time and post-call transcription is the most mature and consistently valuable AI capability in contact centres right now. Modern speech-to-text quality – using models from AWS, Google, Microsoft, or specialist providers like Speechmatics – has reached the point where accuracy on clear telephony audio runs 90–95%, which is good enough for most operational use cases.
The immediate practical value is in after-call work (ACW). Agents currently spend 2–5 minutes after each call entering notes, selecting wrap codes, and updating CRM records. For a 40-agent centre doing 200 calls per day, that's 400–1,000 minutes of productive time per day consumed by post-call administration – at a fully loaded cost of $35/hour per agent, roughly $4,700–11,700 in monthly ACW cost. AI summarisation that generates a draft call summary and pre-selects the most likely wrap code cuts that time by 50–70% in deployments we've seen, recovering in the order of $2,300–8,200 of capacity per month for a 40-seat operation.
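The arithmetic is worth sanity-checking for your own volumes. A short sketch, using the figures above (the ~20 working days per month is our assumption, not a fixed input):

```python
# ACW recovery estimate for a 40-agent centre handling 200 calls/day.
# Assumptions from the scenario: 2-5 min ACW per call, $35/hour fully
# loaded agent cost; ~20 working days/month is an illustrative guess.
CALLS_PER_DAY = 200
HOURLY_COST = 35
WORKING_DAYS = 20

def monthly_dollars(minutes_per_day: float) -> float:
    """Convert daily minutes of agent time into a monthly dollar figure."""
    return minutes_per_day / 60 * HOURLY_COST * WORKING_DAYS

# Total ACW burden: 400-1,000 minutes per day.
total_low = monthly_dollars(CALLS_PER_DAY * 2)   # ~$4,667/month
total_high = monthly_dollars(CALLS_PER_DAY * 5)  # ~$11,667/month

# Recoverable portion at a 50-70% reduction from AI summarisation.
rec_low = monthly_dollars(CALLS_PER_DAY * 2 * 0.5)   # ~$2,333/month
rec_high = monthly_dollars(CALLS_PER_DAY * 5 * 0.7)  # ~$8,167/month

print(f"ACW cost: ${total_low:,.0f}-${total_high:,.0f}/month")
print(f"Recoverable: ${rec_low:,.0f}-${rec_high:,.0f}/month")
```

Swap in your own call volume, ACW minutes, and loaded hourly rate to get a first-pass business case figure.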
The broader value of transcription builds over time. When you have full transcripts of every call, you can run analysis on call drivers, identify policy gaps, pick up compliance issues, and understand what your customers are actually asking about. Manual QA sampling gets you a 2–3% view of your calls. Automated transcription gets you 100%. That changes what you can know about your operation.
Average handle time reductions from transcription-driven ACW automation typically run 15–25% in well-implemented deployments. That's a material productivity gain. The caveat: it requires solid CRM integration. If the AI-generated summary lands in a separate tool that agents still have to copy across manually, most of the efficiency benefit disappears.
What's Working: Intent Detection for Smart Routing
Intent detection – using AI to understand what a caller or digital contact actually wants, before routing them – is another area with proven production results. Traditional IVR routing asks customers to navigate menu trees that don't match how they think about their problem. "Press 1 for billing, press 2 for technical support" works when every customer's issue fits neatly into your taxonomy. Most issues don't.
AI-powered intent detection, typically built into modern cloud contact centre platforms (Amazon Connect, Genesys Cloud, NICE CXone, Twilio Flex), works by analysing what the customer says in natural language – either in a voice IVR or in a digital channel – and routing based on understood intent rather than menu selection. In a well-trained deployment, misroute rates drop from 15–25% on traditional IVR to 5–8% on intent-based routing.
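The routing logic itself is simple once the NLU model has done its work. This is an illustrative sketch only – the named platforms configure this through their own flow tooling, and the `IntentResult` type here stands in for whatever the platform's model returns:

```python
# Hypothetical sketch of intent-based routing with a confidence fallback.
# Not any specific platform's API; intent names and queues are invented.
from dataclasses import dataclass

@dataclass
class IntentResult:
    intent: str
    confidence: float  # 0.0-1.0, as returned by the NLU model

# Map understood intents to skill queues, not menu positions.
INTENT_TO_QUEUE = {
    "billing_dispute": "billing",
    "cancel_service": "retention",   # churn-risk language -> retention skills
    "technical_fault": "tech_support",
    "order_status": "self_service",
}
CONFIDENCE_THRESHOLD = 0.75  # below this, don't guess

def route(result: IntentResult) -> str:
    """Route on understood intent; fall back to a general queue rather
    than misrouting on a low-confidence guess."""
    if result.confidence < CONFIDENCE_THRESHOLD:
        return "general"  # an explicit fallback beats a silent misroute
    return INTENT_TO_QUEUE.get(result.intent, "general")

print(route(IntentResult("cancel_service", 0.91)))   # retention
print(route(IntentResult("billing_dispute", 0.42)))  # general
```

The design point worth copying is the explicit confidence threshold: a low-confidence contact going to a general queue is a known, manageable outcome, where a confident misroute is a transfer.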
That improvement matters more than it might seem. Every misroute is a transfer, and every transfer adds 2–3 minutes to the average handle time of that interaction. In a centre handling 500 contacts per day with a 20% misroute rate, reducing misroutes by two-thirds avoids roughly 65–70 transfers and saves 130–200 minutes of handle time daily – two to three hours of agent capacity recovered without adding headcount.
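Working the transfer arithmetic through explicitly, using the contact volume, misroute rate, and per-transfer cost above:

```python
# Handle time saved by cutting misroutes, using the figures in the text.
CONTACTS_PER_DAY = 500
MISROUTE_RATE = 0.20         # traditional menu-tree IVR
REDUCTION = 2 / 3            # intent-based routing cuts misroutes by ~2/3
TRANSFER_COST_MIN = (2, 3)   # each transfer adds 2-3 minutes of handle time

avoided = CONTACTS_PER_DAY * MISROUTE_RATE * REDUCTION  # ~67 transfers/day
saved_low = avoided * TRANSFER_COST_MIN[0]   # ~133 min/day
saved_high = avoided * TRANSFER_COST_MIN[1]  # ~200 min/day

print(f"{avoided:.0f} avoided transfers -> "
      f"{saved_low:.0f}-{saved_high:.0f} min/day saved")
```

The same three inputs – daily contacts, misroute rate, minutes per transfer – give you the equivalent figure for your own operation.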
Intent detection also enables smarter prioritisation. A caller showing indicators of churn risk (specific language patterns associated with cancellation intent) can be flagged for routing to a retention-skilled agent rather than the general queue. That capability requires training on your specific customer language, but the models are fast to configure with modern tooling.
What's Working: Agent Assist for High-Variation Queries
Agent assist – real-time AI that surfaces relevant knowledge, suggested responses, and next-best-action prompts during a live interaction – works well in specific circumstances and poorly in others. The distinction matters before you invest.
Agent assist performs best where: the query space is varied and knowledge-intensive, the knowledge base is well-structured and current, and agents are handling a high volume of calls where they need to search for information mid-call. Technical support, complex financial products, insurance queries, and healthcare navigation are environments where agent assist shows strong ROI. A well-implemented agent assist deployment in a technical support centre can reduce average search time by 40–60 seconds per call where a knowledge lookup was required.
Agent assist performs poorly where: queries are simple and repetitive (agents already know the answers), the knowledge base is poorly maintained (AI surfaces incorrect or outdated content, which creates errors), or agents have a high base level of expertise (the AI suggests things they already know, creating distraction rather than assistance).
The failure mode most often seen in deployments that don't work is a poorly governed knowledge base. If your knowledge management is a SharePoint site with 400 documents that haven't been reviewed in 18 months, AI-powered agent assist will surface outdated content confidently. Clean up the knowledge base before deploying agent assist, not after.
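A basic staleness audit is a reasonable first step in that clean-up. A minimal sketch – the field names (`title`, `last_reviewed`) are illustrative, not any knowledge management product's actual schema:

```python
# Hypothetical sketch: flag knowledge-base articles overdue for review
# before letting agent assist index them. Field names are invented.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=18 * 30)  # ~18 months, per the scenario above

def stale_articles(articles: list[dict], today: date) -> list[str]:
    """Return titles of articles that haven't been reviewed recently."""
    return [
        a["title"]
        for a in articles
        if today - a["last_reviewed"] > STALE_AFTER
    ]

kb = [
    {"title": "Refund policy", "last_reviewed": date(2023, 1, 10)},
    {"title": "Current pricing", "last_reviewed": date(2025, 11, 2)},
]
print(stale_articles(kb, today=date(2026, 1, 15)))  # ['Refund policy']
```

However it's implemented, the point is the same: anything the audit flags gets reviewed or excluded before the AI can surface it to an agent mid-call.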
What Doesn't Work Yet: Fully Autonomous Voice Agents for Complex Service
Fully autonomous AI voice agents – replacing human agents for end-to-end complex service interactions – are being sold aggressively right now. The honest picture from live deployments is that the technology is not there yet for anything beyond scripted, low-complexity transactions.
Autonomous voice agents work adequately for: appointment reminders and confirmations, simple FAQ responses, payment collection on known balances, and order status updates. These are constrained interactions with predictable paths. For this subset of contacts, AI deflection rates of 60–80% are achievable.
Autonomous voice agents fail noticeably for: complaint handling, complex billing disputes, anything requiring empathy or judgment, multi-step troubleshooting with branching logic, and interactions where the customer's situation deviates from the expected path. The failure rate in these scenarios isn't marginal – it's high enough to generate customer frustration and negative feedback at scale. When AI voice agents fail on a complex interaction and the customer has to repeat everything to a human agent, you've made the experience worse than if the call had been answered by a human in the first place.
The customer tolerance for AI voice agents is also lower than vendor projections suggest. Australian consumers, based on contact centre research, show significantly higher dissatisfaction rates when they realise they're talking to an AI on a complex service issue compared to a straightforward transaction. That expectation gap creates real customer satisfaction risk if you deploy autonomous voice broadly before the technology is ready for it.
The right current-state strategy is AI augmentation – AI making human agents faster, more accurate, and more consistent – rather than AI replacement. The replacement case may arrive in three to five years as the technology matures. Deploying it now across your full contact volume is premature.
How to Evaluate a Vendor's AI Claims
When a contact centre platform or AI vendor presents AI capabilities to you, these are the questions that separate genuine capability from marketing positioning:
- Can you show me production metrics from a comparable deployment? Not a demo, not projections – actual data from a live customer. AHT reduction, ACW time saved, misroute reduction. If they can't produce this, that's informative.
- What does the model require to perform at the claimed level? Most AI capabilities require training data, clean knowledge bases, or integration work that isn't reflected in the headline feature claim. Understand the prerequisites before you assess the capability.
- How is accuracy measured, and what's the failure mode? Intent detection at 85% accuracy sounds good until you understand what the other 15% does. Does it route to a general queue? Does it fail silently? Does it create a dead end for the customer?
- Where does the data go? For Australian contact centres handling personal or sensitive information, understanding data residency for AI processing is mandatory. Some AI features in major platforms route audio or transcripts through overseas processing nodes. That may create Privacy Act obligations you need to manage.
- What's the integration requirement? AI that doesn't write back to your CRM, or that requires agents to use a separate tool, loses most of its practical value. Understand the integration architecture before committing.
CX Direct works with contact centres on AI capability assessment and implementation. We can help you separate what's ready to deploy from what needs more time, and build a sequenced roadmap based on your specific environment and customer base. Get in touch to discuss your contact centre AI plans.