How are customer service metrics changing in the age of AI?
Table of contents:
- First response time (FRT)
- Average handle time (AHT)
- Cases handled
- Automated resolution rate
- First contact resolution (FCR)
- Time to resolution (TTR)
- Content views
- Customer satisfaction (CSAT)
- Net promoter score (NPS)
- Customer effort score (CES)
- Internal quality score (IQS)
- Return on investment (ROI)
- Bot involvement rate
- Bot engagement rate
- Conversational insights
What do all the best customer support teams have in common? An obsessive commitment to creating a great customer experience is a good first step, but that will only get you so far without one crucial ingredient: rigorous reporting on key customer service metrics.
Knowing how to find the signal in the data noise is what allows the best support teams to keep providing quality customer service, high customer satisfaction, and a high-performing team. But with AI transforming customer service as we know it, how should support leaders adapt their core metrics to get a true measure of success in this new era?
“Leaders looking to take advantage of the immense opportunity AI presents will need to think differently about metrics and KPIs”
The customer service landscape is experiencing a monumental shift as AI becomes more advanced. With the technology now enabling more compelling customer interactions and near-instant resolutions of many customer questions, support teams can focus on activities that create additional value for their customers. Recent research from Intercom’s State of AI in Customer Service: 2023 Report shows that investment in AI for customer service is quickly accelerating, with 69% of support leaders planning to invest more in AI in the year ahead.
Leaders looking to take advantage of the immense opportunity AI presents will need to think differently about metrics and KPIs to ensure that in an AI-first world, the true impact of customer service is being measured in the right way.
The evolution of traditional support metrics
While support metrics as we know them are evolving, they’ll remain essential to your team’s success. AI will fundamentally change the way support teams work, and some of the metrics that mattered to a last-generation support offering may become less relevant in a world where humans and AI work seamlessly together.
“It will be crucial to think about both the customer and teammate experience when assessing your current approach to reporting”
Customers’ expectations of support are also rapidly evolving as a result of AI offering lightning-fast answers and resolutions, which means support team service level agreements (SLAs) and benchmarks will need to be reset. Our own Customer Support team is already adjusting the metrics and benchmarks we use to measure success as our AI chatbot, Fin, resolves more and more of our customers’ queries.
To set your team up for success in this new age of customer service, it will be crucial to think about both the customer and teammate experience when assessing your current approach to reporting so you can ensure you’re keeping a pulse on the numbers that matter most.
Here are some of the key areas and metrics that will be impacted by these changes, along with our tips for adapting your reporting approach to take advantage of the opportunity that lies ahead.
How you measure successful customer interactions
For many customer service teams, generative AI technology – such as AI-powered chatbots – will become the first point of contact for customers seeking support. These bots are capable of offering quick and helpful answers and, when they don’t know the answer, can disambiguate a query and pass it on to a human support rep for further assistance.
With AI on the frontlines tackling your inbound support volume, some of the core metrics used to measure the speed and effectiveness of your support delivery will need to be adapted.
First response time (FRT)
“First response time” (FRT) is the time it takes for your team to send an initial response to a customer’s query.
Given that leading AI bots are capable of offering customers near-instant responses, slow response times – and lengthy wait times for customers – are becoming things of the past. This will dramatically change customer expectations; the assumption that response and ultimate resolution would require a wait will be replaced by the expectation of an immediate response and speedy resolution.
💡 Tip
To get an accurate read on both your team and AI bot’s performance, consider creating separate reports for “bot first response time” and “human first response time” for a holistic view of how quickly your customers are getting responses across the board.
When assessing an AI-human support experience, it will also be important to look at a broader set of metrics alongside first response time, such as average handle time, to understand how quickly customers’ issues are being resolved beyond the first point of contact.
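As a minimal sketch of that bot/human split, the report below averages first response times separately per responder type. The conversation fields (`first_responder`, `first_response_seconds`) are illustrative assumptions, not a real reporting API:

```python
def first_response_times(conversations):
    """Average first response time in seconds, split by bot vs human responder.

    Each conversation is a dict with a `first_responder` ("bot" or "human")
    and a `first_response_seconds` value; both field names are hypothetical.
    """
    by_responder = {"bot": [], "human": []}
    for conv in conversations:
        by_responder[conv["first_responder"]].append(conv["first_response_seconds"])
    # Average each group; None signals "no conversations of this type yet".
    return {
        responder: (sum(times) / len(times) if times else None)
        for responder, times in by_responder.items()
    }

convs = [
    {"first_responder": "bot", "first_response_seconds": 2},
    {"first_responder": "bot", "first_response_seconds": 4},
    {"first_responder": "human", "first_response_seconds": 600},
]
print(first_response_times(convs))  # -> {'bot': 3.0, 'human': 600.0}
```

The same split works just as well for average handle time and time to resolution, discussed further down.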
Average handle time (AHT)
“Average handle time” (AHT) measures the average time your team spends working on customer conversations, and is often used by support leaders to understand team capacity and staffing needs.
With AI bots resolving the majority of simple queries, your team will be dealing with more complex and time-consuming issues, so new benchmarks will need to be identified for the average handling time of customer conversations to make room for this adjustment.
💡 Tip
Similar to first response time, try creating separate reports for “average bot handle time” and “average human handle time” for a complete picture of how long it takes for your customers to get resolutions to their issues.
While you might see overall and bot handling times decreasing, human handling time will likely increase as a result of support reps dealing with trickier issues. If you see human handling time climbing, consider looking at other metrics, such as CSAT, to see if this is having a knock-on effect in other areas.
How you measure productivity
We know that in order to have an outsized impact in this new era of customer service, humans and AI will need to work together. AI should effectively be thought of as a new support rep on your team, and as such, it will be crucial to know how to measure its performance, as well as the domino effect it will have on your team’s capacity.
Deploying an AI chatbot will inevitably free up your team’s time to focus on other activities, such as consultative or proactive support, or knowledge management. With your team focusing on a wide range of tasks, the way in which you measure productivity and gauge your team’s capacity will need to be adapted.
Cases handled
“Cases handled” refers to the number of cases, tickets, or conversations handled by support agents. This can be measured on an hourly, daily, or weekly basis, and is often used as a measure of team performance and productivity.
Traditionally, support reps would be expected to handle a certain number of customer queries over a set period of time, so there would be a benchmark in place for evaluating team productivity. In the age of AI, that has been thrown into flux. Support reps are now tackling a much more complex set of customer issues, with the simple ones being resolved by AI bots. And given that complex cases often require more investigation and time investment, the number of cases handled per hour, day, or week is going to change.
The role of “customer support rep” is also becoming much more diverse, with reps getting more involved in other areas like help center content creation and knowledge management. With your team splitting their time between different tasks, the number of cases handled becomes a less relevant metric for assessing team productivity.
💡 Tip
Consider mapping out all of the other areas of impact your team can contribute to, and understand how each area can be factored into an overall system for measuring performance. By identifying these other areas of focus for your team, such as help center content creation or community moderation, you’ll be able to get a more accurate read on team productivity.
Automated resolution rate
“Automated resolution rate,” or “rate of automated resolution” (ROAR), measures the number of support tickets or conversations that are entirely resolved by automation, such as bots.
Prior to the release of AI-powered bots, automated resolution rate would consist of queries that were resolved by simple bots, or more advanced models built on machine learning, such as Custom Answers for Fin (formerly Resolution Bot).
Now, some of the most powerful AI bots on the market are able to automatically resolve up to a staggering 50% of customer queries, freeing up support teams to focus on the more complex queries that require a human touch. With the bot tackling up to half of the frequently asked or more common questions, support leaders are likely to see a significant jump in automated resolution rate in their reporting dashboards.
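The rate itself is simple arithmetic – conversations fully resolved by automation over all conversations. A minimal sketch:

```python
def automated_resolution_rate(bot_resolved, total_conversations):
    """ROAR: conversations entirely resolved by automation, as a
    percentage of all conversations in the same period."""
    if total_conversations == 0:
        return 0.0  # avoid division by zero before any volume arrives
    return 100.0 * bot_resolved / total_conversations

print(automated_resolution_rate(450, 1000))  # -> 45.0
```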
💡 Tip
With automated resolution rate soaring, it’s important to think about how else you can derive insight from this metric. For example, if your automated resolution rate has jumped from 15% to 50%, consider the knock-on effects this is having in other areas. How much time is your team saving? How much happier are your customers with the speed and quality of support?
On the other hand, if you’re noticing a drop in your automated resolution rate, there’s likely an underlying issue that needs to be addressed. This can indicate that your bot doesn’t have access to the right content it needs to answer customers’ queries. Consider auditing your help center to ensure your content is up to date and that your bot has everything it needs to help your customers.
First contact resolution (FCR)
“First contact resolution” (FCR) measures how often your customers’ queries are resolved after their first call, email, text, or chat session with your company’s support team.
Certain AI-powered bots, like Intercom’s Fin, use content in your help center to serve up relevant answers to your customers’ questions, and in many cases, are able to answer these questions on a first attempt. This not only means that your customers are getting support faster than ever before, but will also likely result in your first contact resolution rate increasing.
💡 Tip
With more customer queries being resolved in a single interaction thanks to your AI chatbot, you should start thinking early about other impactful work your team can do with the additional time the bot is freeing up, and how you can measure the success of this work. By scoping this work now, you can enable your team to upskill in new areas so when their time does start to free up, they can jump straight to impact and contribute to the business beyond standard support metrics.
AI is also offering customer service teams the opportunity to make support targets more competitive. For example, teams could offer real-time human support for certain issues or customers, or start working more proactively with customers on setup and activation.
Time to resolution (TTR)
“Time to resolution” (TTR) measures the average time it takes for a customer query to be fully resolved, from the time a ticket or conversation is opened to the point at which it is marked as “resolved” or “closed.”
As with many other metrics, time to resolution is going to be hugely impacted by the ability of AI bots to quickly resolve a large number of customer queries. It’s likely that the bot’s time to resolution will decrease, and human time to resolution will climb. This is to be expected, as your team will be dealing with more complex issues that take longer to get to the bottom of.
💡 Tip
Consider splitting out your reports by “bot time to resolution” and “human time to resolution” to understand how quickly common or simple queries are being resolved, as well as how long it takes for your team to resolve more complex ones.
As your AI bot begins to tackle more complex queries that involve a lot of back and forth, it will be important to understand how much time it takes to resolve those issues.
Content views
“Content views” is a measure of the number of times customers have viewed your help center content, for example, articles in your knowledge base.
Understanding how AI plays into your overarching self-serve support experience is important, so you should be looking at how customers are interacting with your help center articles to get a sense of how easily they’re able to find answers to their own questions. In an AI-first world, views of your help center articles might start to decrease as AI chatbots leverage the content to serve up answers to your customers instead of linking to the articles themselves.
💡 Tip
By monitoring the number of views your help center and support community content gets, you can understand whether customers viewing this content need to seek additional support after reading an article or post, or if it helped them to resolve their query. It’s helpful to set a time parameter around this – for example, if a customer doesn’t reach out to your team within 24 hours of viewing content, you can consider it a “deflection” of a potential support conversation.
Once you deploy an AI chatbot, the total volume of content views will likely start to decrease as your customers get help directly from your bot rather than having to go to your help center. If this happens, try to contextualize content views within your overarching self-serve support experience to understand how customers are getting help through different avenues.
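The 24-hour deflection rule of thumb above can be sketched as a small counting function. The data shapes here – `(customer_id, timestamp)` tuples – are assumptions for illustration:

```python
from datetime import datetime, timedelta

def count_deflections(views, contacts, window_hours=24):
    """Count content views that were NOT followed by a support contact
    from the same customer within the window -- treated as deflections.

    `views` and `contacts` are lists of (customer_id, timestamp) tuples;
    the 24-hour default mirrors the rule of thumb above and is adjustable.
    """
    window = timedelta(hours=window_hours)
    deflected = 0
    for customer, viewed_at in views:
        followed_up = any(
            c == customer and viewed_at <= t <= viewed_at + window
            for c, t in contacts
        )
        if not followed_up:
            deflected += 1
    return deflected

views = [("a", datetime(2024, 1, 1, 9)), ("b", datetime(2024, 1, 1, 9))]
contacts = [("a", datetime(2024, 1, 1, 15))]  # "a" reached out 6 hours later
print(count_deflections(views, contacts))  # -> 1  (only "b" was deflected)
```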
How you measure the customer experience
Naturally, all of the changes brought about by AI are going to transform the customer experience. Sure, your customers will get the benefits of faster, more efficient support, but they’ll also be interacting with new technology, so it will be crucial to monitor this new customer experience to ensure their needs are still being met.
Customer satisfaction (CSAT)
“Customer satisfaction” (CSAT) is a measurement that reveals how happy your customers are with your business and involves calculating the percentage of positively rated conversations out of the total number of conversations rated by your customers. CSAT surveys can range from in-depth to lightweight – from asking customers to rate an interaction from zero to 10 or sending them direct feedback questions, to letting them choose the emoji that best represents their experience.
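That calculation – positively rated conversations as a share of all rated conversations – can be sketched in a few lines. Note that the threshold for what counts as “positive” (here, 4 or above on a 1–5 scale) is an assumption and varies by team:

```python
def csat(ratings, positive_threshold=4):
    """CSAT: percentage of positively rated conversations out of all
    rated conversations.

    `ratings` holds scores (1-5) from rated conversations only; the
    cutoff for "positive" is an assumption, not a universal standard.
    """
    if not ratings:
        return 0.0  # no rated conversations yet
    positive = sum(1 for r in ratings if r >= positive_threshold)
    return 100.0 * positive / len(ratings)

print(round(csat([5, 4, 2, 5, 1, 4]), 1))  # 4 of 6 positive -> 66.7
```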
It’s no secret that customers have varying degrees of trust in bots as a whole. In the past, they’ve often led customers down decision-tree paths with no resolution, or caught them in an endless loop that they couldn’t get out of. Obviously, this isn’t an ideal experience for anyone. But recent advances in generative AI have begun to inspire more customer trust in bots, largely due to the fact that they’re able to communicate more effectively than traditional bots, and the expectation is that they have a higher likelihood of returning a helpful answer – fast.
Support teams are very cognizant of keeping a pulse on customer satisfaction as they lean more heavily on AI bots. And according to Intercom’s State of AI in Customer Service: 2023 Report, 58% of support leaders have seen improvements to their CSAT scores as a result of using AI and automation.
💡 Tip
It’s crucial that support teams can get a close read on how efficiently and effectively customers are getting help. CSAT plays a big part in this, so it’s important to understand how customers are rating conversations that your AI bot is involved in.
When looking at your CSAT reports, try to understand how conversations that the bot has been involved in are being rated – or if they’re being rated at all (it might transpire that customers are less inclined to leave ratings after interactions with bots than with humans). This will help you understand whether customers are happy with the interaction, the level of support the bot was able to provide, and how easy it was to get transferred to a member of your team if further help was needed. By digging deeper in these areas, you’ll be able to improve the bot’s performance and ensure your customers are consistently getting a great experience.
Net promoter score (NPS)
“Net promoter score” (NPS) is a metric that organizations use to measure customer loyalty toward their brand, product, or service. It is measured as a score ranging from -100 to +100.
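That -100 to +100 range comes from the standard NPS formula: the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6). A minimal sketch:

```python
def nps(scores):
    """Net promoter score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but toward neither group,
    which is why the result ranges from -100 to +100.
    """
    if not scores:
        return 0.0  # no survey responses yet
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 6, 3]))  # 2 promoters, 2 detractors of 5 -> 0.0
```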
Similar to CSAT, customer-centric companies place huge emphasis on monitoring their NPS. It enables them to temperature check customer attitudes towards their product or service, and build personalized engagement plans to, for example, connect a “detractor” – someone who has given a low score in their NPS survey – with someone on their team in order to understand their challenges and improve their experience.
AI-powered bots will now be included in the mix of services being reviewed by your customers in NPS surveys, so it will be crucial to understand the impact they’re having on your scores.
💡 Tip
Your NPS survey gives you the opportunity to drill down into elements of your product or service that the customer either likes or doesn’t like. Without assistance from AI, analyzing these comments can become very time-consuming. But luckily, AI now offers you the ability to quickly summarize the insights your customers are providing. Consider what questions you’d like to focus on and use AI to distill key learnings from your surveys.
Customer effort score (CES)
“Customer effort score” (CES) determines the amount of effort a customer has to make in order to have their request processed. This could include getting an answer to a question, having an issue resolved, fulfilling a product purchase, or signing a contract. CES can be measured using surveys to ask customers how difficult or easy it was to have their needs met, for example, on a sliding scale of “very easy” to “very difficult.”
CES is an important metric for support leaders to keep a pulse on, as customer happiness – and subsequently, loyalty and retention – often depend on how easy the customer finds it to work with your company. Traditionally, customer effort score surveys would be sent to customers at important milestones in their journey, such as after an interaction that led to a purchase or after an interaction with your support team, to find out how easy or difficult the experience was for them.
In this new world of AI-powered support, the goal is to reduce customer effort across the board even further. AI bots are capable of streamlining the support experience, offering fast, accurate answers to unblock customers and provide a delightful experience. However, you’ll need to understand exactly how AI is impacting the level of customer effort required, and if customers are experiencing a high level of effort in other areas.
💡 Tip
Consider sending out a customer effort score survey after a customer has interacted with your AI chatbot to understand how difficult or easy it was for them to get the help they needed. You can use these ratings to gauge whether your bot is meeting your customers’ needs and providing a smooth support experience, or dig deeper into potential points of friction to find ways of making the process easier for them.
How you maintain quality across your support
Quality assurance (QA) is a critical component of any support operation. In order to delight customers with a stand-out, consistent customer experience, you need to monitor how support is being delivered in your organization.
When it comes to evaluating the quality of support delivery, AI unlocks new opportunities to conduct analysis at scale. Every company has their own interpretation of what makes a “quality support experience,” but despite the subjective nature of how it’s measured, quality assurance will undoubtedly be transformed by AI.
Internal quality score (IQS)
An “internal quality score” (IQS) is a measurement of how well your team is delivering support, determined by people within your organization, rather than your customers. Internal reviewers score customer conversations based on how well they map to a set of criteria that are important to your company. This scoring system can be reflected in a “QA scorecard,” and is unique to each support team.
With the introduction of AI to the customer experience comes the need for an adapted QA process. Traditionally, internal quality scores would assess the performance of support reps, whereas now, there’s a heightened need to look at the overarching customer journey to understand if there are limitations within your product, if your processes are efficient, and if AI is effectively handing off conversations to your team.
Embracing AI to help with routine QA tasks like building samples or doing quality checks will empower support teams to scale their quality assurance process and ensure they’re consistently meeting a high bar of quality across their support offering.
💡 Tip
With IQS changing from a measure of individual performance to an indicator of service standards throughout the customer journey, consider adapting your QA criteria or scorecard to reflect the most important areas for your business.
For example, at Intercom, we split our scorecard into three sections:
- People: The old-school way of making sure our specialists are doing the right thing.
- Processes: Checks whether the processes we have in place are correct – this includes our AI chatbot Fin’s handover to our specialists.
- Product: What can we do to make our product better for the customer experience?
How you demonstrate value
It’s pivotal for any support team to be able to point to the value they’re creating for their business – as well as communicate that to their senior leadership team. In recent years, the perception of customer service organizations has shifted from being that of a “cost center” to a “value driver,” and in this dawning era of AI-powered support, it will be important to know how to continue demonstrating and communicating the value being created across the support org.
Return on investment (ROI)
Return on investment (ROI) is a metric used to understand the value of an investment versus its cost.
In many organizations, customer service has traditionally been seen as a cost center. For this reason, support leaders are highly cognizant of managing headcount, as well as using metrics like “cost to serve” in order to demonstrate ROI. With the arrival of generative AI, we anticipate a shift from these traditional ROI calculations towards the ROI of automation features, in particular.
“In this new era of customer service, being able to understand and report on the successes of AI and automation will be crucial”
Our research shows that 55% of support leaders are concerned about how to balance investment in AI with investment in existing support resources. It takes time to set up a great automation strategy, so for many support leaders, taking a step back and diverting resources away from the frontline and into an AI strategy can feel like a challenge. But, there is significant ROI to be made for support teams that do take the leap.
In this new era of customer service, being able to understand and report on the successes of AI and automation will be crucial. And with 68% of support leaders struggling to implement a baseline report or success metrics for costs saved by AI and automation, this is an area where forward-thinking teams should consider investing in upskilling.
💡 Tip
Consider calculating the time and cost savings AI and automation will bring to your team to demonstrate its value. For example, try calculating:
- The percentage of queries your team receives that could be handled by AI.
How to calculate: Divide the number of conversations closed in one message by the overall number of conversations in the same time period, then multiply by 100.
- The amount of time your team spends on conversation handovers each week.
How to calculate: Multiply the average time spent per handover by the number of handovers by the number of support reps on your team.
- The total time support reps spend drafting responses.
How to calculate: Multiply the average time spent writing a message by the number of queries by the number of support reps on your team.
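Those three back-of-the-envelope calculations translate directly into code. All input numbers below are placeholders you would swap for your own figures:

```python
def ai_handleable_share(closed_in_one_message, total_conversations):
    """Share of queries that could likely be handled by AI: conversations
    closed in a single message / all conversations in the period * 100."""
    return 100.0 * closed_in_one_message / total_conversations

def weekly_handover_time(avg_minutes_per_handover, handovers, reps):
    """Total weekly time (minutes) spent on conversation handovers:
    average time per handover x handovers x support reps."""
    return avg_minutes_per_handover * handovers * reps

def drafting_time(avg_minutes_per_message, queries, reps):
    """Total time (minutes) spent drafting responses:
    average time per message x queries x support reps."""
    return avg_minutes_per_message * queries * reps

print(ai_handleable_share(300, 1200))   # -> 25.0 (% of queries)
print(weekly_handover_time(5, 20, 10))  # -> 1000 (minutes per week)
```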
New metrics are emerging
In addition to the changes we’re seeing in traditional customer service metrics, new ways of measuring the success of support are also emerging as a result of AI. Support leaders looking to adapt their reporting approach should think about incorporating these new metrics to ensure they’re measuring the right things in this unfolding era of customer service.
Bot involvement rate
As you roll out an AI-powered bot, it will be important to understand its involvement or coverage rate, i.e. the percentage of conversations it’s involved in out of the total number of conversations your team receives.
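As a quick sketch, involvement rate is bot-involved conversations over total conversations:

```python
def bot_involvement_rate(bot_conversations, total_conversations):
    """Bot involvement (coverage) rate: conversations the bot took part
    in, as a percentage of all conversations your team received."""
    if total_conversations == 0:
        return 0.0  # avoid division by zero before any volume arrives
    return 100.0 * bot_conversations / total_conversations

print(bot_involvement_rate(820, 1000))  # -> 82.0
```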
💡 Tip
To get the most out of your AI chatbot, consider enabling it to be involved in as many customer conversations as possible. But, you’ll need to be thoughtful about cases where you don’t want the bot to be involved and would prefer to have a human-only experience, such as providing white-glove support to VIP customers.
Bot engagement rate
As with anything, it’s not only critical to know what’s working well across your support, but also what’s not. If customers are intentionally trying to leapfrog your bot to speak to someone on your team, there may well be opportunities to improve your bot’s performance.
💡 Tip
Try measuring your customers’ engagement rate with your AI chatbot and looking at markers like “next action taken” to understand if the bot is answering your customers’ questions, or if there are opportunities to improve the overall experience. For example, this could enable you to pinpoint potential knowledge gaps or evaluate the conversation design to ensure the bot is greeting your customers in a friendly, helpful way.
If customers do disengage, consider asking them for feedback to understand why. Armed with these insights, you can make informed changes to your bot experience to maximize impact.
Conversational insights
In addition to unlocking new levels of efficiency and time savings, AI also offers support teams the ability to analyze customer conversations in innovative ways. Now, AI can analyze your customer interactions in real time and at scale, enabling support teams to unearth previously unavailable insights and drive truly impactful “voice of the customer” programs in their organizations.
With the ability to distill insights from such large volumes of customer conversations, you can understand how your customers are feeling about their interactions with your business and empower your team to focus on providing proactive, personalized customer service.
💡 Tip
Use AI to do a thorough analysis of your customer conversations and use these learnings to:
- Identify areas for improvement across your support.
- Make other teams aware of recurring customer issues or pain points and champion the voice of the customer internally.
- Understand where your team can add even more value for your customers throughout their journey and focus on providing proactive support.
Setting your customer service team up for success
AI presents a huge opportunity for support leaders to enhance their reporting capabilities, unlock easier and more efficient ways to measure the quality of support and performance of their teams, and ensure customers are always getting the best possible experience. Additionally, by using AI to free up support reps’ time, support teams can focus on leveraging data they’re collecting to derive insights that can be used to improve their systems and processes, as well as share customer insights internally.
To get a true measure of success in this emerging age of customer service, it will be crucial to understand how your team is spending their time, and develop new ways of reporting on success in the areas that matter most to your business.