How support leaders measure (and improve) the quality of their customer support
The wider business world is finally realizing what many successful companies have known for some time: customer support is the face of your company.
At a time when your customer-facing interactions are more impactful than ever, it’s not good enough to provide “good enough” support; you need to provide high-quality experiences, every time.
Investing in the quality of your customer support experience is one of the most powerful ways to grow your company, but every company’s definition of “quality” is different. That’s why we partnered with Klaus, the conversation review and QA platform for support teams, to get a deeper understanding of what “quality support” actually looks like.
Together, we surveyed hundreds of CX professionals, team leads, managers, and executives to learn how they measure and improve support quality. We wanted to know: what are the industry standards for support teams? How are other customer-centric support teams measuring the impact of conversational support? And how can you put a QA process in place that actually tells you whether your support is moving the needle in the right direction?
The result is the Customer Support Quality Benchmark Report 2021. Get your copy now, or keep reading to learn some of our key takeaways about how support quality is changing – and what you need to do about it.
How the support landscape is changing
This last year has brought about an enormous shift in the way support teams work – and in customers’ expectations of that support. Increasingly, we’re seeing that customers want digital-first solutions that allow them to get the support they need, on their own terms. That means: meeting customers in the channels they’re already using; providing them with resources so they can self-serve when they want to; and offering personalized, empathetic, conversational experiences when they’re needed.
In our survey, we found that:
- Self-service support has become the second most popular support channel. 70% of customer support teams have a knowledge base, FAQ section, or other help docs that allow their customers to self-serve.
- Chat has overtaken phone support. 65% of customer service teams now offer support over chat, ahead of the 60% of businesses offering phone support.
- Proactive support is on the rise. 85% of customer service teams now offer proactive help, in the form of things like in-app onboarding, outbound messaging, or notifications.
But while the approach to customer support has undergone a transformation, the way that companies measure its success – and track its impact – hasn’t necessarily evolved to match these new goals.
So what’s the problem with the support metrics that most teams have been using up until now? It’s not that there’s anything wrong with them, exactly – it’s just that they’re not giving you the full picture.
Take time to close, for example. It only tells you the time spent; it doesn’t reflect the difficulty of the problem, the relationship with the customer, or whether the support rep went above and beyond to find the best solution. If you’re focusing on time to close as your primary support KPI, it could inadvertently encourage your support reps to prioritize the fastest, easiest short-term solve – not necessarily the one with the greatest long-term result for the customer.
Similarly, an irate customer who’s having issues with your product could leave a negative conversation rating because they’re frustrated with an answer they don’t like, even if your agent dealt with the issue in an empathetic and thoughtful way.
Timely responses and customer happiness are integral to delivering a quality support experience, of course. But as we can see, they’re limited in the information they provide. So what’s a quality-minded, customer-centric support team to do?
“Rather than focusing on any one figure, the trick is to curate a selection of KPIs that give you a 360º view of your team’s performance, and which align with your values as an organization”
Rather than focusing on any one figure, the trick is to curate a selection of KPIs that give you a 360º view of your team’s performance, and which align with your values as an organization. Using a combination of operational metrics that track things like timings and number of resolutions in tandem with more in-depth conversation quality reviews, assessing factors like empathy and tone, you can create a more comprehensive set of metrics that will give you a clearer picture of the overall quality of support your team is delivering.
Defining “quality”
“Quality” is a subjective term – and that’s okay. Every company has its own vision of what their customer-facing interactions should look and feel like. What matters is that each support interaction represents you as a company, and meets not just your customers’ expectations but also your own internal standards.
Nonetheless, we found that there are some common factors to look out for. According to our research, almost 86% of support teams check the proposed solution offered by the support rep as a sign of the conversation’s quality. Nearly 80% look for product knowledge, and almost 78% look to see that the agent followed internal processes correctly. This emphasis on accuracy makes sense; good support relies on helping the customer to do the thing they’re trying to do, in the best way for them.
“Almost 84% of support teams look for tone and empathy when they’re assessing quality”
Perhaps most interestingly, we learned that almost 84% look for tone and empathy when they’re assessing quality. This is a big indicator of the importance of human support within the Conversational Support Funnel, our framework for providing efficient, effective, and personal support at scale. It’s clear that support leaders are recognizing the importance of tone and empathy to great support experiences, especially when you’re trying to foster personalized, long-lasting customer relationships.
As more simple support queries can now be resolved with proactive and self-serve support, tone and empathy are the areas where your human support reps can really shine. And since the number of “difficult” conversations has risen significantly since the start of the pandemic, they’re also among the areas where your reps can make the most impact.
Another trend we saw is an increased focus on proactive support as an indicator of quality, with 44.2% of respondents saying that they look for support reps sending links to further reading to customers as a quality criterion. This is certainly one to watch, and it’s a great way to start building proactive support into your KPIs so you can begin to track and measure its impact – especially since 85% of customer service teams now report offering proactive help.
How support leaders are measuring quality
Once you know what you’re looking for – such as accuracy, tone and empathy, and proactivity – it’s time to map out those internal quality standards into an actionable, measurable framework. To do this, you can create a scorecard that will help you to track your own unique internal quality score (IQS) based on the criteria you value most.
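To make the idea concrete, here is a minimal sketch of what an IQS scorecard could look like in code. The criteria names and weights below are purely illustrative assumptions, not figures from the report – every team should substitute the standards it actually values.

```python
# Hypothetical IQS scorecard: criteria and weights are illustrative assumptions.
WEIGHTS = {
    "solution": 0.35,           # was the proposed solution correct?
    "product_knowledge": 0.25,  # did the rep know the product?
    "process": 0.20,            # were internal processes followed?
    "tone_empathy": 0.20,       # was the reply empathetic and well-toned?
}

def internal_quality_score(ratings: dict) -> float:
    """Weighted average of per-criterion ratings (each 0-100) for one conversation."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(WEIGHTS[criterion] * ratings[criterion] for criterion in WEIGHTS)

# One reviewed conversation, scored against each criterion.
review = {"solution": 100, "product_knowledge": 80, "process": 90, "tone_empathy": 70}
iqs = internal_quality_score(review)
```

Averaging these per-conversation scores over a week or month gives you a single trackable IQS figure, while the per-criterion breakdown shows exactly where coaching is needed.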
Equipped with that scorecard, you then need to put a process in place for implementing those reviews. How often will you perform them? How will you select which conversations to review? Who will do the reviews? Again, these will depend on your own team’s needs – but here are some industry insights to help guide you.
Who reviews?
In our research, we found that 43% of support teams have dedicated support QA specialists. The number of dedicated QA specialists scaled in proportion to the size of the support team.
In other cases, conversation quality reviews are performed by managers, trainers, or even peers (which is what we do here at Intercom).
“All our remotely onboarded agents appreciated the peer-to-peer exercise and they see it as a valuable way to learn best practices from their more experienced colleagues”
Izabela Gusa, Quality Manager at Avira, told us that since pivoting to working remotely, her team has started doing peer-to-peer reviews. “All our remotely onboarded agents appreciated the peer-to-peer exercise and they see it as a valuable way to learn best practices from their more experienced colleagues,” she says. “The distance we all expected to challenge us proved to release new ways of improving the quality of our conversations.”
How often?
We learned that 69% of support teams conduct regular conversation reviews, with 53% doing so on a weekly basis. If you’re looking to get more raw data to work with, however – for example, if you’re just starting out on your QA journey and want to get a solid baseline – consider setting daily review goals, which will bring in the biggest review volumes.
“Having fewer metrics on your scorecard means that you can build quality reviews into your daily routine with just a few extra minutes”
Don’t forget that the “who” and the “how often” are intrinsically linked. If you have a dedicated QA team, you’ll have more resources to work with, but if you’re relying on manager or peer-to-peer reviews, be mindful of their workloads. In this case, having fewer metrics on your scorecard means that you can build quality reviews into your daily routine with just a few extra minutes, and without overstretching your team.
Which conversations?
The vast majority of respondents (81.9%) review random samples of their support conversations. This is a useful way of getting an effective sample across the whole team, and maintaining consistency on an ongoing basis.
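Picking a random sample is simple to automate. The sketch below is a hypothetical helper (not from any QA tool’s API) showing one way to draw a fixed number of conversations from a review queue; the optional seed makes a given draw reproducible for auditing.

```python
import random

def sample_for_review(conversation_ids, n, seed=None):
    """Draw up to n conversations at random from the queue for quality review."""
    rng = random.Random(seed)  # seeded RNG so a draw can be reproduced later
    ids = list(conversation_ids)
    return rng.sample(ids, min(n, len(ids)))

# Hypothetical queue of last week's conversations.
queue = [f"conv-{i}" for i in range(500)]
picks = sample_for_review(queue, 10, seed=42)
```

Because every conversation has an equal chance of selection, the sample stays representative across agents and channels rather than skewing toward whichever tickets a reviewer happens to notice.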
“Almost 44% of teams find the silver lining in difficult situations by reviewing escalated conversations and turning them into learning opportunities”
We also found that over half of support teams use quality reviews as part of their onboarding for new agents. In these situations, you might review 100% of a new agent’s conversations at first, then gradually lower that percentage as they successfully onboard. Conversation reviews play an important role in coaching, too; 43.8% also use them to help correct poor agent performance.
Almost 44% of teams find the silver lining in difficult situations by reviewing escalated conversations and turning them into learning opportunities.
Providing quality support at scale
Defining what quality means for your team and building your team’s unique metrics around that definition means that you can be confident that you’re delivering a consistent, conversational support experience every time.
And while 82% of support teams believe that conversation reviews improve the quality of their customer service, building better relationships with your customers isn’t the only benefit. The same number of respondents also believe that conversation reviews are useful for their team members’ professional growth.
“By the end of 2021, 80% of support teams will have QA procedures in place”
Indeed, Scott Tran, founder of the customer support community Support Driven, has noticed that many support teams in the community “have increasingly adopted quality as a KPI to help make the case for specialization and career path development.” According to the report, one in three support teams surveyed say they’re currently tracking IQS, and 10% of support teams say it’s their most important metric. By the end of 2021, 80% of support teams will have QA procedures in place.
What makes quality such a powerful metric is that there’s simply no substitute for it. This is particularly potent when we take into consideration the uniquely human components that define “quality”, such as empathy.
Building this focus on quality into your support team’s KPIs means that every support interaction becomes an opportunity to create an impactful, personalized customer experience. And isn’t that what you should be measuring?