Trust issues: How to help customers believe your AI agent
While scanning through conversations in our team inboxes recently, I realized that there are times when customers don't believe the responses from our AI agent Fin, even when Fin is correct.
These folks all asked to speak to a human team member, just to have the team member reiterate what Fin said.
In pretty much all of these cases, this human-to-human exchange was enough to resolve the query, which got me thinking: what is it about Fin that the customer doesn’t trust? And how can we bridge this gap?
This post originally featured in our CS newsletter, The Ticket.
Building trust in something new is never an easy thing to do. Getting customers to warm up to an AI agent is particularly challenging because customers are still influenced by years of frustrating experiences with chatbots that (let’s be honest) have been pretty rubbish.
Because of this, support teams today face two key challenges:
- Dissolving that historic distrust and changing how customers think and feel about interacting with AI.
- Learning a new way of providing support with an AI agent, and configuring the technology so that customers feel confident trusting it right off the bat.
While this is tricky, it’s a challenge that excites me because it shows how AI is pushing us to learn and think in entirely new ways. I’m still very much figuring this out alongside all of you, but here are some approaches that are working for my team.
Simplify, simplify, simplify
Support processes are, by and large, pretty robust. We've been conditioned to think that the more information we provide, the better. In reality, I'd argue this approach discourages people from reading what's in front of them, because we've hit them with so much at once.
“If the interaction with your AI agent is too complicated, of course the customer is going to tune out and automatically assume it can’t help them”
Think about calling a phone support line you know well. You don't want to sit through all the automated messages at the start; you just want to press the right button and get through to what you need.
If the interaction with your AI agent is too complicated, of course the customer is going to tune out and automatically assume it can't help them. We need to cut down on how much customers have to read and do, and make it easy for them to get their answer.
The good news is that the technology is evolving to make the full interaction between customers and AI agents more conversational and less clunky, while still collecting all the information your support team needs if the query ends up being routed to them.
Here’s some insight into how we’re working towards this with Fin Guidance and Workflows.
A snapshot look at how Fin Guidance can help facilitate more seamless interactions between customers and AI agents.
Fine-tune how your AI agent communicates
On the topic of information overload, it might be the case that your AI agent is providing your customer with the right answer to their question, but it’s getting buried in a verbose or poorly structured response.
There’s a real nuance here. You want to get to a point where your customer is getting the right amount of information to feel like their query is being resolved by the AI agent without needing a human to dig deeper, but you also don’t want to overwhelm them.
How you structure content in your knowledge base has a direct impact on the quality of your AI agent's answers, so homing in on this is important. How can you make your language sharper? How can you tweak your wording?
My best advice here would be to choose an AI agent that allows you to determine whether it answers concisely or comprehensively, to suit your unique business needs.
With Fin, you can customize how comprehensive its answers are.
Get your human team to validate your AI agent
The situation I described earlier, where customers have requested to speak to a human only to have the human give the same response as the AI agent, is absolutely fine. Reassure your team that building trust is a gradual process and something we have to be patient with.
Emphasize to your team that they are the people your customers already trust, and hearing them confirm the AI agent's responses is what will build customers' trust in new AI-powered support over time.
“Keep your focus on making it as easy as possible for your customers to get the help they need”
Here's a tip: Have your team be mindful of the exact wording they use to confirm your AI agent's solutions. We've found that being specific works better than generic validation. Instead of responding with something like, "Fin was correct," try something like, "The workflow steps Fin outlined are exactly what I guide customers through when setting up their first automation – what Fin suggested will get you up and running."
At the end of the day, there’s no shortcut to earning trust – whether that’s trust in your team, your product, or your AI agent. Keep your focus on making it as easy as possible for your customers to get the help they need.
Proving that this is your priority, whether customers are engaging with your AI agent or human team, is what will build genuine trust over time.