Intercom vs Zendesk: Two AI agents put to the test
We all know that generative AI is transforming the customer service industry.
AI agents are already handling customer queries with impressive accuracy, and the teams using the right AI solutions are seeing remarkable results. They’re resolving more issues faster and delivering better customer experiences, freeing human support agents to focus on more complex, high-value interactions.
However, identifying the right AI solution is not easy amid all the noise. Our research shows there are significant performance gaps when it comes to resolution rates, accuracy, and quality between different AI agents on the market.
We’ve repeatedly tested our Fin AI Agent against competitors’ offerings to ensure it performs optimally in every way. Here, we’ll show you how Fin compares to Zendesk’s AI agent and walk you through the research process to give you an in-depth understanding of why Fin is the superior choice.
Fin is the best AI agent on the market – with stats to prove it
Let’s start with some numbers.
When we put Fin head-to-head against Zendesk’s AI agent, the difference isn’t just noticeable – it’s remarkable. Three things in particular stood out:
In 80% of cases, Fin provided better answers across the board, demonstrating superior performance in accuracy, completeness, and overall quality. This isn’t just about getting the answers right; it’s about delivering the kind of experience that builds customer trust and loyalty.
“Fin can handle twice the number of complex questions Zendesk’s AI agent can, transforming what’s possible with automated support”
Fin is also much more capable of handling complexity. Unlike Zendesk’s AI agent, which defaults to basic responses when faced with challenging queries, Fin maintains natural conversations by asking clarifying questions. And with the recent addition of actions, Fin can now answer more types of questions. This sophisticated capability means Fin can handle twice the number of complex questions Zendesk’s AI agent can, transforming what’s possible with automated support.
Perhaps most impressively, when dealing with questions that require pulling information from multiple sources – the kind of query that typically needs human intervention – Fin achieves a 96% answer rate, significantly outperforming Zendesk’s 78%. For support teams, this means more queries resolved automatically, faster response times, and happier customers.
These numbers are compelling – but how did we get to them? Here’s an overview of the research process we followed.
How we compared Fin to Zendesk’s AI agent: A look at our evaluation process
Step 1: Setting the stage
To evaluate the AI agents in an unbiased manner, we needed a completely neutral dataset of help articles and relevant questions that we knew were grounded in the articles.
Using ChatGPT-4, we created a fictional bed and breakfast website with 48 comprehensive articles, all of which we loaded into both Fin and Zendesk’s AI agent to ensure a level playing field.
We also generated 200 customer questions based on the 48 articles. Some were straightforward, while others required piecing together information from multiple articles.
We asked all 200 questions to both Fin and Zendesk’s AI agent.
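To make this step more concrete, here is a minimal sketch of how grounded questions can be generated from help center articles using the OpenAI Python client. The placeholder article text, prompt wording, and per-article question count are illustrative assumptions, not our exact pipeline.

```python
# Sketch: generate customer questions grounded in synthetic help center articles.
# Assumes the OpenAI Python client; article text and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def generate_questions(article_text: str, n: int = 5) -> list[str]:
    """Ask the model for customer questions answerable from this article alone."""
    prompt = (
        "Here is a help center article for a fictional bed and breakfast:\n\n"
        f"{article_text}\n\n"
        f"Write {n} realistic customer questions that can be answered using only "
        "the information in this article. Return one question per line."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]

# Placeholder articles; the real dataset had 48 of them.
articles = [
    "Check-in is from 3pm. Early check-in can be arranged on request...",
    "Breakfast is served from 7am to 10am in the garden room...",
]

question_bank = []
for article in articles:
    question_bank.extend(generate_questions(article))
```

Multi-source questions can be produced the same way by passing the concatenation of two related articles.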
Step 2: Checking the outputs for hallucinations
Before we started judging the outputs, we checked for any made-up information – hallucinations – in the responses. We found that there was no statistical difference in the hallucination levels between the two AI agents.
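Because this comparison hinges on whether the gap in hallucination counts is statistically meaningful, here is a minimal sketch of one common way to test it – a chi-squared test on a 2×2 table of hallucinated vs. clean answers. The counts below are placeholders, not the study’s actual numbers, and the choice of test is an assumption.

```python
# Sketch: test whether two hallucination rates differ significantly.
# Counts are placeholders, not real data from the study.
from scipy.stats import chi2_contingency

def hallucination_rates_differ(halluc_a: int, total_a: int,
                               halluc_b: int, total_b: int,
                               alpha: float = 0.05) -> tuple[bool, float]:
    """Chi-squared test on a 2x2 table of hallucinated vs. clean answers."""
    table = [
        [halluc_a, total_a - halluc_a],
        [halluc_b, total_b - halluc_b],
    ]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha, p_value

# Example with placeholder counts out of 200 answers per agent.
differs, p = hallucination_rates_differ(7, 200, 9, 200)
print(f"p = {p:.3f}; statistically different: {differs}")
```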
Step 3: Judging the answers
We used four advanced AI models (Anthropic Claude 3 Opus, GPT-4, GPT-4 Turbo, and GPT-4 Omni) to act as impartial judges. These “judges” had access to the articles and the question bank, and were instructed to vote on the answers provided by Fin and Zendesk’s AI agent for each question, treating the articles as the source of truth.
To determine a winner, we applied the Elo rating system, which calculates a score based on which AI agent delivered the better answer, according to the AI judges. Over hundreds of such “competitions,” a clear winner emerged.
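To illustrate how pairwise judge votes translate into a ranking, here is a minimal sketch of the standard Elo update. The K-factor, starting ratings, and vote list are placeholder assumptions rather than the exact parameters used in the evaluation.

```python
# Sketch: Elo scoring over pairwise judge votes ("fin" vs. "zendesk").
# K-factor, starting ratings, and the vote list are placeholders.
def update_elo(winner_rating: float, loser_rating: float,
               k: float = 32.0) -> tuple[float, float]:
    """Standard Elo update after one head-to-head comparison."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser_rating - winner_rating) / 400.0))
    winner_rating += k * (1.0 - expected_win)
    loser_rating -= k * (1.0 - expected_win)
    return winner_rating, loser_rating

ratings = {"fin": 1000.0, "zendesk": 1000.0}

# One vote per (question, judge) pairing; placeholder data, not real votes.
votes = ["fin", "fin", "zendesk", "fin", "fin"]

for winner in votes:
    loser = "zendesk" if winner == "fin" else "fin"
    ratings[winner], ratings[loser] = update_elo(ratings[winner], ratings[loser])

print(ratings)
```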
The results were clear: when pitted side-by-side, Fin’s answers are almost always better than Zendesk AI agent’s.
Step 4: Digging into the details
We wanted to dig deeper into what specifically made Fin’s answers better. So, we looked more closely at how Fin outperformed Zendesk AI agent in the following areas:
- Providing a direct response.
- Giving the most “readable” answers for humans.
- Delivering a complete resolution of the query.
A direct response
Fin outperformed Zendesk’s AI agent by providing more direct responses across every question type. The most notable differences were on “hard” questions, where Fin answered more than double the number Zendesk’s AI agent did, and on questions that required piecing together information from multiple sources, where Fin answered 96% of the questions while Zendesk’s AI agent managed only 78%.
The most “readable” answers for humans
Accurate answers are one thing, but how they’re structured matters a lot for the end user experience. Fin provided more comprehensive answers than Zendesk’s AI agent, with Fin’s average response coming in at 120 words compared to Zendesk’s 50. Fin’s responses were also formatted to be more scannable, including elements like newlines and bulleted lists.
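To make “scannable” concrete, here is a minimal sketch of the kind of surface-level formatting metrics that can be computed per answer; the specific metrics and sample text are illustrative assumptions, not the rubric used in the comparison.

```python
# Sketch: surface-level "scannability" metrics for an answer.
# Metrics and sample text are illustrative, not the study's rubric.
def scannability_stats(answer: str) -> dict:
    lines = answer.splitlines()
    return {
        "word_count": len(answer.split()),
        "line_breaks": max(len(lines) - 1, 0),
        "bullet_lines": sum(1 for line in lines if line.lstrip().startswith(("-", "*", "•"))),
    }

sample = "You can check in from 3pm.\n\nWhat to bring:\n- Photo ID\n- Booking reference"
print(scannability_stats(sample))
```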
A complete resolution of the query
Looking at the direct answer results, we estimated the probability of Fin providing a complete resolution of a query.
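Conceptually, this estimate combines how often an agent gives a direct answer with how often that direct answer fully resolves the query. A simplified way to write it (an illustrative sketch, not necessarily the exact formula used in the study):

$$
P(\text{complete resolution}) \approx P(\text{direct answer}) \times P(\text{resolution} \mid \text{direct answer})
$$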
In relative terms, Fin was 66% more likely to provide a resolution for a query when both Fin and Zendesk AI agent provided an answer. Similar to the results we saw with the direct response investigation, Fin was also the winner across every answer category.
A few notes on research limitations
While our test was thorough, it had some limitations:
- We used a simulated help center, not real-world data.
- AI judges are great, but they might not perfectly match human judgment.
- The two products we tested have different features, and this could impact the results to an extent.
Overall, these findings clearly demonstrate Fin’s superior performance in direct testing. But beyond the numbers, there are several crucial advantages that make Intercom the clear choice for forward-thinking support teams. Here’s what this means for your business in practical terms.
What sets Intercom’s Fin apart
Flexibility to use Fin as part of Intercom’s seamless AI-first platform – or whatever CS platform you’re currently using
First, we understand that every support team has unique needs. That’s why we’ve made Fin incredibly flexible – you can use it as part of our comprehensive AI-first system, or integrate it with your existing platform, such as Zendesk or Salesforce, and access all of its benefits. There’s no need to overhaul your entire support stack or disrupt your team’s workflow – Fin can help you get results in whatever way suits you best.
Pricing that makes sense: 99¢ per resolution
We want AI to be accessible for every team, so we’ve also taken a radically different approach to pricing. While other vendors lock you into complex contracts with hidden costs, we keep it simple: 99¢ per resolution. This transparent, outcome-based model means you only pay for actual value delivered. You don’t have to worry about spending a large chunk on something that doesn’t actually help move your business ahead.
Innovation that keeps you ahead
The thing that has always set Intercom apart is how fast we move. When it comes to AI, staying ahead matters because the sooner you get the latest capabilities, the better your automated customer experience will be.
We’re rolling out new features and capabilities at an unprecedented rate, continuously improving Fin’s performance based on millions of real customer interactions. When you choose Intercom, you’re not just getting today’s best-performing AI – you’re partnering with the most innovative company in the space, ensuring you’ll stay ahead of the curve as AI technology continues to advance.
The future of customer service is here – and it’s already delivering results
Many companies are making noise about their AI capabilities, but few are backing that noise up with evidence. Our research makes it clear that Intercom’s Fin AI Agent outperforms a significant competitor – Zendesk’s AI agent – in providing the most accurate, highest-quality answers. This means you can bring it on board as part of your team and trust in its ability to resolve a huge share of your customer queries, freeing up your human teammates to focus on more meaningful work.
“In a market full of noise and ambitious claims, we let our results do the talking”
Since this research was conducted, we’re proud to share that we’ve raised the bar even higher by launching Fin 2, our next-generation AI Agent. Delivering human-quality support, it’s capable of achieving a 51% resolution rate straight out of the box, with some of our customers achieving up to 86% after spending time refining how they use it. To date, Zendesk is still marketing its first-generation AI agent.
What’s particularly exciting is that this is just the beginning. In a market full of noise and ambitious claims, we let our results do the talking. The data is clear, the performance gaps are real, and the future of customer service is already here. Are you ready to see what Fin can do for your team?