Fin 2: Powered by Anthropic’s Claude LLM
I’m excited to announce that Fin 2, our latest generation of Fin AI Agent, is powered by Anthropic’s Claude, one of the most sophisticated Large Language Models (LLMs) available today.
We build product at the very boundary of what’s possible in AI customer service, and our collaboration with Anthropic is a big step forward for us. With Claude we get intelligence, performance, and reliability that let us deliver even more value to our customers with Fin. It’s not just about faster responses (though it does deliver that); it’s about redefining how AI can enhance customer service.
Clearly Anthropic agrees, as they have chosen Fin 2 as their own customer service AI agent. This, to me, is a strong validation of the technology we’ve built and the value that Fin is delivering for our customers. It’s also a good counterpoint to folks who are still thinking about rolling their own RAG bot :-) (P.S. you should probably run less software in general.)
Why Claude? Why Now?
We constantly improve Fin by running millions of conversations through hundreds of A/B tests. Our performance criteria include answer accuracy, resolution rate, CSAT, human assessment of answer quality, and many more that our AI team won’t let me share. We constantly test out new models and new ways to use them.
“With Claude, Fin answers more questions, more accurately, with more depth, and more speed”
We landed on Claude for one simple reason: it delivers. And it doesn’t just deliver speed and scale; it also delivers high-quality service, performance, and reliability.
With Claude, Fin answers more questions, more accurately, with more depth, and more speed. We’re able to deliver an average resolution rate of 51% across thousands of Intercom customers and millions of conversations, making it the best-performing AI agent in the industry.
So what’s next?
In our Fin 2 launch, we cover a lot of where we’re at and where we’re going. Fin now takes actions and delivers personalized answers with custom answer length, tone of voice, and more. Fin can follow your policies, analyze conversations, calculate CS globally, and so much more.
Of course, our future plans include all the stuff you can imagine – voice, video, proactive, you name it – but we’ll come back to that later. For this release, we prioritized delivering what our customers wanted over what would make for a cool-but-immature demo.
“Our collaboration with Anthropic is a key part of our journey to the highest resolution rate possible”
So where to from here? Our goal is to resolve as many CS conversations as possible. Today, our average resolution rate is 51% out of the box (i.e. before any significant tuning), which is pretty incredible. You may see larger numbers quoted by other folks, but I encourage you to double and triple click on exactly what they’re saying; we too can selectively sample certain customers or certain verticals and give you a far higher number.
Our collaboration with Anthropic is a key part of our journey to the highest resolution rate possible, while still thinking about our customers, and our customers’ customers. We want Fin to deliver great experiences; we are not building a deflection engine.
Working with Anthropic helps us stay at the forefront of AI, letting us explore the future and work on the bleeding edge, while delivering stable, reliable, and incredible results for our customers.