Response Time: Vol. 37
You satisfy your customers, but can you satisfy our curiosity?
With Kelly Burnette, Classroom Success Manager at Writable from HMH.
Please tell us a little bit about your company and what you do there.
Writable builds lifelong writing and reading skills for students in grades 3-12. We are now a part of HMH, and our team helps drive innovation in education technology through thoughtful and intentional AI tools. As a Classroom Success Manager, it is my job to make sure that educators using our program are supported when they need it most – usually in front of a class full of students!
What’s the most valuable thing that working in customer service has taught you?
The power of listening. Oftentimes, a customer will think their problem is one thing when it is actually something entirely different. By truly listening and knowing my product, I can help resolve the problem at its core and improve their experience.
Describe the essence of great customer service using only three words.
Timely. Clear. Kind.
Which movie robot would you choose as your AI sidekick, and why?
EVE from Wall-E. She knows her mission and isn’t going to back down from it, even when she’s told to. My mission is to help the customer, and sometimes I have to break some rules to do so!
What can you do that a bot will never be able to replicate?
I have a shared experience with our customers, knowing what it’s like to use technology in the classroom as a teacher. No bot can combine my product expertise and personal experience to understand our users’ unique realities.
What’s the most embarrassing thing you’ve ever said/done to a customer?
I’ve mistyped a lot of things because we respond very quickly, but I don’t really get embarrassed! I find my customers usually appreciate the humanity of the interaction.
Do you identify more with the title “customer support,” “customer service,” “customer success,” or “customer experience,” and why?
All of the above! We have a very specific role – “Classroom Success.” That’s because in the world of education, the most important place our app needs to work is in the classroom. Our teachers don’t have time to wait for us to escalate issues or ask other teams, so we are uniquely trained to be tech support, curriculum experts, implementation gurus, and customer advocates all at once.
What’s the one piece of advice you would give to your peers in the customer service industry?
Tag those kind comments for a rainy day. Most customer experiences are positive, but those negative ones can really mess up your attitude. I like to tag my bright spots and appreciative chats so when I’m feeling run down I can go back and remember that I do make a positive difference in my customers’ days.
What’s the worst customer service you’ve ever experienced?
I refuse to call our internet provider – I make my partner do it. The hoops I have to jump through to reach the correct person, and the way they try to manipulate customers into buying higher-speed internet when they can’t deliver the quality we already pay for – it’s insane.
What’s your greatest productivity hack?
I block the first 15 minutes of the day to lay out my priorities for the day and the week. I block the last 15 minutes to check whether I accomplished those things, which gives me a chance to celebrate my wins and feel prepared for what comes next.
What book are you reading at the moment?
I am a fan of fantasy and fiction. I just finished Two Twisted Crowns by Rachel Gillig – it’s a really great duology!
If customer service was an Olympic sport, what would be the main event?
Juggling multiple high priority conversations at once! If a customer asks, “Are you still there?” you are disqualified.
What’s the best thing a customer has ever said to you?
“I am going to go teach all my teammates what you showed me.” Turning customers into champions. 💪
Where do you get your support leadership news?
I always attend Intercom’s webinars and check out the resources. I also stay active on LinkedIn and the communities there.
What do you wish people knew about working in customer service?
It’s not as awful as it’s made out to be. Yes, we have our bad days, but you get the chance to connect with a lot of different people. More often than not, our customers are grateful and kind to us, and we get to make a difference in their day. I love getting to turn a negative experience into a positive one for them.
If you wrote a book about your experiences in customer service, what would the title be?
“Is There Anything Else I Can Help You With?”
What’s the strangest thing a customer has asked you?
I once had a customer ask if I could come over to help them figure out how to check their work email at home. They were in Oregon, I’m in Virginia.
What’s your most used emoji in customer chats?
😅
Conversation closed… for now 😏
If you’re interested in being featured in our Response Time series, you can share your insights on customer service – and what Olympic sport customer service would be – with us here.
Fin over email: How we built a multichannel AI agent
Email is an essential channel for support, but email conversations lead to slower resolutions for customers when compared with synchronous channels like live chat.
With the advent of AI-first customer service, a lot of frontline customer queries are now being dealt with by LLM-powered AI agents. Our own Fin AI Agent resolves more than 50% of customer queries immediately.
However, there’s a perception that AI agents can only function over chat. Our research has shown that many customer service leaders continue to equate AI to chat experiences, rather than thinking about how it can deliver support across multiple channels, just as human agents can.
Well, we’re changing that perception with the latest updates to Fin AI Agent – customers can now get instant responses to their emails.
Customers can now get AI answers to their emailed support questions
Getting Fin AI Agent to work over email presented some interesting technical and UX challenges – here, we dive into the process and share some of our learnings.
How Fin for email works
When a user contacts a business’s customer support team via email, Fin AI Agent will automatically jump into the conversation to resolve the issue. Fin generates its answers from a range of support content using the Retrieval-Augmented Generation (RAG) framework.
Fin doesn’t just provide direct answers to queries – it’s also conversational, with the ability to ask clarifying questions if the user’s initial message isn’t clear enough to find the best response. For the most complex cases that Fin isn’t able to answer, it will seamlessly hand over to a support agent.
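As a rough sketch of the decision flow described above – answer when confident, clarify when the message is ambiguous, hand over otherwise. All names, thresholds, and heuristics here are hypothetical, not Intercom's implementation:

```python
# Illustrative RAG-style answer flow (hypothetical names and thresholds;
# this is a sketch, not Intercom's actual implementation).

from dataclasses import dataclass

@dataclass
class Answer:
    kind: str   # "answer", "clarify", or "handover"
    text: str

def is_ambiguous(query: str) -> bool:
    # Toy heuristic: very short messages rarely carry enough context.
    return len(query.split()) < 4

def answer_email(query: str, retrieve, generate,
                 confidence_threshold: float = 0.7) -> Answer:
    """Retrieve support content, then answer, clarify, or hand over."""
    passages = retrieve(query)                    # top-k support articles
    draft, confidence = generate(query, passages) # LLM drafts an answer
    if not passages or confidence < confidence_threshold:
        if is_ambiguous(query):
            return Answer("clarify", "Could you tell us a bit more about the issue?")
        return Answer("handover", "Routing this to a support agent.")
    return Answer("answer", draft)
```

In a real system, `retrieve` would query a vector index over help-center content and `generate` would call the LLM; here they are left as injectable functions so the control flow stands on its own.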
Our development journey
When Intercom launched Fin AI Agent in March 2023, it was the first generative AI-powered customer service agent on the market. We tapped into learnings from our previous machine learning-based product, Resolution Bot, to inform what a generative AI agent could look like. Since then, we’ve continued to improve and expand our offering by introducing completely new features and rolling out improvements to the underlying model, thereby increasing resolutions.
Starting from first principles
When it came to defining how we would build Fin over email, we didn’t have a blueprint for what the solution should look like. Email as a channel is very different from chat, so we were unsure whether Fin over email should work in the same way. This is where our “Think big, start small, learn fast” principle became relevant, and pushed us to apply first principles thinking.
We started with research to better understand why email automation was important for customers, what kind of requirements they had, and what impact we could anticipate if we built Fin over email. The insights were summarized into a doc called an “Intermission”, which we create at the start of all product initiatives, in keeping with our “Start with the problem” principle.
Iterative development
We decided to start small with an alpha version as there were many assumptions to validate. The team proceeded to build the technical foundations and a very simple teammate experience – just enough to be able to set Fin live on email, but with no bells and whistles. Since we already had a lot of the building blocks in place – a solid email solution and a very flexible automation system (Workflows) – we were up and running quickly.
“This close partnership is at the heart of how we work in R&D – it allows us to move fast as we have tight feedback loops with the customers who will use and benefit from our product”
We reached out to a handful of Fin AI Agent customers who have a high number of monthly email conversations to give us feedback on what we had built so far. This provided us with enough insight to define the scope of our open beta release.
At Intercom, we are very fortunate to be able to partner with our customers as we make progress on our thinking. We work closely together to understand their needs and gather feedback on our initial solution. This close partnership is at the heart of how we work in R&D. It allows us to move fast as we have tight feedback loops with the customers who will use and benefit from our product.
The early feedback helped us shape our open beta. At this stage, we kicked off a more in-depth design phase, resulting in an artifact called an “Interconcept”. This phase of development is driven by the product designer and outlines a set of different approaches, each with a list of pros and cons.
When we were ready to start building, the lead product engineer created a project plan to outline what we needed to build and in what sequence, making it very easy to bring the rest of the team together. Once we launched Fin over email to open beta, we focused on monitoring usage and gathering as much feedback as possible, aiming to uncover any necessary improvements or new functionality required for general availability.
Challenges and considerations
Although the team had been working on Fin AI Agent for over a year – amassing deep usage insights and seeing a great deal of success – making Fin work over email came with its own challenges.
Technical challenges
In 2022, prior to the generative AI explosion, Intercom launched the ability to run automations and chatbots over different channels, such as WhatsApp, SMS, and email. At the time, email already proved to be a more complex channel to automate.
From a technical perspective, some examples of challenges we faced when working with email automation were:
- Email deliverability was out of our control – mail clients (such as Gmail and Outlook) can block addresses and throttle usage.
- Multiple queries in the same message happen more often over email, meaning that we needed to ensure we process them separately so no context is lost.
- Converting automated content designed for chat (which tends to be shorter, separate messages) into a single email with correct formatting (i.e. a heading, the body, and an email signature) was not a trivial task.
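To illustrate the last point, here is a toy sketch of folding a sequence of short chat-style bot messages into one email body with a heading and signature. The helper is hypothetical and far simpler than real HTML email rendering:

```python
# Hypothetical sketch: merging chat-style messages into a single email body.
# Real email rendering involves HTML templates per client; this shows only
# the structural idea (heading, merged body, signature).

def chat_to_email(messages: list[str], heading: str, signature: str) -> str:
    """Join short chat messages into one plain-text email."""
    body = "\n\n".join(m.strip() for m in messages if m.strip())
    return f"{heading}\n\n{body}\n\n--\n{signature}"
```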
User experience considerations
Besides the technical challenges, we also had to solve problems that impacted the end user experience.
For most end users, talking to an AI agent over email is a much less established habit than talking to one over live chat, which meant that we had to design an experience that took all the standard expectations around email into account, rather than trying to replicate a chat experience over email.
It was also important to us that the experience felt natural and intuitive, so that end users felt comfortable interacting with an AI agent over email.
We had to consider many differences between live chat and email when designing the new experience, such as:
- As email is an asynchronous channel, conversations don’t have the instant back-and-forth of live chat, and customers often have to wait longer to receive a response to their question.
- The email content is usually longer and contains more information, whether that’s text or images.
- Interactive steps that you can add to chat conversations, such as buttons, don’t quite translate over to email.
- Setting expectations that a user is talking to an AI agent requires different visual cues in email than over live chat.
- Emails render very differently across a number of email clients (e.g. Gmail and Outlook), resulting in a long list of design requirements.
Adapting our underlying AI architecture
Lastly, with the learnings gathered from both the technical and end-user experience challenges, we partnered with machine learning scientists and engineers to create a new component in the AI agent’s underlying architecture specifically for email. Unlike our original chat agent, this new agent was developed with the specifics of email in mind, such as:
- Ability to process multiple questions from a single message separately; for example, it can directly answer some queries and clarify others in the same email response.
- Not processing email signatures containing images that are not relevant to the query.
- A built-in mechanism to ignore spam and automated emails.
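A simplified sketch of that kind of inbound preprocessing – flagging automated mail via the standard `Auto-Submitted` header (RFC 3834) and the conventional `Precedence` header, dropping the signature at the common `"-- "` delimiter, and splitting out individual questions. The helpers are illustrative, not Intercom's implementation:

```python
# Illustrative inbound-email preprocessing (hypothetical helpers).
# Auto-Submitted (RFC 3834) and Precedence are real, standard headers;
# the question-splitting heuristic is a deliberately naive stand-in for
# the model-based processing described above.

import email
import re
from email.message import Message

def is_automated(msg: Message) -> bool:
    """Flag auto-replies, bounces, and bulk mail via standard headers."""
    auto = msg.get("Auto-Submitted", "no").lower()
    precedence = msg.get("Precedence", "").lower()
    return auto != "no" or precedence in ("bulk", "junk", "auto_reply")

def strip_signature(body: str) -> str:
    # "-- " on its own line is the conventional signature delimiter.
    return body.split("\n-- \n", 1)[0]

def extract_questions(body: str) -> list[str]:
    # Toy heuristic: each "?"-terminated sentence is a separate query.
    return [q.strip() for q in re.findall(r"[^.!?]*\?", body.replace("\n", " "))]
```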
As the expectation for a response over email isn’t as instantaneous as chat, we were also able to perform some more complex LLM querying for better and more robust answers without significantly impacting the response times.
Fin over email in action
The impact for our customers has been immediate. For instance, Robb Clarke, Head of Technical Operations at RB2B, reported these astonishing results:
“RB2B 2x’d its user base in the last 58 days but my support team is fielding 45% LESS inquiries thanks to one major change, Fin AI Agent started handling email replies. This simple yet powerful change saved us from handling an additional 493 tickets. At 15 minutes per ticket, that’s about 123 hours saved. If you’re not using it yet, you’re missing out. The efficiency and time savings are game-changers – 12 months from now, our team of 2 is going to be acting like a team of 20.”
Within the first month of release, Fin processed over 1 million end user emails. Fin has provided an AI-generated answer to over 81% of the email conversations it has been involved in, automatically resolving more than 56% of them on average.
Fin over email is available now. Learn more about how it can transform your customer support experience, or check out this instructional video, which shows you how to set it up to support your customers.
Your customer service experience has to deliver great support everywhere your customers expect to communicate with you, and that means AI agents have to be able to deliver support in those channels too. With Fin AI Agent, that omnichannel AI-powered support experience is a reality.
Evolving Intercom’s database infrastructure
Intercom is rolling out a major evolution of our database architecture, moving to Vitess – a scalable, open-source MySQL clustering system – managed by PlanetScale.
For many years, Intercom has used Amazon Aurora MySQL as our default database. With the addition of our custom sharding solution for high scale data, Aurora MySQL has allowed us to scale our databases with relative ease. It has supported hundreds of terabytes (TB) of data, 1.5 million reads per second, and tens of thousands of writes per second. Aurora MySQL has served us well as the source of truth for the majority of Intercom’s most critical production data.
“We deeply understand the importance of reliability because we experience it firsthand”
For our customers, when Intercom is down, critical parts of their business are affected. They expect flawless uptime, and so do we, even accounting for unforeseen disruptions or planned database maintenance. Our own teams – including Customer Support, Sales, Product, Engineering, IT, and more – rely heavily on our platform every day. An outage doesn’t just impact our customers; it impacts us directly. We deeply understand the importance of reliability because we experience it firsthand.
In late 2023, as we reviewed our database architecture, several factors led us to seek improvements: enhancing the customer experience, addressing operational friction, and keeping pace with a shifting database landscape.
Our review surfaced these goals:
- Eliminate downtime due to database maintenance and writer failovers.
- Reduce the complexity and cognitive load of working with databases across engineering teams.
- Streamline the migration process and improve the latency of running large-scale database table migrations.
- Achieve straightforward, low-effort scaling of MySQL for the next decade.
We aim to build “boring” software and are committed to running less software, choosing to build on standard technologies and outsourcing the undifferentiated heavy lifting. With this in mind, we decided earlier this year to move our database layer to Vitess managed by PlanetScale, running within our AWS production accounts.
Why Vitess?
Vitess is a MySQL-protocol compatible proxy and control plane for implementing horizontal sharding and cluster management on top of MySQL. Originally developed by YouTube and now used by companies such as Etsy, Shopify, Slack, and Square, Vitess combines MySQL features with the scalability of NoSQL databases. It offers built-in sharding capabilities that enable database growth without necessitating custom sharding logic in the application.
Vitess automates tasks that impact database performance, such as query rewriting and caching, and efficiently handles functions like failovers and backups using a topology server (a system that keeps track of all the nodes in the cluster) for server management. It addresses the lack of native sharding support in MySQL, facilitating live resharding with minimal downtime and maintaining up-to-date, consistent metadata about cluster configurations. Importantly, it also acts as a connection proxy layer, which should eliminate the majority of database-related incidents we’ve had in recent years. These features effectively provide unlimited MySQL scaling.
Why PlanetScale?
PlanetScale builds upon Vitess by offering a managed platform that provides an exceptional developer experience and handles the undifferentiated heavy lifting of managing the underlying infrastructure. Their expertise, which includes core Vitess team members, allows us to benefit from features like advanced schema management, database branching, and automated performance optimization.
The details around scale and challenges below largely relate to our US hosted region – the infrastructure in our European and Australian regions is similar but at a smaller scale. PlanetScale will be rolled out to all regions.
Supporting high scale: 2011 to 2024
As Intercom scaled, we adapted our database strategies in three main ways:
- Get a bigger box: In the very early days of Intercom, scaling our databases was straightforward – we simply upgraded to larger and more powerful database instances. This vertical scaling approach allowed us to handle increased load by leveraging AWS’s flexible and ever improving instance types. With a maintenance window, we could move to instances with more CPU, memory, and I/O capacity as our data and traffic grew. However, this strategy has its limits. There’s only so much capacity you can add before hitting the ceiling of what a single machine can handle, both in terms of hardware limitations and ability to perform certain operations such as database migrations.
- Functional sharding: To move beyond the constraints of vertical scaling, from 2014 we started implementing functional sharding within our architecture. This involved splitting our monolithic database into multiple databases, each dedicated to specific functional areas of our application. For example, we separated our conversations table out into its own database. By distributing the load across dedicated databases, we reduced contention and improved performance for specific workloads. This approach had its drawbacks: cross-database queries became more complicated, and maintaining data consistency across different shards required additional coordination through multi-database transactions. As AWS introduced larger and more powerful database instances, this scaling strategy has remained relevant.
- Move to RDS Aurora: Soon after AWS released RDS Aurora in 2015, we eagerly migrated to RDS Aurora from the original RDS MySQL offering. Aurora’s architecture decoupled storage from compute, and allowed us to easily scale-out using read-replicas, avoiding replication lag and other problems that existed in traditional MySQL implementations at the time.
Sharding per customer
As our customer base and data continued to expand significantly, we faced database scalability challenges that could no longer be addressed by vertical scaling or functional sharding. To overcome this, we implemented customer sharding by horizontally partitioning our data based on customer identifiers. This approach allowed us to distribute the load more evenly across multiple database clusters and scale horizontally further by adding new database clusters as needed. Effectively, each customer would have their own database for high scale data (e.g. conversations, comments, etc.).
“Our sharding solution enabled us to handle billions of data rows and millions of reads and writes per second without compromising performance”
Building our own sharding solution was a substantial undertaking which we completed in 2020. We dedicated a team to develop a tailored solution using technologies we were already familiar with. This enabled us to handle billions of data rows and millions of reads and writes per second without compromising performance. Thanks to this setup, we were now able to migrate large-scale tables that we hadn’t been able to touch for years, unlocking easier and faster feature development.
Managing this sharded environment introduced new complexities. For example, our application had to incorporate logic to route queries to the correct shard, and simple migrations – adding a new table, for example – would take days to complete. This was better than not being able to change these tables at all, but still not optimal.
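The kind of routing logic the application had to carry can be sketched like this – an illustrative hash-based mapping from customer id to cluster; Intercom's actual in-house scheme is custom and may well differ:

```python
# Illustrative customer-sharding router (hypothetical cluster names;
# not Intercom's actual sharding scheme).

import hashlib

SHARDS = ["db-cluster-0", "db-cluster-1", "db-cluster-2", "db-cluster-3"]

def shard_for_customer(customer_id: int) -> str:
    """Deterministically map a customer id to a database cluster."""
    digest = hashlib.md5(str(customer_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Hashing keeps the mapping deterministic and roughly uniform, but every query path in the application has to call something like this before touching the database – exactly the per-query routing burden that Vitess later absorbs.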
What problems did we see in our current setup?
Connection management
Intercom operates a Ruby on Rails application with its primary datastore being MySQL. In the USA hosting region, where the vast majority of Intercom workspaces are hosted, we run 13 distinct AWS RDS Aurora MySQL clusters.
One problem with this architecture is connection management to MySQL databases. There are limits on the maximum number of connections that can be opened to any individual MySQL host, and on Amazon Aurora MySQL the limit is 16,000 connections. Intercom runs a monolithic Ruby on Rails application, with hundreds of distinct workloads running in the same application across thousands of instances, connecting to the same databases.
“The use of ProxySQL allows us to scale our application without running into connection limits of the RDS Aurora MySQL databases”
As each running Ruby on Rails process generally needs to connect to each database cluster, the connection limit is something we had to engineer a solution for. On most of the MySQL clusters, the read traffic is sent by the Ruby on Rails application to read-replicas, which spreads the connections out over a number of hosts, in addition to horizontally scaling the query load balancing across the read-replicas.
However, for write requests, we need to use a different approach, and in 2017 we rolled out ProxySQL to put in front of the primary writer nodes in each MySQL cluster. ProxySQL maintains a connection pool to each writer in the MySQL clusters and efficiently re-uses connections to serve write requests made by our Ruby on Rails application. The use of ProxySQL allows us to scale our application without running into connection limits of the RDS Aurora MySQL databases.
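To make the connection math concrete, here's a back-of-the-envelope sketch. The fleet and pool sizes are invented for illustration; only the 16,000-per-host Aurora limit comes from the text above:

```python
# Illustrative connection arithmetic. Instance and thread counts are
# hypothetical; only the 16,000-connection Aurora limit is from the article.

AURORA_CONNECTION_LIMIT = 16_000  # max connections per Aurora MySQL host

def direct_connections(app_instances: int, threads_per_instance: int) -> int:
    """Connections needed if every app thread connects straight to the writer."""
    return app_instances * threads_per_instance

# e.g. 2,000 Rails instances x 16 worker threads each:
needed = direct_connections(2_000, 16)
assert needed == 32_000                    # 32,000 direct connections
assert needed > AURORA_CONNECTION_LIMIT    # won't fit on one writer

# A proxy such as ProxySQL multiplexes those requests over a small,
# fixed-size pool of backend connections to the writer:
backend_pool = 500
assert backend_pool < AURORA_CONNECTION_LIMIT
```

The proxy decouples the two numbers: the app fleet can keep growing while the writer only ever sees the pool.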
In the last year, we’ve experienced a number of outages related to our use of ProxySQL. These issues arose particularly when we attempted to upgrade to ProxySQL 2.x and utilize new features like its integration with RDS Aurora read replicas, which led to instability and outages.
Database maintenance
Maintenance windows are a necessary evil of most database architectures, and nobody loves them. For many of our customers, when Intercom is down, large parts of their business are down too. This is increasingly relevant as Intercom builds out features such as Fin AI Agent, which can resolve large volumes of conversations for our customers.
We’ve avoided maintenance windows unless absolutely necessary and, when needed, run the majority of them at the weekend to reduce the impact on our customers. With AWS Aurora, any upgrades or planned instance failovers (for example, to increase the size of a database instance) required maintenance windows, with customer impact ranging from five to seventy minutes.
For instance, during our upgrade from Aurora 1 to Aurora 2, we conducted ten maintenance windows across our regions, each causing actual disruptions between twenty and seventy minutes.
We knew we needed to do better here, and remove the need for maintenance windows entirely.
Intercom’s database architecture 2024 and beyond – enter PlanetScale
While these methods have allowed us to scale with relative ease, the database landscape has changed dramatically. Compared to 2019, when we decided on our custom application sharding approach, there are now more options for building practically infinitely scalable databases appropriate for Intercom.
Embracing Vitess and PlanetScale
To address the limitations and complexities of our existing database architecture, we have embarked on a journey to adopt Vitess managed by PlanetScale. This transition represents a significant evolution in our approach to database management, aiming to enhance scalability, reduce operational overhead, and improve overall availability for our customers. We have already migrated several databases and have many more to transition in the coming months. The benefits we’re already seeing include:
Simplifying connection management
One of the immediate benefits of Vitess is its ability to act as a single connection proxy layer through its VTGate component. VTGate is a stateless proxy server that handles all incoming database queries from the application layer. It intelligently manages connection pooling and query routing, effectively multiplexing a large number of client connections over a smaller number of backend connections to the MySQL servers.
“VTGate allows us to scale our application seamlessly without worrying about connection constraints”
By centralizing connection management, VTGate eliminates the 16,000 connection limit per MySQL host that we previously faced with Aurora. This removes the need for ProxySQL in our architecture, reducing a massive source of complexity, and potential points of failure. VTGate also provides advanced query parsing and can route queries based on the sharding key or even handle scatter-gather queries across multiple shards when necessary. This allows us to scale our application seamlessly without worrying about connection constraints or overloading individual database instances.
Zero-downtime maintenance and failovers
Vitess offers advanced features like seamless failovers, which are critical for eliminating customer downtime during maintenance operations such as software upgrades and changing instance sizes. Its built-in failover mechanisms ensure that if a primary node goes down, a replica can take over almost instantaneously without impacting ongoing transactions. This aligns perfectly with our goal of providing flawless uptime and eliminates the need for extended maintenance windows that disrupt our customers’ operations. With the clusters we’ve already migrated, we can refresh the database instances without any noticeable impact on our customer-serving metrics.
Native sharding support
Perhaps the most significant advantage of Vitess is its native support for horizontal sharding. Unlike our previous custom sharding solution, Vitess abstracts the complexity of sharding away from the application layer. Our engineers no longer need to write custom logic to route queries to the correct shard; Vitess handles it automatically based on the sharding scheme we define.
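As a sketch of what this looks like in Vitess terms: sharding is declared in a VSchema, where a vindex maps a sharding column to a keyspace id and VTGate uses it to route queries. The table and column names below are hypothetical, not Intercom's actual schema:

```json
{
  "sharded": true,
  "vindexes": {
    "hash": { "type": "hash" }
  },
  "tables": {
    "conversations": {
      "column_vindexes": [
        { "column": "customer_id", "name": "hash" }
      ]
    }
  }
}
```

With a VSchema like this in place, the application issues ordinary queries against `conversations`, and VTGate uses the `customer_id` vindex to send each one to the right shard – no routing code in the application.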
“This reduction in cognitive load allows our teams to focus more on delivering new features and less on managing database intricacies”
In time, we will also be able to combine our functionally sharded databases into a single logical database, thereby reducing the complexity we introduced to maintain data consistency across them. For example, currently, if a new comment is created, three individual databases must be kept in sync. This reduction in cognitive load allows our teams to focus more on delivering new features and less on managing database intricacies.
Streamlined migrations and scalability
Running large-scale database migrations has been a pain point due to the time and complexity involved. Migrations on our largest non-sharded tables can take months to complete. Vitess addresses this with its online schema change tools operating on sharded data, enabling us to perform migrations with minimal impact on performance. Additionally, scaling horizontally becomes a straightforward process. Need more capacity? Simply add new shards, and Vitess will manage the data distribution without requiring significant changes to the application.
Partnering with PlanetScale
By choosing PlanetScale to manage our Vitess deployment within our AWS production accounts, we leverage their expertise and the contributions of the Vitess core team members they employ. PlanetScale provides a developer-friendly experience and takes on the undifferentiated heavy lifting of managing the underlying infrastructure. This partnership ensures that we benefit from best-in-class database management practices while allowing us to remain focused on what we do best: building our AI-first customer service platform for our customers.
One of the standout features PlanetScale offers is its advanced schema management capabilities. PlanetScale enables non-blocking schema changes through a workflow that allows developers to create, test, and deploy schema modifications without impacting the production environment. This is facilitated by their concept of database branching, akin to version control systems like Git. Developers can spin up isolated database branches to experiment with changes, run tests, and then merge those changes back into the main branch seamlessly. This drastically reduces the risk associated with schema migrations and empowers our engineers to iterate faster, ultimately accelerating our product development cycles. Just like with Git, if a database schema change is pushed to production and an issue is discovered, it can be reverted easily.
“This new mechanism improved the latency of the previously expensive query by 90%”
PlanetScale also allows for net new mechanisms we can use to serve requests. For instance, we recently used materialized views to optimize the counting of open, closed, and snoozed conversations for teammates. This new mechanism improved the latency of the previously expensive query by 90%, leading to a faster teammate experience and reduced database load.
Additionally, PlanetScale provides automated index and query optimization tools. The platform can analyze query performance and suggest or automatically implement index improvements to enhance database efficiency. This proactive approach to optimization reduces the operational overhead typically associated with manual database tuning – everyone on the team can now operate like a world-class database expert. These improvements ensure that our queries run efficiently and allow us to maintain high application performance, which translates to a smoother and more responsive experience for our customers.
Challenges faced during migration
Moving the databases that are responsible for Intercom’s most critical data is a major undertaking and it has not been without its challenges. Despite thorough planning and testing, we encountered several issues that provided valuable learning opportunities and ultimately strengthened our migration strategy as we migrate more databases.
Latency spikes due to cold buffer pools
One of the initial hurdles was unexpected latency during the cutover of one of our core databases to PlanetScale. When we redirected traffic to the new Vitess cluster, we anticipated some initial latency as the database caches warmed up. However, the latency spikes were more significant and lasted longer than expected – particularly in one availability zone.
This was primarily due to cold buffer pools on the MySQL instances within Vitess. Since these instances had not served production traffic before, their caches were empty. As a result, queries that would typically be served from memory had to fetch data from disk, increasing response times. While we anticipated this problem, we expected only a few seconds of latency; in reality it persisted for twenty minutes and made the Inbox slow to respond to customer requests.
To mitigate this for subsequent migrations we’ve implemented read traffic mirroring to pre-warm the buffer pools before redirecting live traffic. By simulating traffic to load frequently accessed data into memory, we can reduce the initial latency spikes during future migrations.
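The pre-warming approach can be sketched roughly as follows: sample recently logged statements, keep only the most frequent reads, and replay them against the new cluster before cutover so the pages they touch are already in memory. This is a minimal illustration, not our production tooling – the query log is a stand-in list and `sqlite3` stands in for a MySQL/Vitess client.

```python
import sqlite3
from collections import Counter

def warmup_queries(query_log, top_n=100):
    """Pick the most frequent read-only statements from a sampled query log."""
    reads = [q for q in query_log if q.lstrip().upper().startswith("SELECT")]
    return [q for q, _ in Counter(reads).most_common(top_n)]

def prewarm(conn, queries):
    """Replay warm-up queries; results are discarded, only the page I/O matters."""
    replayed = 0
    for q in queries:
        conn.execute(q).fetchall()
        replayed += 1
    return replayed

# Stand-in target database (in production this would be the new cluster).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conversations (id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany("INSERT INTO conversations (state) VALUES (?)",
                 [("open",), ("closed",), ("snoozed",)])

# Hypothetical sampled log: writes are filtered out, duplicates are ranked.
log = [
    "SELECT * FROM conversations WHERE state = 'open'",
    "SELECT * FROM conversations WHERE state = 'open'",
    "UPDATE conversations SET state = 'closed' WHERE id = 1",
    "SELECT * FROM conversations WHERE id = 2",
]
queries = warmup_queries(log)
replayed = prewarm(conn, queries)
print(replayed)  # 2 distinct read queries replayed
```

Ranking by frequency matters because buffer pool capacity is finite: replaying the hottest queries first loads the pages most likely to be hit immediately after cutover.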
Disk I/O saturation and resource limits
During periods of high load after the initial cutover, and at peak traffic, we observed that some replica servers were experiencing disk I/O saturation. The replicas reached the maximum IOPS allowed by their attached storage volumes. This led to increased CPU utilization in the “iowait” state, further degrading performance.
“Scaling down by removing excess capacity is significantly faster and less disruptive than scaling up under pressure”
The root cause was that the replicas’ IOPS were under-provisioned for the workload they needed to handle. To resolve this, we began scaling out additional replicas. However, adding new replicas was time-consuming due to the size of our data – restoring backups to new instances and allowing them to catch up with replication took several hours. During this period, standard operations in the Inbox were 1.5 to 3x slower than usual, with Workload Management most affected – slowing to between 5x and 10x normal latencies.
Our takeaway from this is that we will significantly over-provision all clusters before moving load onto them. Scaling down by removing excess capacity is significantly faster and less disruptive than scaling up under pressure.
Configuration changes and unexpected interactions
We also faced challenges when certain configuration changes interacted poorly with application behavior. For instance, increasing the transaction pool size and the maximum transaction duration seemed beneficial in isolation. However, combined with a surge of scheduled operations – for example, bulk unsnoozing of conversations on the hour – these changes led to resource saturation. The database was flooded with long-running transactions, causing latency and errors impacting the Inbox.
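This interaction can be reasoned about with Little’s law: the expected number of in-flight transactions is roughly the arrival rate multiplied by the average transaction duration. Raising the maximum duration at the same moment a scheduled burst raises the arrival rate multiplies pressure on the pool. A back-of-the-envelope check, with entirely hypothetical numbers:

```python
def inflight_transactions(arrival_rate_per_s, avg_duration_s):
    """Little's law: L = lambda * W, the expected number of concurrent transactions."""
    return arrival_rate_per_s * avg_duration_s

pool_size = 200  # hypothetical transaction pool size

# Steady state: 100 txn/s at 0.5 s each comfortably fits the pool.
assert inflight_transactions(100, 0.5) <= pool_size

# An on-the-hour burst of 500 txn/s of slower bulk operations (2 s each)
# needs 1000 slots - five times the pool - so requests queue and time out.
burst = inflight_transactions(500, 2.0)
assert burst > pool_size
print(burst)  # 1000.0
```

The lesson is that pool-sizing and duration limits can’t be evaluated one knob at a time; they have to be checked against the product of the worst-case rate and the worst-case duration.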
The road ahead
Our migration to Vitess is more than just a technological upgrade; it’s a strategic move to future-proof our database architecture for the next decade and beyond. By embracing Vitess and partnering with PlanetScale, we’ve positioned ourselves to provide even greater reliability, scalability, and performance for our customers.
“The lessons we’ve learned and the mitigations we’ve implemented have set us up for success as we continue migrating our remaining infrastructure”
So far, we’ve successfully migrated our databases related to our AI infrastructure and one of our most critical databases powering the Inbox. These early migrations have validated our decision and provided invaluable insights. The lessons we’ve learned and the mitigations we’ve implemented have set us up for success as we continue migrating our remaining infrastructure.
Looking ahead, we’re excited about the possibilities that Vitess and PlanetScale open up for us. The native sharding capabilities will allow us to simplify our database architecture, reducing complexity and operational overhead. Our teams can focus more on delivering innovative features and less on managing database intricacies, ultimately enhancing the experience for our customers.
Pioneer 2024: Intercom’s first ever AI customer service summit, in summary
We’ve just hosted our inaugural AI customer service summit, Pioneer, and there are so many incredible insights and stories to share.
Thousands of customer service and tech enthusiasts joined us, both in person in London and online via livestream, to explore how AI is transforming the support space. The energy and excitement were evident through all the talks, customer sessions, and enthusiastic conversations.
The event was a true celebration of the pioneers leading the way at a time of great change. We announced Fin 2, the most advanced AI Agent in the industry, put the spotlight on our wonderful customers who spoke about how they transformed their support in an AI-first world, and heard from our Co-founder and Chief Strategy Officer Des Traynor on how AI is getting real.
Renowned tech writer Benedict Evans spoke about cutting through the AI hype of the moment to realize actual real-world value, and in particular how to think about the changes ahead.
Some of our amazing customers shared their lessons and experiences adopting AI-first customer service, sparking lots of conversations among the audience.
And we ended the day with a special live recording of our Off Script series with two incredible guests: English actor, author, and comedian Stephen Fry and musical pioneer, visual artist, and activist Brian Eno. They discussed what technology has meant for them, and what AI means for society at large.
You can watch the on-demand recordings here and catch some of the key takeaways below. Enjoy.
Next-generation, now
Our Chief Executive Eoghan McCabe kicked the day off with a big vision of how AI is transforming customer service. He also articulated the big scope of Intercom’s ambition. As he put it, “Our commitment to you is that you’ll never find a better-performing or feature-rich customer service AI agent anywhere on the market.”
To reinforce that commitment, our Chief Product Officer Paul Adams launched Fin 2, our next-generation AI agent that delivers our highest resolution rates with powerful new capabilities to handle your frontline support. In Paul’s words, “Fin 2 is the first AI agent that delivers human quality service. That has been our mission and it can do it. We’ve built it in partnership with all of you.”
This new generation AI agent is the culmination of so many years building customer service tools and incredible effort on the part of our product teams. And this technology is only getting better from here. An inspiring keynote at an inspiring time for the industry.
Lighting the way forward
Intercom Co-founder and Chief Strategy Officer Des Traynor spoke about how this AI thing is getting real. That doesn’t mean it will be smooth sailing, as he admits – we’re somewhere near the peak of expectations in the AI adoption hype curve. Expect to see a lot of people questioning the value – asking hard, skeptical, but sometimes important questions.
But as Des explained, eventually every aspect of a product will change – who uses it, who buys it, the pricing, performance, and more. The whole landscape will get rewritten across all industries. But it won’t happen all at once or as quickly as we think. Instead, the evolution of AI will be like the adoption of electricity, as Des put it. Electricity didn’t happen overnight, but when it did, it fundamentally transformed everything.
“Electricity didn’t happen overnight – it was 1879 when Edison filed a patent for the light bulb,” Des says. “It was the 1900s before things like London lit up. And then the second order effects, well, the 9-5 that we all know and love came from the fact that electricity existed. Then late night shopping and shift work. All of these things were only possible because of the rollout of electricity. That’s the sort of second and perhaps third order effects from one dude filing a patent for a light bulb in 1879.”
The future, as Des sees it, is bright and getting brighter.
Paradigm shifts
“World-renowned influencer and AI expert” Benedict Evans, as he jokingly described himself, delivered a typically perceptive presentation on how to think about the ways in which AI will transform industries such as customer service.
Evans cited examples of how technological transformations have previously disrupted industries to guide how to think about what happens next – basically, there are no clear answers, but boundless possibilities ahead. As Evans pointed out, our assumptions around the trajectory of these technologies are often misplaced – spreadsheets didn’t spell the end for jobs in accountancy and finance, for example.
Evans followed it up with a fascinating Q&A session with Des, where they explored the implications for customer service, and the key foundational shifts that will determine how this generative-AI era plays out.
“Every 15 years or so we’ve gone through one of these platform shifts,” Evans pointed out. “It changes how we do our work, changes what the tools are, changes what tools can be built, changes what kinds of companies and what kinds of products we all use.”
A personal history of technology with Fry and Eno
The day concluded with an extraordinary meeting of minds, as Stephen Fry joined Brian Eno for a live recording of our Off Script series.
And Off Script it most certainly was – a winding, illuminating conversation about their respective relationships with technology. They swapped stories about how tools such as the Mac and the synthesizer shaped their careers and lives, and how their initial enthusiasm for technology has given way to a good deal of skepticism as the impact of social media has become a greater part of our lives.
The pair have enjoyed a front-row seat in the world of technological transformation over the past half century – as became clear with first-person anecdotes featuring the likes of Apple designer Jony Ive, pioneering AI researcher Marvin Minsky, and Amazon founder Jeff Bezos, to name a few.
They also shared their thoughts on how AI will affect culture and society – a cautious perspective, it’s fair to say, informed by their own creative efforts and concerns about the impact of AI on the entertainment and arts industries.
Ultimately, though, Fry and Eno reflected on the potential of AI technology to transform lives for the better, when used in the right way, for the right reasons.
“Let the machine do what the machine does,” Fry said, “and the better machines do things the more attention you can give to what it is that humans do and what humans are. As AI takes over various clerical and bureaucratic jobs, logistical jobs, and so on, the more your work every day will be about people, it will be about imagination, it will be about creativity, it will be about fresh thinking.”
Innovator sessions
Throughout the day, we had wonderful conversations on the Innovator Stage with three of our customers: Angelo Livanos, Senior Director of Global Customer Support at Lightspeed Commerce; Natalie Hurst, Director of Customer Success at Nuuly; and Constantina Samara, Head of Support at Synthesia.
These customers are seeing real results using AI, right now. And they shared just how much Fin is transforming their support operations, team dynamics, and customer experience. If you missed the live conversations, we’d highly recommend you check out the on-demand recordings. But in the meantime, here’s a quick recap:
Lightspeed Commerce
As an AI-forward company, Lightspeed was excited to use AI tools like Fin AI Agent and AI Copilot to do their jobs better, enjoy their work more, and deliver better experiences for their customers. But with hundreds of agents supporting customers in multiple regions and languages, they couldn’t just flip a switch and roll AI out overnight.
To make sure their teams were set up for success to make the most of Fin, they placed a great deal of focus on training, ongoing support and enablement, clear and frequent communication, and cementing alignment on the ultimate vision and goals.
Angelo spoke in depth about how Lightspeed navigated this period of change. By bringing the whole company on the AI journey with them, Lightspeed’s support team were able to generate a ton of excitement across the company. As Angelo put it, “It’s building a bit of a cult following of people that are saying, ‘This has done some pretty great stuff. How do we tap into this now?’”
The great stuff in question? Resolution rates of up to 65% and 31% more conversations closed daily by agents. So it’s easy to see why there’s such company-wide excitement.
You can catch up on our full conversation with Angelo here.
Synthesia
A 690% increase in customer contact in just four months is almost hard to imagine, but that’s exactly what Constantina and her team at Synthesia faced this year. Instead of 40,000 customers seeking support each month, they were suddenly seeing 316,000.
When that happened, Constantina’s priority was to leverage AI and automation to help her team manage that spike – which (spoiler alert) they did, with great success.
As Constantina said, “Without the level of automation we have with Fin and Intercom, I’d have needed a team of 150 people to manage that.” But by leaning on these tools, they empowered a whopping 98.3% of the 316,000 customers seeking support to resolve their query themselves. In other words, only 1.7% of those customers needed to speak with the team.
So not only were they able to scale support to tackle an enormous spike, they actually freed up time for support agents to deal with meaningful conversations on a daily basis, and even explore other exciting opportunities to create impact – like what “premium support” could look like, and how they could offer support as a service.
You can dig into the details of our chat with Constantina here.
Nuuly
With support volume on the rise, Natalie expected to have to significantly grow her team to keep up with demand. Natalie explained that for her, around 50 associates is a sweet spot for a support team; anything larger makes her feel disconnected from each individual employee.
So therein lay the challenge for Nuuly: how could the team meet increasing demand without dramatically adding headcount?
The answer? (One more spoiler alert) Fin.
Since adding Fin to the team and embracing a human-AI approach to support, Natalie has been able to free up her support associates to handle queries that require human empathy and judgment, and spend more time building strong relationships with their customers and teammates. The combination of Fin and other Intercom automation features has also enabled Natalie to slow projected staff growth by 40%, which lets Nuuly’s tight-knit support team maintain their size and culture.
You can check out our chat with Natalie here.
Pioneer innovator spotlight: How Synthesia managed a 690% spike in customer contact without increasing headcount using AI and automation
We spoke with Constantina Samara, Head of Support at Synthesia, about the impact of AI and automation on scaling support in a cost-effective way, changing customer attitudes towards these technologies in customer service, and lessons learned from rolling out AI.
In just four months, the number of Synthesia’s customers seeking support on a monthly basis rose from 40,000 to 316,000 – a 690% increase. To meet this level of demand without AI and automation, the team would have needed to grow to 150 people, but with Intercom, they were able to swiftly tackle the spike without increasing headcount, all while reducing resolution time by 96% and maintaining high customer satisfaction.
Let’s take a closer look at how they did it.
Can you tell us a bit about Synthesia, your role, and how you came to be a customer service leader?
Synthesia is an AI video generation platform that enables our customers to create studio-quality videos with AI avatars and voiceovers in over 120 languages. My role is Head of Support, and my path to becoming a leader in this space actually happened sort of unexpectedly. I have a background in psychology and an interest in human behavior, so when I found myself in a customer service role, I wanted to apply my background to analyzing interactions with customers and teammates and understanding what makes people respond in certain ways to different situations.
“I’m really passionate about service being the best service”
I’m really passionate about service being the best service, and by understanding the behaviors of your customers and your team, you can use those insights to create the best possible experiences for them.
What motivated you to implement AI with Intercom?
We’re a fast-growing business, and naturally, as our customer base scales, our support volume increases alongside it. We were seeing our customer contact rate rise by anywhere from 20-30% month over month, which was becoming challenging to manage with the number of people we had on the team.
We knew we needed to leverage AI and automation to help us, and that Intercom had the tools to help us do that.
Did you face any challenges during the implementation? How did you go about solving them?
We encountered two main obstacles that we needed to overcome:
1. Preparing our knowledge base
We started testing Fin AI Agent as soon as it was in beta, and that’s when we had the hard realization that our knowledge base was really not fit for purpose. The responses we were getting back were kind of all over the place – Fin was contradicting itself because our knowledge base was clearly contradicting itself. So I’d say the biggest challenge for us was having to almost redo our knowledge base. Of course, we had a bit of a starting point, but it was a big piece of work.
In the early stages of optimizing our help content, we joined Fin in every second customer conversation to make sure we were getting it right. That was challenging at the time, but really beneficial and has helped hugely in the long run. We can now be sure that Fin has what it needs, and that our knowledge base is up to date and accurate. So even for customers who navigate to the help center and don’t open a conversation with Fin, they now have much better information available to them too.
“The return on investment you see when you’re successful with [Fin] far outweighs the cost of having to introduce a role or two to make it a success in the first place”
To help us revamp our knowledge base and get it ready for Fin, we needed to hire people to manage that work. That’s obviously an extra cost, and something I’d imagine many businesses are having to weigh up right now, but the return on investment you see when you’re successful with it far outweighs the cost of having to introduce a role or two to make it a success in the first place.
2. Getting buy-in from the support team
The second challenge we faced was implementing it in a way that the support team was on board with. And that ultimately came down to how we positioned the rollout.
Something I hear a lot in the customer service space – and that we encountered ourselves – is that there’s a fear around AI taking jobs and meaning support teams won’t be required anymore. So we wanted to offer our team reassurance that we were bringing in AI to alleviate pressure and enable them to be more satisfied and fulfilled in their roles, rather than answering refund questions over and over, every single day.
It’s a really fine balance between introducing automation and maintaining team engagement. Because automation can be great, but if you’ve lost engagement with your team and they don’t have the same passion and energy to provide the same level of service that they did before automation, you’ve lost human support.
Once the team actually started using AI, they were able to truly realize its impact and the opportunities it created for their roles. They suddenly had more time to do investigative work and actually learn and grow, whereas before they were just doing those repetitive tasks. Without us even going out to the team and trying to gather that feedback, they were coming to us and saying, “We haven’t seen questions about X for a very long time,” and we were like “Yeah, because Fin resolved 1,000 of them.” That was a big milestone moment where the benefits came to the forefront for the team – that they were able to deal with meaningful conversations on a daily basis.
Fast forward to now and we’ve never seen so much engagement in support. If anything, Fin has now increased appetite on the team to introduce as much AI and automation as possible.
What level of impact have AI and automation had on your support operations? Any highlights or metrics you could share?
The ability to manage our rising customer contact rate was definitely a highlight for us. Like I mentioned, we were seeing a 20-30% increase in customers seeking support month over month, but between April and August of 2024 alone, we saw an increase of 690%. Instead of 40,000 customers seeking support each month, we were suddenly seeing 316,000.
“Even if we continue to see a big increase each month, I don’t anticipate us having to increase our team headcount for a significant amount of time”
Without the level of automation we have with Fin and Intercom, I’d have needed a team of 150 people to manage that. But with their tools, we were able to handle the spike without having to grow our team to meet the demand. In fact, of the 316,000 customers seeking support in August, 98.3% were able to resolve their query through self-serve support, which meant only 1.7% needed help from our agents. And even if we continue to see a big increase each month, I don’t anticipate us having to increase our team headcount for a significant amount of time. I think that showcases the gravity of the benefits that we’re getting from Fin AI Agent and Intercom’s other automation features.
Outside of that, we’re also seeing results in other areas, like:
- CSAT: Our human CSAT is consistently high, currently sitting at 93%. And since implementing Fin, our Fin CSAT has actually doubled. One thing that I keep finding across different industries and people I speak to is that there’s a fear of frustrating customers and decreasing customer satisfaction by introducing AI and automation. I personally think it’s really important to bust that myth and let people know that’s actually not the case. We’ve got some really good stats that can evidence that. You just need to invest the time in setting it up properly.
- Fin AI Agent answer rate: Our Fin answer rate is anywhere up to 98%, which in my opinion is really good. That means that in nearly all of the conversations it’s involved in, it’s able to understand and provide an answer to a customer’s question.
- Fin AI Agent resolution rate: Right now our resolution rate with Fin is 55%, which frees up a lot of time for our team. Our goal is to get that number up to 80% in a controlled manner, so that’s a big area of focus for us in the coming months.
- Resolution time: Since launching Fin, our resolution time has gone from five days and five hours to four hours and 37 minutes – a 96% decrease.
Do you think AI has changed customer behavior at all? If so, how?
I’ve noticed a remarkable change in customers when it comes to Fin, specifically. Automation is not new to support, but more often than not, you’d find that customers would greet any level of automation with dissatisfaction straight away and seek human support. And I think that was down to lack of intelligence behind those automations in the past, where it was always a tick box activity of sending something out to the customer that wasn’t really relevant or didn’t cover what they were trying to achieve. Whereas with Fin, the change in customer behavior I’m seeing is that they’re a lot more receptive.
“That level of intelligence that sits behind [Fin] has really changed the dynamics with customers and automation”
I’ve got so many examples of conversations where customers are thanking Fin for giving them the right response. So that level of intelligence that sits behind it has really changed the dynamics with customers and automation.
What does the next chapter of AI-first customer service look like at Synthesia?
AI isn’t a “turn it on and let it work its magic” kind of technology. It requires maintenance and optimization to make sure it’s successful. So we’ll continue to enhance our knowledge base and train Fin to give the best possible answers to our customers, and part of that will be identifying our outlier questions to further expand Fin’s coverage rate across our support volume.
And now that we have more time freed up on the team, we’re exploring what “premium support” could look like and how we can offer support as a service. There’s no way we would be able to do that if we didn’t have the level of automation we have with Intercom and Fin and if it wasn’t successful.
I’m also really excited for the next wave of Intercom’s AI features. We’ve gotten a preview of what’s in the works, and I think those new features will help us achieve our goal of reaching an 80% resolution rate and continue to scale our support in a way that protects our team and is cost-effective. For example, Fin being able to take actions and read data in the background, or being able to customize its tone of voice. These will completely change how we interact with and support customers.
What advice would you give to other customer service leaders embarking on this journey based on your own experience? Any lessons learned?
One lesson I’ve learned is to involve your teams in the process from day one. Let them know what it is that you’re trying to achieve and make them part of the objective. Tell them what the problem is and have them be part of how you scope this. Because nine times out of 10 that enables them to get bought into what you’re trying to do. And not only that, but it also gives them the chance to highlight challenges and issues your customers and support functions are facing that you’re not necessarily aware of.
So even though we’re seeing record levels of engagement in support now, had I brought the team in prior to going live and made them part of the implementation process, I think we would have seen more engagement from the outset. That was a big lesson for me.
Pioneer innovator spotlight: How Nuuly resolves 38% of queries instantly with Fin AI Agent and maintains 95% CSAT
We spoke with Natalie Hurst, Director of Customer Success at Nuuly, about her motivation for adopting AI in customer service, the immense impact it’s had for both Nuuly’s support team and customers, implementation challenges she encountered – and overcame – and her vision for the future of AI-first customer service.
Nuuly has been a long-time Intercom customer, and initially chose the platform because it combined powerful elements of automation with a personal, human approach to customer service. That still rings true in an AI-first world. Now, the Nuuly team is able to leverage Fin AI Agent to complement their existing automation and workflows, making the customer experience smoother, faster, and more efficient – and their support associates’ jobs more fulfilling and exciting. With AI resolving a large chunk of their support volume, Nuuly’s support associates have more time to tackle queries that require human empathy and judgment – and importantly, to continue building strong relationships with their customers.
This human-AI approach has enabled the team to resolve 38% of queries instantly, reduce response times by 20%, and maintain an impressive CSAT score of 95%.
Let’s find out how.
Can you tell us a bit about Nuuly, your role, and how you became a leader in customer service?
Nuuly is owned by Urban Outfitters Inc. and is a curated fashion destination for anyone who loves fashion and is exploring how to wear and buy in ways that are gentler on the planet and their wallets. I’m the Director of Customer Success at Nuuly and lead our support team.
I’ve had experience in various roles before I came to customer service. I started in sales, planned to move to HR, and got an opportunity to take a customer service role in the fashion industry and progressed to a leadership position from there. My roles have always had one common factor throughout, and that was a deep care for people. I’ve always loved building relationships with customers, coworkers, and employees and helping them be successful. And working in customer service at Nuuly with such a passionate subscriber base makes the job incredibly fun and easy.
What motivated you to explore AI solutions for your customer support?
We wanted to get ahead of AI functionality as quickly as we could and be an early adopter of the technology. We actually brought someone on full time to help us explore AI and what it could do for our team around the same time that Intercom was announcing Fin. That worked out really well because we had a dedicated person focused on implementing Fin in a way that made sense for Nuuly and could incorporate all of our brand personality. Ultimately, we wanted to maintain the customer journey that we had already created and have Fin be an added layer of efficiency.
There were a couple of related pain points we were trying to address with AI:
1. Maintaining team size and culture
I wanted to slow down the rate at which we were adding headcount to meet rising demand for support. I’ve witnessed a number of rounds of layoffs throughout my career, and I’m always conscious of not growing the team too fast and running the risk of needing to let people go.
“I think a big contributing factor to our high CSAT scores – which are consistently at 95% or above – is that our support team have a genuine connection to our customers”
For me, I have found that around 50 associates is a sweet spot for a support team. Anything larger makes me feel disconnected from each individual employee and it’s harder to create a space where employee and customer feedback is heard, recognized, and actioned on. I think a big contributing factor to our high CSAT scores – which are consistently at 95% or above – is that our support team have a genuine connection to our customers. That’s very rare, particularly for the fashion and ecommerce industries.
2. Keeping our contact rate at a manageable level
“Contact rate” is an internal metric we track and is calculated as the percentage of our total subscriber base that reaches out to support each month. We’re a growing business, and as we get more subscribers, we want to make sure the number of conversations hitting our support associates doesn’t explode and overwhelm them. We knew that Fin AI Agent would be the key to doing that.
For reference, our team was struggling with high contact rates at the end of 2022. Anywhere from 30-40% of our subscribers were seeking support every month, which was a lot. Since implementing Fin (which we call “ChatCat”), we’ve dropped that number by 11%, which has made a huge difference for our team.
What were the main challenges you faced while implementing AI, and how did you go about tackling them?
We encountered challenges in two main areas:
1. Knowledge management
One of the things we did not have set up initially was help articles. So when we brought someone on full time to explore AI, that was the biggest part of their job – getting all the information we needed to feed Fin into Intercom. But once the knowledge base for Fin was up and running, it was a very easy flip of the switch.
2. Getting team buy-in
A big obstacle we faced was initial skepticism within our team. People don’t like change, and in customer service, there are a million different processes, steps, tools, and things to remember, so asking teams to embrace something new can be difficult, even if it’s there to help make their jobs better.
“If you want to have fun interactions and take on challenging questions vs ones that are really easy to solve and are just kind of mindless, AI is the way to do it”
It took some convincing and showing them how it works – and that it works really well – to get them fully bought in. We knew it was important to demonstrate just how much of an opportunity AI presents; if you want to have fun interactions and take on challenging questions vs ones that are really easy to solve and are just kind of mindless, AI is the way to do it. We have a very large team of empaths who like to build relationships both within the team and with our customers. The bigger the team gets, the harder it is to foster those relationships. They understand that AI helps to solve that and now they’re really enjoying not seeing huge queue numbers looming all the time.
Outside of the team, we didn’t have pushback from a leadership perspective and had no security concerns around adopting AI and Fin, which was great. We’ve worked with Intercom for a long time and have a lot of trust in their platform.
What impact have you seen since implementing AI with Intercom? Any “big win” moments you could share?
Since we rolled out Fin, we’ve seen strong results in a number of areas, like:
- Resolution rate: Fin is resolving 38% of conversations it’s involved in right now, which frees our support associates up to work with customers on more complex issues and build strong relationships with them.
- Response time: We’ve reduced our response time by 20% now that Fin is tackling the simple and repetitive queries, so our customers are getting help quicker.
- Staffing forecast: With Fin and Intercom’s other automation features helping us manage our support volumes, we’ve been able to slow projected staff growth by 40%. This lets us maintain our team size and culture, and has also allowed us to be more selective in hiring and get the best of the best candidates.
- CSAT: Customer satisfaction is one of our North Star metrics, and our human-AI approach to support has enabled us to maintain a CSAT score of 95% and above. That comes down to us being able to provide fast and efficient support with AI handling queries that are repetitive or quick to resolve, and our associates having more time to focus on building genuine connections with customers.
The impact we see goes beyond traditional support metrics too. Our business is subscription-based, so we want our customers to come back every single month. One of the things we focus on as a support team is how we can help to retain those customers, and our human-AI approach enables us to solve their issues quickly and efficiently while creating a great experience along the way, which is such an important part of that.
The biggest win we’ve seen to date with AI has been using ChatCat (our customized Fin AI Agent) to reduce human involvement in multitouch conversations and using it to do initial triage. For example, when we need to ask customers for a photo or more information to help them with a query, we could be waiting anywhere from an hour to 10 days to get that information. In those cases, the team was struggling to decide whether to snooze or close conversations or give the customer a nudge. But now, Fin asks for that information before passing the conversation to an agent, which keeps the queues moving and results in faster turnaround of conversations and speedier response times.
What advice would you give to other companies that are hesitant about adopting AI in their customer support functions?
When it comes to AI, you can’t just do it on a whim and let it do its thing, because it may not be as successful as you’d hoped. You really need to think about your org structure and what the future of your team should look like in this new AI-first world. It’s going to be totally new moving forward. It’s also important to look at things like your help content, processes, and workflows to understand if or how well they’re set up for AI. Once you’ve done some preparation in those areas, it’s an easier flip of the switch to turn AI on – and start seeing results.
“Think about how you can lay the foundation now so you can continue to build on it in the future”
Managing AI is going to be an ongoing journey with lots of iterations and optimizations along the way. Think about how you can lay the foundation now so you can continue to build on it in the future.
What is your vision for the future of AI in customer service?
The biggest thing for me is the new career growth paths that are going to come out of this. Customer service has always been stuck in a corner when it comes to growth. Opportunities for growth within a customer service team can be few and far between, unless there is movement from upper management. Ultimately that means really great associates who want career growth have to leave to go somewhere else to fulfill that. AI is giving us the opportunity to keep that talent in house – creating new career paths and opening up roles that will lead to a really strong and varied set of skills on support teams.
I’m also excited to explore how we can further integrate Intercom’s AI features into our processes – not just for customer interactions but for backend workflows as well. Customer-facing features like being able to tailor Fin’s tone of voice to fit our brand’s unique personality are going to be huge for maintaining a personal connection with our customers. And on the backend, having Fin actually take action on things for us is going to mean that even fewer conversations need to reach our associates. That’s incredibly exciting to me.
Pioneer innovator spotlight: How Lightspeed achieves up to 65% resolution rate with Fin AI Agent
We spoke with Angelo Livanos, Senior Director of Global Support at Lightspeed Commerce, about burning topics in the customer service space right now, like getting stakeholder buy-in for AI, approaches to rolling out the technology, managing change, and keeping a pulse on employee and customer satisfaction.
Since implementing Intercom’s Fin AI Agent, the Lightspeed team has seen impressive results, such as AI resolution rates of up to 65%. Having seen such success with AI Agent, they decided to roll out Fin AI Copilot, which has resulted in their support agents being able to close a whopping 31% more conversations daily.
Let’s see how they’re doing it.
Can you tell us a bit about Lightspeed, your role, and how you became a leader in customer service?
Lightspeed is a one-stop commerce platform that empowers merchants around the world to simplify, scale, and provide exceptional customer experiences. I’m the Senior Director of Global Support for our hospitality business and have been in the customer service space for about 18 years.
I started on the front lines, like many support leaders, and have held various support-oriented roles during that time, particularly in the tech space – telecommunications, cloud services, hosting, and data centers, etc. I have always loved tech and understanding how it all works at a deeper mechanical level. Being able to pair this passion with helping others was something I naturally gravitated towards. Over time I was fortunate enough to transition into leadership roles where I was able to focus on the development of operations, people, tooling, processes, and strategy.
“Customer service has always been a passion for me”
My journey has been very linear, progressing naturally as I gained experience and moved into roles that allowed me to focus on developing others and creating strong contributors within the teams I led. Customer service has always been a passion for me, and I’ve worked to prove that it can be a rewarding long-term career, not always just an intermediary stepping stone to something else.
What were the driving factors behind deciding to adopt AI with Intercom, and did you face any initial implementation challenges?
Lightspeed has always been a very AI-forward company. We had been exploring opportunities to leverage AI technology internally as well as for the benefit of our customers through product enhancements. We also have a great relationship with Intercom, so we were excited to explore their AI-powered features as soon as they became available to test.
Personally, I feel like it’s a really exciting time to be in the support industry. I’ve been in this space for 18 years and typically, support is saddled with dated and clunky home-grown case management tools. We’re now in an era where Intercom and other platforms are building support-first, amazing tools like Fin AI Agent and AI Copilot that allow us to do our jobs better, further enjoy our work, and deliver better experiences for our customers. That’s one of the great things about Intercom – the speed at which the team ships new features and improvements. The velocity has been awesome. We see that so rarely in the wider SaaS market, whereas with Intercom, every other day you’re like “hey, check out this new thing,” which gets our team excited and constantly adds value for us.
In terms of implementation challenges, the biggest one for us was change management. We were all on board with rolling out AI, but with a contact center of hundreds of agents supporting our customers in multiple regions and languages, we couldn’t just flip a switch overnight.
How did you go about solving that challenge?
We wanted adoption and go-live to be smooth, so we focused on doing plenty of training and enablement, testing, and coordination with teams from across the company – it was very much a ballet to make sure everything lined up and that everyone was equipped to succeed.
1. Getting set up for success with training
Training was the first key focus area. We leveraged Intercom Academy for the foundational training, which provided our team with a solid understanding of the tools. We also have an in-house training team that developed training modules specifically tailored to our own processes and workflows at Lightspeed. We ran these training sessions in the weeks leading up to the rollout to ensure that our team members were well prepared. This was particularly important given the diversity of our operations, which span multiple languages and geographical locations.
2. Providing ongoing support and enablement
After the initial training, we implemented a “hypercare model” post-launch. For several weeks after the go-live date, we had dedicated Slack channels and forums where team members could ask real-time questions or flag any issues they encountered. This allowed us to address concerns quickly and fine-tune configurations as needed, ensuring that the rollout was as smooth as possible.
3. Focusing on clear, frequent communication
When it came to change management, we recognized that the key to success was not just in training but in communication. We made sure that all relevant teams were informed about the upcoming changes well in advance – this included both the support agents and the peripheral teams who might be impacted by the new workflows. We provided adequate notice and clear instructions on what to expect, which helped to minimize any surprises on the day of the launch.
4. Cementing alignment on the vision and goals
We understood that change management in a large, geographically dispersed team required a coordinated effort, so we worked closely with our leadership team to ensure that everyone was aligned on the goals and benefits of the AI implementation. By involving various stakeholders early in the process, we were able to build a consensus and foster a sense of ownership across the organization.
What did the tech rollout process look like?
The tech side was really fascinating for a few reasons:
1. We had the benefit of having a lot of products with their own Intercom workspace, so we were able to A/B test rolling out AI in a very responsible way.
We were privileged in this sense. It allowed us to implement AI in one product and not another to see what the difference was, as well as do isolated testing.
2. We found that the slower rollout model was not the optimal approach.
I’ll use rolling out Fin AI Agent as the example here. In our slow approach, say we had 30 topics that we knew our team usually got; we picked one to test with Fin, monitored performance, and then gradually exposed it to more and more topics over time. Whereas with other products, we cast a much wider net, exposing Fin to more up front. We actually found that by expanding the dataset early on, we saw better results and were able to gain enough data at scale to actually determine strengths and areas to optimize and improve.
“The challenge with incremental testing and implementation is that you’re getting so little data”
The challenge with incremental testing and implementation is that you’re getting so little data. It’s hard to look at that data on its own and confidently marry it up with all the other metrics you typically report on. If they move up or down by a percentage point, there’s no way to really attribute it to that test because they’re too far removed. But if you apply it to the whole experience and see a material change in anything, you’re able to draw a more solid conclusion and prove value a lot quicker. When we were reporting to more senior stakeholders, showing them those bigger, impactful numbers was definitely effective in showing value and improvements.
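One rough way to quantify this point: under a simple binomial model (an assumption of ours, not something stated in the interview), the uncertainty on a measured rate shrinks with the square root of the number of conversations, so a narrow pilot produces error bars far wider than a percentage point:

```python
import math

def stderr_of_rate(rate: float, n: int) -> float:
    """Standard error of an observed rate (e.g. resolution rate) from n conversations."""
    return math.sqrt(rate * (1 - rate) / n)

# Hypothetical: a 50% rate measured on a small pilot vs. a wide rollout
print(f"n=200:    +/-{1.96 * stderr_of_rate(0.5, 200):.1%}")    # roughly +/-7 points
print(f"n=20000:  +/-{1.96 * stderr_of_rate(0.5, 20_000):.1%}")  # well under 1 point
```

At pilot scale, a one-point swing in a top-line metric is indistinguishable from noise, which matches the attribution problem described above.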
What results have you seen since implementing Fin?
One of the biggest changes was our human/AI handling ratio. On day one of rolling it out via our “all out” approach, we were seeing between 35-40% of our conversations not needing a team member. And pretty quickly we were able to fine-tune that to a point where we’re now seeing upwards of 60% AI handling across most of our products.
When you consider that we manage high conversation volumes each month, that creates a very positive impact in our operations. We’re dramatically augmenting where and when our team engages with our customers and that has a knock-on effect on our resourcing requirements and the allocation of where people spend their time. It has freed up our agents to focus on so many other areas of impact for our customers and has given us the space to rethink our career progression paths and the roles within our team.
Outside of that, we also saw results with AI in a number of other areas, like:
- Resolution rate: Fin is currently resolving between 45-65% of our support volume across our workspaces.
- Involvement in and ability to answer queries: Fin is now involved in 99% of our conversations, and is able to provide an answer to 95% of them – even more complex ones.
- Faster handling times: Now that the triage of a new conversation is being done by Fin, human time is being reduced on each case.
- Reduced training time: With Copilot, our training times are being reduced because we don’t need to do as much long-tail classroom training – new agents are able to leverage the AI for that level of help.
- Cost to serve: AI is perfect for those really basic, low-complexity/education questions. It’s so much more effective to use features like Fin to resolve those questions for a much lower cost so our agents can be freed up to handle the complex ones.
- Customer satisfaction: ChatGPT and similar AI tech is now a part of many people’s day-to-day lives. Our customers are responding well to having access to our AI agent and the same friendly team available if needed. Our CSAT has remained stable since our rollout.
There are all of these auxiliary benefits that in isolation are hard to link to a dollar amount, but the roll-up aggregate benefit is quite tangible.
How did your exec team, support agents, and folks from across the company react to the implementation of AI?
Exec team
The tipping point with regard to getting buy-in and a positive response from the executive team was being able to sandbox Fin AI Agent. That allowed us to see – and show – what it’s capable of firsthand. We set up non-customer facing tests and were really impressed with the results, and when we were able to quickly show its effectiveness and impact, there was a lot of excitement internally. So overall, when it got to the stage of putting Fin in front of merchants, we had a good level of support from our leaders.
“We were very open with the business and brought everyone on the journey with us”
I think us being very forthcoming with the results and the data as we were getting it also helped. We were very open with the business and brought everyone on the journey with us. We also made sure that we engaged our internal legal and security teams to ensure compliance.
Support team
There were two distinct cohorts on the global team – one being long-time Intercom users, and the other being agents that had never used Intercom. And Fin AI Agent got a very positive reception from both.
For agents already using Intercom, the impact was primarily on volume – fewer basic queries reached them, leaving space and time to focus on more complex issues. The AI triage allows the team to jump into the meat of the discussion faster.
For agents that were new to Intercom, they were getting big lumps of goodness all at once. Coming from other customer service tools, they were getting an all-new platform with a nicer composer and workspace to work in, plus AI benefits like decreased volumes and more time.
“It’s building a bit of a cult following of people that are saying, ‘This has done some pretty great stuff. How do we tap into this now?'”
One thing that did need to be adapted was the training program for new team members. With AI handling a lot of the basics, new agents weren’t getting exposed to the fundamentals as much and were getting harder queries upfront. I wouldn’t say that was a downside, it was just something that we had to adjust our training programs to account for.
Wider company
And outside of just our team, one of the funniest dynamics I’ve observed is how many people now ask me with pure, positive curiosity how Fin works. I’m getting a lot of questions now from non-support parts of the business, like “How does it do that?” and “How can we get access to it?” And it’s building a bit of a cult following of people that are saying, “This has done some pretty great stuff. How do we tap into this now?”
What are you most excited about for the future of customer service in the AI era?
The next big leapfrog change I see happening in the AI space is the analytics component. There are so many opportunities to leverage AI for data analysis that I’m excited to take advantage of. For example, I believe that AI will:
- Reduce the need for reliance on human categorization.
- Add depth to data and insights.
- Allow us to better detect opportunities to improve.
- Allow for analysis at scale in a quicker and easier way.
- Fuel stronger collaboration between customer service and product teams by being able to capture more detailed customer feedback and sentiment at scale.
I think AI is really going to unlock a lot of interesting stuff from a reporting standpoint, so I’m excited for what comes next.
Fin 2: Powered by Anthropic’s Claude LLM
I’m excited to announce that Fin 2, our latest generation of Fin AI Agent, is powered by Anthropic’s Claude, one of the most sophisticated Large Language Models (LLMs) available today.
We build product at the very boundary of what’s possible in AI customer service, and our collaboration with Anthropic is a big step forward for us. With Claude we get intelligence, performance, and reliability that lets us deliver even more value to our customers with Fin. It’s not just about faster responses (though it does that); it’s about redefining how AI can enhance customer service.
Clearly Anthropic agrees as they have chosen Fin 2 as their own customer service AI agent. This, to me, is a strong validation of the technology we’ve built and the value that Fin is delivering for our customers. It’s also a good counterpoint to folks who are still thinking about rolling their own RAG bot :-) (PS, you should probably run less software in general.)
Why Claude? Why Now?
We constantly improve Fin by running millions of conversations through hundreds of A/B tests. Our performance criteria are things like answer accuracy, resolution rate, CSAT, human assessment of the answer quality, and many more things that our AI team won’t let me share. We constantly test out new models and new ways to use them.
“With Claude, Fin answers more questions, more accurately, with more depth, and more speed”
We landed on Claude for one simple reason: it delivers. And it doesn’t just deliver faster speed or scaled operations, but also high-quality service, performance, and reliability.
With Claude, Fin answers more questions, more accurately, with more depth, and more speed. We’re able to deliver an average resolution rate of 51% across thousands of Intercom customers and millions of conversations, making it the best performing AI agent in the industry.
So what’s next?
In our Fin 2 launch, we cover a lot of where we’re at and where we are going. Fin now takes actions and delivers personalized answers with custom answer length, tone of voice, etc. Fin can follow your policies, it can analyze conversations, calculate CSAT globally, and so much more.
Of course, our future plans include all the stuff you can imagine – voice, video, proactive, you name it – but we’ll come back to that later. For this release we’re prioritizing delivering what our customers wanted, vs. what would make for a cool-but-immature demo.
“Our collaboration with Anthropic is a key part of our journey to the highest resolution rate possible”
So where to from here? Our goal is to resolve as many CS conversations as possible. Today, our average resolution rate is 51% out of the box (i.e. before any significant tuning) which is pretty incredible. You may see larger numbers quoted by other folks, but I encourage you to double and triple click on exactly what they’re saying; we too can selectively sample certain customers or certain verticals and give you a far higher number.
Our collaboration with Anthropic is a key part of our journey to the highest resolution rate possible, while still thinking about our customers, and our customer’s customers. We want Fin to deliver great experiences; we are not building a deflection engine.
Working with Anthropic helps us stay at the forefront of AI, letting us explore the future and work on the bleeding edge, while delivering stable, reliable, and incredible results for our customers.
Fin 2: The first AI agent that delivers human-quality service
Today we unveil the world’s most advanced AI agent for customer service: Fin 2.
This latest generation of Fin AI Agent combines our highest resolution rates with powerful new capabilities to deliver human-quality service to your customers. These new abilities are in four categories:
- Knowledge: Fin can learn your knowledge faster than before and provides customers with the most accurate, thorough answers to help resolve issues more efficiently.
- Behavior: Fin speaks in your tone of voice, is fluent in 45 languages, and follows your guidelines, policies, and procedures to deliver the best customer experience, every time. You can always rely on Fin to make the right decision.
- Actions: Fin can take actions on behalf of your customers by accessing information from your data sources and systems in order to personalize its service for your customers across channels.
- Insights: AI-generated insights give you the tools to monitor and improve quality and performance for Fin and across your entire support organization.
All of this combined means you can depend on Fin to manage your frontline support, freeing your team to focus on higher impact solutions for your customers.
The best AI agent just got better
When we launched Fin AI Agent in March 2023, it was the first AI-powered agent designed specifically for customer service. From the outset, Fin’s performance blew our minds – just using a company’s existing help center content with little optimization, it could answer over 25% of customer questions.
“We’re launching a suite of new capabilities that allow Fin to answer more questions, in more ways, in more places”
Since then, we’ve been working tirelessly with our customers to improve both Fin’s performance and accuracy. Now, we’re proud to share that customers using Fin 2 see an average resolution rate of 51%, with an accuracy rate of 99.9%.
However, in order to deliver truly human-level support, we’re launching a suite of new capabilities that allow Fin to answer more questions, in more ways, in more places, while giving you complete control over the quality of Fin’s service.
Knowledge
Fin 2 can learn everything about your product by connecting to various sources, whether internal content, external websites, PDFs, or databases.
The Knowledge Hub makes it easy for your team to control, update, and maintain all of the content Fin learns from one centralized location. This means you only need to train Fin once – it automatically stays updated with the latest information, never forgetting or using outdated responses.
Fin 2 also has the ability to combine knowledge from multiple content sources to create tailored answers for your customers – just like a human would. This enhances Fin’s ability to solve more complex questions than any other AI agent on the market.
Behavior
Support conversations are an extension of your brand, and now you have even more control over how Fin communicates with your customers.
Customize Fin’s tone of voice by choosing from five presets. You can also decide between shorter, direct responses or longer, conversational ones – ensuring a consistent and on-brand support experience.
It also supports multilingual interactions, automatically translating your content in real-time to match the customer’s language. This dramatically simplifies customer support for businesses operating in multiple regions and helps you scale your support content.
With AI Category Detection, you can set topics you would like Fin to detect and handle in specific ways. If, for example, a customer asks about refunds or cancellations, reports a bug, or even seems frustrated based on the conversation, Fin automatically categorizes those conversations and will route them according to your settings.
This is easy to set up with natural language commands – it’s as simple as typing “Hey Fin, if a customer seems frustrated, apply the negative sentiment category.” With Workflows, you can make sure Fin routes these conversations to the right team so they are handled with care. You can also create a range of rules for how Fin shows up for your customers based on attributes such as segment, channel, or region.
This is an exciting step forward that combines the incredible capabilities of generative AI with rules-based human control.
Finally, while we’ve learned that many customer service leaders equate AI with the chat channel, Fin is omnichannel, working equally well across chat, email, WhatsApp, and more.
Actions
Now Fin can access your external data sources in order to personalize its service for your customers.
Fin can retrieve customer data and provide answers specific to each customer, like checking recent orders.
Fin can also perform tasks for your customers by updating customer records on their behalf, like changing a customer’s shipping address or adjusting their subscription.
You can create actions for Fin with just a few clicks using our action templates. You can configure what access you want to give to Fin, and then use natural language to give Fin guidance and instructions on when to use this connection to take actions for customers.
Insights
CSAT is a metric that is both incredibly valuable and incredibly flawed, representing only a fraction of the total customer conversations your team handles. This gives CS leaders a very narrow view of the health of their support operation and prevents them from improving their service.
AI Generated CSAT will change how you see your service. Now you can have full visibility across 100% of your customer conversations, both human and AI. This AI-powered analysis tracks CSAT across all of your team’s conversations, allowing you to monitor and optimize the quality of your service across the board.
We have also developed a new AI-powered Conversation Quality report, which surfaces topics of high and low performance so you know where to invest in better content, or better training.
Finally, we just released a new Holistic Overview Report which unifies all insights into a single dashboard, providing you with the full picture of your entire support operation across both human and AI.
You’re in full control
You can easily control where Fin shows up. Try Fin in different channels, with different segments of your customers, or in different regions. However you want to segment your customers, you can turn Fin on for any of them.
New capabilities, same price
So how much does all this cost? The same as it always did – $0.99 per resolution. We only charge you if Fin delivers a resolution. If Fin can’t answer, Fin is free to use.
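As a minimal sketch of what resolution-based billing implies (the conversation counts below are hypothetical, used only to illustrate the model):

```python
def fin_monthly_cost(resolved: int, unresolved: int,
                     price_per_resolution: float = 0.99) -> float:
    """Per-resolution billing: only conversations Fin actually resolves are charged."""
    # Unresolved conversations are handed to the team and cost nothing
    return resolved * price_per_resolution

# Hypothetical month: 5,100 conversations resolved by Fin, 4,900 passed to agents
print(f"${fin_monthly_cost(5_100, 4_900):,.2f}")
```

The design consequence is that cost scales with outcomes rather than with raw conversation volume.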
“Very soon, AI Agents like Fin will handle the majority of customer support queries”
If you have Fin today, you get all these upgrades for free. Some of Fin 2’s features are available today, and we’re rolling out the rest through the end of the year.
There is no doubt that the future of customer service is AI-first. Very soon, AI Agents like Fin will handle the majority of customer support queries, improving the customer experience with immediate answers and actions, and enabling support leaders to scale their support without scaling their team.
To learn more, take a look at all of Fin 2’s features and capabilities here.
Response Time: Vol. 36
You satisfy your customers, but can you satisfy our curiosity?
With Robb Clarke, Head of Technical Operations at RB2B.
Please tell us a little bit about your company and what you do there.
I’m the Head of Technical Operations for RB2B. We do person-level identification of US-based anonymous website visitors.
Which celebrity would be really great at your job, and why?
Ryan Reynolds – he’s just so darned charming!
What’s the most valuable thing that working in customer service has taught you?
The most valuable thing that I have learned working in customer service is to compartmentalize different aspects of my life. Customer experiences get tucked away in their own neat little container that has no bearing on the rest of my life – this is especially important with negative interactions.
Describe the essence of great customer service using only three words.
Empathy, patience, communication.
Which movie robot would you choose as your AI sidekick, and why?
What’s the one from Star Wars that Alan Tudyk played where he slapped the guy? K2. Him. Or Johnny 5.
What can you do that a bot will never be able to replicate?
Love.
“Let it flow like water off a duck’s back”
What’s the most embarrassing thing you’ve ever said/done to a customer?
Twenty years ago, we were renovating a drive-in theater that I was managing and we had just finished painting the little symbol of a man above the men’s room door freehand. I climbed down the ladder and started walking back across the lobby to admire it, turned, and said, “That’s a fine looking man,” just as a customer was walking out of the men’s room. He thanked me for the compliment.
Do you identify more with the title “customer support,” “customer service,” “customer success,” or “customer experience,” and why?
“Customer experience,” because ultimately, if they can’t get our software working as expected then that greatly affects their entire experience with us.
What’s the one piece of advice you would give to your peers in the customer service industry?
Let it flow like water off a duck’s back.
What’s your greatest productivity hack?
ADHD meds. Also, Brain.fm and “Do Not Disturb” mode on my phone.
What book are you reading at the moment?
The Anxious Generation by Jonathan Haidt.
Conversation closed… for now 😏
If you’re interested in being featured in our Response Time series, you can share your insights on customer service – and which celebrity would be great at your job – with us here.