Intercom’s patented AI Engine™ allows Fin AI Agent to refine every query, optimize every response and validate the quality of each answer. As a result, it's the only AI agent that can balance industry-high resolutions with industry-low hallucinations.
Many AI agents can optimize for either high resolutions or low hallucinations, but struggle to do both. This is because they often simply apply a ‘wrapper’ to a generative Large Language Model (LLM) instead of building a system around it, since the wrapper approach is cheaper and easier.
However, this approach ignores the tendency of LLMs to hallucinate and expose your customers to incorrect or irrelevant information. Without a system like Intercom's that refines the LLM's inputs and outputs, an AI agent cannot be effectively optimized for accuracy and reliability.
How it works
Phase 1 - Refine query
In order to optimize the accuracy of an answer that an LLM generates, the inputs the LLM receives must be refined for comprehension. The clearer and more understandable the query, the better the output.
Often in customer service, customers write in to support without fully explaining or contextualizing their query. To solve this problem, the Intercom AI Engine™ refines the inputs that are sent to the LLM, optimizing each customer message for meaning and context so that the LLM has the best possible chance of producing an accurate answer.
In addition, the Intercom AI Engine™ checks whether a Workflows automation or Custom Answer should be triggered based on the topic and context of a customer's query, and performs safety checks to filter out anything that Fin shouldn't be answering.
Check for safety and relevance
The AI engine performs a comprehensive check of each customer query to filter out anything that Fin shouldn't be answering, such as requests for confidential information, irrelevant questions, malicious actors, data harvesting, and more.
Optimize query comprehension
The AI engine optimizes the customer message to make it more searchable and easier for the LLM to understand. It performs many checks and executes the appropriate optimization based on whether the query's intent and topic are clear, whether the query needs to be reworded, and more.
Check for Workflows automation
The customer's message is checked for any pre-configured conditions (such as a complaint) that trigger a specific automation.
Check for Custom Answer
An additional check is performed to detect whether a pre-configured Custom Answer is needed for the customer query.
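Taken together, these checks form a short routing pipeline: a message is either blocked, handed to a Workflows automation or Custom Answer, or passed on as a refined query. Here is a minimal sketch of that shape in Python; every name and rule in it is a hypothetical stand-in, not Intercom's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class RefinedQuery:
    text: str   # the reworded, context-enriched customer message
    route: str  # "llm", "workflow", "custom_answer", or "blocked"

# Illustrative blocklist; a production system would use trained classifiers.
BLOCKED_TOPICS = {"password list", "credit card number"}

def matches_custom_answer(text: str) -> bool:
    return False  # placeholder for a pre-configured Custom Answer lookup

def refine_query(message: str) -> RefinedQuery:
    """Hypothetical Phase 1: safety check, rewording, then routing."""
    lowered = message.lower()

    # 1. Safety and relevance: filter out anything Fin shouldn't answer.
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return RefinedQuery(message, "blocked")

    # 2. Optimize comprehension: normalize/reword the query (stubbed here).
    text = " ".join(message.split())

    # 3. Route to a Workflows automation or Custom Answer if one matches.
    if "complaint" in lowered:
        return RefinedQuery(text, "workflow")
    if matches_custom_answer(text):
        return RefinedQuery(text, "custom_answer")

    return RefinedQuery(text, "llm")

print(refine_query("I want to   make a complaint about my last order"))
```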
Phase 2 - Generate response
Once a query has been checked and optimized, the next stage is to generate a response using the LLM. For this task, the Intercom AI Engine™ uses a bespoke, enhanced retrieval-augmented generation (RAG) architecture.
RAG is a process that involves retrieving relevant information from a data source and combining it with a user's prompt before passing it to an LLM. This additional context improves the model's output by enhancing its base knowledge and, in doing so, reduces the risk of inaccuracies like hallucinations.
Intercom's application of RAG is distinctive. The AI group at Intercom has invested heavily in optimizing it, and continuously tests both the accuracy of the LLM and the individual steps of RAG to improve overall performance.
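At its core, any RAG loop has three steps: retrieve relevant passages, augment the prompt with them, and generate. The generic sketch below illustrates that shape only; the word-overlap retrieval and prompt wording are toy stand-ins, not Intercom's bespoke architecture:

```python
def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Toy retrieval: rank documents by how many words they share with the query."""
    q = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def augment(query: str, passages: list[str]) -> str:
    """Combine the retrieved context with the customer's question."""
    context = "\n".join(f"- {p}" for p in passages)
    return ("Answer using only the context below. "
            "If the context doesn't cover it, say so.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# The final step would pass the augmented prompt to an LLM, e.g.:
#   answer = llm.generate(augment(query, retrieve(query, help_center_articles)))
```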
Optimize retrieval
The AI engine searches the available information, actions, and data, and determines what is most relevant to the nature of the query and what is needed to solve the question or problem (a sketch of this step follows the list below). Information sources include:
Content - such as past Intercom conversations, help center articles, PDFs, and HTML/URLs that have been approved as accurate and safe sources.
Data - internal or external to Intercom, including dynamic information that Fin can use to personalize the customer experience.
Integrations & actions - determine whether any actions will be necessary on third-party systems as a result of the customer query's intent.
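One common way to implement this kind of retrieval is similarity search over all three source types at once, keeping whichever candidates score highest. A rough sketch, with a toy bag-of-words scorer standing in for a real embedding model:

```python
from math import sqrt

def embed(text: str) -> dict[str, float]:
    """Toy bag-of-words vector; real systems use trained embedding models."""
    vec: dict[str, float] = {}
    for word in text.lower().split():
        word = word.strip("?.,!")
        vec[word] = vec.get(word, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Each candidate keeps its source type so later steps know how to use it.
SOURCES = [
    ("content", "How to reset your password from the login screen"),
    ("data",    "Customer plan: Pro, renewal date 2025-09-01"),
    ("action",  "Update a shipping address via the orders integration"),
]

def top_sources(query: str, k: int = 2) -> list[tuple[str, str]]:
    q = embed(query)
    return sorted(SOURCES, key=lambda s: cosine(q, embed(s[1])), reverse=True)[:k]

print(top_sources("How do I reset my password?"))
```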
Integrate and augment
The retrieved information is then integrated and augmented with the optimized query or ‘input’. This step ensures that the generative model has access to the most relevant and up-to-date information before producing a response. The augmented input is structured in a way that maintains the context and relevance of the retrieved information, making it easier for the model to understand and use.
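A simple way to picture this structuring is a prompt template that labels each retrieved passage with its source and freshness, so the model can weigh the material rather than receiving one undifferentiated blob. The template below is purely illustrative:

```python
def augment(query: str, passages: list[dict]) -> str:
    """Label each retrieved passage with its source and freshness so the
    model keeps the context and relevance of the retrieved information."""
    blocks = [
        f"[Source: {p['source']} | Updated: {p['updated']}]\n{p['text']}"
        for p in passages
    ]
    return ("Use only the sources below to answer; prefer the most recent.\n\n"
            + "\n\n".join(blocks)
            + f"\n\nCustomer question: {query}")

print(augment(
    "Can I change my plan mid-cycle?",
    [{"source": "Billing FAQ", "updated": "2025-05-01",
      "text": "Plans can be changed at any time; charges are prorated."}],
))
```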
Generate response
Clarify and disambiguate
If the output from the model doesn't meet Intercom's AI Engine™ parameters for certainty, a response is generated asking the customer to clarify their query. This disambiguation step helps avoid risks like hallucinations, because the generated response is contextual and grounded in facts from your company's available knowledge resources and support content.
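Conceptually this is a confidence gate: above a threshold the draft answer goes out, and below it the engine asks a clarifying question built from topics it actually found in the knowledge base. A simplified sketch (the threshold value and wording are invented for illustration):

```python
CONFIDENCE_THRESHOLD = 0.75  # illustrative value, not an actual Intercom parameter

def respond_or_clarify(draft: str, confidence: float, topics: list[str]) -> str:
    """Return the draft answer if confident enough; otherwise ask a clarifying
    question grounded in topics actually found in the knowledge base."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return draft
    options = " or ".join(topics[:2])
    return f"Just to be sure I help with the right thing: is this about {options}?"

print(respond_or_clarify("You can reset it under Settings > Security.", 0.4,
                         ["password resets", "two-factor authentication"]))
```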
Take an action
If an action is required based on the user's query and intent, the action will be performed using the necessary information, data, integrations, and systems.
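In code terms, this step is an intent-to-action dispatch: the detected intent selects a handler that calls out to the relevant third-party system. A hypothetical sketch with stubbed handlers (the intents and handler names are invented):

```python
from typing import Callable, Optional

def check_order_status(params: dict) -> str:
    return f"Order {params['order_id']} is in transit."  # stub for a real API call

def update_email(params: dict) -> str:
    return f"Email updated to {params['email']}."        # stub for a real API call

# Map each recognized intent to the action that resolves it.
ACTIONS: dict[str, Callable[[dict], str]] = {
    "order_status": check_order_status,
    "update_email": update_email,
}

def take_action(intent: str, params: dict) -> Optional[str]:
    handler = ACTIONS.get(intent)
    return handler(params) if handler else None

print(take_action("order_status", {"order_id": "12345"}))
```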
Generate answer
The generative model uses the augmented input to generate an answer. By incorporating the retrieved information, the model can produce more accurate, contextually relevant, and detailed responses. The generated answer may undergo post-processing to ensure clarity, coherence, and alignment with the user's query.
Phase 3 - Validate accuracy
In the final step of the process, Intercom's AI Engine™ performs checks to determine whether the output from the LLM meets the necessary accuracy and safety standards. Many checks are performed, covering whether there is enough confidence in the response, whether it is accurate enough, and whether it is grounded enough to address the question adequately (a simplified sketch follows the steps below).
Validate the response
Compare the generated response to the original customer query.
Determine if the generated response answers the query well enough.
Determine if the generated response is grounded in the knowledge of your knowledge resources and support content as a source of truth.
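A bare-bones version of these checks might score how well the answer addresses the query and how much of it can be traced back to a source passage, rejecting anything that fails either test. The word-overlap scoring and thresholds below are toy illustrations, not Intercom's actual validation:

```python
def words(text: str) -> set[str]:
    return {w.strip("?.,!") for w in text.lower().split()}

def overlap(a: str, b: str) -> float:
    """Fraction of a's words that also appear in b (a toy scoring proxy)."""
    a_words, b_words = words(a), words(b)
    return len(a_words & b_words) / len(a_words) if a_words else 0.0

def validate(answer: str, query: str, sources: list[str]) -> bool:
    addresses_query = overlap(query, answer) >= 0.3              # answers it well enough?
    grounded = any(overlap(answer, s) >= 0.5 for s in sources)   # traceable to a source?
    return addresses_query and grounded

sources = ["You can change your plan at any time from the Billing page."]
print(validate("You can change your plan from the Billing page.",
               "How do I change my plan?", sources))  # True
```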
Respond to customer
Send the generated response back to the customer through Fin.
Engine optimization
To calibrate and enhance engine performance, the Intercom AI Engine™ has advanced integrated tools that help optimize answer generation, efficiency, precision, and coverage.
Fin customization and control - Intercom has incorporated features and tools designed to help users customize and control how Fin responds, what it can do, what information it can use, and much more. Each of these pieces plays a part in how well Fin performs: the more Fin knows and can do, the more of your support you can automate with human-quality customer experiences.
AI analytics and reporting - The Intercom AI Engine™ has been designed to facilitate analysis of the effectiveness of each stage of the answer generation process. This gives the AI group at Intercom the tools they need to improve each stage and overall performance. Rigorous testing happens before any changes are made to the AI engine architecture, taking into account how each small change impacts the engine as a whole. In addition, the AI engine offers Intercom users access to pre-built and customizable reports that help them understand where Fin is working well and what can be improved.
AI recommendations - The AI engine offers recommendations to continuously improve performance over time. These range from identifying which content could help fill gaps in Fin's knowledge, to highlighting underperforming content that should be further optimized, to suggesting actions that could be set up to help Fin resolve more queries for customers.
Safety and security
Relying exclusively on the generative capabilities of an LLM to answer or solve a customer problem is not a reliable way to serve customers. Without the proper safeguards in place, LLMs can be open to manipulation or hallucinations, which could then impact your customers. To ensure safety and reliability, Intercom's AI Engine™ has been designed with strict safety controls at each stage. If the necessary parameters for safety have not been met at any step or stage, Fin will let the customer know that it cannot answer the query and escalate to human support.
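The key design property described here is fail-closed behavior: if any stage fails its check, the pipeline escalates to a human instead of letting an ungated answer through. A minimal sketch of that control flow, with every check stubbed out and all names hypothetical:

```python
def is_safe(message: str) -> bool:
    return "credit card number" not in message.lower()  # stub safety gate

def generate_answer(message: str) -> str | None:
    return None  # stub: None models a draft that failed validation

def escalate() -> str:
    return "I can't answer that one, but I've looped in our support team."

def handle_message(message: str) -> str:
    """Fail-closed pipeline: any failed check escalates rather than answers."""
    if not is_safe(message):           # Phase 1 gate
        return escalate()
    answer = generate_answer(message)  # Phases 2-3: generate, then validate
    if answer is None:                 # validation did not pass
        return escalate()
    return answer

print(handle_message("When does my plan renew?"))
```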
Intercom has implemented state-of-the-art security measures to protect Fin against a wide range of LLM threats, including those identified by the OWASP LLM Top 10. By consistently testing a variety of high-end LLMs and deploying rigorous internal controls, security protocols, and safeguards, Intercom enables Fin to achieve the highest level of security and reliability while avoiding potential limitations and threats.
That means you and your customers can always trust Fin's answers as the safest, most accurate, and most reliable of any AI agent.
Learn more about Fin’s safety measures on trust.intercom.com.
Fin AI Security
A comprehensive overview of the security measures and testing protocols implemented for Intercom's Fin AI features is available here.
Regional hosting
Fin AI Agent is available on US, EU and AU hosted workspaces.
Compliance
Intercom has international accreditations and controls in place in order to ensure the highest standard of safety and security, including:
ISO 27001, ISO 27701, ISO 27018, and ISO 42001 (ETA Jan 2025)
HIPAA compliance.
SOC 2 Report - SOC 2, Type II audit report covering controls specific to security, availability and confidentiality.
HDS certificate - Certification of compliance with the HDS Referential Version 1.1 (English and French versions)
Penetration test summary - Summary of detailed penetration tests on Intercom’s application and infrastructure by third-party security experts.
Cloud security alliance assessment - Security and privacy self-assessments based on the Cloud Controls Matrix and the CSA Code of Conduct for GDPR Compliance.
Third-party LLM data usage, transfer and storage
Customer data is not used for model training by LLM providers. Any data submitted to an AI Product becomes an Input, used to generate an Output (as those terms are defined in our Additional Product Terms).
Full details available in our legal and security guide for AI Products/Features.