Test your content readiness by seeing how Fin responds to actual customer queries with the Batch Test feature. Review the sources and settings that shape Fin's answers, and get tailored recommendations for improving them so you can deliver the highest quality support to your customers.
Features available for this beta:
Bulk answer import & testing: Easily assess Fin’s accuracy by testing responses against real customer questions. Import queries directly from your support inbox or upload them in bulk to get confidence in Fin’s performance at scale.
Content targeting: Choose specific content to test with Fin to understand how Fin's answer varies based on your existing audience targeting.
Answer inspection: Understand why Fin responded in a certain way, with a snapshot view of the sources and configurations (like Fin Personality and Guidance) used to form the response.
Multi-brand: Test content for multiple brands in your Intercom workspace and see how Fin answers brand-specific questions.
Language & real-time translation support: Not seeing Fin's response in the language you expected? You can now update your language and translation settings directly from the test page to ensure the responses match your expectations.
Automation support: If you're using Fin Tasks, Data connectors, or Custom Answers, you can now see when they would trigger within a Fin response and follow a direct link to edit or update them.
Batch Test is designed for all customers who use Intercom to support conversations – whether you’re an existing Fin customer or simply looking to explore its potential.
To access Batch Test for Fin, teammates must have a full seat and their conversation access permission set to "All conversations".
How to access Batch Test
Go to Fin AI Agent > Test
You can create a new group from the dropdown.
Set the Test settings by:
Selecting the Fin audience(s) applied to your content, to see how Fin responds to customers who match that audience.
Selecting the brand on your workspace you want to test. (You may only have one brand. Brands are connected to specific Help Centers).
Then run a test of Fin's answers to customer questions by choosing how to add the questions: generate them from past conversations, or upload/add them yourself.
Note:
Generating from past conversations requires a minimum of 10 conversations.
You can upload up to 50 questions at a time via CSV or manually, enabling you to test multiple queries simultaneously.
Using content audiences requires setting up audiences and content targeting for Fin.
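If you're scripting a bulk upload, the limits in the notes above are worth checking before you export. Here's a minimal sketch that builds a question CSV and enforces the 50-question cap; the one-question-per-row layout and header name are assumptions, so check Intercom's upload template for the exact format it expects:

```python
import csv
import io

MAX_QUESTIONS = 50  # Batch Test accepts up to 50 questions per upload


def build_question_csv(questions):
    """Write questions to CSV text, enforcing the 50-question limit.

    Assumes a simple one-question-per-row layout with a single
    "question" column; the actual layout Intercom expects may differ.
    """
    if len(questions) > MAX_QUESTIONS:
        raise ValueError(
            f"Batch Test accepts at most {MAX_QUESTIONS} questions, got {len(questions)}"
        )
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["question"])  # hypothetical header
    for q in questions:
        writer.writerow([q.strip()])
    return buf.getvalue()


csv_text = build_question_csv([
    "How do I reset my password?",
    "Where can I download my invoices?",
])
print(csv_text)
```

Writing to a `StringIO` buffer keeps the sketch self-contained; in practice you'd write the same rows to a file and upload it on the test page.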
Features of Batch Test
Filtering
Easily filter test results to identify which questions were answered and which were not. This feature is particularly useful for testing large volumes of questions and prioritizing areas requiring attention.
Answer status: See where Fin successfully provided a direct response and where it was unable to generate one.
Answer rating: See Fin's responses based on a rating you gave. (You need to review and add the ratings first.)
Answer inspection
Dive deeper into how Fin’s responses are generated by reviewing configurations such as Guidance and Personality settings. You’ll also see which content sources influenced the answers, making it easier to refine content for optimal performance.
Language and real-time translation settings
To ensure you have the correct language and real-time translation settings applied, we’ve made it easy to review and update them as needed. Clear messaging will explain why you might be seeing a Fin response in a different language than the customer’s question, or different from what you expected.
Example with the language not added to supported languages and real-time translation disabled.
Example with the language added to supported languages and real-time translation enabled.
Automation support
Batch Test also shows when a Fin Task, Data Connector, or Custom Answer would be triggered as part of a Fin response. While the testing tool doesn’t allow you to edit an automation directly, it does provide visibility into whether trigger conditions are set up and firing correctly. For Data connectors that require further verification, this information will be surfaced in the answer details panel.
Answer rating
Fin’s responses can be evaluated at the question level. You can rate a response as “Good”, "Acceptable" or “Poor” and, if poor, select a reason from a predefined list of options. You can also provide additional context for any rating, with all feedback captured in a downloadable report to help prioritize updates effectively.
Note: By rating a response "Good", "Acceptable" or "Poor" you’re not training the AI Agent. These ratings are for your own reference and are used when downloading/sharing the report with your teammates.
Recommended suggestions
When you rate an answer as “Poor,” Fin will offer tailored suggestions based on the reason selected, making it easier to address any gaps quickly and efficiently.
Convenient test controls
Export your test results as a CSV file, including questions, answers, ratings, and content sources.
You can also reset the test, re-run it, or re-test specific queries after updating Fin's settings by clicking the refresh option.
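Once you've exported the results, the ratings you recorded can help prioritize content fixes. A small sketch of filtering the export for "Poor"-rated answers; the column names here are assumptions, so check the header row of your actual export before reusing them:

```python
import csv
import io

# Sample export in the shape described above: questions, answers,
# ratings, and content sources. Column names are assumed, not official.
sample_export = """question,answer,rating,sources
How do I reset my password?,Go to Settings > Security...,Good,Password help article
Where are my invoices?,Fin could not answer.,Poor,
"""


def poor_rated_rows(csv_text):
    """Return the rows rated "Poor" so content gaps can be prioritized."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["rating"] == "Poor"]


gaps = poor_rated_rows(sample_export)
for row in gaps:
    print(row["question"])
```

The same filter could key on the predefined "Poor" reasons instead, grouping questions by the kind of gap before sharing the report with teammates.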
FAQs
Will I be charged for resolutions when using Batch Test?
No, the Fin AI Agent > Test page is free to use and you won't be charged for AI answers generated through the Batch Test. 👌
Can I simulate a specific user or customer when running a test?
Not yet. While you can run a test using content tailored to a specific Fin audience or brand, it’s currently not possible to simulate a specific user or customer and see results only relevant to them. However, we’re working on a feature that will allow you to simulate a persona and view Fin’s responses exactly as that user would see them.
Can I test Fin’s image recognition capabilities (Fin Vision)?
No, testing Fin’s image recognition (Fin Vision) is not currently supported, and there are no plans to enable this feature at the moment.
What is the difference between resetting a test and re-running a test?
Resetting the test will allow you to choose another batch, either from conversation history or an upload. Re-running the test will re-generate answers based on any content changes or answer ratings you provided in the batch so you can continue to refine the performance.