A/B test different message variations

Learn when and how to improve your message with A/B testing and control groups

Written by Beth-Ann Sher
Updated over a month ago

Intercom makes it easy for you to A/B test your messages so that you can fine-tune them to be as effective as possible. Every A/B test should start with a question, e.g. 'Will A or B be better at achieving X?'

Here are some examples:

  • Which content layout will result in more replies?

  • Which content will result in more users upgrading?

  • Which content layout will help me capture more visitor emails?

  • Which subject line will get more users to open my email?

  • Will a button generate more click-throughs than a text link?

There's lots more to why you should A/B test; we've written a whole blog post about it.

Test, test, and test again

You should always be testing and learning. For example, if you test a button against a link and the button wins, test the button colour next. When you have a winner from that, test the button copy. The key point is that the more you test, the more effective your messages will become. Always be testing.

How to set up your A/B test

First, go to Outbound from the main menu and click + New message in the top right corner.

Before putting it live, make sure that you have at least:

  1. Given your message a name.

  2. Selected your audience.

  3. Selected your channel.

  4. Added the content of your message. This message will be message "A" in your test.

Note: You can start an A/B test against a message that's already live; you don't have to set it up when you first create the message.

Now, start your A/B test

Once you have message A set up, click 'Run a new test' and choose the A/B test option.

This will create a draft of your B message. Initially, this is just a duplicate of your existing message. You can now edit the duplicate with whatever you're going to test (e.g., a different subject line, different text, different button colours, or different images). You can even have the two messages appear to come from different teammates!

What can be A/B tested?

You can A/B test anything that lives within the Content creation step of the message.

Best practice

We recommend that you only compare one difference at a time. This way, if there is a clear winner, you'll know exactly which change caused it, and you won't be left guessing.

Finally, put it live

Once you're happy with the two messages, put them live.

Monitor progress

You can check performance on the auto message list page, or on the message page. On the message page, just toggle between version A and version B to see the stats for each. As time goes by, you'll see which variant is performing better on the metrics you care about most. For example, if you were A/B testing subject lines, you'd likely be most concerned with open rates.

Choose a winner once you have a result

Once you're happy that one message is outperforming the other (for example, when one message's goal percentage is consistently higher), select 'Pause' on the message you'd like to stop. That variant will no longer be sent (unless you decide to put it live again), and 100% of recipients will now get the winning message.
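If you'd like a more rigorous check before pausing a variant, a standard two-proportion z-test tells you how likely the gap you're seeing is to be noise. This isn't a built-in Intercom feature; the sketch below is a minimal, self-contained example, and the send and goal counts in it are made-up placeholders for the numbers from your own message stats.

from math import sqrt, erfc

def two_proportion_z(hits_a, sends_a, hits_b, sends_b):
    """Return (z score, two-sided p-value) for the gap between two rates."""
    rate_a, rate_b = hits_a / sends_a, hits_b / sends_b
    pooled = (hits_a + hits_b) / (sends_a + sends_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_a - rate_b) / std_err
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Hypothetical counts: version A hit its goal 120 times out of 1,000 sends,
# version B 90 times out of 1,000. Swap in your own stats.
z, p = two_proportion_z(120, 1000, 90, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below ~0.05 suggests a real difference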

FAQs

How does this work with A/B messages within a Series?

When a user enters a Series and matches a message that has an A/B test, the variant they receive is random. If multiple messages within a Series are being A/B tested, this means a user can get a mixture of As and Bs.
If you wish to A/B test a whole sequence of messages within a Series, you can use the split test feature if it's part of your subscription. If you don't have access to this feature, a workaround is to set up two different Series and split the Series audience in half.
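To put a number on the mixture mentioned above: assuming each A/B-tested message assigns its variant independently at 50/50 (an illustrative assumption; Intercom's exact assignment logic isn't described here), the chance that a user sees a consistent all-A or all-B run halves with every additional tested message.

# Chance that one user gets a consistent set of variants across a Series,
# assuming each tested message picks A or B independently at 50/50.
for n_tests in range(1, 6):
    p_consistent = 2 * 0.5 ** n_tests  # all A, or all B
    print(f"{n_tests} tested message(s): {p_consistent:.1%} consistent")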

What happens if I make small changes to version A or B after going live? Will the stats for that version reset?

Making changes to a version after the message has gone live means that you could skew your results.


Any changes to a message variant will not reset the stats for that version. If you want the stats to reset, you'd need to delete that version and create a new one (note that you'll lose the record of the previous version's stats, so you may want to export your results first).

If I set a message live and then create a version B after, what will happen to the message stats?

The stats for version A will include all messages sent up to that date, while version B will start from scratch.

Creating version B long after the message was set live means you won't get a true 50/50 split, and you might not be able to pick a clear winner.

Is it possible to set the A/B test live, but have it go to a small set of users at first before picking a winner?

Both versions of the message are sent out concurrently. We do not send to a portion of the audience, wait for a message to be a 'winner', and send all remaining matches to that version.

A workaround is to tag a portion of users and include the tag in the audience rules when setting the message live. After the test has been sent to this smaller audience, you can remove the tag from the rules and have only the winning version (A or B) go to the rest of the audience.

Why do I not see an equal 50/50 split in the sent statistic?

There are a few things to check here:

1. Check if one of the versions was set live long before the second version (if so, the majority of sends will come from the first version).

2. Check if the message uses company data in its content. Since all users from a company are expected to get the same version in this scenario, the split may be skewed depending on the number of users in each company.

3. If the audience of the message is quite small, the split might not appear as 50/50. That's down to probability: the bigger the message's audience gets, the more likely the two variants' audiences are to even out, as the sketch below illustrates.
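To see why small audiences drift from an exact 50/50, here is a minimal simulation. It assumes each recipient is independently assigned to version A or B with equal probability; that's an illustrative model, not Intercom's published assignment logic.

import random

random.seed(42)  # fixed seed so the example is reproducible

# Count how many of N recipients land on version A under random 50/50 assignment.
for audience_size in (20, 200, 20000):
    sends_a = sum(random.random() < 0.5 for _ in range(audience_size))
    print(f"audience {audience_size:>6}: version A got {sends_a / audience_size:.1%}")

With 20 recipients, a 40/60 split is unremarkable; with 20,000, the shares settle very close to 50/50.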

Is it possible to do an A/B test where half the audience gets the message, and half the audience gets no message?

Yes, you can do this with a control group.

