A/B Test Report

Our A/B Test Report is designed to help you determine whether the results of your A/B test are significant, not significant, or whether the test needs to run longer.

To give a real-world example for a SaaS product, let's say you want to test two versions of the welcome email that gets sent out when a customer signs up for a trial.

📘 Example:

Here's how it could play out (a code sketch of this flow follows the list):

  • For new trial signups going forward, send half of your customers a long, detailed welcome email and the other half a shorter one.
  • Record which version they saw.
  • Record whether these people end up paying for the full version. This is the key performance metric you're trying to improve with the test. The test answers: "Did the email version influence more people to convert?"
  • Report on the results, comparing how the two conditions performed. Make sure enough people have run through the test to show that the results are actually due to the different emails, and not a result of chance.
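Here's a minimal sketch of that flow in plain JavaScript. The sendEmail(), recordProperty(), and recordEvent() helpers are hypothetical stand-ins for your email provider and analytics calls, and the in-memory Map stands in for a real database:

```javascript
// Hypothetical stand-ins for your email provider and analytics calls
function sendEmail(address, variation) { console.log(`send ${variation} to ${address}`); }
function recordProperty(customerId, name, value) { console.log(`set ${name}=${value} for ${customerId}`); }
function recordEvent(customerId, name) { console.log(`record ${name} for ${customerId}`); }

const assignments = new Map(); // customerId -> 'long email' | 'short email'

function assignVariation(customerId) {
  // 50/50 random assignment, remembered so each customer only ever gets one version
  if (!assignments.has(customerId)) {
    assignments.set(customerId, Math.random() < 0.5 ? 'long email' : 'short email');
  }
  return assignments.get(customerId);
}

function onTrialSignup(customerId, address) {
  const variation = assignVariation(customerId);
  sendEmail(address, variation);
  recordProperty(customerId, 'Welcome Email', variation); // record which version they saw
}

function onUpgradeToPaid(customerId) {
  recordEvent(customerId, 'Upgraded to Paid'); // the key metric the test is trying to move
}
```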

What Every A/B Test Needs

You can run an A/B test using Kissmetrics' JavaScript Library, through a homebrew solution on your own servers, or with a service like Optimizely, VWO, or Google Website Optimizer.

Regardless of how the test is run, every A/B test needs the same three things:

  1. Randomly assign each visitor to a variation, either A or B.
  2. Make sure each visitor only ever sees A or only ever sees B. Otherwise, your results are tainted if someone ends up seeing more than one variation.
  3. Record which variation each visitor saw, so that you can refer to it when checking the results of the A/B test (see the sketch after this list).
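As one way to meet all three requirements in the browser, the sketch below hashes a stable visitor ID so the split is roughly 50/50 across visitors but always the same for a given visitor, then records the chosen variation as a property. It assumes the Kissmetrics JavaScript Library's `_kmq` command queue is (or will be) loaded on the page; 'Homepage Test' is just an example property name:

```javascript
var _kmq = _kmq || []; // Kissmetrics command queue; calls wait here until the library loads

function getVisitorId() {
  // A stable per-visitor ID so the same person always gets the same variation (#2)
  let id = localStorage.getItem('visitorId');
  if (!id) {
    id = Math.random().toString(36).slice(2);
    localStorage.setItem('visitorId', id);
  }
  return id;
}

function chooseVariation(visitorId) {
  // Hash the ID: roughly 50/50 across visitors (#1), but repeatable for a given visitor
  let hash = 0;
  for (let i = 0; i < visitorId.length; i++) {
    hash = (hash * 31 + visitorId.charCodeAt(i)) >>> 0;
  }
  return hash % 2 === 0 ? 'A' : 'B';
}

const variation = chooseVariation(getVisitorId());
_kmq.push(['set', { 'Homepage Test': variation }]); // record which variation this visitor saw (#3)
```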

That covers the test itself.

👍 There's one last thing:

Record the end goal that you are interested in.

Are you interested in increasing signups with this test? Measure Signups. Are you interested in whether people continue to return to your site? Measure the number of Site Visits.
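With the Kissmetrics JavaScript Library, recording the goal could look like the short sketch below; 'Signed Up' and 'Visited Site' are example event names for illustration, not required ones:

```javascript
var _kmq = _kmq || []; // Kissmetrics command queue

_kmq.push(['record', 'Signed Up']);    // if the test is meant to increase signups
_kmq.push(['record', 'Visited Site']); // if the test is about people returning to your site
```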