Our A/B Test Report is designed to help you determine whether the results of your A/B test are significant, not significant, or whether the test needs more time to run.
To give a real example for a SaaS product, let's say you want to test the difference between two welcome emails that get sent out when a customer signs up for a trial.
Here's how it could play out:
- For new trial signups going forward, give half of your customers a long, detailed welcome email, and give the other half a shorter welcome email.
- Record which version they saw.
- Record whether these people end up paying for the full version. This is the key performance metric you're trying to improve with the test. The test answers: "Did the email version influence more people to convert?"
- Report on the results, comparing how the two conditions performed. Make sure enough people have run through the test to show that any difference is actually due to the different emails, and not due to chance.
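The "due to chance" check in the last step is a standard significance test on two conversion rates. A minimal sketch of one common approach, a two-proportion z-test, using only Python's standard library (the conversion counts below are hypothetical, not real results):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 120 of 1000 trials converted after the long email (A),
# 150 of 1000 after the short email (B)
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be chance; a large one means the test probably needs more participants.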
Regardless of how the test is run, every A/B test needs the same three things:
- Each visitor is randomly assigned to a variation, either A or B.
- Each visitor always sees the same variation, only A or only B. Your results are tainted if someone ends up seeing more than one variation.
- Record which variation this visitor saw, so that you can refer to it when checking the results of the A/B test.
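One common way to satisfy all three requirements at once is to hash a stable visitor ID: the split is roughly random across visitors, yet any given visitor always lands on the same variation. A minimal sketch, with hypothetical IDs and experiment names:

```python
import hashlib

def assign_variation(visitor_id: str, experiment: str) -> str:
    """Deterministically assign a visitor to variation 'A' or 'B'.

    Hashing the visitor ID with the experiment name gives a roughly
    50/50 split across visitors, and the same visitor always gets
    the same variation on every visit.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Record which variation each visitor saw, so it can be joined
# against outcomes when checking the results of the A/B test
assignments = {
    visitor: assign_variation(visitor, "welcome-email")
    for visitor in ("user-1", "user-2", "user-3")
}

# Repeat visits map to the same variation
assert assign_variation("user-1", "welcome-email") == assignments["user-1"]
```

Storing the assignment alongside the visitor ID, rather than re-rolling it each time, is what keeps the results clean.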
That covers the test itself.
There's one last thing:
Record the end goal that you are interested in.
Are you interested in increasing signups with this test? Measure Signups. Are you interested in whether people continue to return to your site? Measure the number of Site Visits.
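Whatever goal you choose, record each goal event with the visitor ID so it can be joined against the variation that visitor saw. A minimal sketch with a hypothetical in-memory event log:

```python
# Variation each visitor saw (recorded at assignment time)
variations = {"user-1": "A", "user-2": "B", "user-3": "A"}

# Goal events, keyed by visitor ID -- here the goal is Signups
goal_events = [
    {"visitor": "user-1", "event": "Signup"},
    {"visitor": "user-3", "event": "Signup"},
]

# Join goal events back to variations to count conversions per condition
conversions = {"A": 0, "B": 0}
for event in goal_events:
    conversions[variations[event["visitor"]]] += 1

print(conversions)  # {'A': 2, 'B': 0}
```

These per-variation counts are exactly the inputs a significance test on the A/B report needs.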