When the results of an A/B split test are in, you always have to make a decision:
Do I switch to the new version or not?
If the difference in conversion is large, the decision is easy: just switch to the best version.
If the difference is small, the decision may be harder, but then again, which way you go doesn’t matter much anyway.
However, I have just done an A/B test where the difference in results is large, but it's still not clear whether I should stick with version A (my control) or switch to version B (my challenger). Here’s my test:
Sign-up for the trial boxes
Our product pages have a “sign-up for the trial” box in the side-panel on the right and an additional big sign-up box in the content area, a little way down the page. The big extra one doesn’t look too great and it kind of disturbs the “flow” of the content.
So I wondered whether removing that extra box would make a difference. The test:
- Version A: “Sign-up for the trial” boxes in both the side panel on the right and in the wide content panel.
- Version B: Removed the big sign-up box, so just a “sign-up for the trial” box in the side panel remains.
These are the numbers for version B over version A, after 14 days.
In other words, removing the big additional sign-up box caused:
- Sign-ups: 15.6% fewer
- Sales: 1.5% more
- Average purchase: 5.9% higher
- Total profits: 8.0% higher
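For a sanity check, percentage deltas like these can be reproduced from raw counts. The counts below are made up for illustration (the post only reports percentages), and `pct_change` is a hypothetical helper:

```python
# Made-up raw counts per variant -- the post only gives percentage deltas,
# so these numbers are illustrative, chosen to roughly match them.

def pct_change(b, a):
    """Relative change of version B over version A, in percent."""
    return (b - a) / a * 100.0

a = {"signups": 500, "sales": 200, "avg_purchase": 42.50}
b = {"signups": 422, "sales": 203, "avg_purchase": 45.00}

for metric in a:
    print(f"{metric}: {pct_change(b[metric], a[metric]):+.1f}%")

# Profit is sales x average purchase, so the profit delta compounds the two:
profit_delta = pct_change(b["sales"] * b["avg_purchase"],
                          a["sales"] * a["avg_purchase"])
print(f"profit: {profit_delta:+.1f}%")
```

With these invented counts the compounded profit delta lands near +7.5% rather than exactly the reported +8.0%; the real traffic numbers would of course differ.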
As expected, removing the extra sign-up box caused a decrease in the number of sign-ups. (Though I did not expect the numbers to be this dramatic.)
However, profits are up and considerably so. Not sure why. Maybe it’s the better looking and better flowing content story? Maybe making the trial less prominent causes an increase in the number of immediate purchases?
So now what?
Do I opt for version B and thus remove the extra sign-up box? More profit is good, right?
Or should I go for more trial users and leave my site as is (A), with two sign-up boxes?
What do you think? What would you do?
Alwin, are you sure that the extra sales and profits were caused only by removing that sign-up box? There are many other things that can affect them. Say, Friday the 13th 🙂
Maybe you should continue the test for another 14 days? After a second run you will be either more confident or more confused 🙂
Max, external influences would have the same effect on A and B; that is the whole idea of A/B split testing: to rule those out.
But yes, running the test longer would be a good idea, especially since one can expect the decrease in trial sign-ups to have its effect on sales in the long run.
How long is the trial? What is your trial sign up to conversion ratio? Was this factored into what you listed as profits?
Chris, our trial is not limited in the number of days. It has a database limit (e.g. max 100 movies for Movie Collector). IMO a time-limited trial is only a good idea if you *really* can’t think of a better trial limitation.
My trial sign-up conversion rate (from visitors to trials) is around 6%.
But I am not sure how either of these would matter in making the decision? Profits are profits, that is, the total profits generated by actual *sales* in the test period, generated by the new visitors in that test period.
Nice example of Revenue vs Economic Value, #2 of Avinash’s latest blog:
Jeroen, thanks for that interesting link.
However, the only KPI for me is the bottom line: profits.
Sign-ups by themselves are without value. They only become valuable once they convert into actual sales 🙂
Maybe more went to purchase directly because they overlooked the smaller “sign up for trial” box?
Maybe you should test removing that one too 🙂
I would go with the more profitable result. *But* is 14 days long enough to test this?
If a significant number of trial users take more than 14 days to convert, then you might get significantly different results if you run the test longer. I.e., would customers converting from trial to sale on days 15 to 28 make a significant difference to the profits? I am guessing they would.
If you know how many users convert after 14 days you should be able to work out the numbers without even re-running the test.
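That back-of-the-envelope adjustment could be sketched like this. Every number here is an assumption for illustration; the late-conversion rate, profits and counts are not from the actual test:

```python
# Credit each version with expected revenue from trial users who will
# convert to a sale only after the 14-day window closes.
# All inputs are illustrative assumptions, not the real test data.

def projected_profit(observed_profit, trial_signups, late_rate, avg_purchase):
    """Observed profit plus expected late trial-to-sale conversions."""
    return observed_profit + trial_signups * late_rate * avg_purchase

# Assume 10% of trial users convert after day 14.
a = projected_profit(observed_profit=8500.0, trial_signups=500,
                     late_rate=0.10, avg_purchase=42.50)
b = projected_profit(observed_profit=9180.0, trial_signups=422,
                     late_rate=0.10, avg_purchase=45.00)

print(a, b)  # A's larger trial pool narrows B's profit lead
```

The point of the sketch: because version A collected more trial sign-ups, crediting both versions with future conversions shrinks B's apparent profit advantage.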
After 14 days of testing, what would your results be if the A and B pages were completely the same? Did you run such a test in the past? Just curious how accurate these results are.
Chester, the volumes of visitors, sign-ups and sales are high enough to be accurate. After 14 days of testing with identical A and B pages, my results are normally within plus or minus 2% at most.
Minus 15% *is* significant, as is plus 8%.
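For readers who want to verify this kind of claim themselves, a two-proportion z-test is the usual tool for checking that a 15% drop clears a ±2% noise floor. A minimal sketch with made-up visitor counts (the real traffic numbers aren't published here):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up traffic: ~8,300 visitors per arm at a ~6% baseline sign-up rate.
z = two_proportion_z(conv_a=500, n_a=8300, conv_b=422, n_b=8300)
print(round(z, 2))  # |z| > 1.96 means significant at the 95% level
```

With these assumed volumes the drop in sign-ups comes out well past the 1.96 threshold, consistent with Alwin's point that the test has enough traffic to trust a 15% swing.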
Very interesting results! Good thing you tested both goals.
I have seen something similar with a test I recently ran comparing video vs. no video: not having a video actually increased opt-ins, but fewer people confirmed their subscription. Go figure.