GoodUI Newsletter

A/B Testing Isn't Easy, Especially When Traffic Drops

We know it's not. We just finished a 3-month-long A/B test where we struggled a bit. Things looked promising at first when we set out to improve the purchase rate of e-books for one particular client. We calculated our sample size, set up our tracking metrics, came up with our safe-bet variation and thought we'd be done in less than 2 months. And then the traffic started falling ...

It was an e-book page riding the wave of a new product launch, so the drop was bound to happen. At first the page had 8k unique visitors each month, then traffic trickled down to half of that, and then fell even further. The fact that it was a teacher-focused site and we tested through December (when teachers probably don't want to think about school, or books) didn't help either. Nevertheless, we did not give up. Instead, we did a few things to save the test.

1. We Ran The Test Longer

Time is your friend when you A/B test. Luckily, we did not have to stop at a fixed date, so we didn't. The longer the test runs, the more data you collect, and the higher the chance of detecting a significant effect. Simple.
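
To make the intuition concrete, here is a minimal sketch in Python of how the power of a standard two-proportion z-test grows as visitors accumulate. The 2% baseline purchase rate, the +46% lift and the monthly visitor counts are all made-up assumptions, not our client's numbers.

from statistics import NormalDist

def power_two_proportions(p1, p2, n, alpha=0.05):
    # Approximate power of a two-sided two-proportion z-test, n visitors per arm.
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n) ** 0.5  # unpooled standard error
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)       # ~1.96 for alpha = 0.05
    return 1 - NormalDist().cdf(z_crit - abs(p2 - p1) / se)

# Hypothetical rates: 2% baseline purchase rate, +46% relative lift.
p_control, p_variation = 0.02, 0.02 * 1.46
for months, visitors_per_arm in [(1, 4000), (2, 6000), (3, 7000)]:
    power = power_two_proportions(p_control, p_variation, visitors_per_arm)
    print(f"{months} month(s), {visitors_per_arm}/arm -> power ~ {power:.0%}")
    # prints roughly 76%, 90%, 94%

Under these assumed rates, even a slowing trickle of visitors keeps nudging the odds of detecting a real effect upward.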


2. We Mirrored The Test
We quickly realized that there were other e-book product pages very similar to our control. So we duplicated the test onto one additional e-book page, with the metrics set up in exactly the same way and the exact same UI changes. We had never tried this strategy before, but it worked out well. It essentially provided us with more data, which we were able to pool from both tests at the end of the project.
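
The pooling step itself is just arithmetic: sum the raw counts per arm across both pages before computing rates. A quick Python sketch, with hypothetical counts rather than the client's actual data:

# (conversions, visitors) per arm, one tuple per e-book page
control   = [(41, 2100), (12, 550)]   # original page, mirrored page
variation = [(58, 2080), (19, 570)]

def pool(counts):
    conversions = sum(c for c, _ in counts)
    visitors = sum(v for _, v in counts)
    return conversions, visitors, conversions / visitors

c_conv, c_n, c_rate = pool(control)
v_conv, v_n, v_rate = pool(variation)
print(f"control:   {c_conv}/{c_n} ({c_rate:.2%})")    # 53/2650 (2.00%)
print(f"variation: {v_conv}/{v_n} ({v_rate:.2%})")    # 77/2650 (2.91%)
print(f"relative lift: {v_rate / c_rate - 1:+.1%}")   # +45.3%

One caveat worth flagging: summing raw counts like this assumes the two pages behave similarly. If they differ a lot, per-page results (or a weighted combination) are the safer route.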

3. We Fell Back On Softer Metrics
Instead of relying on hard but scarce "page visits" to a post-purchase thank-you page, we moved one step back along the funnel and looked at "clicks" on the final payment buttons. This gave us a bit more data to work with, even if the effect might have been slightly magnified (since clicks don't reflect properly validated purchases). Clicks on the final payment buttons (imagine two buttons labelled Checkout With Paypal and Checkout Using Visa) still captured purchase intent quite strongly, we felt. They were also "protected" from quick fingers by residing in a modal overlay. One more reason we resorted to "clicks" as the primary metric was that our "page visits" also became corrupted, technically speaking. Perhaps that's another risk of running a test for too long. Nevertheless, with this tactic we had even more data to look at, and every bit helped.
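
Here's a rough illustration of why the softer metric helped. Assuming clicks happen about 3x as often as validated purchases, with the same relative lift and the same traffic, the click metric can reach significance while the purchase metric cannot. All numbers below are hypothetical:

from statistics import NormalDist

def p_value_two_proportions(c1, n1, c2, n2):
    # Two-sided two-proportion z-test on raw (conversions, visitors) counts.
    p1, p2 = c1 / n1, c2 / n2
    p_bar = (c1 + c2) / (n1 + n2)
    se = (p_bar * (1 - p_bar) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p2 - p1) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

n = 1800  # visitors per arm (hypothetical)
# purchases: ~2% base rate; clicks on payment buttons: ~6% base rate
print("purchases:", round(p_value_two_proportions(36, n, 52, n), 3))
print("clicks:   ", round(p_value_two_proportions(108, n, 156, n), 3))

With these made-up counts, purchases come out at p = 0.084 while clicks land at p = 0.002 from the exact same visitors: same relative lift, much clearer signal.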

In The End, We Achieved What We Were After
Once the extended testing duration was over, we finally arrived at a +46% increase in e-book purchases with a p-value of 0.03 (a 3% chance of seeing a difference at least this large if the variation actually had no effect). In the end there were at least 5 good reasons why we suggested implementing the winner. You can read about them in the complete and latest Datastory if you're curious (along with all the juicy UI changes in the variation and other process learnings).
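
If you're wondering where a number like that comes from, here's a back-of-the-envelope version of the standard two-proportion z-test in Python. The counts below are invented to land near the reported result; they are not the actual test data.

from statistics import NormalDist

c_conv, c_n = 54, 2700   # control:   2.00% purchase rate (hypothetical)
v_conv, v_n = 79, 2700   # variation: 2.93% purchase rate (hypothetical)

p1, p2 = c_conv / c_n, v_conv / v_n
p_bar = (c_conv + v_conv) / (c_n + v_n)
se = (p_bar * (1 - p_bar) * (1 / c_n + 1 / v_n)) ** 0.5
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"lift {p2 / p1 - 1:+.0%}, z = {z:.2f}, p = {p_value:.3f}")
# prints: lift +46%, z = 2.19, p = 0.028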

We Know It Can Be A Struggle
Because we understand that A/B testing has its challenges, and not everyone has 3 months to spare, we share our best stories with you in full detail. I'll be honest: we're still learning about new testing strategies, what works, what doesn't, and how to set up tests. But we are definitely getting better at it, and we very much enjoy writing about our best insights while saving people loads of time in the process. :)

Thank You,
Jakub Linowski
www.linowski.ca
@jlinowski

