4 Optimization Experts Share Advice on How to A/B Test Effectively
Jack Maden | December 7, 2015
In the first of our three-part expert interview series on A/B testing, we examine the favored testing strategies of CRO experts. Should you run as many tests as possible regardless of what they're based on? Or should you focus on thoroughly researching your hypotheses before testing them?
We asked four optimization experts the following question:
Do you think A/B testing strategy should be ‘test as much as possible’? Or should you only test when you have researched thoroughly?
Find their answers below!
“The effectiveness of a testing program is measured by three things: the number of tests run (per year), the percentage of winning tests, and the uplift per successful experiment. So you absolutely do want to run as many tests as possible, but you don't want to test silly things that make no difference (and drag down your percentage of winning tests).
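The three metrics named above can be combined into a rough back-of-the-envelope estimate of a program's annual impact. Here is a minimal sketch; the function name and the numbers plugged in are illustrative, not from the interview, and it assumes winning tests compound multiplicatively:

```python
def annual_program_growth(tests_per_year: int, win_rate: float,
                          avg_uplift: float) -> float:
    """Estimate compounded conversion-rate growth from a testing program.

    Each winning test is assumed to lift conversion by avg_uplift
    (e.g. 0.05 for a 5% relative uplift), and the wins compound.
    """
    wins = tests_per_year * win_rate
    return (1 + avg_uplift) ** wins - 1

# Illustrative numbers: 50 tests/year, a 20% win rate, 5% average uplift.
growth = annual_program_growth(50, 0.20, 0.05)
print(f"Estimated annual conversion growth: {growth:.1%}")
```

Note how the three levers interact: doubling test volume while halving the win rate (by testing "silly things") leaves the estimate unchanged, which is exactly the trade-off the quote warns against.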
“It's vital that you do conversion research to figure out what matters, and come up with data-backed test hypotheses (to win more tests). While you're conducting the research, you can run tests based on heuristic analysis. You don't want to have time periods where you have no tests running at all.”
“Of course every optimization team strives to get as many tests live as possible in order to maximize results and insights, but there is no point in continuously pushing out tests that end up inconclusive. On the other hand, you can end up on the same hamster wheel by endlessly doing research without some sort of process or framework; the pursuit of that ‘perfect’ hypothesis could hold back potential results and insights.
“Setting a proper framework for prioritizing tests will guide the research along with it. Different areas of a website require different types of research, and defining those areas by their ROI potential, importance, and ease of testing helps uncover which. For instance, site-wide navigation and checkout forms may call for user-interaction and UX research, whereas landing pages and product detail pages may be better suited to visitor polls and customer surveys.
“As with everything in life, there needs to be a balance. Balance of testing and research will ensure efficient testing with relevant results and insights.”
“In my opinion, keeping up your testing velocity is definitely important if you have a high-traffic site. If you don't go for that strategy, you're wasting the opportunities that might be out there, since you have the chance to simply run more tests.
“For both high- and low-traffic sites, it's always about making sure that the tests you run are high quality and make sense. The longer you test, the easier prioritization becomes. If a certain area consistently yields only 1-2% increases while others offer more growth, I would definitely focus on the latter.”
“Don't test randomly and don't go from test to test – that's for sure. You always need a strategy that works for your business objectives so you can get the most out of testing. Testing needs research, knowledge about consumer psychology and good documentation. You have to ask the right questions and choose the best approach to get an answer.
“An A/B testing strategy is a good thing to have, but it's useless if you don't have an optimization culture in your organization. When a company hires me, they want me to set up A/B tests to lift their conversion rate; but I see that this is not the real, long-term solution for many businesses. You have to learn from your tests, share the results, document them well, and make optimization an ongoing part of your culture.
“An optimization culture will bring the company to the next level and get it in front of the competition. Testing has to be everybody’s job.”
Our experts all agree that finding a balance between depth of research and testing frequency is absolutely crucial to establishing an effective testing strategy. To sum it up in a (devastatingly clever and disarming) Immanuel Kant-esque statement, we could say this: research without testing is empty, testing without research is blind.
For higher traffic sites, tests do not have to run for as long to achieve statistical significance, so such sites should make the most of this by running as many tests as they can. But, again, this does not mean that testing hypotheses should be rushed and uninformed by data: develop a progressive strategy based on the results of past tests.
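To make the traffic point concrete, here is a rough sketch of why lower-traffic sites must be choosier: the visitors needed per variant to detect a given lift grows quickly as the baseline rate and the detectable effect shrink. This uses the standard normal approximation for a two-sided, two-proportion test; the function name and example numbers are our own illustration, not from the article:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base: float, mde_rel: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant to detect a relative lift.

    p_base: baseline conversion rate (e.g. 0.02 for 2%).
    mde_rel: minimum detectable relative effect (e.g. 0.10 for a 10% lift).
    Uses the normal approximation for a two-sided two-proportion z-test.
    """
    p_var = p_base * (1 + mde_rel)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_var) ** 2
    return math.ceil(n)

# A 2% baseline with a 10% relative lift needs tens of thousands of
# visitors per variant; a low-traffic site may take weeks to get there.
print(sample_size_per_variant(0.02, 0.10))
```

A site that can fill that sample in a day can afford many parallel, exploratory tests; a site that needs a month per test has far fewer shots and should spend them on well-researched hypotheses.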
Finally, it is important to establish a testing culture throughout your organization. Having the interest and support of your colleagues will lead to better ideas and allow more time for a testing strategy to grow and repay both the financial and time investment.
The next instalment in our series on A/B testing will ask our experts how they go about developing hypotheses for testing – should they always be based on data rather than conjecture? Join us next time to find out!