A/B Testing: 4 Optimization Experts Reveal How to Create Effective Hypotheses
Jack Maden | December 9, 2015
Part two of our three-part interview series on A/B testing asks four optimization experts how they develop testing hypotheses. While part one looked at overall testing strategy, part two focuses on a specific part of it: coming up with ideas and changes that are ripe for testing.
Here’s the question we asked our experts:
What do you think is the most effective way to come up with testing hypotheses? Should they always be based on data rather than conjecture?
And here’s what they said!
“This depends on how terrible the website is. If it's completely amateurish, looks like your Grandma designed it, and has copy that no one can understand, you don't need data. You can spot the low-hanging fruit right away. But as the basic problems get fixed and the site gets more optimized, it gets more and more difficult to guess correctly what might improve it. That's where you start needing data.
“The most effective way to come up with solid test hypotheses is by using multiple data inputs. I personally use the ResearchXL framework, which uses 6 types of data input: technical analysis, heuristic analysis, digital analytics, qualitative surveys, mouse tracking and user testing. Data from those sources will feed into a central pool of insights that you can use to come up with winning hypotheses.”
“Coming up with experiment hypotheses is a combination of art and skill, equal parts data and inspiration. Ideas often come from a straight-line logical deduction, but also sometimes from a lateral thought that is based on unrelated experiences. That’s why the best optimization strategists have broad curiosity that isn’t specific to one subject. The most creative ideas come from the rich soil of varied interests.
“It’s easy to come up with ideas based on past experiments. If you’ve done your design of experiments correctly, you’ll have plenty of insights and questions to follow up with. On the other hand, you don’t want to end up down the rabbit hole of continually testing on top of what you already have. It’s often a good idea, after several rounds, to step back and just do a freeform brainstorm with moonshots and radical ideas. These variations are riskier, and usually require more time and effort; however, they could lead you down an entirely new path of results and insights to explore.
“Ultimately, what matters most is that you have a hypothesis going into each experiment and you design each experiment to address that hypothesis. The source of those hypotheses may be data, feedback from users, neuromarketing tactics, and so on.”
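As a concrete illustration of designing an experiment around a single hypothesis, the sketch below runs a two-proportion z-test on a stated hypothesis. The function name and the conversion numbers are our own invented examples, not something from the interview:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothesis: the new variant (B) lifts conversions over control (A).
# Counts here are made up for illustration.
z, p = two_proportion_z_test(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Stating the hypothesis first, then evaluating only that comparison, keeps the experiment honest: you avoid fishing through the results afterwards for whatever happens to look significant.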
“There is nothing wrong with coming up with ideas based on conjecture, but you at least want to make sure they’re confirmed by data. At The Next Web we look at the data to make sure that the hypotheses we come up with make sense. You can test whatever you like, but you want to be sure the test is coherent and that its outcome will end up as close to your hypothesis as possible.
“We come up with a lot of ideas based on the data that we see: different traffic sources have different behavior, so why not test on them specifically? If people on our team come up with ideas, we’ll also check that the changes will have a big enough impact to warrant keeping them around.”
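Martijn's point about requiring a big enough impact can be made concrete with a quick sample-size check. The sketch below is our own illustration using a standard power approximation, not The Next Web's actual process; it estimates how many visitors each variant needs before a given relative lift is even detectable:

```python
import math

def required_sample_size(p_base, min_relative_lift, z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate at 5% significance (z=1.96) with 80% power (z=0.84)."""
    p_new = p_base * (1 + min_relative_lift)
    # Sum of the two per-observation binomial variances
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p_new - p_base) ** 2)

# e.g. a 4% baseline conversion rate, caring only about lifts of 10% or more
print(required_sample_size(0.04, 0.10))
```

If the required sample is far beyond the traffic a page actually gets, the hypothesised change is too small to be worth testing, which is exactly the "big enough impact" filter described above.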
“Testing hypotheses have to be born out of curiosity. That's a personality trait you really need if you want to be a great optimizer – or a great company for that matter.
“Every day I want to know more about the customer. If I see something in the data or hear a complaint at customer service, I want to know why that is. We’ll do further research into it, and the team will come up with ideas about what we can do to improve it.
“When I'm analyzing A/B test results I have follow-up questions and ideas to test. That’s effective testing. When you are curious and want to learn everything there is to know about your customer, you have hypotheses to test.”
Our experts tend to agree that testing hypotheses can be based on conjecture but, as Martijn advises, should at least be confirmed by patterns observed in customer data. Peep goes further, suggesting that referring to the data – or testing altogether – can be bypassed in some situations. Indeed, when areas of a site are so bad that it looks like ‘your [presumably tech-unsavvy] Grandma designed it’, using data to create hypotheses is unnecessary, and testing a fix for a typo or a broken image is simply a waste of time.
Basing tests on past experiments is important, but Nick notes that you should avoid getting too bogged down in one area. Step back sometimes and, as Amanda agrees, let your curiosity guide your hypothesis creation.
Do you have any favored approaches to coming up with new hypotheses for testing? Let us know on Twitter or in the comments below!
The third and final instalment in our series on A/B testing will ask our experts about future trends in the industry. What’s going to happen to testing with the emergence of new technologies? Will automation take over? Will there be a drift towards multi-variate testing? Join us next time to find out!