A/B and Multivariate Testing are not Proxies for Decision Making
Jared Smith | May 27, 2016
I've watched the CRO industry grow over the past few years and the most disturbing trend is that companies, from marketing managers to business owners, are abdicating their decision making privileges to a split test.
This is a logical progression from testing’s inception to its industry dominance, so I'm not blaming anyone. Rather, I'm calling for an end to this behavior.
Since the '90s, more and more companies have aspired to be "data driven" – a pursuit that, for the pure of heart, means using data as evidence in decision making. Yet increasingly, companies have become digital bit chasers, simply following the digital herd. They aren't using data to drive their company; they're being led around in circles by it.
This has spilled over into split testing. "Let the data decide [the winner]" is a common turn of phrase – and one that threatens to spoil the industry.
If a split test can make decisions, then why not run all of our decisions through it?
- What's the best title for this article?
- What's the best color for this button?
- What's the best mix of payment options?
- What's the best layout?
- Form fields?
- Call to action?
I die a little every time I hear this conversation...
"Should we say 'Drive it further' or 'Drive it Straighter?'" [Golf]
"I don't know. We should test that!"
AHHHGGG!!!!! DON'T TEST THAT!
What's the context? What's the potential value? Why does this test matter?
It's not that 'further vs. straighter' is inherently a bad test – it's that there was no thought. The decision was simply abdicated to the crowd.
I'm sorry to be the one who has to tell you... you can't crowdsource design, and you can't crowdsource the future of your company.
If you have a sufficiently large crowd and you want to make a pizza that is perfect for everyone, your pizza will suck. Think about it. Someone is likely vegetarian, so no meat. Someone is likely lactose intolerant, so no cheese. And someone is likely on a gluten-free diet, so no bready pizza crust. You're left with tomato sauce on a cracker.
Don't turn your website into a tasteless cracker. There's a better way – and it starts with asking questions, giving the test context and purpose.
Ensure your tests have specific purposes
What will the results of the test affect? Just the element being tested, or do they have larger implications? Who should see the test – everybody, or just your best customers, since you want more people like them?
Start with a theory. Golfers who just want to hit the ball straight are likely beginners, while those who want to hit it further are likely a little more advanced. Since this is a beginner's course, 'straight' should be more attractive to the visitors who benefit most – and thus the copy most likely to drive purchases.
Now, even if 'straight' doesn't have a significant effect on your visitors, you have a much deeper theory about them that you're working to understand. The result also has value for every other offer: it tells you how to match copy and design to the level of player you're trying to sell to (assuming the theory holds up). What other changes could you test to further support the theory that subtly speaking to beginners will increase conversions?
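To make "understanding" concrete rather than just "deciding," here is a minimal sketch of evaluating such a split test with a standard two-proportion z-test. All traffic and conversion numbers below are hypothetical, not from the article, and a real tool would handle sample-size planning and segmentation for you:

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    conv_a/n_a: conversions and visitors for variant A ('straight')
    conv_b/n_b: conversions and visitors for variant B ('further')
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 'straight' converts 50/1000, 'further' 30/1000
z, p = z_test_two_proportions(50, 1000, 30, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ 2.28, p ≈ 0.02
```

The point of the article still stands: a low p-value only tells you the difference is unlikely to be noise. Whether the result supports your theory about beginners – and what to test next – remains a human judgment.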
Split testing isn't about making decisions, it’s about understanding your customers.
** mic drop **
This is an excerpt from Jared Smith's upcoming book "Don't Test That". To sign up to be the first in line for "Don't Test That" go here.
Do you agree with Jared that testing should not dictate your decision making? Let us know in the comments below – or join the conversation on Twitter.
Alternatively, if you'd like to explore conversion optimization further, then be sure to check out Decibel Insight's comprehensive free guide. It's packed full of industry insight, techniques, and expert advice. Hit the image below to download it now!
Written on May 27, 2016 by:
Jared is founder of conversion optimization agency CountourThis.