
Key Optimization Takeaways from Click Summit 2016

by Phil Haslehurst

Click Summit 2016 brought together analysts, optimizers and marketers from across the USA for a three-day symposium centered on testing and optimization, in the tranquil setting of Raleigh, North Carolina.

Click Summit takes the form of peer-to-peer learning, with a sprinkling of keynotes thrown in to keep things varied. The result is an in-depth, practical learning experience that feels unique and refreshingly candid.

The agenda brought together round-table conversations, rather than presentations, on a diverse range of topics – mobile testing, the upside of losing, using data to make decisions, the significance of significance. There was something there for everyone. Each attendee that I spoke to commented that the atmosphere of sharing made the summit extremely valuable. Kudos to Brooks Bell for organizing such a slick, enjoyable and informative event.

Despite the diversity of discussion points, common themes emerged in each of the conversations that I was a part of. To my mind they are a snapshot of the state of the optimization space at this point in time – and show the challenges and opportunities facing enterprise organisations as they mature in their own testing programs.

Politics Politics Politics

Maybe it’s because it’s a presidential year, but politics was never far from people’s minds. Thankfully we weren’t talking Trump vs Clinton, but rather the internal politics of building out a successful testing program.

Getting buy-in across the organisation for a testing program remains a challenge for optimizers. It’s one thing to create understanding and acceptance of testing in principle, but quite another to get that acceptance to hold up under the pressure of real-life circumstances.

Leadership teams understandably tend to see losing tests as wasteful, which shakes their confidence in the value of testing overall. The result can be that in the early stages of a test, when results can be sporadic – the “crazy zone”, as one panelist called it – stakeholders lose their nerve and abandon the test altogether. Time and again this fear factor came up in conversation as a big issue.


Communication is key. Sharing insights and test results, including discussion of what was learned from losing tests, helps to build understanding and acceptance throughout the organisation. But it’s not easy in a competitive business environment where revenue is critical and short-term thinking can prevail.

It can also be helpful to run AA or “double control” tests to illustrate that all tests have an initial “crazy zone” – even when the two variants are identical to each other. Over time, the results normalize, the lines smooth out, and the results of the test can be relied upon. That’s the point at which decisions can be made about the right way forward.
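To make the idea concrete, here is a minimal simulation of an A/A test (the numbers and function are hypothetical, purely for illustration): two identical variants with the same true conversion rate will still show apparent “lifts” early on, which shrink as traffic accumulates.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def simulate_aa_test(true_rate=0.05, visitors=20000):
    """Simulate an A/A test: both variants share the same true
    conversion rate, yet the observed lift can swing early on.
    Returns (visitor_count, observed_lift_percent) snapshots."""
    conv_a = conv_b = 0
    snapshots = []
    for n in range(1, visitors + 1):
        conv_a += random.random() < true_rate
        conv_b += random.random() < true_rate
        if n in (100, 1000, 20000):
            lift = (conv_b - conv_a) / max(conv_a, 1)
            snapshots.append((n, round(lift * 100, 1)))
    return snapshots

for n, lift in simulate_aa_test():
    print(f"after {n:>6} visitors per variant: observed 'lift' {lift:+.1f}%")
```

Even though both “variants” are identical, the early snapshot typically shows a non-zero lift – exactly the crazy-zone noise that can spook stakeholders before the lines smooth out.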

Test Everything

Testing has gone meta.

It’s been fun getting to grips with testing stuff like page copy, call-to-action design, form layouts and even things like pricing and promotions.

But it’s time to think bigger.

What about testing the product itself? Thursday’s keynote came from Christian Rudder, co-founder of dating website OkCupid and author of Dataclysm: Who We Are (When We Think No One’s Looking).

Christian shared some brilliant (and hilarious) insights into how online daters behave – and also how OkCupid optimized the experience it delivered by testing its matching algorithm.

The lesson – testing is bigger than individual page elements or page designs. Testing whole experiences is the next level.

More than Revenue

Testing and revenue are inherently linked – but does it have to be that way?

At Click Summit we were urged to think the unthinkable. To stop looking at testing as a route to short term revenue gains, and instead consider it as a way to deliver better visitor experiences, loyalty and lifetime value.

What’s needed to do this?

First off, a change in the metrics we focus on. Rather than optimizing for increased conversions, what are the metrics that indicate improved experiences? Those metrics could be gleaned from voice of customer tools, NPS, surveys – or they could be found through your analytics tech: identifying the high value moments in the customer journey and optimizing towards them, or identifying the behavioral symptoms that indicate customer satisfaction, and building out those experiences for other visitors.

Second – and yes, we’re back to politics in a way – what’s needed is a cultural shift. For senior management, and the broader organisation, to embrace a long-term vision for testing that goes beyond simply hacking a few more sales.

This boils down to customer-centricity and seeing the value of data beyond the bottom line.


At OkCupid these two ideas combined in an obsession with increasing the number of “four-way” communications that took place.

A “four-way” communication is where person X messages person Y, person Y replies, and then that cycle repeats one more time. Simply put, four messages are exchanged between two people.
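As a sketch of how such a metric might be computed – using a hypothetical message-log format of `(sender, recipient)` tuples, not OkCupid’s actual data model – counting four-way conversations comes down to counting alternating turns between each pair of users:

```python
from collections import defaultdict

def count_four_ways(messages):
    """Count user pairs who completed a 'four-way' exchange: at least
    four alternating messages (X->Y, Y->X, X->Y, Y->X).
    `messages` is a chronological list of (sender, recipient) tuples."""
    threads = defaultdict(list)
    for sender, recipient in messages:
        # group messages by the unordered pair of participants
        threads[frozenset((sender, recipient))].append(sender)
    four_ways = 0
    for senders in threads.values():
        # each change of sender is a reply, i.e. a new turn
        turns = 1
        for prev, cur in zip(senders, senders[1:]):
            if cur != prev:
                turns += 1
        if turns >= 4:
            four_ways += 1
    return four_ways

msgs = [("alice", "bob"), ("bob", "alice"), ("alice", "bob"), ("bob", "alice"),
        ("carol", "dan"), ("dan", "carol")]
print(count_four_ways(msgs))  # → 1: alice/bob complete a four-way; carol/dan stop at two
```

The design choice here is to key threads on the unordered pair, so that a conversation counts once regardless of who opened it.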

That level of reciprocity became a benchmark of success for the entire product, and optimization efforts were driven towards increasing the number of those engagements. Nothing to do with signups or revenue – a lot to do with satisfied customers.


Written by Phil Haslehurst

Phil is Head of Marketing at Decibel Insight.