You hear it over and over again — test, test, test — but how many of you are actually following that advice? Testing is a critical component of website optimization, and two companies that have taken heed of this fact are casual footwear retailer Crocs and women's fashion apparel brand Aritzia.
In a session yesterday at the Demandware XChange conference in Las Vegas, Scott Keller, senior web developer at Aritzia, and Haley Nemann, senior manager of global e-commerce user experience at Crocs, discussed how their brands are using A/B testing to increase sales, decrease costs and increase customer engagement.
At Aritzia, testing is used to prove (or disprove) a hypothesis. For example, the Vancouver, Canada-based retailer recently tested whether making the "Add to Bag" buttons on its product detail pages larger would drive more clicks to checkout. The prevailing thought at the brand was that it would. The test ran for four weeks, with the audience segmented 50/50 between those who saw the larger button and those who saw the standard-size button.
The results came in after four weeks, and Aritzia's hypothesis proved true: a more visually dominant "Add to Bag" button led to more people clicking it and advancing to checkout. Consumers in Canada clicked the larger button 3.97 percent more often than the standard-size button, and visitors in the U.S. clicked it 2.93 percent more often.
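A split like the one Aritzia describes is typically implemented by deterministically bucketing each visitor, and the resulting difference in click-through rate is then checked for statistical significance. The sketch below is a minimal illustration of both steps, not Aritzia's actual implementation; the visitor and click counts are hypothetical.

```python
import hashlib
import math

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into a 50/50 split.

    Hashing the visitor ID (rather than randomizing per page view)
    keeps each visitor in the same variant for the whole test.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "larger_button" if int(digest, 16) % 2 == 0 else "standard_button"

def lift_and_significance(clicks_a, visitors_a, clicks_b, visitors_b):
    """Relative lift of variant A over variant B, plus a two-proportion z-score."""
    rate_a = clicks_a / visitors_a
    rate_b = clicks_b / visitors_b
    lift = (rate_a - rate_b) / rate_b            # e.g. 0.0397 ~ "3.97 percent more"
    pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_a - rate_b) / se                   # |z| > 1.96 ~ significant at 95%
    return lift, z

# Hypothetical numbers for illustration only -- not Aritzia's data.
lift, z = lift_and_significance(2_080, 50_000, 2_000, 50_000)
print(f"relative lift: {lift:.2%}, z-score: {z:.2f}")
```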
The test required just one to two days of development time and was deemed a success, said Keller. Even if a test doesn't yield the result you thought it would, it can still provide valuable data that can help guide decisions, he added.
For example, one test in which Aritzia's hypothesis proved wrong concerned the value of adding a mini summary to its bag/cart page. The retailer theorized that adding the mini summary would increase the percentage of visitors who left the purchase funnel. Aritzia tested this and found the opposite to be true: 3 percent more of the people who saw the mini summary completed their purchase.
A Testing Strategy
Crocs runs lots of tests, and those tests seek to answer one question: How far from optimal are we? In other words, how far will the results of this test move the needle? Nemann provided some guidelines on how Crocs decides what's worthy of being tested. First, the retailer looks at how easy the test is to execute and the risks associated with it (e.g., lost sales). It then examines the potential impact the results could have on customer reach and lift. It weighs those factors to prioritize which tests to run.
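That weighing step can be as simple as a weighted score per candidate test. The snippet below is one hypothetical way to rank tests by ease, risk and impact; the weights and scores are illustrative assumptions, not Crocs' actual framework.

```python
# Hypothetical prioritization scoring -- weights and candidate scores are
# illustrative assumptions, not Crocs' actual framework.
WEIGHTS = {"ease": 0.2, "risk": 0.3, "impact": 0.5}  # each factor scored 1-10; 10 = easiest / lowest risk / biggest impact

candidate_tests = [
    {"name": "single-page checkout", "ease": 2, "risk": 3, "impact": 9},
    {"name": "button color change",  "ease": 9, "risk": 9, "impact": 2},
]

def priority(test: dict) -> float:
    """Weighted score: easier, lower-risk, higher-impact tests rank first."""
    return sum(WEIGHTS[factor] * test[factor] for factor in WEIGHTS)

for test in sorted(candidate_tests, key=priority, reverse=True):
    print(f"{test['name']}: {priority(test):.1f}")
```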
Don't spend time testing minutiae, Nemann advised. Look at your purchase funnel, identify where the drop-offs are occurring, then test to see how you can optimize, she added.
Crocs recently tested the impact that moving from a five-page checkout to a single-page checkout would have on its business. While not as simple a test to execute as, say, changing the size or color of an "Add to Cart" button, the effort validated what Crocs already believed: the easier the checkout process, the more conversions the merchant will get. Visitors who were given the single-page checkout started the purchase process 6 percent more often than those given the five-page checkout.
Takeaway Tips
Keller and Nemann wrapped up their presentations with some tips for the other retailers in the audience on how they can best leverage A/B testing:
- Front-end testing (e.g., placement of a button) typically requires less development effort than testing business rules (e.g., product recommendation rules).
- Look at industry benchmark data to get insight before testing.
- Run your tests for a minimum of two to four weeks to get statistically valid results (a rough sample-size sketch follows this list).
- Failed tests shouldn't be considered failures.
- Design tests with a purpose, but optimize for multiple paths.
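On the point about test duration: two to four weeks generally comes down to collecting enough traffic per variant to detect the lift you care about. The sketch below uses the standard two-proportion sample-size formula; the baseline conversion rate, detectable lift and daily traffic figures are hypothetical, not numbers from either retailer.

```python
import math

def visitors_per_variant(baseline_rate: float, relative_lift: float) -> int:
    """Approximate visitors needed per variant to detect a relative lift in a
    conversion rate (two-sided 95% confidence, 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = 1.96   # two-sided 95% confidence
    z_beta = 0.84    # 80% power
    pooled = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical traffic assumptions for illustration only.
needed = visitors_per_variant(baseline_rate=0.04, relative_lift=0.10)
daily_visitors_per_variant = 3_000
print(f"{needed} visitors per variant "
      f"(~{math.ceil(needed / daily_visitors_per_variant)} days at current traffic)")
```

With these made-up inputs the math works out to roughly two weeks, which is why lower-traffic sites or smaller expected lifts push the run time toward the longer end of that range.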