When it comes to catalog marketing, I don’t like to leave anything to chance. Just about everything can and should be tested, including promotional offers, cover designs, minimum-order requirements, etc. Knowing what and how to test and retest is important to the success of any catalog. This month, I’ll review a few basic rules of testing, analyze the impact that a test of purchase minimums has on special offers and show how you might set up a test of your own.
Bucking the Minimum
I often see the minimum purchase required to qualify for a promotional offer set too high. Instead of encouraging people to order, order minimums can actually have the opposite effect. If your average order size is $65, that doesn’t mean 50 percent of orders are above $65 and 50 percent are below.
In a typical distribution, 70 percent of the orders (and only 37 percent of the dollars) fall below the average order size. Therefore, if you’re offering free shipping on all orders of more than $99, most orders fall considerably below that amount. It’s too much of a stretch for someone to reach the minimum, and it’s still a reach to set the minimum at $70, only slightly above the average order size in our example.
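Here’s a quick way to see the arithmetic, sketched in Python (you could just as easily do it in a spreadsheet). The 10 order amounts are hypothetical, chosen only so the math works out to a $65 average with roughly the 70 percent/37 percent split described above:

    # Hypothetical order amounts -- a few large orders pull the average
    # well above what the typical customer spends.
    orders = [20, 25, 30, 35, 40, 42, 48, 130, 135, 145]

    average = sum(orders) / len(orders)          # mean order size
    below = [o for o in orders if o < average]   # orders under the mean

    pct_orders_below = 100 * len(below) / len(orders)
    pct_dollars_below = 100 * sum(below) / sum(orders)

    print(f"Average order: ${average:.2f}")                        # $65.00
    print(f"Orders below the average: {pct_orders_below:.0f}%")    # 70%
    print(f"Dollars below the average: {pct_dollars_below:.0f}%")  # 37%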
When you consider the percentage of orders that fall below the promo order minimum you typically set, it makes a great deal of sense to test no minimum. This is a scary thought, but test after test has supported making offers without minimums, or at least setting minimums much lower than your typical average order.
Based on the testing I’ve done, you can expect a significant increase in the revenue per catalog (RPC). Your actual results may vary, but both the response rate and average order size will most likely be higher with no dollar minimum. Most of the benefit comes from an increase in response since more people are able to qualify for the offer. That means more people are ordering and being added to your 12-month buyer file. Shown below is a summary of actual test results for no minimum vs. a $99 minimum vs. the control (i.e., no offer).
Percent lift is calculated against the control. For the housefile, for example, free shipping with no order minimum increased RPC 33.58 percent compared with no offer, while free shipping with a $99 order minimum increased RPC 21 percent over no offer. Also, notice how much better the no-minimum results were than those of the $99-minimum segment.
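The arithmetic behind those lift figures is simple: RPC is total demand divided by catalogs mailed, and percent lift is the test panel’s RPC measured against the control’s. Here’s a minimal sketch; the panel demand figures are hypothetical, chosen only to land near the lifts reported above, and are not the actual test data:

    def rpc(demand_dollars, catalogs_mailed):
        """Revenue per catalog: total demand divided by pieces mailed."""
        return demand_dollars / catalogs_mailed

    def pct_lift(test_rpc, control_rpc):
        """Percent change in RPC for a test panel versus the control."""
        return 100 * (test_rpc - control_rpc) / control_rpc

    # Hypothetical housefile panels of 10,000 catalogs each.
    control = rpc(28_000, 10_000)   # no offer                  -> $2.80 per book
    no_min  = rpc(37_400, 10_000)   # free shipping, no minimum -> $3.74 per book
    min_99  = rpc(33_900, 10_000)   # free shipping over $99    -> $3.39 per book

    print(f"No-minimum lift:  {pct_lift(no_min, control):.1f}%")   # about 33.6%
    print(f"$99-minimum lift: {pct_lift(min_99, control):.1f}%")   # about 21.1%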
With regard to prospects, I often see an increase in both the response rate and average order size. When there’s a minimum, customers try to get to the minimum. Once they do, they stop buying. With no minimum, customers shop and spend more.
Simple Testing Rules
Test an offer with no minimum against your normal minimum order size. Or do a three-way test: the offer with no minimum vs. the offer with a minimum vs. no offer at all. Test to both the housefile and prospects. Execute the test properly so you can read the results. These best practices, which I’ve found over the years, will ensure an accurate, measurable result and a sound conclusion:
1. Clearly define the purpose of the test; define the objective.
2. Prepare a pro-forma; do your financial analysis.
3. Always test against a control.
4. Only test one variable at a time; keep everything else the same.
5. Run tests during your off-season whenever possible.
6. Always retest against a control or another offer.
7. Make sure your sample size will yield statistically valid results.
8. Source code the control group and test groups properly.
9. Read the results and act on what you see!
Keep all elements, such as creative, the same except for the variable you’re testing. Always test against a control group, i.e., a group that receives exactly the same treatment you’ve been giving all along. If you haven’t been making any offer, no offer is the control. If you always offer free shipping, free shipping becomes your control.
Use the Same Catalog
A group receiving something different from the control — for instance, offering a promotional incentive to a group of customers or prospects — is the test group. The same catalog should be mailed to all test panels on the same mail date. If you’re promoting the offer on the front cover for one test panel, do the same for any others. Promoting your offer on the cover for one group and on the inside order form for another doesn’t constitute a valid test, unless you’re testing where best to promote an offer.
I always feel it takes a minimum of 100 orders from any one group to read the results accurately. Sometimes fewer will do, depending on the leap you’re willing to make when rolling out.
A minimum of 100 orders per panel at a 1 percent response rate to prospects would mean your panels need to be 10,000 each. If you’re testing two promo offers against a control, print 10,000 copies for the A group (the control group) and 10,000 each for the B and C test groups. Then assign key codes to all three panels. This will enable you to track the results. Select a representative sample across all ZIP codes.
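The panel-size arithmetic is simply the minimum order count divided by the response rate you expect. Here’s a short sketch using the 100-order rule of thumb and an assumed 1 percent prospect response:

    import math

    def panel_size(min_orders, expected_response_rate):
        """Catalogs per panel so the expected order count reaches the minimum."""
        return math.ceil(min_orders / expected_response_rate)

    per_panel = panel_size(100, 0.01)   # 100 orders at 1% response -> 10,000 catalogs

    # Three panels -- A is the control, B and C are the promo tests --
    # each tracked with its own key code.
    panels = {"A (control)": per_panel, "B (no minimum)": per_panel, "C ($99 minimum)": per_panel}
    print(panels)                                   # 10,000 catalogs per panel
    print(sum(panels.values()), "catalogs total")   # 30,000 catalogs in all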
Also note that for reliable results, other variables must be identical for each group. All three groups need to be selected from the same list universe. They all need to be mailed on the same day. Then once the mailing is at least 50 percent complete, you can predict the winner with confidence.
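If you want a more formal check on rule No. 7 than the 100-order rule of thumb, one common approach is a two-proportion z-test comparing a test panel’s response rate with the control’s. Here’s a sketch with hypothetical order counts:

    import math

    def response_z(test_orders, test_mailed, ctrl_orders, ctrl_mailed):
        """Two-proportion z-statistic: test panel response rate vs. the control's."""
        p_test = test_orders / test_mailed
        p_ctrl = ctrl_orders / ctrl_mailed
        p_pool = (test_orders + ctrl_orders) / (test_mailed + ctrl_mailed)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_mailed + 1 / ctrl_mailed))
        return (p_test - p_ctrl) / se

    # Hypothetical counts: 130 orders from the no-minimum panel vs. 100 from the
    # control, 10,000 catalogs each. A z-score above roughly 1.96 is significant
    # at about the 95 percent confidence level.
    z = response_z(130, 10_000, 100, 10_000)
    print(f"z = {z:.2f}")   # about 1.99 -- just clears the bar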
Knowing how to test is critical to your success; it’s the only way you can determine what worked and what didn’t. Knowing what to test is critical, too. Most things can be tested. Don’t assume you know what will or won’t work. Test. Test. Test! The results might even surprise you!
Stephen R. Lett is president of Lett Direct, a catalog consulting firm specializing in circulation planning, forecasting and analysis. He’s the author of “Strategic Catalog Marketing,” published by Catalog Success. You can reach him at (302) 539-7257 or online at www.lettdirect.com.