Strategy: Anatomy of a Marketing Test
Testing is the way to the promised land. Without testing, you’re up a creek without a paddle. Although some elements are easier to test than others, just about everything can be tested.
Properly structuring a test is another matter entirely. It means testing only one variable at a time and keeping every other element identical across test groups.
If you’re testing two offers against one another and against a control group, for example, be sure all variables and conditions are the same for each test group. As for the difference between a control group and a test group: what you’ve been doing (offering no promotional incentive, for example) constitutes the control, while doing something different, such as offering a promotional incentive to a group of prospects, constitutes the test group.
Promote It the Same Way
Also, the same catalog should be mailed to all test panels on the same date, and the offer should be promoted in exactly the same way. If you’re promoting the offer on the front cover for one test panel, do the same on the other test panel. Promoting your offer on the cover for one group and on the inside order form for another doesn’t constitute a valid test; the offer’s design and placement must be the same. Thus, the only variable in your testing should be the copy describing the offer.
I’ve developed a set of rules for testing, and I’ve found that following these simple rules ensures accurate, measurable results and sound conclusions.
1. Clearly define the purpose — your objective.
2. Prepare a pro forma; do your financial analysis.
3. Always test against a control.
4. Only test one variable at a time.
5. Don’t test during peak season (unless necessary).
6. Always re-test against a control or another offer.
7. Make sure your sample size gives you statistically valid results.
8. Properly source code the control and test groups.
9. Read the results and act on what you see!
A/B Split Tests
A/B splits are the basis for all testing. In an A/B split, a group of buyers and/or prospects is divided into two equal groups on an every-other-name basis. Be sure your sample size gives you statistically valid results.
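To make the mechanics concrete, here’s a minimal sketch of an every-other-name split in Python. The list of names is purely hypothetical; in practice the records would come from your merge/purge output.

```python
def ab_split(records):
    """Divide a list into two equal panels on an every-other-name basis."""
    panel_a = records[0::2]  # every other name, starting with the first (control)
    panel_b = records[1::2]  # the alternating names in between (test)
    return panel_a, panel_b

# Hypothetical usage with placeholder names:
names = ["Smith", "Jones", "Lee", "Garcia", "Chen", "Patel"]
control, test = ab_split(names)
print(control)  # ['Smith', 'Lee', 'Chen']
print(test)     # ['Jones', 'Garcia', 'Patel']
```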
Determine which groups you want to test: housefile, prospects or both. Once you identify the test group, determine the quantity to mail.
As a general rule, you need at least 100 orders from any group to have a valid read on results. But sometimes it depends on the leap you’ll make when rolling out. Assuming you’re mailing smaller quantities and looking for a minimum of 100 orders per panel at a 2 percent response rate, you need a minimum of 5,000 copies per segment or test cell.
If you’re testing, say, three segments of your housefile and five different outside lists, you need to print at least 40,000 copies for the “A” group (the control group) and another 40,000 for the “B” group (the test group).
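That arithmetic, spelled out as a quick sketch (the 100-order minimum and 2 percent response rate are the assumptions stated above):

```python
orders_needed = 100            # minimum orders per panel for a valid read
response_rate = 0.02           # assumed 2 percent response rate
copies_per_cell = orders_needed / response_rate       # 5,000 copies per segment or test cell

cells = 3 + 5                  # three housefile segments plus five outside lists
copies_per_group = cells * copies_per_cell            # 40,000 for the "A" group
total_print = 2 * copies_per_group                    # 80,000 for "A" plus "B"
print(copies_per_cell, copies_per_group, total_print)  # 5000.0 40000.0 80000.0
```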
Next, assign key codes to every segment for the control group and for the test group; this will enable you to track your results. Proper coding is a must! When writing your merge specifications, carefully select a cross section of the housefile segments and outside prospect lists so you choose a representative sample across all ZIPs. Select both groups from the same list universe. Just change one variable at a time for a valid test.
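One simple way to generate those codes is sketched below. The segment names and the segment-plus-panel format are hypothetical, purely for illustration.

```python
# Hypothetical key-code scheme: one code per segment per panel (A = control, B = test).
segments = ["HF1", "HF2", "HF3", "LIST1", "LIST2", "LIST3", "LIST4", "LIST5"]

key_codes = {
    (segment, panel): f"{segment}-{panel}"
    for segment in segments
    for panel in ("A", "B")
}

print(key_codes[("HF1", "A")])  # 'HF1-A', captured at order entry to track results by cell
```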
Now you’re ready to mail. Track results at least weekly, and wait until the mailing’s response is at least 50 percent complete before drawing any conclusions.
With split tests, weigh the net benefit of selective binding, which keeps the mailing in one ZIP stream and maximizes your postal discount. Generally, at quantities of 300,000 or more, it’s more cost efficient to selectively bind.
Ask your postal service bureau for postage estimates both ways so you know where the cutoff is. If the postage savings outweigh the cost of selective binding, then selective binding is the way to go. How you handle A/B splits is even more important today, considering the higher mailing costs that just went into effect.
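The cutoff comparison itself is straightforward. Here’s a sketch with made-up per-piece figures standing in for the estimates your service bureau and printer would supply.

```python
# Hypothetical figures; substitute real quotes from your service bureau and printer.
postage_two_streams = 0.32     # per piece, mailed as two separate ZIP streams
postage_one_stream = 0.29      # per piece, kept in one ZIP stream via selective binding
selective_bind_premium = 0.02  # per-piece cost the printer charges for selective binding

quantity = 300_000
postage_savings = (postage_two_streams - postage_one_stream) * quantity  # about $9,000
binding_cost = selective_bind_premium * quantity                         # $6,000

if postage_savings > binding_cost:
    print("Selective binding pays for itself at this quantity.")
```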
In my chart (see below), the housefile, rentals and cooperative database lists are split into offer vs. no offer. “No offer” is the control group and “offer” is the test.
Contribution Analysis
The chart shows the rolled-up results, or totals, by offer vs. no offer. In all cases, the offer beat the control as measured by RPC (revenue per catalog). Based on this, we can draw a conclusion and confidently roll out the test.
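RPC is simply demand revenue divided by catalogs mailed. A sketch of the offer vs. no-offer comparison follows; the dollar figures are invented for illustration and are not the numbers from the chart.

```python
def rpc(revenue, catalogs_mailed):
    """Revenue per catalog: total revenue divided by pieces mailed."""
    return revenue / catalogs_mailed

# Hypothetical rolled-up totals per panel:
control_rpc = rpc(revenue=58_000, catalogs_mailed=40_000)  # no offer (control)
test_rpc = rpc(revenue=66_000, catalogs_mailed=40_000)     # offer (test)

if test_rpc > control_rpc:
    print(f"Offer wins: ${test_rpc:.2f} vs. ${control_rpc:.2f} RPC")
```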
The results of any test should include a contribution to profit and overhead analysis. On the surface, test results might look good. But you need to know if the offer and/or test can be cost-justified after all expenses have been considered — including the cost of the offer.
Do your homework. Know how much you need to increase the response rate and/or average order size to cost-justify an offer. For any test to be successful, it must increase the contribution to profit and overhead.
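As a rough sketch of that homework, the pro forma below uses assumed figures; the cost-of-goods percentage, mailing cost per piece and offer cost are all placeholders for your own numbers.

```python
def contribution(revenue, cogs_pct, mail_cost_per_piece, pieces, offer_cost):
    """Contribution to profit and overhead after cost of goods, mailing cost and the offer."""
    cogs = revenue * cogs_pct
    mail_cost = mail_cost_per_piece * pieces
    return revenue - cogs - mail_cost - offer_cost

# Hypothetical 40,000-piece panels:
control = contribution(revenue=58_000, cogs_pct=0.45, mail_cost_per_piece=0.65,
                       pieces=40_000, offer_cost=0)       # no promotional incentive
test = contribution(revenue=66_000, cogs_pct=0.45, mail_cost_per_piece=0.65,
                    pieces=40_000, offer_cost=3_300)      # cost of the incentive itself

print(test - control)  # the test succeeds only if this incremental contribution is positive
```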
Testing is critical to success. It’s a way to separate opinion from fact. Don’t just assume something will work. Test, test and re-test!
Stephen R. Lett is president of Lett Direct, a catalog consulting firm specializing in circulation planning, forecasting and analysis since 1995. He is the author of “Strategic Catalog Marketing,” a Catalog Success book published by Target Marketing Group Publications. You can reach him at (302) 537-0375 or at www.lettdirect.com.