Over the years, we’ve guided brands through A/B testing initiatives that drive growth and insights, educating and empowering our clients along the way. A/B testing is an email marketing initiative that can have profound long-term impact but is widely misunderstood. Let’s identify and correct the most persistent misconceptions about A/B testing in email campaigns.
You Shouldn’t Expect Instant Results
One of the biggest misconceptions is that every A/B test will deliver a silver bullet for quick, immediate wins.
You might run plenty of solid, well-structured tests that don’t produce any kind of winner. This doesn’t mean the test was flawed; it simply means the variable you introduced, like changing the copy format to bullet points, didn't move the needle one way or the other. If you test a great idea that falls flat, it can be hard on the ego — but it’s also important knowledge that you need to file away for reference. If you’re structuring your tests well and getting information to optimize future sends, the wins will accrue steadily.
Even when you do find a clear winner, you won’t necessarily see an overall performance spike: while the half of your test audience that received the winning variant knocks it out of the park, the other half received the underperforming version, which drags down the campaign’s aggregate results.
Remember that the aggregate value of A/B testing comes from the insights you can apply over time, and there’s great value in every kind of insight. Eventually, the best practice becomes reframing “failure” as “opportunity.”
Even Winning Campaigns Need Improvement; Stop Reusing the Same Test
A winning test from six months ago might not produce a winner today. I’m not advocating that you test in rigid six-month intervals, but I am recommending that you keep an eye on any winning campaigns to make sure their performance remains strong. If you see engagement trending steadily down over time, that’s your signal to take another look.
For Most Marketers, 'A/B Email Testing' Starts and Stops at the Subject Line
Yes, subject line tests are valuable — even though iOS 15 has made it harder to assess changes in open rate. The truth is, though, there are a number of other variables you can and should test. I recommend periodically refreshing your list of elements to test. Remember: the more you test, the more you learn.
We tend to align variables with their impact, like so:
- variables that affect open rate: day of the week, time of day, and subject lines; and
- variables that affect post-open engagement and conversion: body copy, headline copy, offer/promo, call to action (CTA), background colors, the order of content blocks.
Don’t consider this a comprehensive list; do some brainstorming to figure out other elements within your campaigns that might be worth testing.
Testing Everything, Everywhere, All at Once Is Just Too Much
Another common misunderstanding is that you can test multiple variables at once (e.g., switching up the subject line, send time, and day of week in the same send). In reality, testing several variables at once means you won’t know which one made the difference. Clean A/B tests isolate a single variable, and even beyond that, they focus on a specific element within that variable.
Let’s consider a subject line example. If two wildly different subject lines perform differently, you can’t pinpoint exactly why. It’s much cleaner to test one specific element at a time: emojis vs. no emojis, question vs. statement, etc. That gives you a clear takeaway you can use to optimize future sends.
Work With Guidelines and Be Prepared for Some Surprises Along the Way
While we love a good best practice at DMi Partners, we also know there are exceptions to every rule, and you won’t find them until you test. For instance, for one client we personalized a CTA to reflect the recipient’s location (e.g., “Find retailers in {city}!”), and performance was far lower than that of the email with the non-personalized CTA.
It’s Not Always Time for A/B Testing
If you need to push out an email for the sake of engagement and performance, put your best foot forward — informed by all the learnings from past A/B tests, of course. An email campaign with high stakes and expectations probably isn’t the right fit for a test that risks a less-than-optimal performance from 50 percent of the list.
Email is a tried-and-true part of the media mix for a reason — actually, make that reasons, given today’s emphasis on first-party data and steep media costs. An effective A/B testing program can raise engagement over the long haul and is one of the best investments you can make in your marketing growth.
Lauren McGrath is an Associate Email Strategy and Execution Manager at DMi Partners, which she joined in 2021. Lauren, a New Jersey native who earned a bachelor's degree in fashion merchandising and an MBA from Thomas Jefferson University, builds and optimizes email campaign execution processes for a portfolio of brands across B2C, CPG, and B2B verticals. She began her email marketing career in 2018 and is passionate about raising the bar for email performance standards across the marketing industry.