A/B Testing Your Email Campaigns: Benefits and Drawbacks

A/B testing your email campaigns is a move in the right direction (the only direction, really), but you have to know how to do it. Here's your manual for doing it right.

Imagine for a moment that you live in a perfect world. A world in which you follow the best practices for email marketing, send out your campaigns, and watch your open, click-through, and conversion rates skyrocket. Needless to say, this pipe dream of a vision is up there with winning the lottery and scoring a date with Anna Kendrick on the same day.

While there are some universally accepted best practices for creating a successful email, much of what will make or break your specific campaigns comes down to what works for your target audience. In other words, you can’t know for sure what’s going to work for your email marketing campaigns based solely on what “usually works for most companies.” To figure out what works best for your company, you’ll need to get your campaigns running and assess their effectiveness along the way.

What is A/B Testing?

In the simplest of terms, A/B testing is the process of distributing two versions of a piece of content – in our case, an email newsletter, announcement, etc. – that are identical except for a single feature.

[Image: two versions of the same email, identical except for the urgency copy (source)]

In the simple example above, the text of the email remains the same except for the copy expressing urgency. The point of this test is to determine which phrasing causes more customers to take action and hopefully make a purchase.
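
To make the mechanics concrete, here’s a minimal sketch of that split in Python – the subscriber list and the two urgency lines are made up for illustration:

```python
import random

subscribers = ["ann@example.com", "ben@example.com", "cho@example.com",
               "dev@example.com", "eli@example.com", "fay@example.com"]

# Shuffle so the split is random rather than alphabetical, then cut the list in half.
random.shuffle(subscribers)
midpoint = len(subscribers) // 2
group_a, group_b = subscribers[:midpoint], subscribers[midpoint:]

# Everything about the email stays the same except the one feature under test.
urgency_copy = {
    "A": "Sale ends this week",
    "B": "Only 6 hours left - act now",
}

for recipient in group_a:
    print(f"{recipient} gets version A: {urgency_copy['A']}")
for recipient in group_b:
    print(f"{recipient} gets version B: {urgency_copy['B']}")
```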

Another popular method for testing content is multivariate testing. In contrast to A/B testing, multivariate testing tests multiple combinations of tweaks and changes to a piece of content in the hopes of finding the optimal combination of features.

[Image: a multivariate test covering eight combinations of subject line, image, and button color (source)]

In this example, the subject line, image, and button color are all tested and tweaked simultaneously to discover the clear-cut “winner” amongst the eight combinations.
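
Here’s a rough sketch of where those eight combinations come from – the feature names and options below are invented for illustration:

```python
from itertools import product

# Three features with two options each: 2 x 2 x 2 = 8 combinations to test.
subject_lines = ["Subject A", "Subject B"]
hero_images = ["hero_a.png", "hero_b.png"]
button_colors = ["green", "orange"]

combinations = list(product(subject_lines, hero_images, button_colors))
print(len(combinations))  # 8

for subject, image, color in combinations:
    print(subject, image, color)
```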

While we’ll be focusing on the benefits and drawbacks of A/B testing throughout this article, we’ll circle back to multivariate testing along the way to better illustrate when one or the other is the better option.

What are the Benefits of A/B Testing?

While you might already have an idea of some of the benefits of A/B testing, it’s worth taking a deeper look at each of them.

Test Any and All Aspects of Your Email Campaigns

As you surely know, there’s a lot to think about when creating an email for marketing purposes:

  • The subject line
  • The “From” name and email address
  • The greeting and body copy
  • Images and multimedia included in the body
  • The call-to-action copy and button(s)

As if that’s not enough to think about, you also need to consider logistical factors, such as the time of day and day of the week that you should send your email. The good news is that A/B testing can be utilized to test how a tweak or change to any of these factors increases (or decreases) the effectiveness of the email overall.

Case in point, the email marketers at Designhill, a peer-to-peer crowdsourcing platform that works to connect graphic artists with new clients, used A/B testing to determine the best way to structure their emails’ subject lines. After splitting their audience into two groups, the team sent the same email to each group – with one seemingly minor change:

The subject line of one version read “Top 10 Off-Page SEO Strategies for Startups in 2015,” while the other version’s subject line read “Check out My Recent Post – Top 10 Off-Page SEO Strategies for Startups in 2015.”

The discrepancy between the open and click-through rates of each version was clear:

[Image: Designhill’s A/B test results comparing open and click-through rates for the two subject lines (source)]

As you can see, the marketing team at Designhill discovered that simply using the title of the blog post as the subject line – without including any additional copy – led almost 700 more recipients to open the email. What’s more, nearly 1,600 more people ended up clicking through to the company’s actual website after engaging with the email that included this optimized and simplified subject line.

In this case, once the team discovered which subject line structure led to higher engagement rates among their target audience, they would be sure to use this same structure in similar emails moving forward. Furthermore, after discovering an optimal subject line structure for their emails, they could then begin focusing on optimizing other aspects of these emails – eventually resulting in their ability to develop a template for “the perfect email.”

Draw a True Correlation Between “Tweak” and “Success”

A/B testing also allows you to come to a nearly certain conclusion that a specific change directly impacted the effectiveness of the overall email.

In the above example, Designhill’s marketing team set a goal for what they would consider to be a “successful” subject line, including that it would lead to a specifically defined increase in the email’s overall open rate. As shown above, the tweaked subject line caused the email’s open rate to jump by a respectable 2.57%. Since all other aspects of the email (i.e., the day and time it was sent, and the “From” field) remained the same, Designhill could be almost 100% sure that the change to the subject line was solely responsible for the boost in open rates.
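
If you want to put that kind of confidence on firmer statistical footing, a simple two-proportion z-test on the open counts does the job. The counts below are hypothetical – the case study doesn’t report Designhill’s list sizes – but the calculation itself is standard:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z-statistic for the difference between two rates (e.g., open rates)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: 10,000 recipients per variant and a 2.57-point lift in open rate.
z = two_proportion_z(2100, 10000, 2357, 10000)
print(round(z, 1))  # about 4.4 - comfortably past 1.96, so the lift is very unlikely to be chance
```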

Now, it’s also worth noting that the marketing team had set another goal before implementing their subject line test – this one revolving around an increase in click-through rate. We can’t necessarily attribute the increase in CTR specifically to the change in the subject line. Yes, the higher open rate meant more people saw the actual content of the email, and many of those individuals clicked through; but we can’t say for sure that they clicked through specifically because of the subject line. What we can say is that those who never opened the less successful email in the first place inherently contributed to that email’s lower click-through rate.

Nevertheless, using A/B testing can enable you to draw a correlation between a specific change and a specific outcome – as long as you focus on one change and one outcome at a time.

Compared to multivariate testing, A/B testing is a more effective way of drawing a correlation between a specific change and the campaign’s ensuing success. Essentially, because multivariate testing involves making multiple changes at the same time, it becomes much harder to pin down which specific change actually caused an improvement in the email’s success rate. With A/B testing, this information becomes clear.

Easy Implementation

In contrast to multivariate testing, A/B testing your email campaigns is quite simple to implement. To illustrate this point, let’s go back to the examples we provided earlier:

If you go the multivariate route, you’ll be juggling a ton of information all at once.

For one thing, you’ll need to ensure that you’ve accounted for and included all possible combinations of the features you intend to test. Since the idea of multivariate testing is to determine the single most effective combination of features, missing one (or more) combinations could mean missing the very combination that would have performed best.

Also worth noting: since multivariate testing requires you to…well, test multiple variations of an email, you’ll need to segment your audience into an appropriate number of groups (one for each variation you’re testing). Additionally, you’ll need to ensure that these segments are large enough to produce statistically significant results (i.e., results that are truly representative of your entire population).

A/B testing is much less intensive.

It’s inherently easier to focus on a single aspect of your emails at one time than it is to juggle multiple changes to multiple features. As we noted when discussing the case study earlier, A/B testing allows you to focus on one factor at a time, then move on to the next feature to be tested when the time comes, ensuring you’ve covered all the bases.

Also, since you’ll only be testing two versions of the email in question, you’ll only need to segment your testing audience into two groups. Not only is this easier to do from a logistical standpoint, but these two larger groups make it far more likely that your results will be statistically significant and truly representative of your entire audience.
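
A quick back-of-the-envelope comparison (the list size here is hypothetical) shows why the two-way split is so much easier to power than an eight-way one:

```python
list_size = 20_000  # hypothetical subscriber count

print(list_size // 2)  # A/B test: 10,000 recipients per version
print(list_size // 8)  # eight-way multivariate test: only 2,500 per version - far noisier results
```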

A/B testing allows you to easily test every aspect of your email campaigns in a way that enables you to confidently draw conclusions about the changes that need to be made to those campaigns.

However, there are certain pitfalls to watch out for when implementing A/B testing…

What are the Drawbacks of A/B Testing?

As with pretty much everything in this world, A/B testing is not without its fair share of downsides. However, these downsides need not dissuade you from utilizing A/B testing. As we discuss these potential pitfalls, we’ll provide some advice on how to avoid – or at least minimize – the negative impact they can have on your overall initiatives.

Potentially Time- and Resource-Consuming

Thoroughly implementing A/B tests can be a time- and resource-consuming venture. You’ll need to create a single email to act as a baseline for further testing and determine a specific goal to focus on. You’ll then need to create an alternative version of each piece of the baseline email you plan to test (more on how to prioritize these in a bit).

Once you have the content set, you’ll need to create and segment your testing audience, deliver both versions of the content, and collect and analyze your results. Then…you’ll need to do it all over again, focusing on the next feature.

While you’ll need to create even more variations of your baseline content when implementing multivariate testing, you can at least deliver every variation to its own audience segment simultaneously. There’s no need to wait for the results of an initial test before sending out a different version – meaning you’ll be able to collect the resulting data and information rather quickly.

You can streamline the A/B testing process by:

  • Determining the minimum number of results you’ll need for your data to be statistically significant and representative of your entire audience (see the sketch below)
  • Having a ballpark idea of how long it should take to generate this minimum number of results
  • Having a concrete plan in place for interpreting – and acting upon – the data you collect
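
As a rough guide to that first point, the textbook sample-size formula for comparing two rates can be sketched as follows – the baseline open rate and the smallest lift you care about are placeholders you’d swap for your own numbers:

```python
import math

def recipients_per_variant(baseline_rate, minimum_lift):
    """Rough sample size per variant for a two-sided test at alpha = 0.05 with 80% power."""
    z_alpha, z_beta = 1.96, 0.84  # z-scores for alpha = 0.05 (two-sided) and 80% power
    p1, p2 = baseline_rate, baseline_rate + minimum_lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / minimum_lift ** 2)

# Hypothetical: a 20% baseline open rate, where only lifts of 2 points or more matter.
print(recipients_per_variant(0.20, 0.02))  # roughly 6,500 recipients per version
```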

Which brings us to…

The Need for a Strategic Approach

Perhaps the best way to illustrate this point is to again begin by discussing multivariate testing.

Speaking in general terms, multivariate testing can often end up being like the “spaghetti test.” That is, you have a number of variables to test in tandem with one another, and you throw them together at random to see which combination “sticks.”

Basically, multivariate testing boils down to trying all possible combinations until one proves to be most effective. You need not be a marketing guru to figure out the best course of action when using multivariate testing – you simply need to exhaust all your options until the best combination stands out above the rest.

On the other hand, A/B testing requires that you approach each testing session strategically, through the lens of an experienced marketer. You’ll first need to know where your email campaigns currently stand in terms of effectiveness, as compared to how they could or should be performing.

To gauge your current campaigns’ effectiveness, you’ll need to ask questions such as:

  • Is your open rate lower than your industry’s average?
  • Is there a discrepancy between your open rate and click-through rate?
  • What factors are most likely contributing to a higher- or lower-than-average open or click-through rate?

This last question is incredibly important, as it will tell you where to focus your energy first. For example, if you’ve determined that your click-through rate is lower than it could be and you recognize that the CTA within your emails may be a major factor, then you’d want to first implement an A/B test focusing on two different CTAs. After completing this test, you’d then move on to the second-most important feature of your emails, and so on.

Remain vigilant when implementing A/B testing to ensure that you’re focusing on the features that influence your audience’s actions most before moving on to the less-influential features.

Not All Feature Combinations are Considered

A/B testing focuses on finding the optimal version of a single feature, then keeping this optimal version in all subsequent tests moving forward. For example, if we determine that Subject Line A works better than Subject Line B, we’d discard Subject Line B, and use only Subject Line A in future emails when testing different body copy, calls-to-action, etc.

The problem here is that there’s no way of knowing whether one version of a subsequently tested feature would have been more effective had it been coupled with the discarded version of the first feature. For example, perhaps Body Copy B would have been more effective had it been paired with Subject Line B; since we’re no longer using Subject Line B at all, we’ll never find out.
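
To put a number on what gets left unexplored, consider a hypothetical test of three features with two variants each – sequential A/B testing only ever sends a fraction of the possible combinations:

```python
features = 3       # e.g., subject line, body copy, call-to-action
variants_each = 2

# Multivariate: every combination is sent at once.
multivariate_combos = variants_each ** features  # 2 ** 3 = 8

# Sequential A/B: each round adds one new combination (the current winner with one feature swapped).
sequential_ab_combos = features + 1  # 4

print(multivariate_combos, sequential_ab_combos)  # 8 vs 4 - half the combinations are never sent
```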

This isn’t to say that you should keep the less-effective versions of your features around and test them in conjunction with other features – that would go against what A/B testing is in the first place.

Rather, you want to understand why a certain version of a feature was more or less successful than its “competitor.” Instead of saying “Okay, this worked, so let’s go with this,” you want to be able to say “Okay, this worked because our audience expected x, y, and z.” In turn, you can then assess how well the rest of your email caters to these expectations and make the appropriate adjustments moving forward.