Marketers, Avoid These 10 Major A/B Testing Gaffes

There’s more to A/B testing than setting up experiments. Here’s how to steer clear of the biggest pitfalls.

What’s worse than neglecting A/B testing? Going about it the wrong way. While 72% of marketers believe A/B testing is “highly valuable,” it takes more than setting up experiments to get it right. In this article, we’ll take a look at some common A/B testing blunders and how to avoid them.

Testing Without a Hypothesis

The first step in any A/B test is to form a hypothesis. Without one, you’ll waste time and money testing random ideas. Below are the three essential elements of a productive hypothesis (a minimal template follows the list):

  1. Conversion Problem: Determine what you’d like to change, using customer feedback, analytics, and user tests.
  2. Proposed Solution: Establish what will be done to fix your conversion problem. For instance, changing your CTA text from “Buy” to “Buy Now” should lift online sales.
  3. Impact Statement: Define how the proposed solution will affect your conversion outcomes.
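
To make these three elements concrete, here is a minimal sketch of how a hypothesis could be recorded in Python; the field names and example values are illustrative assumptions, not part of any particular tool.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """Illustrative record of the three elements above (names are assumptions, not a standard)."""
    conversion_problem: str   # what feedback, analytics, or user tests say is broken
    proposed_solution: str    # the single change you plan to test
    impact_statement: str     # the measurable effect you expect, and on which metric

hypothesis = Hypothesis(
    conversion_problem="Analytics show most product-page visitors never click the CTA.",
    proposed_solution='Change the CTA text from "Buy" to "Buy Now".',
    impact_statement="CTA click-through rate rises by at least 10% over the test period.",
)
print(hypothesis)
```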

Not Using Segmentation

Forgoing segmentation is a big mistake in A/B testing. A variation that doesn’t win overall may still outperform within a particular segment. That’s why 56% of marketers find segmentation useful for improving conversion rates. Here are three tips to keep in mind when segmenting your A/B tests (a short analysis sketch follows the list):

  • Segmentation Methodology: Segment your users by source, behavior, demographics, or outcome. When building your segmentation strategy, examine which segments have the highest LTV for your business.
  • Refine Your Segment: Don’t segment to such a granular level that the results become useless. Make sure the sample size of your smallest segment is large enough to detect the expected difference.
  • Avoid Multiple Comparisons: Comparing many segments increases the probability of error. Choose a big enough sample size for each segment, compare only the segments that matter, and make sure your data is relevant.
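
As a rough illustration of the per-segment breakdown, the pandas sketch below groups hypothetical test results by traffic-source segment and variant, then flags segments too small to trust; the data, column names, and the 1,000-visitor threshold are all assumptions for the example.

```python
import pandas as pd

# Hypothetical visitor-level test log: variant shown, traffic-source segment, conversion flag
df = pd.DataFrame({
    "variant":   ["A", "B", "A", "B", "A", "B", "A", "B"],
    "segment":   ["organic", "organic", "paid", "paid", "email", "email", "paid", "organic"],
    "converted": [0, 1, 1, 0, 0, 1, 1, 1],
})

# Conversion rate and sample size per (segment, variant) pair
summary = (
    df.groupby(["segment", "variant"])["converted"]
      .agg(conversions="sum", visitors="count")
      .assign(conv_rate=lambda t: t["conversions"] / t["visitors"])
)
print(summary)

# Ignore lifts in segments whose sample is too small to detect the expected difference
MIN_VISITORS = 1_000   # assumed threshold; derive it from a sample size calculation in practice
too_small = summary[summary["visitors"] < MIN_VISITORS].index.tolist()
print("Segments too small to trust:", too_small)
```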

Calling A/B Tests Too Early

Ending an A/B test as soon as you see a winning variation will create false positives. Check these boxes before pulling the plug:

Sufficient Sample Size: Don’t stop your test until you’ve reached the necessary sample size. Determine your sample size in advance, making sure the sample represents your regular traffic.

Statistical Significance: Test the difference between your control and variation and make sure you can rule out the chance that the result is due to random error.

Tip: Don’t peek mid-test; wait until you’re finished testing to record a final significance level.
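
As an example of checking significance between control and variation, the sketch below runs a standard two-proportion z-test; the conversion counts and the 5% significance threshold are assumptions for illustration.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error under the null
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value

# Hypothetical results: 480/10,000 conversions on control (A), 545/10,000 on variation (B)
z, p = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=545, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at the 5% level: {p < 0.05}")
```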

Test Duration: Run the test for *at least* a few weeks before drawing conclusions. Keep it running long enough to cover all audience segments and full traffic cycles, but only for as long as it remains economically viable to detect the expected lift from the variation.

Tip: If you need to prolong a test, do it by a full week.
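
To put the “full weeks” tip into numbers, here is a small sketch that converts a required sample size into a test duration rounded up to whole weeks; the traffic figures are made up for illustration.

```python
import math

def weeks_to_run(n_per_variant, num_variants, weekly_visitors):
    """Round the required duration up to full weeks so every weekday/weekend cycle is covered."""
    total_needed = n_per_variant * num_variants
    return math.ceil(total_needed / weekly_visitors)

# Assumed numbers: 31,000 visitors needed per variant, two variants, 18,000 eligible visitors per week
print(weeks_to_run(n_per_variant=31_000, num_variants=2, weekly_visitors=18_000))  # -> 4 weeks
```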

Forgetting It’s the Holiday Season

The holiday rush can affect traffic and conversion rates alike, skewing your test’s validity. This means that if you’re running a test during Christmas, your winning variation might not remain on top when the holiday dash dies down. To prevent skewed test results, keep the following in mind:

  • Watch Your Data: Consult past data to see when traffic and conversion increases begin and end. Knowing when to expect high traffic will help you prepare test plans before the holidays kick off.
  • Repeat the Test: Re-run winning holiday tests during the off-season to confirm your results. When re-testing, review how you set up the original test and correct any mistakes.
  • Run an A/B/n Campaign: An A/B/n test lets you compare A and B while also holding out a control group, which shows how customers are likely to behave without any campaign during the holiday season.

Overlooking Statistical Power

Statistical power is the probability that a test will detect a real difference in conversion rate between offers. Without adequate power, you’ll miss revenue-generating changes, and the winners you do detect are more likely to be false positives. Besides the significance level, two more factors deserve attention:

Sample Size: Too small a sample leaves your test underpowered, so you’re likely to miss a real difference and more likely to be fooled by a false positive. Use a sample size calculator to ensure your sample is large enough to power your test.

Power Level: To avoid under-powering your test, aim for a confidence level of 95% and a statistical power of 80%.
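
As a sketch of what a sample size calculator does, the statsmodels snippet below solves for the visitors needed per variant at a 95% confidence level and 80% power; the baseline conversion rate and the 10% relative lift are assumptions for the example.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.048              # assumed control conversion rate (4.8%)
target = baseline * 1.10      # smallest lift worth detecting: +10% relative

effect_size = proportion_effectsize(baseline, target)   # Cohen's h for two proportions
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,    # 95% confidence level
    power=0.80,    # 80% statistical power
    ratio=1.0,     # equal traffic split between control and variation
)
print(f"Visitors needed per variant: {n_per_variant:,.0f}")
```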


Forgetting About Novelty Effect

The novelty effect describes a temporary lift caused purely by the newness of a change, such as a redesigned feature: visitors engage with it because it’s new, and the recorded upswing wears off over time. To prevent this from skewing your results, do the following:

Segment: To distinguish a novelty effect from a genuine improvement, segment your traffic into new and returning visitors and compare their conversion rates (see the sketch after this section). New visitors have never seen the old version, so a lift that holds for them is more likely to be real.

Test Duration: Extend the test long enough for the novelty to wear off and for results to stabilize.
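
One way to apply the new-versus-returning split from the “Segment” tip above: the pandas sketch below compares the lift separately for new and returning visitors on hypothetical data.

```python
import pandas as pd

# Hypothetical visitor-level results with a new/returning flag
df = pd.DataFrame({
    "variant":   ["A", "B", "A", "B", "A", "B", "A", "B"],
    "visitor":   ["new", "new", "new", "new", "returning", "returning", "returning", "returning"],
    "converted": [0, 0, 1, 1, 0, 1, 1, 1],
})

# Conversion rate per visitor type and variant, then the relative lift of B over A
rates = df.groupby(["visitor", "variant"])["converted"].mean().unstack("variant")
rates["relative_lift"] = (rates["B"] - rates["A"]) / rates["A"]
print(rates)
# A lift concentrated in returning visitors (who notice the change) hints at novelty;
# a lift that also holds for new visitors is more likely to be genuine.
```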

Shrugging Off Prioritization

A significant share of A/B tests produce negative or neutral results, so you need to tilt the odds toward tests that can actually move your conversion metric. To save time and resources, prioritize what you test, keeping these tips in mind:

If It’s Obvious, Don’t Test It: Don’t waste time testing elements that don’t matter to your consumers, such as random images and text. Test elements like page layout, navigation, and your checkout process.

Create a Framework: Using a framework increases transparency, sets the right expectations, and reduces opportunity cost. You can use prioritization frameworks such as ICE, PIE, or PXL, or create your own; a minimal ICE scoring sketch follows this list.

Test Money Pages First: Your money pages, such as sales and checkout pages, should take priority when A/B testing.
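
As a minimal illustration of framework-based prioritization, the sketch below scores a few made-up test ideas with ICE-style ratings (Impact, Confidence, Ease, each 1-10) and sorts them. Multiplying the three scores is one common variant; some teams average them instead. The ideas and numbers are purely illustrative.

```python
# Hypothetical test ideas with 1-10 ICE ratings (Impact, Confidence, Ease)
ideas = [
    {"name": "Simplify checkout form",    "impact": 8, "confidence": 6, "ease": 4},
    {"name": "New homepage hero image",   "impact": 3, "confidence": 4, "ease": 9},
    {"name": "Reorder pricing page tiers", "impact": 7, "confidence": 5, "ease": 7},
]

# Score each idea (product of the three ratings) and test the highest scorers first
for idea in ideas:
    idea["ice_score"] = idea["impact"] * idea["confidence"] * idea["ease"]

for idea in sorted(ideas, key=lambda i: i["ice_score"], reverse=True):
    print(f'{idea["name"]}: {idea["ice_score"]}')
```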

Conducting Too Few Tests

You can’t run just one or two tests per month and expect meaningful improvements; you must test regularly to get results. 83% of companies that run frequent tests see improvements in conversion rates. Things to note when testing:

Testing Strategy: Devise a clear testing strategy and ensure it includes simple A/B tests and complex experiments.

Schedule: Stick to a regular testing schedule that details which tests you’ll run, what you aim to optimize, and start and finish dates. Regardless of complexity, consider running three to five tests per month.

Maintain Momentum: Whether or not you see significant results, keep testing. Consider dedicating part of your budget to continuously testing your online sales platform; A/B testing is an ongoing process, not a one-off event.

Two More Quick Nos

Not Testing the Entire Customer Journey: Don’t limit your testing to websites, landing pages, or checkout forms. Test elements across your other channels and at every stage of the customer journey so you can optimize it end to end.

Not Allocating Traffic Equally: Splitting traffic unevenly between your control and variation leads to an inefficient or inconclusive test. Even if your tools allow unequal allocation, stick to a 50/50 split to achieve conclusive results (a minimal assignment sketch follows).
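
For the 50/50 split, here is a minimal sketch of deterministic bucketing by user ID, so each visitor always sees the same version; the hashing approach is a common pattern, not any specific tool’s API.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministic 50/50 split: hash the user ID so the same user always gets the same bucket."""
    bucket = int(hashlib.md5(user_id.encode("utf-8")).hexdigest(), 16) % 100
    return "control" if bucket < 50 else "variation"

print(assign_variant("user-12345"))   # same ID -> same variant on every visit
```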

Test the Right Way

Never assume that you categorically ‘know’ your customers; always test to confirm your instincts. When it comes to A/B testing, use metrics that align with your business goals and with the specific campaign’s goal: if the goal is a purchase, measure purchases; if it’s another action, such as placing a bet on a gaming site, measure the relevant KPIs. Implement winning tests quickly, stay open-minded, and don’t be afraid to fail.

Happy testing!