The 7 Biggest Testing Flaws in Marketing

Tests, on tests, on tests. Be sure not to mess up the best ways to learn about customer preferences.

While marketing testing errors may not exactly leave you with a smoky lab and pieces of equipment flung about, they can be just as disastrous when it comes to grandly missing your goals. Mistakes can skew your results and limit your success with future campaigns and conversions. So before you begin experimenting with different tactics to improve your website, ads, and emails, make sure you don’t fall victim to the following testing flaws.

Not setting a goal

What do you want to find out by performing the test? For example, is your goal to improve conversions on your website, or do you want to increase your email opt-in rate? Your goals will determine which test options yield the best results. Consider these sample results:

  • Test A: 10% email opt-in rate, 5% convert into customers
  • Test B: 5% email opt-in rate, 10% convert into customers

Which test page is the winner? It all depends on your original goal, which is why it’s important to set one from the beginning. Knowing what you’re trying to achieve will also help you tailor the design and content.
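
To see why the goal decides the winner, here's a quick sketch of the math, assuming both percentages are measured against total visitors and using a hypothetical 1,000 visitors per test:

```python
# Hypothetical traffic figure; both rates assumed to be per total visitor.
visitors = 1_000

# Test A: 10% opt in, 5% become customers
a_subscribers = int(visitors * 0.10)  # 100 new email subscribers
a_customers   = int(visitors * 0.05)  # 50 new customers

# Test B: 5% opt in, 10% become customers
b_subscribers = int(visitors * 0.05)  # 50 new email subscribers
b_customers   = int(visitors * 0.10)  # 100 new customers

print(f"Test A: {a_subscribers} subscribers, {a_customers} customers")
print(f"Test B: {b_subscribers} subscribers, {b_customers} customers")
# Test A wins if your goal is list growth; Test B wins if it's customers.
```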

Comparing performance before and after

Instead of pitting two variations against each other at the same time, as A/B testing does, a before-and-after test measures conversions on the site before a change and after it. That approach introduces new variables, such as the length of each period and when it runs. Strictly before-and-after testing isn't reliably accurate because it's not comparing apples to apples. For example, your control page might convert at 10% one day and 15% the next, even without any change. If you then added a test element, you wouldn't know whether the change caused a conversion increase or whether other factors were at play, such as the day of the week or the time of day.
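
A quick simulation makes the problem concrete. This sketch assumes a fixed 12% "true" conversion rate and 200 visitors a day (both made-up numbers) and shows how much the observed rate can swing from sampling noise alone:

```python
import random

random.seed(7)  # reproducible demo

TRUE_RATE = 0.12       # assumed constant conversion rate; nothing changes
DAILY_VISITORS = 200   # hypothetical daily traffic

for day in range(1, 8):
    # Each visitor independently converts with probability TRUE_RATE.
    conversions = sum(random.random() < TRUE_RATE for _ in range(DAILY_VISITORS))
    print(f"Day {day}: {conversions / DAILY_VISITORS:.1%} observed conversion rate")

# The observed rate bounces around 12% even though the page never changed,
# which is why a simple before/after comparison can mislead.
```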

A too-small sample

The validity of your results will be on shaky ground if you haven't collected a large enough sample, meaning the total number of unique visitors who see each version during the test. The exact number varies depending on the test and how the variation performs against the control: the more the test version outperforms the control, the smaller the sample you need to detect the difference.

Your A/B testing platform should provide a way to calculate your minimum sample size, or you can use free calculators from sites like Creative Research Systems, Optimizely, and Calculator.net.
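
If you'd rather see the math behind those calculators, here's a minimal sketch using a standard two-proportion sample-size formula; the 10% baseline rate, 12% target rate, 95% confidence, and 80% power are hypothetical inputs:

```python
from math import sqrt, ceil
from statistics import NormalDist

def min_sample_size(p1: float, p2: float,
                    alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant to detect a lift from p1 to p2
    (two-sided test at significance alpha with the given power)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for 95%
    z_power = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80%
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Detecting a lift from a 10% to a 12% conversion rate:
print(min_sample_size(0.10, 0.12))  # roughly 3,800+ visitors per variant
```

Note how the required sample grows as the expected lift shrinks: halving the difference roughly quadruples the traffic you need.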

Test duration errors

After you've hit your sample size number, focus on running the test long enough to achieve statistical significance. The industry standard is a confidence level of at least 95%, and to reach it you'll want to capture results from a variety of visitor types, which vary by day and time. The test needs to run for at least one business cycle, which for most businesses is a week. If you run a test longer than one cycle, let it run for whole cycles: one or two complete cycles rather than 1.5.
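
Once the test wraps up, a common way to check significance is a pooled two-proportion z-test. Here's a minimal sketch; the visitor and conversion counts below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided confidence level from a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(z))  # two-sided p-value
    return 1 - p_value

# Hypothetical results: 400/4,000 control conversions vs. 480/4,000 variant.
conf = significance(400, 4000, 480, 4000)
print(f"{conf:.1%} confident")  # keep the test running if this is below 95%
```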

Testing too many variables at once

When you run an A/B test with too many variables, it becomes hard to tell which element is responsible for the results. Was it the CTA button's color, placement, text, or design that increased conversions? Compare one element at a time in each test.

If you're set on comparing multiple variations simultaneously, you can use multivariate testing, which includes full-factorial, adaptive, and fractional-factorial approaches. But because this type of testing splits traffic across many more combinations, your site needs at least 100,000 monthly visitors to support it.
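
To see why the traffic bar is so high, consider how quickly combinations multiply in a full-factorial test; the headlines, button colors, and images below are made-up examples:

```python
from itertools import product

# Hypothetical page elements under test.
headlines = ["Save time", "Save money", "Work smarter"]
buttons   = ["green", "orange"]
images    = ["product shot", "team photo"]

variants = list(product(headlines, buttons, images))
print(len(variants))  # 3 * 2 * 2 = 12 combinations

# Every combination needs its own statistically valid sample,
# so the traffic requirement multiplies with each added element.
```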

Testing on holidays

To capture data that reflects your typical traffic during the test, avoid running it during major holidays (public and religious) or industry events. That includes holidays where you're based as well as those celebrated in the countries where your customers live. The goal is to measure normal behavior (or as normal as it gets), and during holidays, users' schedules are anything but typical.

Focusing on the wrong pages

Not all pages are created equal. Don't waste time testing pages that won't bring you the results you're after. In other words, if a page isn't directly tied to conversions, don't spend the extra effort testing it.

The most important pages to test and optimize are the ones that receive the most views. These generally include your home, about, contact, and blog pages. Test the pages that are a part of your sales funnel, which can also include product pages for eCommerce sites.

Never stop testing

Whether you’re performing A/B tests on a web ad, CTA placement, or anything else, you’re learning more about what your visitors want and what it’s going to take to earn that conversion. Constantly running marketing tests means you’ll stay up to date on customer preferences and consumer trends. Once you’ve completed your tests, use that data to make core changes with a constant drive to improve UX and conversions.