Marketing - Sky Advertising

Is A/B Testing Really Worth It?

Written by Jimmy Cintron | Apr 19, 2019 7:10:51 PM

Not even the savviest marketing experts can identify what will yield the most engagement with 100% certainty. Yet some marketers skip the testing stage and proceed on intuition, on what has worked in the past, or by emulating the competition. Instead of basing marketing decisions on a gut feeling, you should test your campaigns before running them at full scale. The simplest test you can put an ad through is the A/B test, also known as “split testing.”

What is A/B Testing?

A/B testing is an experiment that helps you understand how to optimize your ad campaign by pitting two versions of an ad against each other. Your “A” version will be your control, while your “B” will be an alternative version with something different about it—whether that be copy, layout, graphic, etc. It’s very important that your “A” version and your “B” version have only one differing variable, that you show the two versions to audiences that are identical in demographics and in size, and that you show them at the same time. Otherwise, your data will be corrupted and you won’t be able to tell which ad is actually performing better. If you keep the testing conditions consistent, though, this simple test will yield incredibly useful data.
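To make the “identical audiences, shown at the same time” requirement concrete, here is a minimal sketch of how a site might split visitors evenly and consistently between the two versions. This is illustrative only (the function name and visitor IDs are hypothetical, not part of any particular testing tool): hashing each visitor’s ID gives a stable 50/50 split, so the same person always sees the same version.

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to the control ("A") or the
    alternative ("B") by hashing their ID. The same visitor always gets
    the same variant, and the split is roughly 50/50 overall."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# A visitor's assignment never changes between page loads:
print(assign_variant("visitor-123") == assign_variant("visitor-123"))
```

Hashing (rather than flipping a coin on every page load) matters because a visitor who saw version A yesterday and version B today would muddy your data.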

How can I use it?

A/B testing does not in itself optimize a campaign. Rather, the information it provides will empower you to make changes that make your campaign more effective. You can use A/B testing to increase conversions, to decrease your bounce rate, and to increase traffic to your website overall.

A/B testing will allow you to work smarter, not harder. Instead of plugging away with a sub-optimal strategy, A/B testing gives you the ability to test the influence of even the smallest detail of your campaign, like the color of the buttons.

What are the pitfalls?

While split testing is an incredibly potent way to test a campaign, it’s important to know when to use it. Most notably, you should only use A/B testing once you have enough traffic that it will actually be useful. If you’re not getting very many visitors, you won’t have enough of a sample size for A/B testing, so you should just focus on driving traffic by any means necessary.
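To illustrate why low traffic undermines a split test, here is a rough sketch (standard-library Python; the traffic and conversion numbers are made up) of a two-proportion z-test, a common way to check whether the difference between two conversion rates is statistically meaningful. A |z| value above roughly 1.96 suggests the difference is unlikely to be chance.

```python
import math

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: compares conversion rate conv_a/n_a
    against conv_b/n_b and returns the z statistic."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)              # combined rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 5% vs. 8% conversion on only 100 visitors each: |z| stays below
# the 1.96 threshold, so the "winner" could easily be noise.
print(z_test(5, 100, 8, 100))

# The exact same rates at 10x the traffic clear the threshold,
# so now the difference is worth acting on.
print(z_test(50, 1000, 80, 1000))
```

The point: the same observed lift that means nothing at 200 total visitors becomes trustworthy at 2,000, which is why driving traffic comes first.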

A/B testing has caught on in digital marketing because it is, overall, quite easy to execute. However, not everything needs to be tested, and if you’re a small or medium-sized company running A/B tests on every aspect of a campaign, chances are you’re wasting precious time and resources. Figure out the most pivotal aspect of a campaign, perform a split test, implement changes accordingly, and then move on.

Your data can be corrupted if you change multiple variables at once, or if you show the two versions to radically different audiences at radically different times. There are also more nebulous threats to your test’s validity, such as the “novelty effect,” in which any “new” change piques an audience’s interest without actually being “better.”

Putting it all together

A/B testing is a simple, effective way to begin gathering data about what’s working in your campaign and what’s not. It’s also a safeguard against risk, since you can test any modification to an existing campaign with a sample audience before implementing it across the board. Although some choose to, you don’t need to build your own infrastructure to run your A/B tests—you can always use CRO (Conversion Rate Optimization) software to aggregate and interpret your data.