If you work in marketing, you have probably come across this famous quote:
“Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”
A/B testing, also known as split testing, tries to solve this problem. It is a simple way to determine the effect of changing a single variable on a product. It is important for marketers to run A/B tests so that they don't end up throwing darts in the dark.
Conducting an A/B test is quite simple, but there are finer nuances you must keep in mind, or the results will be inaccurate. In this article we'll see how to A/B test Facebook ads.
How to do an A/B Test on Facebook
Conducting an A/B test on Facebook ads consists of the following steps:
- Decide the metric you want to optimize for. This could be conversion rate, cost per conversion, engagement rate, cost per like, or any other metric you care about.
- Decide the component of the ad whose impact you want to study. This could be the image, ad text, targeting, etc.
- Run different variants of the ad, changing one component at a time so that you can pinpoint which change was responsible for the improvement in the metric.
- Check whether the results you obtained are statistically significant. This is an important step, and one that is often missed.
- At the end of the experiment, stop the underperforming ads and move the budget to the ads that performed well, to improve ROI.
At What Layer Should The A/B Test Be Conducted?
As we saw in the previous post, where we did a deep dive on Facebook Ad Objectives, Facebook uses a three-layer approach to advertising:
- Campaign: to define objective
- Adset: to define targeting, budget, schedule and bid
- Ad: to define image, title, text and placement
Conducting A/B Test at a Campaign Level
Generally, conducting an A/B test at the Campaign level would not make much sense. For instance, if your objective is to get page likes, it wouldn't be useful to run two campaigns, one with Page Likes as the objective and another with Page Engagement as the objective. However, there could be cases where you might want to test two different objectives. For example, you might want to check whether the Offer Claim objective results in better conversions than the Website Conversion objective.
Conducting A/B Test at an Adset Level
Conducting an A/B test is quite simple and useful at the Adset level. Targeting plays a huge role in the performance of ad campaigns, so testing multiple audience sets before zeroing in on one is beneficial.
Note: To get accurate results, make sure there is no overlap between the different audience segments you have chosen. You can do that through the exclusion targeting feature that Facebook provides.
For example, suppose I want to test whether targeting a Custom Audience performs better than targeting fans of the page. I'll create two adsets: in the first adset I'll include the Custom Audience and exclude the fans, and in the second I'll include the fans and exclude the Custom Audience. Now I have two adsets with no overlapping audience (well, almost!).
Note that in order to have an apples-to-apples comparison, it is important that all the adsets have the same budget allocated to them. It is also important that all other components, such as campaign objective, bid type, duration, ad image, ad title, etc., remain constant. The only component that should vary across the adsets is the one we are focusing on, which in this case is targeting.
Conducting A/B Test at an Ad Level
You can test different variations of the image, ad text, title, or placement at the Ad level.
There is a small problem when it comes to conducting an A/B test at the ad level. Facebook often decides very quickly which ad within an adset is the best and allocates most of the budget to it. Comparing an ad that spent $10 to one that spent $100 wouldn't make much sense. To conduct a successful A/B test at the ad level, we must try to have nearly the same spend for all the ads.
As we know from the previous post, budget is defined at the Adset level. So, to ensure that each ad gets the same budget to spend, we need to have one ad per adset.
Note: Some people may argue that since Facebook's algorithm is smart enough to figure out the best-performing ad within an adset, there is no point in creating multiple adsets with one ad each; instead, you should create one adset with multiple ads to find the best creative. This is a valid argument, but I feel that in order to conduct more reliable split tests, all ads should be given an equal opportunity (i.e., equal budget). I have also noticed in the past that an ad which was outperformed by another ad, and hence received fewer impressions, performed quite well when run alone.
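The one-ad-per-adset setup described above can be sketched as a plain data structure. This is an illustrative sketch only, using made-up names and dictionaries, not the actual Facebook Marketing API:

```python
# Sketch of an ad-level A/B test structure: each creative gets its own
# adset so that every ad receives the same budget. All names, budgets,
# and creatives here are hypothetical.
def build_ad_level_test(campaign_objective, daily_budget, creatives):
    """Wrap each creative in its own adset with an identical budget."""
    campaign = {"objective": campaign_objective, "adsets": []}
    for i, creative in enumerate(creatives, start=1):
        campaign["adsets"].append({
            "name": f"adset-{i}",
            "daily_budget": daily_budget,  # same budget in every adset
            "ads": [creative],             # exactly one ad per adset
        })
    return campaign

# Two image variants, each isolated in its own adset
test = build_ad_level_test(
    "WEBSITE_CONVERSIONS", 10,
    [{"image": "beach.jpg"}, {"image": "city.jpg"}],
)
print(len(test["adsets"]))  # one adset per creative
```

The point of the structure is simply that budget lives one level above the ad, so isolating each ad in its own adset is the only way to guarantee equal spend.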
Note that in order to conduct so many A/B tests, you will have to spend a considerable amount of money to get statistically significant results (don't worry if you are not familiar with the term 'statistical significance'; we'll come to that in a minute). So you must carefully pick the elements that you think could impact the performance of the ad. For example, you might not see any substantial difference if you target one ad to the 18-20 age group and another to the 21-23 age group (but again, that depends on what you are advertising; there cannot be one blanket rule).
Having said that, don't let the cost of running these experiments demotivate you, because in the long run they will help you improve your ROI.
Checking Statistical Significance of the A/B Test
You ran some experiments and found that targeting a Custom Audience gave a better conversion rate. But are you sure this did not happen by chance? Can you say with conviction that if you were to run another ad targeting the same audience, you would again get a better conversion rate? Or was this just a one-off case?
This is where the concept of statistical significance comes into play. You might have heard people (mostly data and business analysts) say, “the result is/isn’t statistically significant.” When a result is not statistically significant, it means the result you obtained could have happened just by chance, and not necessarily because of the changes you made to the variable.
Note: Statistical significance indicates how likely it is that an effect observed during an experiment or test was caused by changes made to a specific variable, as opposed to mere chance.
So, in order to say that Targeting XYZ will give us a better conversion rate than Targeting PQR, we need to check whether our result is statistically significant. Calculating statistical significance is a topic for another day; for now, you can use this online calculator that does the job:
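If you'd rather not rely on an online calculator, here is a minimal sketch of the two-proportion z-test that most A/B-test significance calculators use under the hood. The conversion counts below are made-up illustrative numbers, not real campaign data:

```python
# Two-proportion z-test: is the difference between two conversion
# rates statistically significant? Uses only the standard library.
from math import erf, sqrt

def ab_test_significance(conversions_a, visitors_a,
                         conversions_b, visitors_b):
    """Return the two-sided p-value for the difference in rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Example: Group A converted 30 of 1000 visitors, Group B 22 of 1000.
p = ab_test_significance(30, 1000, 22, 1000)
print(f"p-value = {p:.3f}")
```

With these sample numbers the p-value comes out well above the conventional 0.05 threshold, so despite Group A's higher conversion rate there is no clear winner; with ten times the sample size at the same rates, the same difference would be significant.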
From the above example we notice that even though Group A has a better conversion rate, we do not have a clear winner. This could happen either because the sample size is too small (as is the case here) or because the difference between the two ads' performances is not large enough to produce a statistically significant result. If it's the former, you can increase the sample size by increasing the reach.
Note: You can increase the reach of your ads either by increasing the budget or by running the ad campaign for a longer duration.
Had we not tested our result for statistical significance, we would have incorrectly assumed that Group A performed better than Group B. So, always check your results for statistical significance; otherwise you might make an incorrect business decision based on bad data.
Key Points to Note about A/B Tests
- A/B tests should be run simultaneously rather than sequentially, so that the impact of holidays and other seasonal variations is nullified.
- Instead of testing many small variations of the same element all at once, start with broader variations first and then refine the experiments.
- To get a statistically significant result, the sample size should be large. So, for more accurate results, increase the budget and/or run the ad campaign for a longer duration, say a week, before coming to any conclusion.
- An A/B test can measure the impact of only one variable at a time. If you want to understand the impact of multiple variables together, an A/B test will fail to provide that kind of information; that's where Multivariate Testing comes into the picture.
That was all about optimizing Facebook Ads through A/B testing. Do share your thoughts and questions through the comment box below.
This post first appeared on my blog Creative Jugaad.