PPC (Pay-Per-Click) Ad Testing Strategies

Clients will often come to you (the agency) expecting you to know all the answers. Whether or not we do, the safest way to end up with a happy (and successful) client is to make sure everyone is on the same page, so that realistic expectations are set before any campaign or project begins.

Let’s take a look at the testing of a paid search, sponsored ad, or display/banner ad campaign. If you set the client’s expectation that you may have some good ideas up front, but that much of the long-term value will be discovered along the way, you’re far more likely to end up with a successful campaign and a happier client. They will be willing to view results as the outcome of a strategic test or plan, rather than as a standalone factor on which to judge your overall value.

Once you’ve worked with the client to determine the overall strategy of your campaign, it’s a good idea to define the test(s) early, and to remember the basics of testing and statistics. For reliable results in an A/B test (where you test one ad, “A”, against another ad, “B”), you will need to define a few things:

  1. A variable to test in both ads
  2. A static element to hold consistent between ads
  3. A metric to measure the results against
  4. A sample size that is large enough to be statistically significant (or at least meaningful)
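The “statistically significant” bar in item 4 can be checked with a standard two-proportion z-test. Here is a minimal sketch using only the Python standard library; the click and impression counts are hypothetical, not from the article’s data:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(clicks_a, impressions_a, clicks_b, impressions_b):
    """Return (z, two-sided p-value) for the difference between two click-through rates."""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    # Pooled rate under the null hypothesis that both ads perform the same
    p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: 100,000 impressions per ad
z, p = two_proportion_z_test(2100, 100_000, 1800, 100_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below 0.05 is the conventional threshold for saying the difference between the two ads is unlikely to be chance.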
Of note: in a multivariate testing scenario, you can test multiple elements simultaneously and determine correlations between them, but to do multivariate testing effectively you need a significantly larger sample size.

So let’s develop a sample test ad:

  1. If we’re running a pay-per-click or banner ad, we might write a different headline for each ad, perhaps: “Subway Lunch Meals Now $5 Off” and “Subway Lunch Meals Now 20% Off”
  2. We’ll hold the rest of the ad constant so we can see the difference when we change just this one element. Perhaps the rest of the ad reads “Find a local store and pick up something for the whole office!”
  3. We now need to set a result to test, and there are many options. If we’re running an ad with a specific end result, such as downloading a coupon, signing up for a newsletter, or buying a gift card, then we would set one of those actions as the goal and track the results of the two ads against it. If, however, the campaign doesn’t have a specific conversion goal, then we might track something like click-through rate to determine which ad was more compelling to the consumers who saw it. That could help build an understanding of what form of message resonates with consumers. For this example, let’s assume we’re tracking coupon downloads.
  4. We then need to determine a good sample size. That determination should be based on what you will do with the data. If you are simply trying to find which ad is more compelling, you probably don’t need an enormous sample to get a solid read. Keep in mind that to know what a good sample size is, you really have to understand the marketplace and the meaning of the results. If, for example, you plan to use the results to develop a product or brand (more of an R&D function), then you would want a much larger sample. You may want to read up on how to calculate a statistically significant sample size based on the population and an acceptable margin of error; there are good, quick, free calculators online to help you. But to keep this simple, let’s say we’ve decided that 200,000 impressions is enough for us to compare one ad against another.
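The sample-size calculation those online calculators perform is the standard one for estimating a proportion. A minimal sketch (the 2% margin of error and example population are illustrative choices, not figures from this campaign):

```python
from math import ceil

def required_sample_size(margin_of_error, p=0.5, z=1.96, population=None):
    """Sample size needed so the observed rate lands within +/- margin_of_error.

    p=0.5 is the most conservative assumption about the true rate; pass an
    expected rate if you have one. z=1.96 corresponds to 95% confidence.
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        # Finite population correction for small target populations
        n = n / (1 + (n - 1) / population)
    return ceil(n)

# +/- 2% at 95% confidence, assuming nothing about the true rate
print(required_sample_size(0.02))            # 2401
# Same precision drawn from a population of 10,000 needs fewer samples
print(required_sample_size(0.02, population=10_000))
```

Note that this is the sample needed per measurement; in an A/B test, each ad needs its own sample of roughly this size.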

So we run our ad and here are the results:

[Image: testing PPC ads — results table]

After running this test, we’re able to deduce a few things. A statistical analysis shows that the margin of error is a little over 2%. Is that good enough? It depends on the importance of your test. For ours, let’s assume it is; after all, presidential election winners are forecast from polls with margins of error larger than that.

So if 2% is acceptable, let’s look at the data. If the purpose of our test was to gauge consumer interest, to test the value offer of “$5 off” versus “20% off”, then we definitely notice a difference: the headline in Ad 1 has nearly twice the click-through rate of Ad 2. This likely says something about consumers preferring dollar values to percentages, and indicates that Ad 1 is the better ad. Any further inference will depend on what else you know about the category and the rest of the campaign.

But what if our test was tied to another metric? Let’s assume we were actually testing consumer response to the headline and tracking it through to actions: how many people actually downloaded a coupon after clicking on the ad? Let’s look at the rest of the data.

[Image: pay-per-click ad testing — results with conversions]

So now we see a different story. While Ad 1 drives more visitors to the website, people who click on Ad 2 are more likely to convert. Perhaps this is because 20% off provides more value when combined with the messaging “…the whole office”, since a large order would save quite a bit more than $5. Again, the strategic findings derived from the results will depend on a variety of factors. But we now know that if we’re trying to drive visitors, Ad 1 performs better; if we’re trying to minimize cost and maximize coupon downloads, Ad 2 performs better.
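That trade-off becomes explicit once you compute click-through rate, conversion rate, and cost per download side by side. A sketch with purely hypothetical figures (the campaign’s actual numbers are in the results image above):

```python
# Hypothetical results: Ad 1 wins on clicks, Ad 2 wins on conversions
ads = {
    "Ad 1": {"impressions": 100_000, "clicks": 2100, "downloads": 63, "cost": 1050.0},
    "Ad 2": {"impressions": 100_000, "clicks": 1100, "downloads": 77, "cost": 550.0},
}

for name, d in ads.items():
    ctr = d["clicks"] / d["impressions"]          # how compelling the ad is
    conv_rate = d["downloads"] / d["clicks"]      # how well clicks turn into goals
    cost_per_download = d["cost"] / d["downloads"]
    print(f"{name}: CTR {ctr:.2%}, conversion {conv_rate:.2%}, "
          f"${cost_per_download:.2f} per coupon download")
```

With these made-up numbers, Ad 1 has the higher CTR but Ad 2 delivers more downloads at a lower cost each, which is exactly the kind of split the article describes.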

And here’s a free tip: like many things in marketing, testing isn’t glamorous, so at least make it look pretty. Many clients find ad testing boring, laborious, and extremely time consuming. If you want their attention and longer-term support, I recommend presenting the results visually. Here’s a sample. Your client will see this and want to know what Ad 2 is all about and how you brilliantly figured out […insert something great here]!

[Image: PPC testing results — visual summary chart]

Thoughts, suggestions?


Note: this article was originally written by Lucid Agency, for Agencyside.net. It has been updated slightly and edited to read better in the context of this website audience.
Scott Kaufmann
[email protected]

Scott is Partner at Lucid Agency and a lover of all things technology, marketing, investing and entrepreneurship. Scott volunteers on the board of the Denver-based Nonprofit Celebrate EDU and as a mentor for SeedSpot (a Phoenix-based social startup incubator).
