9 Common Issues with Data That Make Your Ad Testing Worthless


At a recent meeting between Google and one of my clients, we were presented with a staggering statistic: conversion rate improved as our average position in the account improved.  Based on this data, we had a simple course to chart to climb all the way to the top of the entire Internet: bid higher.  When in doubt, just keep bidding.  It’ll make all the difference in your ad testing.

It was shocking to see, presented as it was in one of Google’s sleek, no-frills PowerPoint decks.  Not only did click through rate increase, conversion rate increased as well.  While the conversation continued on all around me, I stared at the screen and felt my mind melting inside my skull.  All we have to do is bid up?  Just keep bidding higher?  Is this what ad testing was?  Was everything that Gerald B. Watson ever told me a lie?

After another couple of seconds of mouth-agape world questioning, I realized what was happening.  Our branded keywords were the only ones consistently in average position 1-2.  I waited politely for a pause in the presentation to ask if, in fact, this set of statistics included branded keywords.  After an awkward pause, the very nice gentleman from Google conceded that yes, this data included branded terms.  Branded keywords will almost always have a higher CTR and conversion rate than their non-branded counterparts, and if I had followed Google’s simple advice to bid higher on everything it could have meant a disastrous month for me and my client.
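This is a classic segment-mixing effect (a form of Simpson’s paradox).  The figures below are entirely invented, but they sketch how blending branded and non-branded traffic can make position look like it drives conversion rate even when, within each segment, it doesn’t:

```python
# Hypothetical (clicks, conversions) by average position.
# All numbers are invented for illustration.
segments = {
    "branded":     {1: (1000, 100), 3: (50, 5)},     # 10% CR at both positions
    "non_branded": {1: (200, 4),    3: (2000, 40)},  # 2% CR at both positions
}

blended = {}
for pos in (1, 3):
    clicks = sum(seg[pos][0] for seg in segments.values())
    convs = sum(seg[pos][1] for seg in segments.values())
    blended[pos] = convs / clicks
    print(f"position {pos}: blended CR = {blended[pos]:.1%}")
```

Within each segment the conversion rate is flat across positions, yet the blended rate at position 1 comes out several times higher than at position 3, simply because branded clicks dominate position 1.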

It’s important not to silo your data, but it’s also important not to treat all data the same.  There’s a reason people talk about separating the wheat from the chaff, cream rising to the top, and Oreos being milk’s favorite cookie.  Some campaigns/positions/targeting methods are just better.  You can’t lump data from one in with another; that’ll wreck your analysis.

The first step is having a high degree of awareness about your accounts so you know what changes have been made recently.  From there, identify which of these variables is most important to performance and filter your data accordingly.  Don’t make the same mistake that Google did: actively seek out ways to make your test results as good as they can be.  These are all different factors that can shed light on performance if you use them wisely.

1. Average Position

Higher click through rate in a higher average position is almost a given, so don’t let your new aggressive bidding strategy fool you into thinking your ads are now somehow groundbreaking.  You’re just like people using Fast Passes at an amusement park – cutting to the front of the line because you’re willing to spend more money.

2. Types of Campaigns

We talked about this at the opening of the article.  Different types of campaigns can have wildly different results, so don’t let bad performance in one combine with great performance in another to make middling data.  Create meaningful aggregations to provide insights.

When I’m trying to get an overview of ad text I find it useful to break my ads into campaign types:

-Branded

-Non-Branded Search

-Remarketing

-Non-Remarketing Display

(And if you want to get even more specific, competitor campaigns should probably be broken out from non-branded search.) People have been breaking performance out by network for a long time, but it’s similarly important to break out campaign groups within those networks.  Don’t let Branded campaigns skew the true performance of your ads.
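One way to build those aggregations is to classify campaigns into groups and roll stats up per group.  This is a minimal sketch in plain Python; the campaign names, stats, and name-prefix convention are all invented (a real account would pull these from a performance report):

```python
from collections import defaultdict

# Hypothetical per-campaign stats: (impressions, clicks).
campaigns = {
    "Brand - Core":        (10_000, 900),
    "NB Search - Widgets": (50_000, 1_000),
    "Remarketing - All":   (80_000, 400),
    "Display - Topics":    (120_000, 300),
}

def campaign_group(name):
    """Map a campaign name onto one of the four groups above,
    assuming a naming convention where the prefix marks the type."""
    if name.startswith("Brand"):
        return "Branded"
    if name.startswith("Remarketing"):
        return "Remarketing"
    if name.startswith("Display"):
        return "Non-Remarketing Display"
    return "Non-Branded Search"

# Roll impressions and clicks up per group, then report CTR.
totals = defaultdict(lambda: [0, 0])
for name, (imps, clicks) in campaigns.items():
    group = campaign_group(name)
    totals[group][0] += imps
    totals[group][1] += clicks

for group, (imps, clicks) in totals.items():
    print(f"{group}: CTR = {clicks / imps:.2%}")
```

The point isn’t the code itself – it’s that CTR and conversion rate only mean something once they’re computed per group instead of across the whole account.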

3. Seasonality

Education accounts will take off when people are thinking about school.  Cruise lines may have a great run when the weather’s particularly terrible.  Oreo cookies may take off when an industry-specific blog with dozens of readers mentions them in a post.  Be aware of when your high and low points are so you can adjust your testing results accordingly.

4. Quality of Deal Offered

At times your company/client may be offering a great deal that almost sells itself.  “Buy one, get one free” will get clicks like crazy.  “Half off shipping on orders over $10,000” won’t be as enticing.  Regardless of the wording of your ad, the quality of the offer itself could cloud your results.  Experimenting with different headlines during a weak promotion may lead you to discard something that could work really well under better circumstances.

5. Ad Extensions

You may have extensions on some campaigns but not others.  You may have different wording on your sitelinks depending on the campaign and the targets.  Don’t let these small differences be the reason one ad looks like a star performer.  Be aware of the bells and whistles you’re running.

6. Devices

This one is almost as much of a given as segmenting by network.  Know the relative performance across the devices you’re targeting so poor performance on one, or really strong performance on another, doesn’t give you the wrong idea about an ad.

7. Time of Day

This one applies particularly if your campaign is limited by budget.  If your delivery setting is still on accelerated even though you’re consistently running out of budget, your ads may only show early in the day, and early-morning users might react to them differently.  The first step is to check your settings (running accelerated delivery in a budget-limited campaign is a bad idea); after that, get to know how people respond to your ads depending on the time of day.

8. Search Partners

This one could surprise you.  Search partner sites (and the users thereon) can have wildly different intents than the main search results page.  If you have campaigns that skew heavily toward search partners, be aware of them and filter your results accordingly.

9. Sample Size

This one’s basic, but it’s also the most important.  Don’t act unless you have enough data to be reasonably sure of what to do after your ad testing.
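A rough way to check whether you have enough data is a two-proportion z-test on the conversion rates of the two ads.  This is a minimal sketch, not a substitute for a proper testing methodology, and the sample numbers are invented:

```python
from math import erf, sqrt

def ad_test_significant(clicks_a, conv_a, clicks_b, conv_b, alpha=0.05):
    """Two-tailed two-proportion z-test on the conversion rates of ads A and B.

    Returns (p_value, significant).  Assumes each click converts
    independently; clicks and conversions are raw counts.
    """
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_value, p_value < alpha

# The same 10% vs. 6% conversion-rate gap, at very different sample sizes:
print(ad_test_significant(50, 5, 50, 3))          # tiny sample: not significant
print(ad_test_significant(5000, 500, 5000, 300))  # large sample: significant
```

The same apparent gap between two ads can be noise at 50 clicks each and a clear winner at 5,000 – which is exactly why you wait.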

With all of these qualifiers it can start to feel paralyzing.  You may think you know what works but then want to check it against yet another segment.  This isn’t a post saying that you should check all of these in every case.  It’s just a reminder that there are things worth considering.  That meeting with Google could have been a turning point in the account.  It would have been a wrong turn, so it was good that different angles were considered.

At a certain point you just have to pick a winner and move forward.

  • 9thwotw

    That is shocking; however, if we spend more money on PPC, Google makes more money, so of course their advice is going to be “bid higher, bid higher!”

    Good article nonetheless :)

    • http://www.hanapinmarketing.com/ Sean Quadlin

      I thought that was a little shocking too. It’s helped set up my expectations for advice from Google. Sometimes it can be good, but keep in mind that their end goal is always more spend and higher bids.

      Thanks for reading!

  • jnscollier

    Why would you link to Google at the beginning of your post?  That’s so idiotic.  Like people reading this post don’t know what Google is.

    • http://www.hanapinmarketing.com/ Sean Quadlin

      Great question. Thanks for reading. I appreciate the feedback!

      I’m actually a little surprised that you found my link to Google more idiotic than linking to Oreo’s Twitter. Only two hours ago they tweeted “Ever wish @Oreo cookies grew on trees?” Now THAT is a stupid link.

      Final thought of the day: Why would Oreo tweet at itself?

  • http://allmarketingsolutions.co.uk/social-media-marketing-services Ayaz

    Excellent post; I loved reading it.  You’ve made great points, and I’ll consider them for my new site.

    Thanks for sharing.

  • Paulo Rossini

    Hi Sean,
    Fantastic!  At the beginning of the post, you read my mind.  Great to know I’m not the only one with these feelings when having a meeting with Google reps.

  • Stefan Klein

    I am appalled that Google would actually try and “scam” you and your client like that with such a cheap trick, combining brand and non-brand traffic to suggest that a higher position correlates to a higher CR, when all you had been seeing was a peak or valley in one of your campaign segments.

    That’s even worse than getting a call from your Google rep suggesting you just duplicate all your search campaigns into GDN content match campaigns.

    • http://www.hanapinmarketing.com/ Sean Quadlin

      I wouldn’t go so far as to say that they were scamming the client. I think it was just top-level (some might say lazy) analysis. And when it allowed them to pitch to the client “bid higher,” it took away all incentive to investigate why there were such differences. All they had to do was point to the numbers and try to rev up our bids.

      I appreciate the enthusiasm, though! Thanks for reading. It’s always nice to hear about other people’s interactions with reps so we can ensure that we’re making the most of their advice (which can be a lot fairer and more insightful than what I talked about here).