At a recent meeting between Google and one of my clients, we were presented with a staggering statistic: conversion rate improved as average position improved across the account.  Based on this data we had a simple course to chart all the way to the top of the entire Internet: bid higher.  When in doubt, just keep bidding.  It’ll make all the difference in your ad testing.

It was shocking to see, presented as it was in one of Google’s sleek, no-frills PowerPoint decks.  Not only did click-through rate increase, conversion rate increased as well.  While the conversation continued around me, I stared at the screen and felt my mind melting inside my skull.  All we have to do is bid up?  Just keep bidding higher?  Is this what ad testing was?  Was everything that Gerald B. Watson ever told me a lie?

After another couple of seconds of mouth-agape world-questioning, I realized what was happening.  Our branded keywords were the only ones consistently in average position 1-2.  I waited politely for a pause in the presentation to ask whether this set of statistics included branded keywords.  After an awkward silence, the very nice gentleman from Google conceded that yes, this data included branded terms.  Branded keywords will almost always have a higher CTR and conversion rate than their non-branded counterparts, and if I had followed Google’s simple advice to bid higher on everything, it could have meant a disastrous month for me and my client.

It’s important not to silo your data, but it’s also important not to treat all data the same.  There’s a reason people talk about separating the wheat from the chaff, cream rising to the top, and Oreos being milk’s favorite cookie.  Some campaigns, positions, and targeting methods are just better than others.  You can’t lump data from one in with another without skewing your conclusions.

The first step is having a high degree of awareness about your accounts so you know what changes have been made recently.  From there, identify which of these variables matter most to performance and filter your data accordingly.  Don’t make the same mistake Google did: actively seek out ways to make your test results as clean as they can be.  The factors below can all shed light on performance if you use them wisely.
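To put a number on how much blending can mislead you, here’s a tiny sketch with made-up figures (the branded/non-branded split is purely illustrative) showing how a blended conversion rate hides two very different realities:

```python
# Made-up numbers, purely to illustrate the blending problem from the story above.
segments = {
    "Branded":     {"clicks": 1_000, "conversions": 120},  # ~12% conv. rate, sits in position 1-2
    "Non-Branded": {"clicks": 9_000, "conversions": 180},  # ~2% conv. rate, lower positions
}

for name, seg in segments.items():
    print(f"{name}: {seg['conversions'] / seg['clicks']:.1%} conversion rate")

total_clicks = sum(seg["clicks"] for seg in segments.values())
total_convs = sum(seg["conversions"] for seg in segments.values())
print(f"Blended: {total_convs / total_clicks:.1%} conversion rate")

# The blended 3.0% and the "higher position converts better" pattern are both
# driven by the branded segment. Bidding non-branded terms into position 1
# won't turn them into branded traffic.
```

The exact numbers don’t matter; the point is that any of the factors below can play the role branded keywords played in that meeting.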

1. Average Position

Higher click-through rate in a higher average position is almost a given, so don’t let your new aggressive bidding strategy fool you into thinking your ads are now somehow groundbreaking.  You’re just like people using Fast Passes at an amusement park: cutting in front of the line because you’re willing to spend more money.

2. Types of Campaigns

We talked about this at the opening of the article.  Different types of campaigns can have wildly different results, so don’t let bad performance in one blend with great performance in another into middling data.  Create groupings meaningful enough to provide real insight.

When I’m trying to get an overview of ad text, I find it useful to break my ads into campaign types:

- Branded
- Non-Branded Search
- Remarketing
- Non-Remarketing Display

(And if you want to get even more specific, competitor campaigns should probably be broken out from non-branded search.) People have been breaking performance out by network for a long time, but it’s just as important to break out campaign groups within those networks.  Don’t let Branded campaigns skew the true performance of your ads.
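If you export your ad data to a CSV, a quick script can do this grouping for you.  Below is a minimal pandas sketch; the file name, column names, and the naming convention used to tag campaign types are all assumptions, not any particular platform’s schema:

```python
import pandas as pd

# Hypothetical export: one row per ad per campaign, with raw totals.
# Assumed columns: campaign, ad_id, impressions, clicks, conversions
ads = pd.read_csv("ad_report.csv")

def campaign_type(name: str) -> str:
    """Tag a campaign with a rough type, assuming campaign names follow a convention."""
    n = name.lower()
    if "brand" in n:
        return "Branded"
    if "remarketing" in n:
        return "Remarketing"
    if "display" in n:
        return "Non-Remarketing Display"
    return "Non-Branded Search"

ads["type"] = ads["campaign"].map(campaign_type)

# Aggregate within each type so Branded traffic can't inflate the numbers
# for the ads you're actually testing.
by_type = ads.groupby(["type", "ad_id"], as_index=False)[["impressions", "clicks", "conversions"]].sum()
by_type["ctr"] = by_type["clicks"] / by_type["impressions"]
by_type["conv_rate"] = by_type["conversions"] / by_type["clicks"]

print(by_type.sort_values(["type", "ctr"], ascending=[True, False]))
```

With the data cut this way, every ad is compared against ads running in the same kind of campaign rather than against a branded campaign’s inflated baseline.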

3. Seasonality

Education accounts will take off when people are thinking about school.  Cruise lines may have a great run when the weather’s particularly terrible.  Oreo sales may spike when an industry-specific blog with dozens of readers mentions them in a post.  Know when your high and low points fall so you can read your test results accordingly.

4. Quality of Deal Offered

At times your company or client may be offering a great deal that almost sells itself.  “Buy one, get one free” will get clicks like crazy.  “Half off shipping on orders over $10,000” won’t be as enticing.  Regardless of your ad’s wording, the quality of the offer itself can cloud your results.  Experimenting with different headlines during a weak promotion may lead you to throw out copy that would work really well under better circumstances.

5. Ad Extensions

You may have extensions on some campaigns but not others.  You may have different wording on your sitelinks depending on the campaign and the targets.  Don’t let these small differences be the reason one ad looks like a star performer.  Be aware of the bells and whistles you’re running.

6. Devices

This one is almost as much of a given as segmenting by network.  Know the relative performance across the devices you’re targeting so that poor performance on one, or really strong performance on another, doesn’t give you the wrong idea about an ad.

7. Time of Day

This one applies particularly if your campaign is limited by budget.  If your delivery method is still set to accelerated even though you’re consistently running out of budget, your ads will only show early in the day, and early-morning users may react to them differently than the rest of your audience.  The first step is to check your settings (running accelerated delivery in a budget-limited campaign is a bad idea), and after that, get to know how people respond to your ads at different times of day.

8. Search Partners

This one could surprise you.  Users on search partner sites can have wildly different intent than users on the main search results page.  If you have campaigns that skew heavily toward search partners, be aware of it and filter your results accordingly.

9. Sample Size

This one’s basic, but it’s also the most important.  Don’t act unless you have enough data to be reasonably sure your ad test is telling you something real.
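“Enough data” doesn’t have to be a gut call.  One common sanity check is a two-proportion z-test on click-through rate; here’s a small sketch using only the standard library, with made-up numbers:

```python
from math import sqrt
from statistics import NormalDist

def ctr_significance(clicks_a: int, impr_a: int, clicks_b: int, impr_b: int):
    """Two-proportion z-test: how likely is it that the CTR gap between
    ad A and ad B is just noise from a small sample?"""
    p_a = clicks_a / impr_a
    p_b = clicks_b / impr_b
    pooled = (clicks_a + clicks_b) / (impr_a + impr_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impr_a + 1 / impr_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed
    return z, p_value

# Made-up example: ad A at 2.4% CTR vs ad B at 2.0% CTR on 5,000 impressions each.
z, p = ctr_significance(120, 5_000, 100, 5_000)
print(f"z = {z:.2f}, p = {p:.3f}")
# With these numbers p comes out well above 0.05, so the gap could easily be
# noise; keep collecting data before declaring a winner.
```

The same check works on conversion rate; the point is simply to know whether the difference you’re seeing would likely survive another week of data.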

With all of these qualifiers it can start to feel paralyzing.  You may think you know what works, but then want to check it against yet another segment.  This isn’t a post saying you should check all of these in every case; it’s just a reminder that they’re worth considering.  That meeting with Google could have been a turning point for the account.  It would have been a wrong turn, and it was only avoided because we looked at the data from different angles.

At a certain point you just have to pick a winner and move forward.