October 18, 2016
The release of expanded text ads rocked the world of Google PPC in early 2016. And shortly thereafter, Bing joined in on the fun, bringing expanded text ads to its advertisers as well.
But some of us got screwed. That’s right, some of us have a bone to pick with these magnanimous platforms that, yes, provided us with a whole new set of messaging characters. And sure, we’re seeing an average CTR lift of 40% or more. And of course we may have seen improved quality scores from our access to so much more relevancy within our ads. But we’re still mad, folks.
Why, you ask? Because they seriously messed with our ad testing.
For years we’ve shared tips on how to approach ad testing. From lists of top ad testing recommendations, to instructions on how you might evaluate your test scores, the authors of PPC Hero, Search Engine Land, the Clix Marketing blog and Wordstream’s content have left no stone unturned for you ad testers.
So how have ETAs unraveled the work we’ve done, and what considerations must you make going forward? That’s exactly what we’ll cover today.
To begin, some of the rules have changed with ad testing. The popular recommendation of testing Display URLs is no longer so simple. Where we once saw clear ad test success in using terms like Mobile, Deal, and product-specific language, now we have two Display URL paths to contend with. Factor this into the many other aspects of a text ad test and we’re suddenly moving into exponential testing.
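For reference, the 2016 ETA format replaced the single standard text ad with two 30-character headlines, an 80-character description, and two optional 15-character path fields. A minimal sketch of a length check against those limits (the field names here are my own, not any platform API):

```python
# Character limits for the 2016 expanded text ad (ETA) format.
ETA_LIMITS = {
    "headline_1": 30,
    "headline_2": 30,
    "description": 80,
    "path_1": 15,
    "path_2": 15,
}

def over_limit_fields(ad):
    """Return the names of any fields in `ad` that exceed the ETA limits."""
    return [field for field, limit in ETA_LIMITS.items()
            if len(ad.get(field, "")) > limit]

ad = {
    "headline_1": "Fall Sale on Running Shoes",
    "headline_2": "Free Two-Day Shipping",
    "description": "Shop hundreds of styles from top brands. Order by Friday for weekend delivery.",
    "path_1": "Shoes",
    "path_2": "Sale",
}
print(over_limit_fields(ad))  # an empty list means every field fits
```

With two headlines, two paths, and a longer description all in play, the number of element combinations to test grows multiplicatively, which is the “exponential testing” problem described above.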
On top of this, the idea of emojis in ads was given and quickly taken away. Although this concept is a delicate one, we certainly read of the success some were seeing with these little guys. But before many of us could reap the benefits of meeting our users through colorful, relevant emoticons, they were removed from the running.
All lighthearted impact aside, the loss of mobile-preferred ads has been the most significant loss in the appearance of expanded text ads. Where we once were able to create a text ad and select the mobile preference to indicate that the ad should be shown to users on a mobile device, no more, my friends.
With expanded text ads, Google and Bing expect that mobile users will benefit from improved context for their queries. The added characters provide the trustworthiness and information mobile users need to decide whether an ad is the most relevant for them. Yet the mobile control is no longer in your hands.
Or is it?
This is where our bad luck takes a turn: There are a few solutions to this issue.
We’ve discussed how one might utilize ad customizers to add product details or geographic specificity in text ads. But the secret hack of ad customizers is being able to create an ad specific to a user’s device.
By doing this, you select the ad text to be used when a specific device category is recognized. When the mobile device target in your data feed matches, the customizer swaps in your mobile-specific messaging within the ad.
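As a sketch of how the pieces fit together (the feed name and attribute name below are hypothetical): the ad customizer feed carries a text attribute alongside a device-preference targeting column, and the ad references the attribute with the {=FeedName.Attribute} syntax, so mobile users see the mobile row’s text:

```
Hypothetical ad customizer feed "DeviceText":

  CTA (text)            Device preference
  Call Now To Order     mobile
  Order Online Today    all

Referenced inside the expanded text ad:

  Headline 2: {=DeviceText.CTA}
```

When a mobile device is detected, the row marked for mobile wins; everyone else gets the default row, which is what restores mobile-specific messaging under the ETA format.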
This workaround gives you the opportunity to continue the mobile-focused language so many of us have had success with, while following the structure of the new ETA format.
Device-specific campaigns
This topic is one for the hot box. Google told us that device bidding was being given back to us on the condition that we solemnly swear not to go right back to device-only campaigns. Or at least, it sure was implied. Did we listen?
When your performance varies drastically from desktop to mobile to tablet, the new controls are warranted. Creating a campaign in which desktop and tablet bids are adjusted down by 100% allows you to target only mobile-classified devices.
Does this strategy work? Although we’ve been able to create these for a few months now, there is shockingly little evidence that it’s worth the segmentation. And as your elders might suggest, going against the “recommended” best practice of our kindly platform advisers seems unwise. So we wait. We use the device modifiers and work to structure campaigns that play into the devices that work best and how they work best.
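The mechanics behind that exclusion are simple arithmetic: a device bid adjustment multiplies the base bid by (1 + adjustment), so a -100% adjustment drives the effective bid to zero and the ad stops serving on that device. A minimal sketch (function and variable names are my own):

```python
def effective_bid(base_bid, adjustment_pct):
    """Apply a device bid adjustment (in percent) to a base max CPC."""
    return round(base_bid * (1 + adjustment_pct / 100), 2)

base = 2.50  # campaign-level max CPC in dollars
adjustments = {"mobile": 0, "desktop": -100, "tablet": -100}

for device, pct in adjustments.items():
    bid = effective_bid(base, pct)
    status = "eligible" if bid > 0 else "excluded"
    print(f"{device}: ${bid:.2f} ({status})")
# mobile: $2.50 (eligible)
# desktop: $0.00 (excluded)
# tablet: $0.00 (excluded)
```

Two campaigns with mirrored adjustments (one bidding down everything but mobile, the other everything but desktop/tablet) is the segmentation being weighed above.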
And maybe that’s the lesson we’ve learned: that despite our best efforts, trying to over-segment into our own views of what’s right and what’s best may be undermining the end game. Perhaps utilizing bid adjustments, ad customizers, and the other tools constantly provided to us (e.g., cross-device conversions, attribution modeling) is actually the best way to go with the flow. Work within the system?
Sure, that makes sense. But I know that I, for one, am still awfully bitter about the impact this is having on all my ad tests…