Last August, Google switched up the options for ad rotation settings. We all saw it. And we all, for the most part, adjusted the settings to suit our needs. (I say “for the most part” because, to this day, I onboard clients with the unsupported “Optimize for Conversions” setting selected.) Mary wrote a great piece, The End of 2 Ads Per Ad Groups. Google said, “This is changing!” We said, “Okay!” and moved on.
In this post, I’d like to explore how that seemingly small change impacted the way I think, not only about ad copy, but about how I approach making changes in my accounts. I will explore:
- Why manual A/B tests are flawed
- Why I gave up trying to write “perfect” copy
- How writing more ads has freed up my time
A/B? C-ya later!
I know I have talked about it before: I am a process junkie. I love symmetry. And straight lines. And clean tests. Everything has a place and an edge. The problem is that PPC is not a place where clean, defined lines exist. PPC is messy. Everything touches everything. The peas are in the mashed potatoes and there isn’t a darn thing you can do about it. When I started in PPC nearly 5 years ago, our A/B ad testing process was a gem! We had a guideline for how to run a test, we stuck to it, and we made definitive statements:
- “This worked!”
- “This didn’t work!”
- “Your consumer likes ‘Buy’ instead of ‘Shop’!”
- “Interesting. In ad group A, your audience preferred ads with proper case. In ad group B, your audience clicked on ads with sentence case almost 3x as often.”
- “I will change that period to an exclamation point and see what happens!”
I thought it was so interesting to see the results of tests and to introduce new ads to test. In my mind, I made up a lot of sociological garbage about why this audience did or didn’t click certain copy. I didn’t actually know why. I didn’t stop to ask how the sausage was made, I just moved on to the next test. So when Google made the switch last year that basically told advertisers, “Listen, you can keep your ‘rotate indefinitely’ setting, but you will likely be left in the dust,” I may have had a slight nervous breakdown. I had A/B testing down! I was a machine!
- Write two drastically unique ads per ad group.
- Run for 30 days.
- Calculate statistical significance.
- Declare winner and pause loser.
- Write new ad by taking the winning ad and changing something small.
- Lather, Rinse, Repeat
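The “calculate statistical significance” step above can be sketched in code. This is a minimal illustration, not the exact calculator I used: it assumes a two-proportion z-test on click-through rate, with made-up click and impression numbers.

```python
import math

def ab_significance(clicks_a, impr_a, clicks_b, impr_b):
    """Two-proportion z-test on CTR; returns (z, significant at 95%?)."""
    ctr_a = clicks_a / impr_a
    ctr_b = clicks_b / impr_b
    # Pooled CTR under the null hypothesis that the two ads perform the same
    pooled = (clicks_a + clicks_b) / (impr_a + impr_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / impr_a + 1 / impr_b))
    z = (ctr_a - ctr_b) / se
    # |z| > 1.96 corresponds to p < 0.05, two-tailed
    return z, abs(z) > 1.96

# Hypothetical 30-day totals for Ad A and Ad B
z, sig = ab_significance(120, 4000, 80, 4000)
```

With those numbers, Ad A’s 3% CTR beats Ad B’s 2% with room to spare, so the old process would have declared Ad A the winner and paused Ad B.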
Humans process a staggering number of signals in the blink of an eye. So which of those signals made Jared click on Ad A and Jian-Yang click on Ad B? And why did Jian-Yang convert while Jared did not? I don’t know. What was the weather like when they each saw the ad? And while I would like to say, “It is because Ad A had a CTA in the headline and Ad B had a CTA in the description,” I do not actually know. This is the problem with trusting the results of manual A/B testing. We don’t take the time to think about our audience when making these decisions.
I realize the way I describe my love of process and organization makes me sound neurotic at best. While I do love clear guidelines, it is ambiguity, uncertainty, and imperfection that keep me in this field. The gray area is why PPC is both frustrating and exciting. Perfection? Overrated, of course.
Writing one perfect ad for hundreds of thousands of consumers is, um, shall we say, a “lofty” goal. So why did we as marketers try to do this over and over again with our A/B testing? Well, we didn’t know any better. We were doing the best we could with the information and technology we had.
Somewhere along the journey, Facebook came along and said, “Hey! Isn’t it so much better to write an ad when you know exactly who your audience is?” And we all said, “Yes!” And then machine learning jumped in and said, “I am strong enough, let me carry the burden of figuring out which ad to serve Gavin and which ad to serve Laurie.” And we said, “No! These are my precious ads and I won’t turn them over to you, machine!” And then we said, “Oh, okay, yeah, you probably know more than we do, here you go. But I’m still going to check in all the time.”
So we rolled up our sleeves, wrote a couple of ads, then a couple more, flipped the settings to optimize, and let it go. And it is working. I have yet to hear that more ads have led to a negative impact on overall progress toward goals.
Instead of trying to write 2 perfect ad variations, I now write 4-5 ad variations. Some use all the character space. Some don’t. Some have really strong CTAs. Some have a softer approach. All of the ads I write keep brand integrity, voice, and proper grammar in mind. When it makes sense for me to use ad customizers, I use them.
Takeaway: let go of the struggle to write perfect ad copy. Get in, write ads, get out, and get on to rocking your strategy.
More ads, less time
I am not entirely sure when my brain figured it out. Because I was so in love with A/B testing, everyone in the office came to me for advice on writing ads. But writing ads for manual A/B testing was wearing on me.
If you write your ads yourself, without the help of a program, you know it is a very time-consuming task. In the age of A/B, I would spend the majority of ad-writing time thinking about how to write a singular ad that could make sense across multiple ad groups to preserve the integrity of the A/B test. Depending on the size of the account and the tightness of the structure, that was a tough thing to do.
Now, no matter the state of the structure, I am writing ads specific to the ad group, speaking to the keywords in that ad group, and breezing through the process. I try to write similar ads, but if the syntax doesn’t work from one ad group to the next, I don’t erase the work I’ve already done; I just rework the ad in question and move on.
My system isn’t perfect. I try to use formulas whenever I can to make writing any part of the ad easier. I like using DKI when I can. But mostly, my method now is quick and dirty. Get a bunch of ads in my spreadsheet, edit, make improvements, and let the machine do its work.
I am still working out how to analyze what the results are telling me about my audiences. I’ve been playing around with n-grams to see how groups of words perform. Mostly, I am going through this post-A/B test world with the rest of you and trying to see where it takes me.
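The n-gram idea mentioned above can be sketched simply: pool clicks and impressions across every ad whose headline contains a given word pair, then compare CTRs. This is a hypothetical illustration of the approach, not my actual workflow; the ad data shape (headline, clicks, impressions) is assumed, not an official export format.

```python
from collections import defaultdict

def ngrams(text, n):
    """Lowercased word n-grams from a headline."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def ngram_ctr(ads, n=2):
    """Pool clicks/impressions per n-gram across ads; return CTR per n-gram.

    `ads` is a list of (headline, clicks, impressions) tuples -- a
    made-up shape for this sketch.
    """
    clicks = defaultdict(int)
    imprs = defaultdict(int)
    for headline, c, i in ads:
        # set() so an n-gram repeated within one headline counts that ad once
        for gram in set(ngrams(headline, n)):
            clicks[gram] += c
            imprs[gram] += i
    return {gram: clicks[gram] / imprs[gram] for gram in imprs}

ads = [
    ("Buy Running Shoes Today", 120, 4000),
    ("Shop Running Shoes Now", 80, 4000),
]
ctr = ngram_ctr(ads, n=2)
# "running shoes" appears in both headlines, so its CTR pools both rows,
# while "buy running" reflects only the first ad
```

Grouping by n-gram instead of by whole ad is what makes this feel different from the old A/B mindset: you are asking which phrases pull weight across many ads, not which single ad “won.”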