I like automated bidding, but I can’t help side-eyeing many methods with a healthy dose of distrust. When I feel this way, it’s typically because I don’t have all of the information I want. As a deeply curious person, I don’t like the black-box nature of automated bidding. I want to break down the fourth wall, peer into automation, and see what the heck the bots are doing back there. A question that lingers in my mind: if everyone utilizes an automated strategy, how does that bidding war play out? So, when a client came to me with a specific request to understand the efficacy of their multi-brand strategy, I came up with a test that could help answer a big question for them, and also let me get a peek into what automation is up to.

For context, my client is in the staffing industry. It is highly competitive, especially now, when unemployment rates are at a 49-year low. This client has been running a multi-brand strategy for years. Each brand lives in its own account, offers the same benefits to employees, and has its own website. But they’re really fronts: all leads across all brands aggregate into one source internally, so to the client, the brands are interchangeable. They wanted to know if there was a way to see which brand was the absolute best to run on PPC. These brands have developed over time, so some have a longer account history than others, and while all share the same structure and campaign themes, years of tests and optimizations have left them with different sets of keywords, ad copy and, of course, landing pages. The client also wanted to understand whether one brand had greater recognition than the others.

The Test

The crux of the question was how the brands compete against each other in the auction. Was one more formidable than the others by virtue of brand recognition, auction performance, or something else? I tested Target Impression Share bidding because I wanted to know what would happen if all brands bid on the exact same keyword set with the exact same ad copy, bidding strategy and budgets. I wanted to equalize as many variables as possible, so I also set the campaigns up brand new rather than allowing a longer historical record or Quality Scores to influence the data. The one variable that could not be equalized was the landing pages. I set the test to bid for 65% impression share and placed a bid cap at 4X my average CPC. The test ran for three weeks, as we had a budget cap for the experiment.
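For readers who manage campaigns programmatically, the setup above maps roughly onto the Google Ads API’s Target Impression Share fields. The sketch below is a minimal illustration, not a live API call: it just builds the key values in the API’s micros-based units. The 65% target and 4X bid cap come from the test; the $2.50 baseline CPC is a hypothetical number for illustration.

```python
# Sketch: translate the experiment settings into Google Ads-style
# Target Impression Share fields. Money and fractions in the Google Ads
# API are expressed in "micros" (1 unit = 1,000,000 micros).

def target_impression_share_settings(target_share: float,
                                     avg_cpc: float,
                                     bid_cap_multiple: float = 4.0) -> dict:
    """Build the bidding-strategy values used in the test.

    target_share     -- desired impression share (0.65 = 65%)
    avg_cpc          -- historical average CPC in account currency
    bid_cap_multiple -- bid ceiling as a multiple of avg CPC (4X here)
    """
    return {
        # Where on the page to target impressions.
        "location": "ANYWHERE_ON_PAGE",
        # 65% impression share -> 650,000 micros.
        "location_fraction_micros": int(target_share * 1_000_000),
        # Bid cap of 4X the historical average CPC, in micros.
        "cpc_bid_ceiling_micros": int(avg_cpc * bid_cap_multiple * 1_000_000),
    }

# Hypothetical baseline: $2.50 average CPC.
settings = target_impression_share_settings(0.65, 2.50)
print(settings)
```

The important design choice here is the bid cap: without one, a Target Impression Share strategy will bid whatever it takes to hit the target, which is exactly the cost inflation the test surfaced.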

The Results

PPC Target Impression Share Bidding Results

  • Brand C was the winner by conversion volume and conversion rate, even though it did not have the highest impression share. It did, however, garner the highest volume of impressions. This was eye-opening to the client because Brand C has not historically been considered a major player for them; Brand A has.
  • Brand A performed poorly despite historically having a strong presence for the client in both monthly budget and reputation. It had the highest impression share in this test but was not as efficient as Brand C at converting.
  • Despite bidding for 65% impression share, none of the campaigns came close to that, and you can see that budget was primarily to blame. When I dug deeper, I found that to improve performance I would have had to raise my bid cap, which I had already set at 4X my average CPC. If you want to get super competitive with this bidding method, it will cost you.
    • Average CPCs for the experiment were 3X the brands’ average CPCs
    • CPLs for the experiment were 2.8X higher than the brands’ averages
  • Impression share was particularly low for mobile-specific campaigns, and all mobile campaigns lost impression share due to budget. My conclusion is that mobile is far more competitive in this landscape, as evidenced by its much higher CPCs.
  • Auction Insights for these campaigns were very interesting:
    • Brand B showed up in both Brand A & C’s auction insights, but not competitively.
    • Brand A did not show up in Brand C’s auction insights, and vice versa.
    • It was clear that Brand B was trounced by Brands A & C.
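To make the cost inflation above concrete, here is a quick back-of-the-envelope sketch. The 3X CPC and 2.8X CPL multipliers come from the results; the $2.50 baseline CPC, $50 baseline CPL, and $5,000 experiment budget are hypothetical numbers for illustration only.

```python
# Back-of-the-envelope: what Target Impression Share cost inflation
# does to an experiment budget. Multipliers come from the test results;
# the baseline figures below are hypothetical.

BASELINE_CPC = 2.50       # hypothetical historical average CPC ($)
BASELINE_CPL = 50.00      # hypothetical historical cost per lead ($)
EXPERIMENT_BUDGET = 5000  # hypothetical budget cap for the test ($)

CPC_MULTIPLIER = 3.0      # experiment CPCs ran 3X the brands' averages
CPL_MULTIPLIER = 2.8      # experiment CPLs ran 2.8X higher

experiment_cpc = BASELINE_CPC * CPC_MULTIPLIER  # $7.50 per click
experiment_cpl = BASELINE_CPL * CPL_MULTIPLIER  # $140.00 per lead

# Leads the budget buys at experiment rates vs. baseline rates.
experiment_leads = EXPERIMENT_BUDGET / experiment_cpl  # ~36 leads
baseline_leads = EXPERIMENT_BUDGET / BASELINE_CPL      # 100 leads

print(f"Experiment CPC: ${experiment_cpc:.2f}, CPL: ${experiment_cpl:.2f}")
print(f"Leads: {experiment_leads:.0f} (experiment) vs {baseline_leads:.0f} (baseline)")
```

In other words, at a 2.8X CPL, the same budget buys roughly 1/2.8 of the usual lead volume, which is why the CPLs alone were enough to end the test at its budget cap.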

We didn’t continue the test beyond its budget because the CPLs were too high. This test was not perfect, but it provided a myriad of insights that were important to my client, and it gave me a better understanding of how automation works. My final takeaway is that you really have to pay to play with an automated bidding strategy like Target Impression Share, as it can drive up costs well beyond your averages. One final note I will leave you with: this bid strategy can be a great fit for clients facing heavy competition on branded terms, like B2B SaaS. I have a version of that experiment running, but it’s in its early stages. Early results show higher CPCs, but also higher conversion rates and conversion volume.

– –

If you enjoyed this post, check out our Toolkit for Companies with Multiple Brands