Without getting too deep into the problem of attribution, I want to put something out there for consideration...
Your attribution model in travel marketing might be hurting your metasearch performance, and not in the ways you might think. Luckily, there’s a simple solution.
As a little disclaimer: over the past decade, I’ve had the great opportunity to work on some of the most advanced, sophisticated, and productive digital marketing campaigns in the world for huge global brands.
Around four years ago, the topic of attribution started to come up more and more among the digital and ecommerce teams at those brands.
With it came a lot of drama and opinion. Only more recently has that turned into data and actionable facts.
This is not about attribution in that sense, but just so that my biases are clear, here are my own opinions:
- Attribution is a problem best left to the analytics guys.
- Attribution is kind of like economics. Incredibly important, and with no absolute answers.
- When it comes to attribution, cleaner data and organizational alignment around that data are more important than being clever.
As I stated, this isn’t about attribution. Instead, it’s about optimization, and more specifically, an optimization challenge with metasearch.
The challenge with optimizing metasearch has two distinct parts:
1. Metasearch traffic is by definition almost “all tail, no head.” For instance, when working with large metasearch advertisers, we find that on any given day usually no more than one percent of total traffic comes in on any one property. Things fragment very quickly from there.
2. The booking process for travel is complex, taking place across about 22 different websites on average, over almost ten separate sessions (source: Google’s Five Stages of Travel). Once you add in the fact that different segments book differently, you end up with even more complexity! For the advertiser, this creates a lot of opportunities for inbound traffic from multiple sources.
Now imagine you’ve combined these two challenges with the last click attribution model.
In essence, you’re taking a user who is highly likely to visit your site through multiple channels and then purposefully throwing out information about their purchase path, often without even knowing it.
From a marketing standpoint, you’re taking a sparse data set that’s difficult to make decisions from and arbitrarily making it even sparser.
Why arbitrarily? Isn’t that a bit harsh?
I don’t think so. Take it from people far smarter than me: last click leaves a lot on the table. When you use last click to measure performance in meta, you’re essentially optimizing from a data set that considers every other click in the research process worthless.
The flip side, which is just as bad, is that you’re optimizing from a data set that assumes the last step in a user’s purchase process was worth everything!
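To make that concrete, here’s a minimal sketch in Python of how the two models spread credit across the same purchase path. The channels, path, and booking value are entirely hypothetical, invented just to illustrate the mechanics:

```python
# One hypothetical booking worth $200, touched by three channels.
path = ["Google HPA", "TripAdvisor", "Brand SEM", "TripAdvisor"]
revenue = 200.0

# Last click: the final touchpoint gets 100% of the credit.
last_click = {channel: 0.0 for channel in path}
last_click[path[-1]] = revenue

# Linear distribution: every touchpoint gets an equal share.
linear = {channel: 0.0 for channel in path}
for channel in path:
    linear[channel] += revenue / len(path)

print(last_click)  # {'Google HPA': 0.0, 'TripAdvisor': 200.0, 'Brand SEM': 0.0}
print(linear)      # {'Google HPA': 50.0, 'TripAdvisor': 100.0, 'Brand SEM': 50.0}
```

Under last click, two of the three channels look worthless. Under linear, every step of the research process carries part of the booking’s value.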
A standard (but overly basic) practice in metasearch optimization is to bid down or remove properties that aren’t performing well.
If you’re running a Hotel Price Ads campaign, you might have two properties with performance as below; the only difference is that in the second case an alternate partner (let’s say TripAdvisor) was the last click and was given credit for the conversion:
Example: Last click
[Table: property performance under last click attribution, with the conversion credited to the last-click partner]
The problem here is that if you remove this hotel from your HPA campaign, you might adversely affect your total revenue without ever knowing it.
The fact is that a conversion happened for this hotel. Just because the last click came through another partner does not mean that clicks from this channel to this hotel are worthless. (In fact, given the insight that the hotel converted elsewhere, I’d argue that the opposite is true!)
If you remove it from your campaign, you’re leaving revenue on the table.
Example: Linear distribution
[Table: the same performance under a linear distribution, with credit shared between Hotel Price Ads and TripAdvisor]
In this (much simplified) example, Hotel Price Ads and TripAdvisor would share credit for the conversion. As an advertiser, you can now make a much more informed decision about whether to include a property in, or exclude it from, your advertising portfolio.
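To see why the split changes the decision, here’s the same idea as a back-of-the-envelope calculation. All numbers are invented for illustration and are not the figures from the tables above:

```python
# Invented figures: 100 HPA clicks at $0.50 CPC, and one $200 booking
# where TripAdvisor happened to receive the last click.
spend = 100 * 0.50           # $50 in Hotel Price Ads click costs
booking_value = 200.0
touchpoints = 2              # HPA and TripAdvisor both touched the booking

last_click_revenue = 0.0                      # last click: HPA gets nothing
linear_revenue = booking_value / touchpoints  # linear: HPA gets $100

print(f"Last click ROAS: {last_click_revenue / spend:.1f}x")  # 0.0x -> cut the property
print(f"Linear ROAS:     {linear_revenue / spend:.1f}x")      # 2.0x -> keep the property
```

Same clicks, same booking; the only thing that changed is how credit was assigned, and the bid decision flips.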
Imagine this happening hundreds or thousands of times per day. It quickly causes you to either undervalue or overvalue the performance of specific properties, or even an entire campaign.
The most frequent result of last click optimization on a sparse data set is a skeleton campaign, where your spend, bookings and revenue have been reduced to a trickle and ROI is highly volatile because volume is so low.
Solutions
It may not be realistic for you to change your attribution model. Even if it is in your control, it’s something to test your way into, not something to do on a whim.
The most basic solution to this problem is to use two data sets for your campaigns. Your optimization data set might operate on a linear distribution, while your reporting data set can continue to run on last click.
This sounds complex, but sometimes it’s as simple as putting a third-party set of tracking tags on your campaigns.
It can also be accomplished by consuming raw traffic logs from analytics providers and setting up your own custom reporting. This is not easy, but it gives you tremendous control over your optimization process, allowing you to scale your campaigns up rather than optimize them down to nothing.
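As a rough sketch of what the raw-logs approach can look like (the log schema and channel names here are assumptions for illustration; your analytics provider’s fields will differ), you could build both data sets from the same event stream:

```python
from collections import defaultdict

# Assumed log schema: (user_id, timestamp, channel, revenue), where
# revenue > 0 marks a converting event. All rows are illustrative.
events = [
    ("u1", 1, "Google HPA",  0.0),
    ("u1", 2, "TripAdvisor", 0.0),
    ("u1", 3, "TripAdvisor", 200.0),
    ("u2", 1, "Google HPA",  0.0),
    ("u2", 2, "Google HPA",  150.0),
]

# Rebuild each user's clickstream in time order.
paths = defaultdict(list)
for user, ts, channel, rev in sorted(events, key=lambda e: (e[0], e[1])):
    paths[user].append((channel, rev))

reporting = defaultdict(float)     # last click, for reporting
optimization = defaultdict(float)  # linear, for bidding decisions

for touches in paths.values():
    total = sum(rev for _, rev in touches)
    if total == 0:
        continue  # path never converted; no credit to assign
    reporting[touches[-1][0]] += total                # all credit to the last touch
    for channel, _ in touches:                        # equal share to every touch
        optimization[channel] += total / len(touches)

print(dict(reporting))     # {'TripAdvisor': 200.0, 'Google HPA': 150.0}
print(dict(optimization))  # {'Google HPA': ~216.67, 'TripAdvisor': ~133.33}
```

The reporting view stays consistent with what the rest of the organization sees, while the optimization view stops treating assist clicks as worthless.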
NB: This is a viewpoint from Nicholas Ward, president at Koddi