November 1, 2017

To Whom Am I Speaking? Part I

No matter how far back you go, successful marketing has been characterized by three inviolate rules: Who, What and How. Decide: Who do we want to persuade, What do we want to say, and How do we get the message to them.

Martin Luther attacked the Catholic Church in the 1500s by nailing a manifesto to church doors. His document was printed on a Gutenberg press. That was good marketing: the right audience, a clear message, and the best delivery technology of the day. Every marketer knows this.

Roll forward 500 years. We do the same thing, except now we do it with data. The inviolate first rule is still to speak to people who will care, people to whom the message matters. Speaking to the wrong people not only wastes the effort; it may invite backlash. Understanding your audience and your prospect is the first rule of marketing, and always has been.

Today, we gain that understanding by examining the point of view of people who might care (focus groups, for example). We gain that understanding by looking at how people who bought our products live, what they believe, and what they feel. We gain this understanding by observing how people shop, or drive, or make decisions generally. We do this to serve them, not to stalk them. And it all ends up as data.

The enemy: oversimplification
As data began to take hold as a mechanism for targeting, it became clear that data containing a strong indication of a person’s likelihood to buy something was scarce. Advertisers wanted more reach. Thus, we created models to expand targeted reach, mostly so-called lookalike or act-alike models. People whose other behaviors resembled those of known pet owners were labeled pet owners in a segment, whether they exhibited pet-owning behaviors or not. This sold a lot more media, so it was a good thing for people in the media business. For advertisers, though, the advancement was marginal.
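
To make the mechanics concrete, here is a minimal sketch of how such a lookalike model might work, using entirely synthetic data and a scikit-learn logistic regression. The features, segment sizes, and 0.5 cutoff are illustrative assumptions, not any vendor’s actual method.

    # A toy lookalike model: score unknown users by how closely their observed
    # behaviors resemble those of known pet owners. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical behavioral features per user (e.g., visits to pet-care content,
    # grocery-delivery use, suburban-home signal). Columns are placeholders,
    # not real attributes.
    n_seed, n_pool = 1_000, 10_000
    seed_features = rng.normal(loc=1.0, size=(n_seed, 3))   # known pet owners
    pool_features = rng.normal(loc=0.0, size=(n_pool, 3))   # everyone else

    X = np.vstack([seed_features, pool_features])
    y = np.concatenate([np.ones(n_seed), np.zeros(n_pool)])
    model = LogisticRegression().fit(X, y)

    # The "act-alike" segment: pool users whose modeled resemblance to the seed
    # exceeds an arbitrary cutoff. They are modeled pet owners, not observed ones.
    scores = model.predict_proba(pool_features)[:, 1]
    lookalikes = pool_features[scores > 0.5]
    print(f"{len(lookalikes)} of {n_pool} pool users labeled 'pet owners' by the model")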

Usually, a segment of modeled pet owners was richer in actual pet owners than a broad demographic such as suburban homeowners. So, as data became more important in planning, we began to assign words to describe the qualities of data. Among them were “deterministic” (raw behaviors) and “probabilistic” (modeled consumers, or inferred attitudes). Both descriptions have issues.

People glom on to the difference between “deterministic” and “probabilistic” as some sort of magical line in the sand, but that false sense of comfort is dangerous. Yes, facts exist (deterministic data), but riddle me this: if my credit card shows I shop at Whole Foods, am I rich? Any inference made from deterministic data sends us right back down a statistical rat hole.
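
To make the rat hole concrete, here is the Whole Foods question written out as Bayes’ rule. Every number is invented for illustration; the point is only that a hard fact about a transaction yields a soft probability about income.

    # Bayes' rule with made-up numbers: a "deterministic" fact (a Whole Foods
    # charge on my card) still only supports a probabilistic guess about income.
    p_rich = 0.10               # assumed share of households that are "rich"
    p_wf_given_rich = 0.40      # assumed share of rich households shopping at Whole Foods
    p_wf_given_not_rich = 0.10  # assumed share of everyone else shopping there

    p_wf = p_wf_given_rich * p_rich + p_wf_given_not_rich * (1 - p_rich)
    p_rich_given_wf = p_wf_given_rich * p_rich / p_wf
    print(f"P(rich | shops at Whole Foods) = {p_rich_given_wf:.2f}")  # ~0.31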

Another schism divides planning and activation. I might know for sure that certain cookies are associated with a click on a hotel ad. Planners may decide they want to reach that group. For television, there is no choice but to reduce that audience to a demographic. For online, the exact people can be activated, but does clicking on a hotel ad make you a good prospect? In this case, it probably does. But the population of clickers is a tiny subset of those who might actually need a hotel room.

Grim reality
The CEO of a large agency recently said to me, “Everyone tells me their data is great. How would I know?” Indeed, how would he?

It’s time for advertisers to hold themselves and their suppliers accountable for the quality of data and the conclusions derived from it. The risk of not getting this right is the commoditization of data (and, ergo, of consumers). That, in my opinion, stands high on the list of strategic risks for the online ad industry.

So, I present here a listicle of quality attributes for advertising data:

  • Recency: If you are buying, say, “beauty category buyers,” it’s pretty safe to say that they are still interested in beauty after six months. But if you are buying six-month-old “Auto Intenders,” there is a pretty good chance they no longer need a car.
  • Veracity of the inference: How is the purported meaning related to the data? For example, say you bought a segment of people interested in adhesives (glue intenders!). To validate, ask some of them about their intent. What will they say: “I like glue,” or “I studied the principles of adhesion in engineering school”?
  • Observation vs. declaration: Some data is derived by observing what people did (from cookies or panels, for example). Other data is derived from what people said. Third-party data sites that collect observations (see http://www.bluekai.com/registry/) are pretty good at nailing interests from website behaviors; demographics, not so much.
  • Likelihood of actual reach: There are several reasons a cookie in a DMP may never translate into actual reach: the user may never show up in the bought impression footprint, or the cookie may simply be deleted. You could buy 50 million users and only find half, or a tenth, of them. Time helps, but this is a serious impairment.
  • Fit with actual prospect density: You might buy a segment of 20 million new pet owners, but are there really that many out there? Inflated estimates of prospect density are the first symptom of naïve hope. (A rough way to combine these last two discounts is sketched below.)
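
As a back-of-the-envelope illustration of the last two points, here is a sketch that discounts a vendor’s claimed segment size for deliverability and then sanity-checks it against prospect density. Every input is an assumption to be replaced with your own match-rate, cookie-churn, and category figures.

    # Rough effective-reach estimate for a bought segment. All inputs are
    # assumptions, not industry benchmarks.
    def effective_reach(segment_size, match_rate, cookie_survival, true_prospect_pool):
        """Discount a claimed segment size by deliverability and plausibility."""
        deliverable = segment_size * match_rate * cookie_survival
        # You cannot reach more genuine prospects than actually exist.
        return min(deliverable, true_prospect_pool)

    # Illustrative only: a "20 million new pet owners" segment, half of it found
    # in your impression footprint, 60% of those cookies surviving, against a
    # guess that only 8 million genuinely new pet owners exist.
    print(effective_reach(20_000_000, match_rate=0.5,
                          cookie_survival=0.6, true_prospect_pool=8_000_000))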

In our next segment, we will talk about how to address the problems with data. Until then, have a deterministic day.