Rise of the Machines: Programmatic Pricing is the Future of Sample

Mar 26, 2019 | Featured, Marketplace, Monetization

By: Patrick Comer, CEO, with contributions by Andy Ellis, CRO, MD, North America

“Sample” is a simple word that is intrinsic to all the moving parts of research. The truth is, sample is not so simple. Rather, it is the result of many complex factors such as surveys, audience, respondents, platforms, incidence rate, length of interview, and time. And these same components also determine the cost of sample.

So, what specifically impacts sample cost? It all starts with you and your surveys. Since the beginning of survey time, we have largely based sample pricing on the required audience – as indicated by incidence rate (IR) and length of interview (LOI). In other words, difficult-to-find audiences and longer surveys cost more, while easier-to-find audiences and shorter surveys cost less. In a manual world, it was challenging (if not impossible) to evaluate all of these simultaneously competing factors in real-time, all the time. So we, suppliers and buyers, accepted an uneasy truce and settled on a “good-ish” proxy: IR + LOI = price.

Through programmatic sampling, we have learned that each survey – and, more specifically, each survey and buyer combination – carries a unique digital fingerprint containing its DNA for all to read. This digital fingerprint contains what we at Lucid lovingly refer to as a “survey’s health metrics.” We also expose this information to suppliers via API. And guess what? Suppliers voraciously consume this information, which informs decisions across all survey inventory. All of this takes place in real-time, all the time.

And guess what else? No programmatic supplier really prices on IR + LOI. So, when buyers focus on pricing sample (instead of individual surveys), they act under a false assumption and fail to optimize their individual buying power. Ultimately, this drives up sample costs and decreases delivery rates.

One of the least effective methods for sample pricing is the classic rate card.

I’ll explain.

The Rate Card Process

Every year, it’s the same: Large market research agencies ask sample suppliers to fill out rate cards. Weeks of work, and sometimes in-person negotiations, lead to preferred vendors and preferred pricing. Then, most of that work is promptly ignored for the real process of delivering sample into surveys.

Why are rate cards such a failed mechanism for buyers and sellers to lock in price, quality, and value? From the sample buyer’s perspective, rate cards are a way to build price certainty into the research fielding process. Additionally, the profit margin of buyers is typically driven by the cost of sample, so any decrease in sample costs is good for profitability. Most of the time, the sample supplier is viewed as a vendor (not as a partner), and procurement usually creates a process to drive the negotiated price down.

The key failure point of a rate card is that the inventory in question – the research survey – isn’t a commodity. Currently, there is no way to rate the quality of surveys in order to use a negotiated rate. In other markets, the inventory is codified and, therefore, able to be traded at wholesale and volume prices. This simply isn’t true in market research.

As a result, it can be difficult for the buyer and seller to agree on price and volume for a given quality of survey before the survey goes into field. This is true for ad hoc pricing as much as it’s true for rate cards.

Let us begin with a simple rate card:
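
(The original post presents the rate card as an image; the grid below is an illustrative stand-in with invented CPIs, just to show the shape.)

              LOI 1-5 min   LOI 6-10 min   LOI 11-20 min   LOI 21-30 min
IR 75-100%    $1.50         $2.25          $3.50           $5.00
IR 50-74%     $2.00         $3.00          $4.50           $6.50
IR 25-49%     $2.75         $4.00          $6.00           $8.50
IR 10-24%     $4.00         $5.75          $8.50           $12.00
IR 1-9%       $7.00         $10.00         $15.00          $21.00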

The goal of the rate card is for the supplier to fill in the CPI for any survey that fits within the LOI and IR. Once the supplier has entered the price, the buyer negotiates the given cell down with promises of larger volumes and extra special attention. Procurement loves this step because it’s a) simple to understand and b) shows immediate improvement to CPI to the buyer.
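
To see how little information that grid actually uses, here is a minimal sketch of rate-card pricing as a cell lookup. The bands and CPIs are the invented values from the illustrative grid above, not real rates:

```python
# Minimal sketch of rate-card pricing as a cell lookup.
# Bands and CPIs are illustrative, not actual Lucid or supplier rates.

LOI_BANDS = [(1, 5), (6, 10), (11, 20), (21, 30)]             # minutes
IR_BANDS = [(75, 100), (50, 74), (25, 49), (10, 24), (1, 9)]  # percent

# CPI grid: RATE_CARD[ir_band_index][loi_band_index], in dollars.
RATE_CARD = [
    [1.50, 2.25, 3.50, 5.00],
    [2.00, 3.00, 4.50, 6.50],
    [2.75, 4.00, 6.00, 8.50],
    [4.00, 5.75, 8.50, 12.00],
    [7.00, 10.00, 15.00, 21.00],
]

def rate_card_cpi(loi_minutes: float, ir_percent: float) -> float:
    """Return the negotiated CPI based only on the bid LOI and IR."""
    row = next(i for i, (lo, hi) in enumerate(IR_BANDS) if lo <= ir_percent <= hi)
    col = next(j for j, (lo, hi) in enumerate(LOI_BANDS) if lo <= loi_minutes <= hi)
    return RATE_CARD[row][col]

# A 15-minute, 30% IR survey prices at the (25-49%, 11-20 min) cell: $6.00,
# regardless of how the survey actually performs in field.
print(rate_card_cpi(15, 30))  # 6.0
```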

But buyers are shooting themselves in the foot by not understanding the connection between survey quality, price, and volume. They are actually decreasing available delivery and increasing the overall cost of sample.

Let’s first look at the obvious failures of a rate-card-driven process.

No one can agree on what IR (and sometimes even LOI) actually is.

First, IR – no one can agree on what IR is or how it’s calculated. I could write another blog post just on the different definitions of incidence. Do you include qualified respondents, pre-screening, routing, security terms, or quality terms? All these can be included in incidence, or not… and are typically poorly understood across buyers and sellers of sample.

Even LOI can be miscalculated – is this median or mean? Does this include speedsters or cheaters? Does this include the time to qualify, which can add 1 – 3 minutes and be especially painful?

The ranges are always a problem – there’s a huge difference between an average LOI of 2 minutes and one of 8 minutes. Incidence is even more dramatic. A respondent is ten times as likely to qualify at 10% incidence as at 1%, so how can the two be priced similarly?

Many problems come from misunderstanding of the terminology and how to calculate it.
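
To make the ambiguity concrete, here is a small sketch with invented session counts showing how one survey’s field data yields three different “incidence rates” depending on which terminates you count:

```python
# Hypothetical field data for one survey; counts are invented for illustration.
completes      = 300
screener_terms = 900   # failed the buyer's screener
quota_fulls    = 150   # qualified, but the quota was already full
security_terms = 100   # removed by fraud/security checks
quality_terms  = 50    # removed for speeding, straightlining, etc.

# Definition 1: completes over everyone who entered the survey.
ir_broad = completes / (completes + screener_terms + quota_fulls
                        + security_terms + quality_terms)

# Definition 2: exclude security and quality removals from the base.
ir_narrow = completes / (completes + screener_terms + quota_fulls)

# Definition 3: count quota fulls as qualified (they would have completed).
ir_qualified = (completes + quota_fulls) / (completes + screener_terms + quota_fulls)

print(f"{ir_broad:.1%}")      # 20.0%
print(f"{ir_narrow:.1%}")     # 22.2%
print(f"{ir_qualified:.1%}")  # 33.3%
```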

But it’s worse than that, because:

Every survey is a special snowflake.

We all know that there are many other factors besides LOI and IR that can drive price and delivery. Some are respondent-related: country, ethnicity, age, gender, and an infinite list of other audience target options. Some are survey-related: survey design, mobile optimization, survey platform, PII collection, and the list goes on.

The simple reality is that NO ONE knows how a survey is actually going to perform until it’s in-field. And we are TERRIBLE at predicting survey performance.

Below is a graph of 40M interviews across 129K surveys showing the difference between the IR at the time of BID (before the survey goes into field) and Actual IR once the survey is completed. The vertical axis holds the ACTUAL IR. The horizontal axis holds the mean BID IR. The bars represent one standard deviation above and below the mean.

What this shows is that for every IR from 1 – 100, the ACTUAL IR in field tends to be drastically different. Only 21% of surveys had a BID IR within +/- 5 points of the ACTUAL. That means the incidence rate is *way* wrong nearly 80% of the time.

Here’s the same 40M interviews across 129K surveys now showing the difference between LOI at the time of BID (before the survey goes into field) and Actual LOI once the survey is completed. The vertical axis holds the ACTUAL LOI. The horizontal axis holds the mean BID LOI. The bars represent one standard deviation above and below the mean.

As seen with incidence rate, LOI is *way* wrong. Only 43% of surveys are within +/- 2 minutes between BID and ACTUAL LOI.
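
For the curious, here is a minimal sketch of how those “within tolerance” figures are computed from bid/actual pairs. The four records are invented for illustration, not drawn from the 40M-interview dataset:

```python
# Sketch: measuring bid-vs-actual accuracy across a set of finished surveys.
# Each record is (bid_ir %, actual_ir %, bid_loi min, actual_loi min);
# the handful of rows here are invented for illustration.
surveys = [
    (30, 23, 10, 14),
    (50, 52, 5, 6),
    (10, 31, 20, 12),
    (80, 46, 15, 16),
]

ir_hits = sum(abs(bid_ir - act_ir) <= 5 for bid_ir, act_ir, _, _ in surveys)
loi_hits = sum(abs(bid_loi - act_loi) <= 2 for _, _, bid_loi, act_loi in surveys)

print(f"IR within +/- 5 pts:  {ir_hits / len(surveys):.0%}")   # 25%
print(f"LOI within +/- 2 min: {loi_hits / len(surveys):.0%}")  # 50%
```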

Then, there’s a different reality: the IR and LOI of a survey always change over the course of fielding, as quotas fill, targets shift, and so on.

To demonstrate, we pulled 100 random surveys, all with a BID IR of 30%. After 24 hours, the ACTUAL IR ranged from 6% to 75% with an average of 23.3%. Rebasing them all to their day 1 ACTUAL IR, we can see how they changed each day.
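
The rebasing itself is simple arithmetic – each day’s ACTUAL IR divided by the day-1 value (the series below is invented for illustration):

```python
# Sketch: rebasing a survey's daily in-field IR to its day-1 value, so
# surveys with different starting incidences can be compared on one chart.
# The daily series below is invented for illustration.
daily_ir = [23.0, 19.5, 17.8, 21.2, 16.4]  # percent, days 1-5 in field

rebased = [ir / daily_ir[0] for ir in daily_ir]
print([f"{r:.2f}" for r in rebased])  # ['1.00', '0.85', '0.77', '0.92', '0.71']
```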

Conclusion: only 10% of surveys that go into field are accurately projected at the BID stage on both an LOI and IR basis.

No survey is an island.

Buyers and suppliers have vastly different perspectives when it comes to surveys.

Every survey ever fielded was designed in a vacuum. Each is precious to the researcher, the company, and the customer. For good reason, they believe the sun revolves around that survey. The buyer’s book of business is seen in the same light. Whether the buyer consumes $100K or $10M in sample a year, each believes their book of business is the most important and that suppliers will focus all their attention on it. Right?

Wrong.

In fact, suppliers have numerous opportunities for their respondents beyond a buyer’s single survey. First, there could be multiple individual surveys that a supplier has already bid on. Second, marketplaces (like Lucid) have thousands of surveys available to review. And finally, suppliers can push traffic to flat-rate platforms. The point being, suppliers have options not only when it comes to choosing a survey, but also the delivery mechanism.

There is no guarantee from the supplier to deliver a complete.

A rate card is not a commitment from a supplier to deliver any number of completes to a survey; it’s simply a commitment to a price “if” they send a complete that fits the definition of LOI and IR that the buyer creates. This means that if a survey is a terrible opportunity for the supplier and their respondents, they just stop sending sample. Typically, the buyer has no recourse except to abandon the rate card framework and price on an ad hoc basis.

Programmatic survey ranking is what’s really going on.

Sample suppliers are ranking every survey, every quota, every rate card, and every single opportunity across all platforms they have access to, in real-time – recall the digital survey fingerprint we discussed earlier? They have complex, sometimes machine-learning-based algorithms that judge the relative value of each survey. If your survey falls down in ranking, guess what? You don’t get sample.

And here’s the kicker: they aren’t ranking your survey on IR and LOI.

Simply put, the only time your rate-card-driven survey gets more traffic is when it’s overpriced – meaning you’re willing to pay more than all the other buyers.

Conversion rate also determines the value of earnings per click (EPC).

Most survey ranking uses a version of EPC to determine the relative value of all surveys. A myriad of other variables are in the mix, and each one is weighted based on the special sauce of the supplier. But, at the core, the math is simple: earnings (i.e. CPIs) / respondent sessions. In order to calculate this, you need to understand the conversion rate (i.e. the share of respondents the supplier sends who actually complete the survey).

Let’s take three surveys:

Buyer A: 5 min LOI, $1.00 CPI, 20% Incidence
Buyer B: 10 min LOI, $5.00 CPI, 10% Incidence
Buyer C: 25 min LOI, $10.00 CPI, 2% Incidence

Which one is more valuable?

Well, we can’t know for sure, unless we understand the conversion rate, which is correlated to incidence rate – but only loosely – and the correlation is highly dependent on how the buyer defines incidence. For the purposes of this example, we will assume that incidence = conversion rate.

Buyer A has a $.20 EPC (CPI * Conversion)
Buyer B has a $.50 EPC – Round 1 winner!
Buyer C has a $.20 EPC

What happens if the supplier values the time of the respondent and divides through by minutes?

Buyer A has a $.04 Earnings Per Minute (EPM = EPC / LOI)
Buyer B has a $.05 EPM – Round 2 winner!
Buyer C has a $.008 EPM

Finally, each buyer has a Reconciliation Rate history. Meaning, each buyer typically rejects a certain amount of sample for each survey on data-quality grounds. The CPI a buyer offers is therefore projected to be worth less than face value, based on that reconciliation history. Simply put, if you normally reject 20% of data, then a $5.00 CPI is only worth $4.00 to the supplier over the long term. Yes, your data cleaning impacts your survey ranking and, therefore, price.

Let’s assume the following Reconciliation Rates:

Buyer A (10%)
Buyer B (40%)
Buyer C (5%)

Then the projected Earnings Per Minute (EPM multiplied by one minus the Reconciliation Rate) are:

Buyer A Survey = $.036 Projected EPM – Round 3 winner and CHAMPION!
Buyer B Survey = $.03
Buyer C Survey = $.0076
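
Putting the three rounds together, the whole ranking reduces to a few lines of arithmetic – a minimal sketch, assuming (as above) that conversion equals incidence:

```python
# Sketch of the three-round ranking above: EPC, then EPM, then
# reconciliation-adjusted EPM. Assumes conversion rate == incidence rate,
# as in the worked example.
surveys = {
    "Buyer A": {"loi": 5,  "cpi": 1.00,  "ir": 0.20, "recon": 0.10},
    "Buyer B": {"loi": 10, "cpi": 5.00,  "ir": 0.10, "recon": 0.40},
    "Buyer C": {"loi": 25, "cpi": 10.00, "ir": 0.02, "recon": 0.05},
}

for name, s in surveys.items():
    epc = s["cpi"] * s["ir"]                # earnings per click
    epm = epc / s["loi"]                    # earnings per respondent-minute
    projected_epm = epm * (1 - s["recon"])  # discount by reconciliation history
    print(f"{name}: EPC=${epc:.2f}  EPM=${epm:.3f}  projected EPM=${projected_epm:.4f}")

# Buyer A: EPC=$0.20  EPM=$0.040  projected EPM=$0.0360  <- champion
# Buyer B: EPC=$0.50  EPM=$0.050  projected EPM=$0.0300
# Buyer C: EPC=$0.20  EPM=$0.008  projected EPM=$0.0076
```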

Here’s the fundamental problem: the buyer is completely out of step with the actual value of the surveys they are launching, which leads to a myriad of pricing and delivery challenges. Rate cards are vestiges of a pre-marketplace process before programmatic was the norm.

The answer? Convert to a floating price.

The Lucid Marketplace doesn’t work on rate cards, and we stopped supporting automated rate cards more than five years ago. Because each survey is unique, and its value is virtually unknown before launch, pricing needs to be dynamic.

Lucid has implemented a reverse auction method – meaning that the buyer presents a survey and the price they are willing to pay for it (a.k.a. CPI). However, as is the case with a rate card, pricing does not stop there. Rather, once the survey goes into field, both the buyer and supplier can continually project its relative value based upon actual performance. In other words, surveys that perform well can be updated in real-time to clear at a lower CPI (which saves money), and surveys that are harder than anticipated can be updated with a higher price to attract more supply. The most sophisticated buyers are now matching suppliers by ranking their own opportunities and adjusting pricing on the fly. Programmatic buying means not only the automation of the fielding process, but also real-time pricing by the buyer and seller.
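
What might that look like on the buyer side? Here is a minimal sketch of a pace-based repricing rule – the thresholds and multipliers are invented for illustration, not Lucid’s actual auction mechanics:

```python
# Sketch of buyer-side dynamic repricing: lower the CPI when the survey is
# filling ahead of schedule, raise it when supply is drying up. The rule and
# thresholds are illustrative, not Lucid's actual auction mechanics.
def reprice(current_cpi: float, pct_complete: float, pct_time_elapsed: float) -> float:
    pace = pct_complete / max(pct_time_elapsed, 0.01)
    if pace > 1.25:     # well ahead of schedule: clear at a lower price
        return round(current_cpi * 0.90, 2)
    if pace < 0.75:     # falling behind: pay more to attract supply
        return round(current_cpi * 1.15, 2)
    return current_cpi  # on pace: hold the price

# 40% of completes in after 20% of field time -> drop the CPI from $3.00 to $2.70.
print(reprice(3.00, 0.40, 0.20))  # 2.7
# Only 10% in at the halfway mark -> raise the CPI from $3.00 to $3.45.
print(reprice(3.00, 0.10, 0.50))  # 3.45
```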

What does this mean, ladies and gentlemen? Ultimately, this all serves to better sustain and grow the long-term health of our research ecosystem. Proper pricing for proper audience equals long-term sustainability. As we are seeing at Lucid, it also means that it becomes more, not less, feasible to find those difficult audiences you are looking for.

Let’s look at the average daily CPI for Males 18-34 in the US in 2018 for surveys with an LOI of 10 – 20 minutes. Which ‘rate-card’ price would be most appropriate? None – buyers and sellers must dynamically price.

We’ve seen buyers adopt a variety of modern strategies. Our most sophisticated partners use our APIs to automate per-interview pricing based on the conditions of their survey, their time-in-field, and their budget. Others, including our own Marketplace Services team, give project managers training and discretion to fill a project efficiently based on the throughput of the marketplace. If you’ve already built your sample procurement around rate cards, don’t worry. We have a solution for you.

