ARF CEO Scott McDonald on New Initiatives on Data Quality

Oct 13, 2018 | Media Measurement, Press

(Originally posted by The ARF, October 2018)

The ARF has announced two new efforts to tackle data quality questions:

  • Data Labeling Initiative – with the ANA’s Data Marketing & Analytics division, the IAB Tech Lab, and the Coalition for Innovative Media Measurement (CIMM).
  • Data Validation Initiative – independent research to determine whether surveys can be used to measure the accuracy of third-party audience targeting data.

Data Labeling Initiative

The trade-group collaboration seeks to increase the transparency of third-party data sets bought on the open market by creating labels that indicate what is in each data set, much as nutrition labels list the ingredients in food. The labels will supply information in four sections:

  • Data Solution Provider and Distributor Information – Who provided the data segment and appropriate contact information;
  • Audience Construction – How the segment was constructed with relevant details such as audience count and applicable modeling or cross-device ID expansion;
  • Audience Snapshot – What audience segment the label describes, including the data provider’s branded audience segment name, the most relevant segment name from a newly standardized taxonomy, a top-line audience description, and applicable geographic coverage;
  • Source Information – Where the original data components were sourced with information including data provenance, data collection techniques, refresh frequency, and event lookback window.
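As an illustration only (the consortium's actual schema and field names are not public in this article), the four sections above could be modeled as a simple labeled record, with a check that every required section is present:

```python
# Hypothetical sketch of a data transparency label as a plain dict.
# All field names here are illustrative assumptions, not the real standard.

REQUIRED_SECTIONS = [
    "provider_and_distributor",  # who supplied the segment, plus contact info
    "audience_construction",     # how it was built: counts, modeling, ID expansion
    "audience_snapshot",         # what the segment is: names, description, geography
    "source_information",        # where the data came from: provenance, refresh, lookback
]

def validate_label(label: dict) -> list:
    """Return the required sections missing from a label (empty list = compliant)."""
    return [s for s in REQUIRED_SECTIONS if s not in label]

example_label = {
    "provider_and_distributor": {"provider": "Example Data Co.", "contact": "data@example.com"},
    "audience_construction": {"audience_count": 1_200_000, "modeled": True,
                              "cross_device_expansion": True},
    "audience_snapshot": {"branded_name": "Auto Intenders",
                          "taxonomy_name": "Auto > In-Market", "geography": "US"},
    "source_information": {"provenance": "first-party site visits",
                           "refresh": "weekly", "lookback_days": 30},
}
```

A compliance program like the one described could run a check of this shape against each submitted label before it enters the centralized database.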

A centralized database will “house the label information, as well as an associated compliance program that will govern disclosure, certification, and validation.” The consortium has established a six-month test period in which companies can test various data sets and submit comments.

Below is a conceptual version of a company's data label:

[Image: mock-up of a "Data Transparency Facts" label]

Data Validation Initiative

The ARF partnered with Lucid, comScore, LiveRamp, and Oracle Data Cloud (ODC) to determine whether a simple, cost-effective method could be developed to assess data quality. Specifically, they examined how much a given data set improved marketers' odds of reaching their targets relative to a random approach.
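The comparison to a random approach can be expressed as a lift ratio: the rate at which surveyed segment members actually belong to the target, divided by the target's incidence in the general population. A minimal sketch with made-up numbers (the initiative's actual metrics and thresholds are not given here):

```python
def targeting_lift(hits_in_segment: int, segment_size: int, target_incidence: float) -> float:
    """Lift of a purchased segment over random targeting.

    hits_in_segment: surveyed segment members who truly belong to the target
    segment_size: total surveyed segment members
    target_incidence: share of the general population in the target
                      (the random-targeting baseline)
    """
    hit_rate = hits_in_segment / segment_size
    return hit_rate / target_incidence

# Illustrative only: a hypothetical "movie goers" segment in which 45 of 100
# surveyed members qualify, measured against a 30% population incidence,
# yields a lift of about 1.5 - segment members are roughly 1.5x as likely
# as a random person to be in the target.
lift = targeting_lift(45, 100, 0.30)
```

A lift near 1.0 would mean the purchased segment performs no better than random selection.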

Their initial work examined five segments: auto intenders, cereal bar buyers, Delta SkyMiles participants, moviegoers, and consumers with investable assets of more than $50K. Other companies, such as Lotame, Sequent Partners, and Survata, worked independently and will share results, enabling broad industry input into the interpretation of the methods.

Despite complicated questions such as what to use for a benchmark, how to develop the best sampling strategies, and how to deal with various data characteristics (e.g., behaviors vs. attitudes vs. attributes), early results are promising. For example, findings showed that including gender in a data set improved targeting.

Results are still being evaluated, but the tentative conclusions are that the survey tool appears to be viable, with caveats such as:

  • Targeting is more efficient for low-incidence categories like frequent flyers and high-income investors.
  • Behaviors are easier to study than attitudes or intentions.
  • Hybrid targets like demographics plus behavior may be better than either characteristic alone.
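One arithmetic reason the first caveat holds: lift over random is capped at the reciprocal of the target's incidence, so rare targets (frequent flyers, high-income investors) leave far more headroom for a segment to outperform random, while common targets cap out quickly. A small sketch of that ceiling (my own illustration, not a calculation from the study):

```python
def max_lift(target_incidence: float) -> float:
    """Ceiling on lift over random targeting.

    Even a perfect segment (100% hit rate) cannot exceed 1 / incidence,
    since lift = hit_rate / incidence and hit_rate tops out at 1.0.
    """
    return 1.0 / target_incidence

# A rare 2%-incidence target leaves up to ~50x headroom over random;
# a common 50%-incidence target can never beat random by more than 2x.
rare_ceiling = max_lift(0.02)
common_ceiling = max_lift(0.50)
```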

Additional work will expand upon these early findings and will be shared with the advertising community.
