Lucid Supplier Quality Scores: Q3 2018 Results

Oct 31, 2018 | Quality

Elevating Sample Across the Globe

The Q3 Quality Scores release includes published scores for more than 55 suppliers across the US, UK, and Germany. With Germany now in its second published quarter, we have added consistency scores for that data set. This quarter, we also completed the beta test of the Quality Program in Australia and shared initial scores with suppliers in that country as planned. Fielding in Australia is now officially launched, and we remain on target to publicly release the Q4 results with our next release early next year.

The latest scores can be viewed here.

Onward and Upward

Every quarter, we look for consistent improvement in the quality of sample provided by our supply partners. One of the key benefits of the Quality Program is the feedback and insight it gives suppliers, providing actionable guidance on their sample and delivery. As a result, US QScores have continued to climb steadily over time. With results in the books for Q3, we're happy to report that 28 of 34 US suppliers' scores increased over Q2, by more than 1 point on average!

Stability over Time

One of the methodological foundations of the program is a simple statistical principle: as more data is collected, its distribution stabilizes and tends toward a normal distribution (a bell curve). As such, with a growing amount of data as a source of comparison, we get a clearer picture of how the population is distributed. If you look closely at the QScores for Q3, you'll be heartened to see a normal distribution of scores, giving everyone a solid sense of how they compare to the population. This pattern appears each quarter, reinforcing program and data stability.
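To make the idea concrete, here is a minimal Python sketch of how a set of supplier QScores could be checked against that bell-curve expectation. The score values are hypothetical placeholders, not published results, and the test shown (D'Agostino-Pearson, via SciPy) is just one common way to assess normality.

```python
# Minimal sketch (not Lucid's internal tooling) of checking whether a set of
# supplier QScores approximates a normal distribution.
# The scores below are hypothetical placeholders, not published results.
import numpy as np
from scipy import stats

qscores = np.array([
    71.2, 74.8, 76.1, 77.5, 78.0, 78.9, 79.4, 80.2, 80.8, 81.3,
    81.9, 82.4, 82.7, 83.1, 83.6, 84.0, 84.5, 85.1, 85.8, 86.9,
])

mean, std = qscores.mean(), qscores.std(ddof=1)
stat, p_value = stats.normaltest(qscores)  # D'Agostino-Pearson normality test

print(f"mean={mean:.1f}, std={std:.1f}")
print(f"normality test p-value={p_value:.3f}")
# A high p-value means we cannot reject normality, i.e. the scores are
# consistent with the bell-curve shape described above.
```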

Another area of stability we continued to see this quarter is the remarkable consistency over time of the five compositional items: brand affinity, value-seeking behavior, technology adoption, gaming, and movie and music usage. Looking at line charts for the past five quarters (the period over which data has been collected in modules), the trends are remarkably flat. Aggregate brand affinity scores have stayed within a range of 0.9% over five quarters, while movie and music usage has moved the most, apparently due to seasonal usage. But even for that measure, variation has been just over 3%, with aggregate quarterly scores ranging between 54.2% and 57.7%.
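As an illustration of how that quarter-to-quarter variation can be summarized, the sketch below computes the range (maximum minus minimum) of aggregate scores across five quarters. Apart from the endpoints cited above for movie and music usage, all values are hypothetical placeholders.

```python
# Sketch of summarizing quarter-over-quarter variation in the aggregate
# compositional items. Values are illustrative, not published figures,
# except the 54.2/57.7 endpoints cited for movie and music usage.
aggregate_scores = {
    "brand affinity":         [62.1, 62.4, 62.9, 62.3, 62.6],
    "value-seeking behavior": [48.0, 48.5, 47.9, 48.3, 48.7],
    "technology adoption":    [55.2, 55.8, 55.5, 56.0, 55.4],
    "gaming":                 [40.1, 40.6, 40.3, 40.9, 40.5],
    "movie and music usage":  [54.2, 55.1, 57.7, 56.3, 55.0],
}

for item, quarters in aggregate_scores.items():
    spread = max(quarters) - min(quarters)  # total variation across 5 quarters
    print(f"{item}: range of {spread:.1f} points over {len(quarters)} quarters")
```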

Always Looking Ahead

Over the past several months, we have also turned our focus toward improving fielding and automating reporting for 2019. These enhancements will significantly improve program efficiency, with the objective of updating Quality Scores on a much more frequent basis. Along with automation, we plan to provide more sophisticated reporting on the program overall.

Scoring Criteria

Supplier scores are measured by the following criteria:

  • QScore: Quality “grades” based on standard survey response measures (e.g. time spent on survey, attentiveness, open-end response quality). QScore is calculated from a survey module with a minimum of 240 responses per supplier, including gender and age quotas. QScore questions are embedded as part of survey pre-screening, providing respondents with a seamless survey experience. The assessment provides benchmark data on the quality of sample suppliers.
  • Acceptance Score: Measures the acceptance rate of survey completes for individual suppliers. “Acceptance” refers to completes that meet buyers’ quality standards and, as a result, are accepted by those buyers. To define a benchmark for acceptance, a weighted average of all accepted completes in the Lucid Marketplace is measured. Then, to calculate Acceptance Score, individual supplier completes are measured against each other, at the project level, to determine how close they are to that benchmark.
  • Consistency: Measures the composition of respondent attitudes/opinions over time. Composition is measured by attitudinal responses to elements including brand affinity, value-seeking behavior, technology adoption, gaming, and movie and music usage. Each supplier is compared to an aggregate of all suppliers on these dimensions, using a three-quarter rolling average that is normalized to a score of 100. The Consistency Score evaluates the quarterly change in a supplier's Composition scores, and was developed to benchmark sample suppliers against themselves over time. As such, it gives buyers insight into the quarter-to-quarter reliability (consistency) of a seller's respondents. This can be especially helpful in tracking research, where variations in sample composition need to be minimized at every opportunity. A simplified sketch of the Acceptance and Consistency calculations follows this list.
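For readers who want a more concrete picture of the mechanics, the Python sketch below works through a simplified version of the Acceptance Score benchmark and the Consistency Score rolling average described above. It is an illustration under stated assumptions, not Lucid's production scoring code; the supplier names, completion counts, and quarterly values are all hypothetical.

```python
# Simplified illustration of the Acceptance and Consistency logic described
# above. All data and helper names are hypothetical; this is not Lucid's
# production scoring code.
from statistics import mean

# --- Acceptance Score (sketch) ---------------------------------------------
completes = {"SupplierA": 1200, "SupplierB": 800, "SupplierC": 500}
accepted  = {"SupplierA": 1130, "SupplierB": 770, "SupplierC": 430}

# Marketplace benchmark: weighted average acceptance rate across all suppliers
# (weighting by completes is equivalent to pooling accepted over total completes).
benchmark = sum(accepted.values()) / sum(completes.values())

for supplier in completes:
    rate = accepted[supplier] / completes[supplier]
    # A supplier's Acceptance Score reflects how close its project-level
    # acceptance rate sits to the marketplace benchmark.
    print(f"{supplier}: acceptance {rate:.1%} vs benchmark {benchmark:.1%}")

# --- Consistency Score (sketch) --------------------------------------------
# Quarterly composition scores for one supplier and for the aggregate of all
# suppliers on a single dimension (e.g. brand affinity), oldest first.
supplier_quarters  = [61.8, 62.5, 62.1, 62.9]
aggregate_quarters = [62.1, 62.4, 62.9, 62.6]

def rolling_3q_average(series):
    """Three-quarter rolling average over the most recent quarters."""
    return mean(series[-3:])

# Normalize the supplier's rolling average against the aggregate, so a supplier
# that matches the overall population scores 100.
normalized_now  = 100 * rolling_3q_average(supplier_quarters) / rolling_3q_average(aggregate_quarters)
normalized_prev = 100 * rolling_3q_average(supplier_quarters[:-1]) / rolling_3q_average(aggregate_quarters[:-1])

# The Consistency Score evaluates the quarter-to-quarter change in that
# normalized composition: smaller movement means a more stable respondent mix.
quarterly_change = abs(normalized_now - normalized_prev)
print(f"normalized composition: {normalized_now:.1f} (change of {quarterly_change:.2f} vs last quarter)")
```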
