Reflecting on the Lucid Quality Program’s 2019 Expansion

Dec 12, 2019 | Marketplace, Quality

If the Lucid Quality Program’s 2019 momentum is any indication of its future growth, I think we’re in for quite a ride next year. We’ve explored a ton of new territory and uncovered sample quality trends that were previously unknown to the industry. 

You may be wondering why this is significant – so here’s a bit of background on measuring sample quality: before the Lucid Quality Program, the industry had no clear metrics to determine what makes “high” or “low” quality sample. Our program defined those metrics. Then, we took it a step further and began creating scorecards for our supply partners to help them track their quality over time. Now, we’ve expanded our fielding and reporting to hundreds of supply partners all over the world. 

The goal? To learn more about global quality trends and use those findings to improve online sample quality for everyone. 

2019 Breakdown

To date, the Lucid Quality Program is fielding in 15 countries: 

  1. United States 
  2. Australia
  3. Brazil
  4. Canada
  5. China
  6. France
  7. Germany
  8. India
  9. Italy
  10. Japan
  11. Mexico
  12. Netherlands
  13. Russia
  14. Spain
  15. United Kingdom

Within those regions, we’ve gathered an incredible amount of data from 70+ unique global supply partners.  The program produces 250 scorecards every four weeks, with supply partner and buyer versions of each one. Every month, we collect over 60k completes for analysis, totaling nearly 750k annual completes in ten different languages.

What We Learned

QScore is derived from questions that benchmark respondent engagement. Think of QScore in tiers: Excellent (100), Very Good (90-99), Good (80-89), and Problematic (<80). We were pleased to find that of the 15 countries measured in 2019, 11 fell into the “Very Good” tier and 4 were “Good.” There were no countries in the “Problematic” range.

As the program's primary quality metric, QScore is calculated by a set of algorithms that evaluate specific quality responses in the sample on a gradient, examining degrees of 'appropriateness' of response. Scoring is applied as a continuum across respondents, generally speaking, but the data for any given survey element is classified as Acceptable, Questionable, or Worst.

Based on the severity of inattentiveness and/or deviation from observed response patterns, items carry different weightings that impact the Quality Score. So, what patterns do we see across our 15 countries? Although we track a substantial range of response patterns, this post focuses on four of the trends we tracked in 2019.
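To make the mechanics concrete, here is a minimal Python sketch of how weighted item classifications could roll up into a 0-100 score and the tiers described above. The weights, penalty values, and scoring function are illustrative assumptions on our part; Lucid's actual QScore algorithms are not spelled out in this post.

```python
# Hypothetical sketch of a weighted quality score. The real QScore algorithms,
# item weights, and classification rules are not published here; every name
# and number below is an illustrative assumption.

# Each evaluated survey element is classified on the Acceptable / Questionable /
# Worst gradient described above, and carries a weight reflecting severity.
CLASSIFICATION_PENALTY = {"acceptable": 0.0, "questionable": 0.5, "worst": 1.0}

def quality_score(items):
    """Compute a 0-100 score from (weight, classification) pairs for one respondent."""
    total_weight = sum(weight for weight, _ in items)
    if total_weight == 0:
        return 100.0
    penalty = sum(weight * CLASSIFICATION_PENALTY[label] for weight, label in items)
    return round(100.0 * (1 - penalty / total_weight), 1)

def tier(score):
    """Map a score onto the tiers used in this post."""
    if score == 100:
        return "Excellent"
    if score >= 90:
        return "Very Good"
    if score >= 80:
        return "Good"
    return "Problematic"

# Example: a questionable item weighted more heavily drags the score down further.
respondent = [(3.0, "acceptable"), (2.0, "questionable"), (1.0, "acceptable")]
s = quality_score(respondent)
print(s, tier(s))   # 83.3 Good
```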

1) Low Incidence Products & Fake Products

Response quality on low-incidence products and fake products tracks country to country, with a wider range on the fake-product check, most notably in Italy (75% pass) and China (80% pass). Pass rates for refraining from selecting fake products are highest (95%+) in the US, AU, CA, DE, JP, and the UK. In every country, respondents correctly avoid selecting low-incidence products at least 95% of the time.

2) Attention Checks

More variation shows up on our attention check, where the US leads with a 90% pass rate. Most other countries are in the upper 80s, while India, Japan, and Mexico fall into the lower 80s. When we look at the worst offenders on the attention check (those who select 'none of these' rather than making a simple selection error), we find frequencies of at most 4% in every country except Japan, where we observe it 6% of the time, indicating that Japan had the highest level of satisficing.

*Russia is excluded from the attention check analysis because its translations are undergoing further review.
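As a rough illustration of the distinction drawn above between a simple selection error and outright satisficing, the sketch below grades a single attention-check response. The option labels and grading rules are assumptions for clarity, not the program's actual implementation.

```python
# Illustrative grading of one attention-check response: a correct pick passes,
# a wrong pick is a simple selection error, and choosing the none-of-these
# option is treated as the worst (satisficing) pattern discussed above.

def grade_attention_check(selected, correct_option, none_option="none of these"):
    """Return 'pass', 'questionable' (wrong pick), or 'worst' (none-of-these)."""
    if selected == correct_option:
        return "pass"
    if selected == none_option:
        return "worst"
    return "questionable"

print(grade_attention_check("blue", "blue"))            # pass
print(grade_attention_check("green", "blue"))           # questionable
print(grade_attention_check("none of these", "blue"))   # worst
```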

3) Open-ends

Thorough open-ends are highly sought after by researchers whenever survey instruments require them. At present, we are revisiting how we measure this in China and Japan, given how differently written responses work in those languages compared with most others. For now, the frequency of thorough open-ends, with those two countries excluded, ranges from 52% (Germany) to 72% (Mexico), with the other countries falling in between. Conversely, gibberish and blank open-ends occur at very low frequencies in all countries: none exceeds a combined 5% (Germany), and most fall between 2% and 3%. For 2020, enhanced algorithms are in place that leverage academic research to contextualize open-end responses according to differences in language.
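For illustration, here is a rough sketch of how open-ends might be bucketed as blank, gibberish, or thorough. The word-count threshold and the vowel-based gibberish heuristic are our own assumptions, and, as noted above, they would not transfer to Chinese or Japanese text.

```python
# Rough, assumption-laden sketch of open-end bucketing; not the program's
# actual algorithm. English-centric heuristics only.
import re

def classify_open_end(text, thorough_min_words=8):
    stripped = text.strip()
    if not stripped:
        return "blank"
    # Crude gibberish heuristic: no word characters, or no vowels at all.
    if not re.search(r"\w", stripped) or not re.search(r"[aeiou]", stripped.lower()):
        return "gibberish"
    if len(stripped.split()) >= thorough_min_words:
        return "thorough"
    return "brief"

print(classify_open_end(""))                      # blank
print(classify_open_end("sdfgkj qwrt"))           # gibberish
print(classify_open_end("I liked the product because it was easy to set up"))  # thorough
```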

4) Acceptable Length of Interview (LOI)

The frequency of acceptable LOI in our quality module ranges from 79% (China and Mexico) to 88% (Spain), with most countries averaging in the mid-80s. Mexico is especially interesting here, given that its respondents tend to give the most verbose open-ends. Our quality program also includes an LOI check on a particular exercise within one of the modules; when we compare that specific page to the full-module LOI, we observe very similar patterns, with the exception of Japan, where acceptable LOI for the exercise comes in much lower than for the overall module. Perhaps respondents in Japan tend to speed through list-type exercises.
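Below is a minimal sketch of an acceptable-LOI check applied at two levels, the full module and a single exercise page within it. The threshold values are assumptions for illustration; the program's actual minimums are not published here.

```python
# Minimal sketch of speeder detection via acceptable LOI. All thresholds are
# assumed values for illustration only.

def loi_acceptable(seconds, minimum_seconds):
    """A respondent is flagged as a speeder when they finish faster than a
    plausible minimum time for the content shown."""
    return seconds >= minimum_seconds

MODULE_MIN_SECONDS = 180    # assumed minimum for the full quality module
EXERCISE_MIN_SECONDS = 20   # assumed minimum for one list-style exercise page

respondent = {"module_seconds": 240, "exercise_seconds": 12}
print(loi_acceptable(respondent["module_seconds"], MODULE_MIN_SECONDS))      # True
print(loi_acceptable(respondent["exercise_seconds"], EXERCISE_MIN_SECONDS))  # False: sped through the exercise
```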

Heading into 2020

Needless to say, 2019 has been a huge year for the Quality Program. As we move into 2020, we expect supplier consistency scores to trend upward in our newer markets, as they have in the US and UK as those markets matured on our exchange. Also look for the number of suppliers listed in each market to grow as more suppliers provide enough sample to be included.

We intend to take a hard look at improving the Acceptance Score measure so that we retain the quantified buyer perspective, but in a way that is easier to digest and more actionable for decision-making. Finally, we will increase consulting opportunities for our suppliers so that they can review and learn from the patterns we observe among their respondents in our program.

2019 was a year of intense automation and scale for our Quality Program. With those goals achieved, our focus will now shift to program enhancement to continue supporting the health of our marketplace.
