Written by: Patrick Comer, Founder & CEO and Andy Ellis, COO
Here at Lucid, we’ve spent the last nine quarters architecting, implementing, and constantly refining the industry’s first public quality measure: the Lucid Supplier Quality Program. The goal of this initiative has been twofold: to equip buyers with consistently high-quality sample, and to ensure that suppliers continue to provide responsive, consistent supply.
The funny thing is, initially we didn’t believe quality was our problem to solve. Wasn’t it up to the buyer to determine what quality sample looked like? Wasn’t the buyer better equipped to determine which elements of quality mattered most to them? Was it our obligation to broker peace between buyer and supplier on our platform? This sounded awfully messy… and was something we were more than happy to avoid.
On the other hand, we saw a unique opportunity. For years, the sample industry touted quality through various means – e.g., recruitment and registration methods, invitation frequency, double opt-in participation, respondent management, and incentives. These methods were sufficient at a point in time, but as technology improved, so did our ability to detect and act on quality issues.
As our marketplace grew, and we received more feedback from buyers and sellers, our position began to change. We realized that not only did we, Lucid, have the capability to signal the quality of sample in new and interesting ways – our users expected it from us. The quality measurement standards that had been commonplace were inadequate in the new programmatic sample landscape we had created. It became apparent that a well-defined, rigorous, and independent measure would reinforce the healthy ecosystem we were building.
Fortunately, after taking the proverbial sample brick to the head one too many times, we woke up. Not only did we have an obligation, as the largest sample marketplace ever created, we were well equipped to solve the quality challenge in an entirely new and unforeseen way.
Enter the Lucid Supplier Quality Program. Utilizing the independent expertise of Chuck Miller of DM2, we created v1 of a quality program.
The aim was simple. Measure and openly share, on a quarterly basis, the quality of a sample source. Specifically, we identified three main areas to focus on: quality of respondent data (Q-score), consistency of source composition (Consistency), and overall data quality at the survey level (Acceptance).
The journey from a ‘hands off’ approach to developing the Quality Program has been informative. Undoubtedly, the driving force behind our program is to equip buyers with the highest quality sample, ensuring that they are able to consistently provide accurate insights.
Additionally, we want to make sure that partners continue to provide responsive and consistent supply. We’ve collected more than two years’ worth of quality data and provided rankings in each of the past four quarters.
We are proud to announce the sample suppliers that have topped our quarterly list in the last year:
This list includes suppliers who have been in the Top 20 for at least 3 quarters.
To celebrate the first anniversary of this groundbreaking program, here is a look back at the year:
*Notes: To provide this annual view, we have updated the methodology of our rankings. Historically, we averaged the three main scores (Q Score, Acceptance, and Consistency) to create the Composite Score. For this retrospective ranking, we’re using z-scores for our Q Scores and Consistency Scores. A z-score is a standard score, which enables a more accurate comparison of suppliers on a common scale. Additionally, we normalized the Acceptance rate within each survey. In previous rankings, we measured a supplier’s total acceptance rate across all surveys (accepted completes / all completes). For these rankings, we only considered surveys in which some completes were rejected, and compared each supplier’s performance against the other suppliers in that survey. For example, if Supplier A had a 5% rejection rate and Supplier B had a 2% rejection rate, the survey average would be 3.5%, and Supplier B would be credited with better acceptance relative to the other suppliers in the survey.
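The z-score standardization and per-survey acceptance normalization described in the notes above can be sketched roughly as follows. This is an illustrative Python sketch, not Lucid’s actual scoring code; the function names (`z_scores`, `relative_acceptance`) and the exact crediting formula are assumptions for demonstration.

```python
import statistics

def z_scores(values):
    """Standardize raw scores: (x - mean) / stdev, so metrics with
    different units can be compared on a common scale."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def relative_acceptance(rejection_rates):
    """Within a single survey, credit each supplier by how far its
    rejection rate falls below the survey average (positive = better,
    negative = worse than the average supplier on that survey)."""
    avg = statistics.mean(rejection_rates.values())
    return {supplier: avg - rate for supplier, rate in rejection_rates.items()}

# The example from the notes: Supplier A rejects 5% and Supplier B 2%,
# so the survey average is 3.5% and Supplier B scores above average.
rates = {"Supplier A": 0.05, "Supplier B": 0.02}
scores = relative_acceptance(rates)
```

Scoring suppliers per survey, rather than pooling all completes, prevents a supplier that happens to field easier surveys from looking artificially strong.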
Why isn’t panel ABC ranked? I know they’re high quality.
Not all panel companies participate in the rankings. Fulcrum members are required to participate in the quality program as a condition of gaining access to the marketplace.
How has the Quality ranking impacted the industry?
For the first time, buyers are able to measure relative quality of supply over time. As with any marketplace, buyers tend to only purchase from high quality suppliers. Suppliers are sensitive to this, and work hard to provide consistent and thoughtful respondents. This requires Lucid to actively monitor, consult with, and in some cases, remove poor performing suppliers in order to ensure a high quality pool of respondents.
Can suppliers “game the system”?
Sure… they can try. However, Lucid has taken several steps to eliminate “gaming”. For instance, along the way, Lucid introduced the ability to field the quality survey in a fully blinded capacity, utilizing our robust supplier API integrations. Lucid also uses absolute measures, such as Acceptance Rate (the inverse of rejected survey completes), to score supplier strength. Finally, composite scoring at the source level provides a dynamic view into changes in source composition over time that neither suppliers nor respondents can obfuscate.
When are you launching new markets outside of the US?
We started collecting data in Canada and the UK four quarters ago and will have the data required to release rankings in those markets in the near future. We will begin collecting data in new international markets soon – let us know if you’d like to see anything in particular.
Honestly, we underappreciated the unique role that Lucid could play in not only reporting, but improving, sample quality. We now recognize, and fully embrace, our responsibility to serve as a gatekeeper of sample quality. To drive this forward, we’re launching a new fielding methodology that will improve the program’s scientific rigor and respondent experience, and fully automate its fielding.
How will we accomplish this? Respondents will now be randomly assigned to the Quality survey throughout the quarter. By inserting the quality measurements in the Fulcrum prescreener, our new methodology ensures improved pacing for increased representativity over time.
Additionally, we made sure to fully automate the program – that means no manual intervention by Lucid, buyers, or supply partners. We implemented a double-blind methodology to ensure clear objectivity. Best of all, we’ve taken our twelve-minute quality survey and broken it into short modules, improving the respondent experience while maintaining data integrity – welcome to the sample future.
Quality is a core component of the human answers industry. Lucid is determined to advance the industry with a robust and rigorous quality program. We will continue to fine-tune this program based on client feedback and reinforce successful practices.