The Standard for Quality in Research Technology

How can we ensure that you’re getting high-quality survey responses? The Lucid Quality Program provides a unique solution for evaluating supply partners and respondents. Using independent, third-party data specialists, we measure our supply partners on the three most important aspects of survey sample: response quality, accepted completes, and consistency.

This exclusive offering allows Lucid customers to run projects with confidence, knowing they’re getting high-quality survey responses. In the spirit of transparency, we publish results each quarter – and current scores can be viewed anytime.


The Science Behind Quality Scoring

We conduct a rigorous assessment of our supply partners that is designed to provide customers (sample buyers) with insight into their sample quality. All eligible supply partners are required to participate in the program. 

After being evaluated, supply partners are issued a QScore, Acceptance Score, and Consistency Score.

QScore

QScore is a quality grade based on standard survey response measures, such as time spent on the survey, attentiveness, and the quality of open-ended responses.

Acceptance

Acceptance Score measures the acceptance rate of survey completes for individual supply partners. “Acceptance” refers to completes that meet buyers’ quality standards and, as a result, are accepted by those buyers. To define a benchmark for acceptance, a weighted average of all accepted completes in the Lucid Marketplace is measured. Then, to calculate Acceptance Score, each supply partner’s completes are measured at the project level to determine how close they come to that benchmark.
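
As a rough illustration of the mechanics described above, the sketch below computes a completes-weighted marketplace benchmark and each supplier’s distance from it. The completes-weighting scheme, the function name, and the counts are assumptions for illustration only; Lucid’s actual formula is not published here.

```python
# Illustrative sketch of the Acceptance Score idea: a weighted average
# acceptance rate across the marketplace serves as the benchmark, and
# each supplier is scored by its distance from that benchmark.
# Weighting by total completes is an assumption, not Lucid's formula.

def acceptance_scores(suppliers):
    """suppliers: dict of name -> (accepted_completes, total_completes)."""
    total_accepted = sum(a for a, t in suppliers.values())
    total_completes = sum(t for a, t in suppliers.values())
    benchmark = total_accepted / total_completes  # completes-weighted average
    return {
        name: (accepted / total) - benchmark  # signed distance from benchmark
        for name, (accepted, total) in suppliers.items()
    }

# Hypothetical project-level counts for two suppliers:
scores = acceptance_scores({
    "Supplier A": (940, 1000),
    "Supplier B": (880, 1000),
})
```

A positive value means a supplier’s completes are accepted more often than the marketplace benchmark; a negative value means less often, which matches the sign convention visible in the published Acceptance columns below.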

Consistency

Consistency Score measures a supply partner’s composition from quarter to quarter. “Composition” is the balance of respondent preferences and opinions within a given panel. These attitudinal responses are determined by respondents’ brand affinity, value-seeking behavior, technology adoption, movie and music usage, and gaming activities. Measuring composition can help researchers with tracking research.
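
One way to picture the quarter-over-quarter comparison described above: treat each quarter’s panel as a distribution over attitudinal segments and score how little it shifts. The segment names, the similarity metric (total variation distance rescaled to 0–100), and the function name are illustrative assumptions, not Lucid’s published method.

```python
# Illustrative sketch of the Consistency idea: compare a panel's
# attitudinal composition (share of respondents per segment) from one
# quarter to the next. 100 = identical composition; lower = more drift.

def consistency(q1, q2):
    """q1, q2: dicts mapping attitudinal segment -> share (each sums to 1)."""
    segments = set(q1) | set(q2)
    # Total variation distance between the two compositions:
    tvd = 0.5 * sum(abs(q1.get(s, 0.0) - q2.get(s, 0.0)) for s in segments)
    return 100.0 * (1.0 - tvd)

# Hypothetical segment shares for one panel across two quarters:
last_quarter = {"brand_favorable": 0.40, "value_seeker": 0.35, "early_adopter": 0.25}
this_quarter = {"brand_favorable": 0.42, "value_seeker": 0.33, "early_adopter": 0.25}
score = consistency(last_quarter, this_quarter)  # near 100 = stable panel
```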

Minimizing Sample Bias

To get accurate data, researchers must take great care to apply quotas – helping to prevent bias from over/under representation of certain groups. Similar care should be taken to manage latent sample characteristics, which are inherent in sample sources (due to differences in respondent recruitment strategies).
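
A minimal sketch of quota enforcement as described above, assuming hypothetical age-band cells and targets:

```python
# Quota management sketch: cap how many completes each demographic cell
# may contribute so no group ends up over-represented. Cell names and
# targets are hypothetical.

from collections import Counter

quotas = {"18-34": 300, "35-54": 400, "55+": 300}  # target completes per cell
fills = Counter()                                  # completes collected so far

def admit(age_band):
    """Return True if this respondent still fits the quota, else screen out."""
    if fills[age_band] >= quotas.get(age_band, 0):
        return False  # cell full (or unknown band): respondent is screened out
    fills[age_band] += 1
    return True
```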

Even when demographics are matched precisely, latent or unseen characteristics can influence data outcomes – and the degree of influence is driven by the subject matter of the survey. Our program monitors respondent values and behaviors, with regard to specific subject matter, so it can be managed by sample buyers.

Brand Preference

A common objective of survey research is to measure the performance and impact of brands. By selecting sample sources that do not skew too brand-favorable or averse, clients can get the best reads on their brands. Lucid regularly creates sample blends that can mitigate this potential source of bias.

Value-Seeking

Similar to brand preference and usage, some sources may skew heavier or lighter in terms of consumers who use coupons, shop sales, or exhibit extreme bargain hunting. Selecting sample sources that lean too far in either direction can yield biased insight that is not reflective of the population as a whole. This otherwise latent characteristic is quantified by our Quality Program.

Tech

People who are avid, early adopters of technology tend to have different attitudes around innovation and change than the general population. Using a sample source that skews too tech savvy or laggard can bias research outcomes – especially if the study is related to technology. Scoring sample sources on tech usage helps inform buyers so pitfalls can be avoided.

Movies & Music

For entertainment research in particular, obtaining sample that skews too heavy or light in terms of media consumption can bias results. This dimension varies greatly among those who otherwise appear similar demographically – some people simply don’t listen to music or watch many movies. Understanding the tendencies of certain sample sources helps prevent research missteps.

Gaming

Since some sample sources incentivize respondents in currencies used for playing games, a quick check of game-playing proclivity can reveal whether a source looks mainstream or skewed. This is particularly compelling for research in the video game category, but a sample source where a large share of respondents spends significant time gaming could also signal other biasing characteristics.

Fully Transparent Supplier Scores

October 1, 2020 - March 31, 2021 Supplier Scores: United States

QScore: Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance: Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency: Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
AdGate Media~* | 90.29 | -0.4489920172 | 92.42
Attapoll~* | 92.60 | -0.0842254019 | 94.14
Besitos Corporation~* | 91.95 | -0.2577706901 |
Bitburst~** | 93.01 | -0.2030461971 | 90.06
BizRate~** | 90.45 | -1.3654527028 |
Bovitz~** | 92.70 | -0.1068332330 |
Branded Research* | 92.92 | 0.0669412420 | 96.89
Centiment~~** | 94.16 | 0.2042969697 |
Consultancy Services LLC~~** | 94.08 | 0.3883250847 | 92.41
CrowdNautics, Inc.~~** | 86.89 | -1.0198263213 |
Dalia Research* | 88.26 | -0.3866762083 | 95.92
DISQO | 90.76 | -0.2888361865 | 93.12
Embee Mobile | 88.46 | -0.2059730907 | 93.84
General Research Lab~** | 90.57 | -0.4964090053 | 90.51
iAngelic Research~** | 92.78 | -0.0957236689 |
InboxDollars~* | 92.70 | -0.0390281129 | 95.34
inBrain.ai~# | 90.36 | -0.2542408682 | 93.77
inMarket~** | 86.84 | -0.1346200657 |
ISA (MySoapBox)~~** | 91.30 | 0.1086169160 |
Make Opinion GmbH~* | 88.54 | -0.6720976330 | 90.49
MarketCube~# | 85.73 | -1.2322642418 | 93.25
MARU Group (SA)~~** | 93.02 | 0.2484182100 |
MindField Research~~** | 92.03 | |
Point Club~# | 90.42 | -1.0173076294 | 91.76
Prodege~# | 92.81 | -0.0707306582 | 96.59
Publishers Clearing House~~** | 92.41 | 0.0790297285 |
Qmee~* | 92.83 | -0.0947854531 | 89.38
Revenue Universe | 90.53 | -0.2232727345 |
Screen Engine/ASI~* | 95.40 | 0.5492166492 | 96.28
SocialLoop~~# | 88.79 | -1.0697615730 | 74.70
Tap Research# | 86.81 | -0.5088563830 | 95.31
TapJoy~~* | 89.02 | -0.1375430964 | 91.02
Tellwut~~** | 91.98 | 0.0955163544 |
TheoremReach | 89.65 | -0.2292409122 | 94.07
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight
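
For readers working with these sheets programmatically, the helper below interprets the tilde and asterisk markers attached to supplier names according to the legend above. The function name is ours; the “#” marker that appears on some rows is not defined by the legend, so it is ignored here.

```python
# Interpret the weighting markers from the score-sheet legend:
# tildes annotate QScore weighting, asterisks annotate Consistency
# weighting. Rows carrying "#" are not covered by the legend.

def interpret_markers(label):
    tilde_legend = {
        "": "no significant weighting",
        "~": "weighting between 1.5 and 3.0",
        "~~": "weighting over 3.0 (total n within 70% of target)",
    }
    star_legend = {
        "": "no significant weighting",
        "*": "weighting between 1.5 and 3.0",
        "**": "weighting over 3.0 (total n within 70% of target)",
    }
    tildes = "~~" if "~~" in label else ("~" if "~" in label else "")
    stars = "**" if "**" in label else ("*" if "*" in label else "")
    return {"qscore": tilde_legend[tildes], "consistency": star_legend[stars]}

info = interpret_markers("AdGate Media~*")
# info["qscore"] and info["consistency"] both report the 1.5-3.0 band
```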

October 1, 2020 - March 31, 2021 Supplier Scores: Australia


Supplier | QScore | Acceptance | Consistency
AdGate Media | 87.68 | -0.4501645848 | 87.24
Attapoll | 93.02 | 0.2219130016 | 87.27
Bitburst | 90.73 | -0.1455172684 | 93.14
Competition Panel | 93.36 | 0.2809039036 | 95.35
Dale Network | 89.72 | 0.2657834954 | 94.25
Dalia Research | 89.37 | -0.1113220180 | 96.91
GMO Research# | 92.75 | 0.0351068622 | 89.14
GrowthOps | 89.54 | | 94.50
iAngelic Research | 91.09 | -0.0902240797 |
iGain (SayForExample) | 92.48 | | 88.34
inBrain.ai | 92.08 | -0.1987626333 | 94.19
MarketCube | 91.76 | -0.0101067649 | 93.72
On Device Research | 92.13 | -0.0239362456 | 89.88
Persona.ly | 90.50 | -0.2896703571 | 93.46
Point Club | 90.88 | 0.2634502621 | 96.52
Prodege | 93.48 | 0.1554443539 | 97.32
Qmee | 94.54 | 0.0622065067 | 95.04
Researchify Pty Ltd | 90.12 | 0.5937470945 | 83.38
Revenue Universe | 86.35 | -0.2034307174 | 92.78
Rewardia | 93.05 | 0.1041465966 | 95.46
Tap Research | 88.29 | -0.3022163409 | 94.78
TapJoy | 90.73 | 0.0436937227 | 95.57
TheoremReach | 89.57 | -0.3020700998 | 93.82
Titan | 84.45 | |

October 1, 2020 - March 31, 2021 Supplier Scores: Brazil


Supplier | QScore | Acceptance | Consistency
A-K International | 93.08 | 0.1971508116 | 95.16
AdGate Media | 90.38 | -0.1115982011 | 92.52
Attapoll | 91.67 | 0.0143433941 | 95.69
Bitburst | 91.04 | -0.0485579381 | 92.28
ClixSense | 93.69 | -0.0359845217 | 93.63
Dale Network | 91.92 | 0.0993616313 | 94.80
Dalia Research | 91.62 | 0.1104818302 | 95.46
Dayos | 92.05 | -0.0469829574 |
inBrain.ai | 90.85 | -0.0239925253 |
Make Opinion GmbH | 91.15 | 0.0845652397 | 96.85
Market Agent | 93.14 | 0.0285095667 |
MarketCube | 92.18 | 0.0025127209 | 97.29
Mo Web | 93.65 | 0.5424083344 | 92.25
Neobux | 92.78 | -0.0024318951 | 92.59
On Device Research | 92.15 | -0.0167511772 |
Opinaia Panel | 91.66 | 0.0150216783 | 91.68
Persona.ly | 93.36 | 0.5846270131 | 97.43
Point Club | 91.09 | -0.1850220636 | 95.29
Prodege | 92.17 | -0.2948905774 | 91.42
Research on Mobile | 89.97 | -0.5321025145 | 92.68
Revenue Universe | 90.99 | -0.0204398949 | 94.85
Tap Research | 88.81 | -0.3202284527 | 97.16
TheoremReach | 89.95 | 0.0193119469 | 97.09

October 1, 2020 - March 31, 2021 Supplier Scores: Canada


Supplier | QScore | Acceptance | Consistency
AdGate Media~ | 88.97 | -0.3795506999 | 96.01
Attapoll~ | 88.45 | -0.1897898100 | 94.77
Bitburst~ | 91.01 | 0.0163339004 | 95.13
Branded Research | 93.21 | 0.1164646679 | 97.77
ClixSense~ | 92.04 | -0.0417637880 |
Dale Network | 89.21 | 0.2147543801 | 91.62
Dalia Research | 89.14 | -0.7210735970 | 89.75
Decision Analyst~~ | 92.77 | -1.1517824506 | 95.64
DISQO | 92.86 | 0.1166807609 | 98.28
iAngelic Research~~# | 92.65 | 0.0301062323 | 94.81
iGain (SayForExample) | 92.25 | | 96.37
InboxDollars (DR) | 92.87 | 0.2515463250 | 92.22
inBrain.ai | 92.88 | 0.0886185328 | 94.83
Make Opinion GmbH | 89.74 | -0.3563679849 | 88.99
MarketCube | 91.06 | 0.0276330894 | 94.32
MARU Group~ | 93.30 | 0.5414507333 | 85.14
On Device Research~~ | 90.53 | 0.4839934941 |
Point Club# | 91.80 | -1.1843729850 | 94.28
Prodege | 94.09 | 0.0763908146 | 95.54
Qmee | 93.17 | -0.0456695414 | 94.16
Research for Good (SS) | 91.97 | -1.2545924783 | 96.54
Research on Mobile~ | 89.95 | -0.0748843398 |
Revenue Universe | 91.58 | -0.4485083280 | 89.58
Splendid Research | 90.59 | 0.5716947954 | 93.35
Tap Research | 89.90 | -0.2260236301 | 94.19
TapJoy | 89.31 | -0.0000144247 | 96.72
Tellwut | 91.85 | 0.1706813187 | 87.18
TheoremReach | 89.78 | -0.1222951777 | 95.96

October 1, 2020 - March 31, 2021 Supplier Scores: China


Supplier | QScore | Acceptance | Consistency
Beijing Youli Technology Co., Ltd.~** | 87.81 | -1.1320708072 | 96.78
ClixSense~~** | 88.80 | -0.1698629166 | 84.80
Dalia Research~~** | 88.26 | -0.0412727834 | 97.28
Data100~~** | 88.37 | 0.2904964134 | 90.73
DataSpring~* | 91.13 | 0.2793399295 | 87.98
GMO Research~~** | 89.76 | 0.3563486236 | 78.87
Ignite Vision~~** | 90.05 | -0.2131115427 |
Interface Asia~~# | 87.86 | -0.9484028922 | 93.70
Juyanba~~** | 81.84 | -0.5640180134 |
KuRunData~** | 89.91 | 0.0502904295 | 96.29
Maiwen China~~** | 87.76 | | 98.16
MarketCube~** | 89.41 | 0.4352690815 | 91.19
Revenue Universe~~** | 84.33 | -0.3620403903 | 91.00
SampleBus Market Research Service Ltd~* | 86.74 | -0.2564974344 |
Tap Research~~** | 87.46 | -0.3787734786 | 88.34
TapJoy~~** | 85.89 | -0.3509863898 |
TheoremReach~~** | 86.82 | -0.8832191231 | 84.39

October 1, 2020 - March 31, 2021 Supplier Scores: France


Supplier | QScore | Acceptance | Consistency
A-K International | 91.01 | -0.1041514127 | 94.25
AdGate Media~* | 85.18 | -0.4365549693 | 93.68
Attapoll~ | 87.83 | -0.1403560662 | 97.08
Bitburst~** | 89.43 | -0.0011523343 | 91.30
ClixSense~~# | 89.09 | -0.0495154678 | 90.20
CreOnline | 91.13 | -0.1388353984 | 96.71
Dalia Research | 86.96 | -0.3162900662 | 92.67
Decision Analyst~~** | 92.04 | -0.2882563551 | 86.83
Devola* | 90.06 | -0.1044858316 | 88.34
Gaddin.com | 90.96 | 0.2192334636 | 94.88
Gamekit~** | 83.70 | -0.5804748605 | 92.31
Hiving~~** | 91.15 | 0.2140517118 | 88.26
inBrain.ai~~** | 89.03 | -0.2710360251 |
JS Media / Moolineo | 90.89 | -0.1049472697 | 92.57
Make Opinion GmbH~* | 87.54 | -0.5911608803 | 91.59
Market Cube | 88.24 | -0.1961226044 | 92.80
Mo Web | 90.15 | 0.3085937017 | 93.49
Neobux~* | 89.88 | -0.6324608898 |
On Device Research~* | 89.67 | -0.3135441563 | 96.15
Persona.ly~* | 88.84 | 0.0826473934 | 86.72
Point Club | 89.98 | 0.2517917295 | 91.69
Prodege | 91.18 | -0.0990502539 | 92.97
Research on Mobile~~** | 90.54 | 0.2372261802 |
Revenue Universe~* | 87.17 | -0.1192542649 | 82.51
Tap Research | 87.90 | -0.1780361110 | 90.32
TapJoy~* | 88.77 | -0.0744306636 | 96.62
TheoremReach | 89.73 | -0.2356169162 | 96.79

October 1, 2020 - March 31, 2021 Supplier Scores: Germany


Supplier | QScore | Acceptance | Consistency
A-K International# | 89.76 | 0.0719676493 | 93.80
AdGate Media~* | 89.40 | -0.8001300950 |
Appinio~~** | 92.93 | |
Attapoll~* | 89.91 | 0.1151114315 | 94.65
Bitburst~~** | 92.42 | 0.2502846050 | 90.30
ClixSense* | 90.36 | 0.0821758128 |
Dalia Research | 89.84 | -0.0316042278 | 89.47
Decision Analyst~~** | 91.14 | 0.4067384341 |
Hiving~~** | 88.71 | 0.2174066283 |
Make Opinion GmbH | 90.63 | 0.1203693312 | 95.07
Market Agent | 93.06 | 0.0864027156 |
MarketCube | 91.40 | 0.1294759511 | 92.84
Mo Web | 92.96 | 0.2864707585 |
On Device Research~* | 92.09 | -0.0424446301 | 89.92
Opinion Capital~~** | 90.56 | -0.0588887814 |
Panel Inzicht~~** | 88.37 | -0.0700431381 |
Persona.ly* | 91.54 | -0.2776865104 |
Point Club* | 90.70 | -0.3672337317 |
Prodege | 93.58 | 0.2251422571 | 96.11
Promio~~** | 90.19 | 0.2029125255 |
Research for Good (SS)** | 91.26 | -1.5064983547 |
Research on Mobile~~** | 90.01 | -0.3462905778 |
Revenue Universe* | 90.02 | 0.1759008964 |
Tap Research | 88.54 | -0.1904120193 | 92.10
TapJoy~~** | 87.47 | 0.0672944116 |
TheoremReach~* | 89.03 | -0.1144210063 | 84.28

October 1, 2020 - March 31, 2021 Supplier Scores: India


Supplier | QScore | Acceptance | Consistency
AdGate Media~* | 84.80 | -0.3055546243 | 87.62
Attapoll* | 88.28 | 0.0194542054 | 96.07
Bitburst~** | 85.97 | -0.1276941704 | 93.91
ClixSense~** | 86.91 | -0.1415134312 | 96.71
Dalia Research | 85.82 | -0.1061356878 | 85.90
inBrain.ai~~** | 82.45 | -0.2229618006 | 88.42
Make Opinion GmbH~~** | 87.63 | -0.3605085658 | 95.66
MarketCube# | 87.59 | -0.0639599829 | 95.34
Neobux~~** | 89.82 | -0.2136625234 | 90.90
Opinion Capital~~** | 90.44 | 0.0029412177 | 95.86
Persona.ly~* | 87.58 | 0.0668040136 | 94.55
Prodege~* | 91.21 | 0.0525849331 | 94.57
Research on Mobile~* | 86.26 | -0.2451379773 | 81.69
Revenue Universe~** | 88.23 | -0.3356254994 | 95.54
Tap Research | 86.15 | -0.1870002052 | 93.65
TheoremReach | 83.42 | -0.1499312178 | 94.82

October 1, 2020 - March 31, 2021 Supplier Scores: Italy


Supplier | QScore | Acceptance | Consistency
A-K International | 90.86 | 0.0883339150 | 97.52
AdGate Media~** | 88.29 | -0.2653353020 |
Attapoll | 91.24 | 0.2972959452 | 98.58
Bitburst | 90.47 | 0.1607201201 | 96.84
ClixSense | 91.75 | 0.2053630075 | 96.42
CreOnline | 92.29 | 0.3361306062 | 95.72
Dale Network | 91.01 | 0.7358177297 | 94.32
Dalia Research | 89.43 | 0.0539080252 | 97.34
Decision Analyst | 92.57 | 1.5284944608 | 96.81
Hiving~~** | 91.39 | 0.1467546681 | 89.25
Make Opinion GmbH~* | 90.41 | -0.0440022697 |
MarketCube | 91.30 | -0.1732397367 | 96.95
Mo Web | 91.33 | 0.2655823042 | 97.25
Neobux~~# | 90.35 | 0.2104084581 | 91.50
On Device Research~* | 91.31 | 0.1918195242 |
Persona.ly* | 91.13 | 0.2461304135 | 92.02
Point Club | 90.65 | -1.0482813672 |
Prodege~** | 91.70 | 0.1808980659 | 93.60
Research on Mobile~* | 89.75 | 0.1130355379 |
Revenue Universe~** | 88.25 | 0.2335180264 | 93.12
Splendid Research | 92.06 | 0.4495966192 | 93.93
Tap Research | 89.61 | -0.1344468082 | 92.36
TapJoy* | 88.97 | -0.3064441035 | 92.86
TheoremReach | 88.19 | -0.0690982136 | 94.38

October 1, 2020 - March 31, 2021 Supplier Scores: Japan


Supplier | QScore | Acceptance | Consistency
Dalia Research~* | 88.74 | -0.5865331020 | 86.37
GMO Research | 92.19 | -0.1987746139 | 89.40
MarketCube~** | 90.52 | -0.0832628536 | 92.50
Monitas (Gain, Inc.) | 91.33 | -0.2121569826 | 91.45
Research for Good~* | 92.97 | -0.5228406849 | 94.59
Revenue Universe~~** | 90.03 | -1.1170320794 |
Tap Research~* | 91.94 | -0.1696553478 | 96.40
TapJoy~** | 87.57 | -0.5289610857 | 86.79
TheoremReach* | 88.30 | -0.5733444156 | 79.58

October 1, 2020 - March 31, 2021 Supplier Scores: Mexico


Supplier | QScore | Acceptance | Consistency
A-K International~* | 92.81 | 0.9466575192 | 94.22
Attapoll | 91.02 | 0.1214671415 | 95.89
Bitburst~** | 89.94 | 0.2252249920 | 95.60
ClixSense~** | 93.34 | 0.2061128408 | 95.16
Dale Network | 91.51 | 0.2359492549 | 92.05
Dalia Research~** | 89.85 | -0.3331328609 | 94.16
inBrain.ai~~** | 91.86 | -0.6720488094 |
Innovative Hall~~** | 92.72 | -2.1115588999 | 88.14
Make Opinion GmbH~~** | 92.47 | -0.7425744744 |
MarketCube~ | 91.45 | 1.0756619637 | 96.92
Mo Web~* | 92.64 | 0.2832063485 | 96.83
Neobux~~** | 94.51 | -0.3036016382 | 93.59
Opinaia Panel | 92.11 | 0.3907306755 | 92.87
Persona.ly~~** | 92.37 | -1.7238285113 | 90.87
Point Club~** | 90.28 | -0.1186272826 | 88.78
Prodege~~** | 93.57 | 0.6895046708 | 93.77
Research for Good (SS)~~** | 91.50 | 0.6569292798 | 93.94
Research on Mobile~** | 90.16 | -0.5861054351 | 90.04
Revenue Universe~~** | 88.94 | -0.6280997503 | 88.25
Splendid Research~* | 92.30 | -0.5431958722 | 96.74
Tap Research~* | 89.03 | -0.0500947240 | 96.68
TheoremReach~* | 89.34 | 0.2287304257 | 96.60

October 1, 2020 - March 31, 2021 Supplier Scores: Netherlands


Supplier | QScore | Acceptance | Consistency
AdGate Media~~** | 82.52 | -0.7280119614 | 94.46
Attapoll | 87.25 | 0.0638001982 | 87.94
Bitburst~* | 88.40 | 0.1614314394 | 95.57
Dale Network | 87.94 | 0.4348483633 |
Dalia Research~* | 87.98 | 0.1163267819 | 92.41
Make Opinion GmbH~~** | 88.33 | -0.2756373906 |
MarketCube | 88.66 | 0.1785711928 | 93.47
Pabble B.V.~~# | 90.48 | 0.3825345900 | 95.33
Panel Inzicht~* | 91.19 | 0.6932478014 | 90.63
Persona.ly~** | 85.41 | 0.2137964364 | 85.90
Prodege~~** | 89.69 | 0.3571891706 | 87.52
Research on Mobile | 87.85 | 0.3020113757 | 93.20
Revenue Universe | 85.24 | 0.0555782434 | 93.33
TheoremReach~* | 88.15 | 0.1029524001 | 91.09

October 1, 2020 - March 31, 2021 Supplier Scores: Russia


Supplier | QScore | Acceptance | Consistency
A-K International | 91.67 | 0.1191824365 | 94.33
AdGate Media~~** | 88.68 | 0.1539250168 |
Amry Research | 93.11 | -0.5326666265 | 91.85
Attapoll~* | 91.35 | 0.0473665552 |
ClixSense | 92.59 | 0.1059415048 | 93.87
Dalia Research~* | 89.58 | -0.0652601681 | 88.28
Make Opinion GmbH~* | 91.00 | -0.1806334990 |
MarketCube | 90.42 | 0.1194775568 | 92.90
Mo Web | 92.32 | 0.1559736775 | 95.95
Neobux | 92.26 | 0.4471357020 | 97.45
Persona.ly~ | 91.98 | 0.0888240379 | 94.79
Point Club | 91.56 | -4.8439359623 | 95.91
Prodege~~** | 91.80 | -0.0706327524 |
Research on Mobile~** | 90.85 | -0.0265232963 | 89.82
Survey Everyone | 92.29 | 0.1374956081 | 89.54
TheoremReach~* | 89.05 | -0.2146872908 | 93.11
YouThink.io | 92.11 | 0.2004596914 | 92.42

October 1, 2020 - March 31, 2021 Supplier Scores: Spain


Supplier | QScore | Acceptance | Consistency
A-K International | 93.21 | -0.0049330468 | 98.77
AdGate Media~** | 88.63 | -0.3510503013 |
Attapoll | 91.06 | -0.6137613588 | 96.89
Bitburst* | 92.15 | -0.0664227777 | 94.90
ClixSense* | 94.85 | -0.3307854497 | 95.62
CreOnline* | 94.30 | 0.5238681373 | 97.74
Dale Network | 93.03 | 1.0187778073 | 94.88
Dalia Research | 91.95 | -0.0262576600 | 93.28
Decision Analyst~~** | 93.24 | -1.6672601749 | 86.27
Hiving~** | 92.76 | -1.0646234847 | 88.14
inBrain.ai~~** | 92.02 | -0.1442940541 |
Innovative Hall~~** | 93.89 | -0.5394003011 |
Make Opinion GmbH | 91.30 | -0.5276398263 | 93.56
MarketCube | 92.13 | -0.4336960592 | 96.61
Mo Web | 93.73 | 1.2406792797 | 94.12
Neobux~* | 93.18 | -0.2022679486 | 91.59
On Device Research~* | 93.20 | -0.1556134647 | 94.13
Persona.ly~* | 93.31 | 0.2535429092 | 95.31
Point Club | 92.84 | 0.4228197588 | 94.90
Prodege~* | 94.64 | 0.2270012228 | 92.97
Research on Mobile~** | 90.60 | -0.5242842383 |
Revenue Universe~* | 90.59 | -0.5781069679 | 91.56
Splendid Research~** | 93.40 | 0.8095902173 |
Tap Research | 90.37 | -0.4255993736 | 95.34
TapJoy~* | 91.60 | -1.2913863347 | 90.92
TheoremReach | 91.13 | -0.0401985506 | 96.84

October 1, 2020 - March 31, 2021 Supplier Scores: United Kingdom


Supplier | QScore | Acceptance | Consistency
AdGate Media~** | 88.95 | -0.8473143374 | 89.98
Attapoll~* | 89.47 | -0.1604851467 | 94.83
Bitburst~~** | 89.45 | -0.2475241565 | 79.42
Branded Research | 93.62 | -0.0000483236 | 96.14
ClixSense~~** | 90.19 | -0.5717492474 | 90.53
Dalia Research~* | 87.66 | -0.5419421305 | 88.44
Decision Analyst~~# | 92.59 | -0.1237211859 | 90.50
Hiving~** | 90.64 | -0.1738858929 | 81.95
iAngelic Research~* | 92.15 | -0.1492098100 | 88.61
iGain (SayForExample) | 91.64 | | 94.64
Inbox Dollars (UK) | 91.81 | 0.0333985047 | 95.00
inBrain.ai~* | 90.44 | -0.4825071709 | 91.71
Lux Surveys~~** | 90.65 | | 93.27
Make Opinion GmbH* | 90.45 | -0.3617023346 | 94.97
MarketCube | 89.01 | -0.1528240041 | 91.20
MindMover~~** | 91.68 | -0.1368776389 | 94.38
Mo Web~~** | 90.14 | -0.0171391574 | 96.27
MVUK - Kato~~** | 93.36 | | 83.84
On Device Research~* | 91.51 | -0.3118839629 | 84.94
Persona.ly~* | 91.68 | -0.5547796819 |
Pick Media Ltd~* | 93.04 | -0.0448918421 | 93.14
Point Club | 91.05 | -0.2255540188 | 94.37
Prodege | 94.50 | -0.0151692105 | 96.43
Qmee~* | 93.27 | -0.3142915042 | 93.16
Research for Good (SS) | 90.30 | -0.6125254242 | 91.84
Revenue Universe* | 88.77 | -0.2904694333 | 97.83
Splendid Research | 89.16 | -0.4484947828 | 93.21
Tap Research | 87.60 | -0.6217431158 | 97.51
TapJoy~* | 87.38 | -0.2834018089 | 95.86
TheoremReach* | 88.33 | -0.3407686279 | 91.50

October 1, 2020 - March 31, 2021 Supplier Scores: South Korea


Supplier | QScore | Acceptance | Consistency
Dalia Research | 87.6 | -0.2305257186 |
DataSpring | 92.03 | 0.0757394487 |
GMO Research | 92.23 | 0.2328223513 |
MarketCube | 84.43 | -0.4459744924 |
Panel Marketing Interactive | 89.41 | -0.2352998348 |
Research on Mobile | 86.31 | -0.5831480092 |
Revenue Universe | 84.40 | -0.5718243285 |
Tap Research | 88.55 | -0.3267823149 |
TheoremReach | 85.26 | -0.6091209556 |
