The Standard for Quality in Research Technology

How can we ensure that you’re getting high-quality survey responses? The Lucid Quality Program provides a unique solution for evaluating supply partners and respondents. Using independent, third-party data specialists, we measure our supply partners on the three most important aspects of survey sample: response quality, accepted completes, and consistency.

This exclusive offering allows Lucid customers to run projects with confidence, knowing they’re getting high-quality survey responses. In the spirit of transparency, we publish results each quarter – and current scores can be viewed anytime.

The Science Behind Quality Scoring

We conduct a rigorous assessment of our supply partners that is designed to provide customers (sample buyers) with insight into their sample quality. All eligible supply partners are required to participate in the program. 

After being evaluated, supply partners are issued a QScore, Acceptance Score, and Consistency Score.

QScore

QScore grades supply partners on standard survey response measures – for example, time spent on survey, attentiveness, and the quality of open-ended responses.

Acceptance

Acceptance Score measures the acceptance rate of survey completes for individual supply partners. “Acceptance” refers to completes that meet buyers’ quality standards and are therefore accepted by those buyers. To define a benchmark, a weighted average of all accepted completes in the Lucid Marketplace is calculated. Each supply partner’s completes are then compared to that benchmark, at the project level, to produce its Acceptance Score.
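
Lucid does not publish the exact formula, but the mechanics described above can be sketched. In this hypothetical Python sketch, the function name, field names, and numbers are all illustrative: each project compares a supplier’s acceptance rate to the marketplace rate, and the deviations are volume-weighted into a single signed score.

```python
# Hypothetical sketch of an Acceptance-style score. Lucid's actual
# formula is not published; field names and data here are illustrative.

def acceptance_score(projects):
    """Volume-weighted deviation of a supplier's acceptance rate
    from the marketplace benchmark, across projects."""
    deltas = []
    for p in projects:
        supplier_rate = p["supplier_accepted"] / p["supplier_completes"]
        market_rate = p["market_accepted"] / p["market_completes"]
        # Signed deviation from the benchmark, weighted by completes.
        deltas.append((supplier_rate - market_rate) * p["supplier_completes"])
    total_completes = sum(p["supplier_completes"] for p in projects)
    return sum(deltas) / total_completes

projects = [
    {"supplier_accepted": 95, "supplier_completes": 100,
     "market_accepted": 900, "market_completes": 1000},
    {"supplier_accepted": 40, "supplier_completes": 50,
     "market_accepted": 850, "market_completes": 1000},
]
print(round(acceptance_score(projects), 4))  # → 0.0167
```

Under this reading, a score near zero means a supplier tracks the marketplace benchmark, and the sign shows the direction of the deviation – consistent with the small signed Acceptance values published below.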

Consistency

Consistency Score measures a supply partner’s composition from quarter to quarter. “Composition” is the balance of respondent preferences and opinions within a given panel. These attitudinal responses are determined by respondents’ brand affinity, value-seeking behavior, technology adoption, movie and music consumption, and gaming activity. Measuring composition helps researchers keep tracking studies comparable from quarter to quarter.
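
The Consistency formula itself is not published. As a minimal sketch, assume composition is a distribution of attitudinal segments and that consistency reflects how little that distribution shifts between quarters – here measured with total variation distance and rescaled to 0–100. Function and segment names are ours, not Lucid’s.

```python
# Illustrative sketch only: Lucid's Consistency formula is not published.
# Compares a panel's attitudinal composition across two quarters using
# total variation distance, rescaled so identical quarters score 100.

def consistency_score(q1, q2):
    """q1, q2: dicts mapping attitudinal segment -> share of panel (sums to 1)."""
    segments = set(q1) | set(q2)
    tvd = 0.5 * sum(abs(q1.get(s, 0.0) - q2.get(s, 0.0)) for s in segments)
    return 100.0 * (1.0 - tvd)

q1 = {"brand_affine": 0.40, "value_seeker": 0.35, "early_adopter": 0.25}
q2 = {"brand_affine": 0.42, "value_seeker": 0.33, "early_adopter": 0.25}
print(consistency_score(q1, q2))  # a small shift scores just under 100
```

Any distance between distributions would work here; total variation is simply a common, easily interpreted choice for “how much did the mix change.”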

Minimizing Sample Bias

To get accurate data, researchers must take great care to apply quotas – helping to prevent bias from over- or under-representation of certain groups. Similar care should be taken to manage latent sample characteristics, which are inherent in sample sources due to differences in respondent recruitment strategies.
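
The quota check described above can be sketched in a few lines. The targets, observed shares, and tolerance below are made up for illustration; real quotas would come from census or client targets.

```python
# Minimal sketch of quota checking with made-up census targets.
# Flags demographic groups that are over- or under-represented.

targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}   # desired shares
sample  = {"18-34": 0.45, "35-54": 0.33, "55+": 0.22}   # observed shares

def quota_gaps(targets, sample, tolerance=0.05):
    gaps = {}
    for group, target in targets.items():
        diff = sample.get(group, 0.0) - target
        if abs(diff) > tolerance:
            gaps[group] = diff  # positive = over-represented
    return gaps

print(quota_gaps(targets, sample))  # flags 18-34 (over) and 55+ (under)
```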

Even when demographics are matched precisely, latent or unseen characteristics can influence data outcomes – and the degree of influence is driven by the subject matter of the survey. Our program monitors respondent values and behaviors, with regard to specific subject matter, so it can be managed by sample buyers.

Brand Preference

A common objective of survey research is to measure the performance and impact of brands. By selecting sample sources that do not skew too brand-favorable or averse, clients can get the best reads on their brands. Lucid regularly creates sample blends that can mitigate this potential source of bias.

Value-Seeking

Similar to brand preference and usage, some sources may skew heavier or lighter in terms of consumers who use coupons, shop sales, or exhibit extreme bargain hunting. Selecting sample sources that lean too far in either direction can produce biased insight that is not reflective of the population as a whole. This otherwise latent characteristic is quantified by our Quality Program.

Tech

People who are avid, early adopters of technology tend to have different attitudes around innovation and change than the general population. Using a sample source that skews too tech savvy or laggard can bias research outcomes – especially if the study is related to technology. Scoring sample sources on tech usage helps inform buyers so pitfalls can be avoided.

Movie & Music

For entertainment research in particular, obtaining sample that skews too heavy or light in terms of media consumption can bias results. This dimension varies greatly among those who otherwise appear similar demographically – some people simply don’t listen to music or watch many movies. Understanding the tendencies of certain sample sources helps prevent research missteps.

Gamer

Since some sample sources incentivize respondents in currencies used for playing games, a quick check of game-playing proclivity can reveal whether a source looks mainstream or skewed. This is particularly compelling for research in the video game category, but in any sample source, a large share of people spending significant time gaming could signal other biasing characteristics.

Fully Transparent Supplier Scores

June 14, 2021 - October 3, 2021 Supplier Scores: United States

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
AdGate Media* | 90.53 | -0.2818719562 | 92.42
Attapoll~** | 91.63 | 0.0135731575 | 96.24
Besitos Corporation* | 91.95 | -0.4202954869 | N/A
Bitburst~~** | 93.33 | -0.2223303197 | 96.71
BizRate | 88.52 | -0.0460317389 | N/A
Bovitz* | 92.70 | 0.2051644960 | N/A
Branded Research~** | 95.23 | 0.2400314756 | 95.47
Centiment~~** | 94.57 | 0.2286728280 | N/A
Consultancy Services LLC~~** | 93.02 | 0.3436774710 | 89.95
CrowdNautics, Inc.~~** | 88.47 | -0.4341460213 | N/A
Dalia Research* | 89.05 | -0.1452143880 | 95.92
Decision Analyst | 92.24 | 0.5499503848 | 81.77
DISQO* | 90.34 | 0.0867090216 | 88.81
Embee Mobile | 88.78 | 0.0290488376 | 67.89
Gamekit~* | 92.40 | 0.1165356974 | N/A
General Research Lab~ | 87.88 | -0.3768503412 | 97.01
iAngelic Research~* | 91.17 | -0.0447608993 | N/A
InboxDollars~~** | 93.23 | -0.5945365793 | 88.35
inBrain.ai~* | 88.64 | -0.2181036000 | 98.53
inMarket~~** | 80.33 | 0.0026646396 | N/A
ISA (MySoapBox)~~** | 90.46 | -0.0571390013 | 85.24
Make Opinion GmbH~** | 92.18 | -0.6266853089 | 90.49
MarketCube | 85.41 | N/A | 93.25
MARU Group (SA) | 93.02 | 0.0245085877 | N/A
MyPoints~~** | 94.23 | 0.0832317908 | 28.87
On Device Research* | 91.98 | -0.0073954790 | N/A
Opinion Capital | 90.37 | 0.0281314073 | N/A
Point Club* | 89.34 | -1.4334111150 | 91.76
Prodege~~** | 93.99 | 0.0832317908 | 92.89
Publishers Clearing House~~** | 87.24 | 0.1592762351 | 26.56
PureSpectrum~* | 91.95 | -0.2602103863 | N/A
Qmee~* | 91.85 | 0.0013225058 | 95.69
Research for Good (SS) | 92.64 | 0.3376058509 | N/A
Revenue Universe | 89.92 | -0.0630964452 | N/A
Screen Engine/ASI** | 94.82 | -0.2510358185 | 96.28
SocialLoop~** | 90.51 | -0.1259825722 | 93.76
Tap Research | 86.25 | -0.2424229836 | 89.89
TapJoy~~** | 87.76 | 0.0545594898 | 81.66
Tellwut** | 90.83 | 0.1581107556 | N/A
TheoremReach | 89.83 | -0.0788411337 | 89.11
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, but total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, but total n within 70% of target
N/A = new supplier to the program or insufficient data to weight
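
When working with these score sheets programmatically, the weighting markers can be split off the supplier label. A small sketch, following the legend above (the function name and band labels are ours):

```python
import re

# Splits a supplier label like "Attapoll~**" into name and weighting flags.
# Per the legend: tildes annotate QScore weighting, asterisks Consistency.

def parse_supplier(label):
    m = re.match(r"^(.*?)(~{0,2})(\*{0,2})$", label)
    name, tildes, stars = m.group(1), m.group(2), m.group(3)
    bands = {0: "none", 1: "1.5-3.0", 2: "over 3.0"}
    return {
        "name": name.strip(),
        "qscore_weighting": bands[len(tildes)],
        "consistency_weighting": bands[len(stars)],
    }

print(parse_supplier("Attapoll~**"))
# {'name': 'Attapoll', 'qscore_weighting': '1.5-3.0', 'consistency_weighting': 'over 3.0'}
```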

June 14, 2021 - October 3, 2021 Supplier Scores: Australia

Supplier | QScore | Acceptance | Consistency
AdGate Media~~** | 88.73 | -1.1416715780 | 84.71
Attapoll~~** | 92.90 | 0.3524962549 | 92.41
Bitburst~~** | 92.96 | -0.7689617733 | 91.99
Competition Panel~* | 94.14 | 0.5224677958 | 91.17
Dale Network~* | 89.01 | 0.3179967488 | 94.25
Dalia Research~~** | 91.40 | 0.0158660158 | 92.38
DISQO | 91.09 | 0.3738204218 | N/A
GMO Research~~** | 91.74 | 0.3354742048 | 91.42
GrowthOps~~** | 89.65 | N/A | 84.80
iAngelic Research | 91.49 | 0.2123381665 | N/A
inBrain.ai~~** | 92.84 | 0.1875339002 | 90.82
Make Opinion GmbH~* | 92.70 | 0.1605488893 | 96.33
MarketCube~~* | 90.55 | N/A | 92.67
On Device Research~** | 93.06 | 0.2996422859 | 91.15
Persona.ly | 92.29 | -3.0632925856 | 93.46
Point Club | 92.15 | -0.5150624591 | 94.44
Prodege~* | 93.81 | 0.3486596738 | 97.61
Qmee~* | 94.24 | 0.2054354590 | 96.07
Research for Good (SS) | 90.27 | 0.2466565631 | N/A
Researchify Pty Ltd~~** | 91.69 | 1.8814865666 | 80.59
Revenue Universe~* | 87.32 | -0.1552278847 | 93.83
Rewardia~** | 92.83 | 0.3131697687 | 94.27
Tap Research~* | 88.75 | -0.1022202528 | 96.91
TapJoy~* | 92.09 | 0.2183713715 | 93.53
TheoremReach~~** | 90.94 | 0.0070274909 | 95.30
Titan~~** | 84.08 | N/A | N/A

June 14, 2021 - October 3, 2021 Supplier Scores: Brazil

Supplier | QScore | Acceptance | Consistency
A-K International~* | 92.21 | 0.3140455011 | 96.39
AdGate Media~~** | 92.34 | 0.1396535851 | 94.56
Attapoll~* | 92.27 | -0.0661974977 | 95.86
Bitburst~~** | 92.11 | 0.0163282702 | 92.06
ClixSense~* | 92.95 | 0.1663531061 | 96.54
Dale Network~* | 91.19 | 0.0484204532 | 94.25
Dalia Research~~** | 92.18 | -0.0927079248 | 96.00
Dayos~* | 92.62 | -0.0713751605 | 96.47
inBrain.ai~~** | 91.09 | 0.4684892544 | 96.04
Make Opinion GmbH~~** | 91.45 | 0.3512742346 | 90.98
Market Agent~* | 93.10 | 0.4049342196 | 94.58
MarketCube | 91.08 | N/A | 96.49
Mo Web~~** | 93.28 | 0.4165857304 | 91.43
Neobux~** | 92.68 | 0.2616753048 | 94.54
On Device Research~~** | 93.02 | -0.0171261259 | 94.03
Opinaia Panel* | 90.46 | -0.1490401376 | 91.82
Persona.ly~** | 93.16 | -0.0340193598 | 94.87
Point Club~** | 92.15 | -0.6459426782 | 92.92
Prodege~~** | 92.80 | 0.0306390684 | 90.08
Research for Good (SS) | 91.25 | N/A | N/A
Research on Mobile~~** | 91.13 | -0.0205898819 | 90.72
Revenue Universe~~** | 91.80 | -0.1581280259 | 93.26
Tap Research~~** | 90.21 | -0.0889700992 | 96.28
TapJoy~~** | 89.72 | -0.0306017595 | N/A
TheoremReach~~** | 89.95 | 0.2267676532 | 94.24

June 14, 2021 - October 3, 2021 Supplier Scores: Canada

Supplier | QScore | Acceptance | Consistency
A-K International | 91.41 | 0.3182602698 | N/A
AdGate Media~~** | 89.07 | -0.0158586834 | 93.32
Attapoll~~** | 87.57 | 0.2248756121 | 91.50
Bitburst~~** | 90.02 | 0.0901288116 | 92.53
Branded Research | 94.71 | 0.3707150357 | 96.50
ClixSense | 90.96 | -1.1313897982 | 93.30
Dale Network~* | 89.84 | 0.2288391588 | 87.65
Dalia Research~~** | 88.31 | 0.1834924443 | 95.88
Decision Analyst | 92.97 | 0.3525571934 | 91.95
DISQO* | 93.70 | 0.3709468412 | 92.95
General Research Lab | 94.16 | 0.3321120362 | 89.60
iAngelic Research | 91.21 | 0.2948050028 | 94.81
InboxDollars (DR) | 92.41 | 0.3099889316 | 97.33
inBrain.ai~* | 93.20 | 0.3552118971 | 95.67
Make Opinion GmbH~~** | 91.49 | -0.0107526703 | 93.51
MarketCube~~** | 88.69 | N/A | 93.33
MARU Group~~** | 90.57 | 0.3790241237 | 90.71
On Device Research | 90.53 | 0.1998324198 | N/A
Point Club | 91.97 | 0.0956118817 | 94.83
Prodege~* | 95.01 | 0.3322519170 | 97.41
Qmee~~** | 92.70 | 0.2724187614 | 94.21
Research for Good (SS) | 91.98 | 0.2762380831 | 93.22
Research on Mobile~~** | 88.48 | -1.3665725068 | 94.25
Revenue Universe~** | 88.23 | 0.1295533850 | 91.19
Splendid Research | 91.05 | 0.2414936754 | 92.26
Survey Everyone | 88.94 | 0.2792999718 | N/A
Tap Research~* | 91.47 | 0.1601598799 | 95.11
TapJoy~~** | 90.55 | 0.3151741300 | 97.78
Tellwut~* | 92.77 | 0.3236976014 | 92.60
TheoremReach~** | 91.60 | 0.0679576260 | 95.35

June 14, 2021 - October 3, 2021 Supplier Scores: China

Supplier | QScore | Acceptance | Consistency
Beijing Youli Technology Co., Ltd.~~** | 87.35 | -1.0659682674 | 90.14
ClixSense~~** | 90.67 | 0.4930919306 | 92.16
Dalia Research~~** | 89.57 | 0.3726177665 | 83.81
Data100~~** | 92.40 | 0.4978685067 | 94.81
DataSpring~~** | 91.07 | N/A | 94.25
GMO Research~~** | 90.98 | 0.7573423890 | 92.74
Ignite Vision | 89.92 | -0.2035637427 | 92.32
Insight Works~* | 89.41 | 0.2595711485 | N/A
Interface Asia~~** | 90.90 | 0.0834521528 | 87.41
Juyanba~* | 84.77 | -0.2842078561 | 93.06
KuRunData~~** | 91.41 | 0.6966154490 | 91.44
Maiwen China~~** | 89.75 | 0.4776351290 | 90.83
Make Opinion GmbH~~** | 86.55 | -0.2034234657 | N/A
MarketCube~~** | 88.53 | N/A | 95.61
MDQ~~** | 87.56 | -0.3209554062 | 82.50
Research for Good (SS) | 91.12 | -0.9160026145 | 92.52
Revenue Universe~~** | 81.57 | -0.6532719234 | 87.70
SampleBus Market Research Service Ltd | 78.75 | 0.1623842067 | 96.16
Tap Research~~** | 83.66 | -0.2370866430 | 86.40
TapJoy~~** | 85.50 | -1.4804970208 | 89.43
TheoremReach~~** | 86.44 | -0.6533116004 | 84.93

June 14, 2021 - October 3, 2021 Supplier Scores: France

Supplier | QScore | Acceptance | Consistency
A-K International~~* | 90.31 | 0.2522731998 | 86.75
AdGate Media~* | 86.74 | -0.0559221610 | 91.60
Attapoll~~* | 87.25 | 0.2743824331 | 92.32
Bitburst~~ | 89.34 | 0.3090491580 | 95.16
ClixSense~* | 90.16 | -0.3348320036 | 92.20
CreOnline* | 90.25 | -0.0106182606 | 93.04
Dalia Research | 88.04 | 0.0422740090 | 97.37
Decision Analyst~~ | 91.42 | 0.1834590959 | 87.34
Devola~* | 89.39 | 0.0982383213 | 93.99
Gaddin.com | 90.58 | 0.3164207721 | 95.16
Gamekit | 84.51 | 0.1092348723 | 92.31
Hiving~~* | 89.96 | 0.1301337923 | 90.48
inBrain.ai~ | 89.32 | 0.3140721844 | 90.71
JS Media / Moolineo | 90.51 | 0.2588469707 | 98.07
Make Opinion GmbH~ | 86.74 | 0.1996314947 | 87.72
Market Cube~** | 89.26 | N/A | 92.05
Mo Web~* | 89.46 | 0.2096896186 | 90.15
Neobux~* | 88.55 | -4.3002887163 | 97.93
On Device Research~* | 88.53 | 0.1042785623 | 94.15
Persona.ly~~ | 88.72 | -2.4872319431 | 96.68
Point Club~* | 89.08 | -0.0510830631 | 96.37
Prodege~* | 90.12 | 0.2327070920 | 96.07
Research on Mobile~* | 88.83 | 0.6693336022 | 92.05
Revenue Universe~~* | 86.93 | -0.2365708630 | 93.73
Tap Research | 87.87 | 0.1400238785 | 95.62
TapJoy~~* | 87.02 | 0.0627060414 | 92.54
TheoremReach~* | 88.71 | 0.2506399926 | 92.26

June 14, 2021 - October 3, 2021 Supplier Scores: Germany

Supplier | QScore | Acceptance | Consistency
A-K International** | 89.04 | 0.1766470380 | 85.42
AdGate Media~~** | 88.74 | 0.1449202979 | 90.74
Appinio | 92.93 | N/A | N/A
Attapoll~* | 91.37 | 0.2672317593 | 94.52
Bitburst~~** | 93.43 | 0.2289186356 | 93.52
ClixSense~~* | 89.51 | -0.0279940791 | 92.64
Dalia Research~* | 91.44 | 0.0958954798 | 89.27
Decision Analyst | 89.86 | 0.2817612079 | 79.49
Hiving~~** | 89.31 | 0.2534575750 | 83.68
inBrain.ai~~** | 90.26 | 0.4283463140 | 82.64
Make Opinion GmbH | 92.46 | 0.2105658770 | 96.91
Market Agent~** | 92.07 | 0.4893193128 | 91.65
MarketCube~* | 89.47 | N/A | 90.67
Mo Web | 93.50 | 0.3106283371 | 94.72
On Device Research~** | 92.92 | 0.3798825424 | 93.60
Opinion Capital** | 92.32 | 0.1423138459 | 83.79
Panel Inzicht | 86.73 | 0.2236506988 | 73.05
Persona.ly~~* | 90.44 | -1.3349796658 | 93.55
Point Club | 91.28 | -0.4566355176 | 90.08
Prodege | 93.95 | 0.3068488327 | 87.77
Promio~~** | 91.70 | 0.2881614994 | 85.67
Research for Good (SS) | 90.82 | 0.2131052386 | 93.37
Research on Mobile~~** | 89.25 | -1.3451383953 | 88.64
Revenue Universe~~** | 89.13 | 0.1071070796 | 90.61
Tap Research~* | 89.34 | 0.0661430278 | 95.72
TapJoy~** | 90.55 | 0.3228816805 | 82.64
TheoremReach~~* | 90.82 | 0.0205072199 | 87.45

June 14, 2021 - October 3, 2021 Supplier Scores: India

Supplier | QScore | Acceptance | Consistency
A-K International | 89.49 | 0.5240572342 | N/A
AdGate Media~** | 83.31 | -0.8738631328 | 93.59
Attapoll~* | 88.99 | 0.1487180425 | 94.61
Bitburst~~** | 83.78 | -0.2071575213 | 93.95
ClixSense~* | 89.03 | -0.0941204427 | 96.49
Crownit-Link | 86.67 | N/A | N/A
Dalia Research~** | 85.57 | -0.0342295264 | 95.56
Expresso | 82.57 | N/A | N/A
GMO Research | 89.07 | 1.2316799929 | 94.25
inBrain.ai~~** | 84.44 | 0.0698893708 | 97.07
Innovative Hall | 87.51 | 0.1472003633 | 91.48
Make Opinion GmbH~~** | 87.17 | 0.2388449104 | 95.60
MarketCube | 87.84 | N/A | 96.45
MarketXcel~~** | 88.53 | 0.3348588151 | N/A
Neobux~~** | 89.18 | -0.2189386524 | 96.33
Opinion Capital~~** | 88.99 | 0.4623837215 | 86.74
Persona.ly~~** | 88.30 | 0.6427068908 | 95.03
Point Club | 81.66 | -3.3553944848 | 97.47
Poll Pronto~~** | 88.35 | N/A | N/A
Prodege~~** | 90.01 | 0.3080730887 | 93.87
Research on Mobile~~** | 85.19 | -1.4151678373 | 92.76
Revenue Universe~** | 82.13 | -0.8972248242 | 93.08
Tap Research~** | 86.58 | -0.2114572101 | 95.81
TheoremReach~~** | 87.34 | -0.0529870723 | 93.19

June 14, 2021 - October 3, 2021 Supplier Scores: Italy

Supplier | QScore | Acceptance | Consistency
A-K International~* | 91.53 | 0.1801640913 | 95.94
AdGate Media~~** | 84.47 | 0.1652599974 | 83.71
Attapoll~* | 90.08 | 0.3312309926 | 95.31
Bitburst~* | 91.50 | 0.3135729457 | 96.80
ClixSense | 92.14 | 0.3328354336 | 96.73
CreOnline | 92.14 | -0.2506516689 | 95.24
Dale Network~ | 89.98 | 0.2097805534 | 94.16
Dalia Research | 87.66 | 0.0716244367 | 94.47
Decision Analyst~~** | 91.36 | -0.0226958946 | 89.20
Hiving~~** | 89.67 | 0.3904883058 | 95.34
Make Opinion GmbH | 89.88 | 0.4545772432 | 95.57
MarketCube | 89.78 | N/A | 94.93
Mo Web~* | 91.91 | 0.4573613887 | 93.44
Neobux | 90.66 | 0.1671333553 | 94.20
On Device Research~** | 90.50 | -0.2795413878 | 92.84
Persona.ly~* | 90.29 | -0.6561008783 | 91.90
Point Club~ | 89.90 | 0.4990002262 | 95.76
Prodege~~** | 91.46 | 0.4520647197 | 90.58
Research on Mobile~~** | 89.70 | -4.0210461352 | 91.97
Revenue Universe~** | 86.99 | -0.9590894713 | 90.20
Splendid Research | 92.56 | 0.3801322170 | 89.92
Surveyeah | 92.05 | 0.1073202549 | 93.64
Tap Research | 87.88 | -0.3980268168 | 94.57
TapJoy~~** | 88.80 | -0.1683045188 | 95.83
TheoremReach~* | 89.41 | -0.1720683862 | 91.72

June 14, 2021 - October 3, 2021 Supplier Scores: Japan

Supplier | QScore | Acceptance | Consistency
Dalia Research~* | 89.48 | -1.2804557609 | 90.87
GMO Research~ | 90.71 | -0.0059304756 | 80.26
MarketCube~** | 90.63 | N/A | 93.36
Monitas (Gain, Inc.) | 91.71 | 0.0060336133 | 87.70
Research for Good~* | 91.76 | -1.6355399305 | 94.27
Revenue Universe~~** | 86.56 | -0.4134665278 | 92.71
Tap Research~~** | 92.43 | -0.3306470909 | 92.99
TapJoy~~** | 89.18 | -1.3877600059 | 89.99
TheoremReach~** | 91.80 | -0.2268628115 | 93.00

June 14, 2021 - October 3, 2021 Supplier Scores: Mexico

Supplier | QScore | Acceptance | Consistency
A-K International~* | 94.19 | 0.5819841757 | 95.10
Attapoll~* | 92.17 | 0.2690074778 | 94.94
Bitburst~~** | 92.10 | 0.2481816913 | 95.87
ClixSense~** | 93.78 | 0.2212764043 | 94.78
Dale Network~* | 90.86 | 0.0611046851 | 98.45
Dalia Research~* | 91.08 | -0.1828804775 | 96.87
inBrain.ai~~** | 91.22 | 0.6802141662 | 92.71
Innovative Hall~~** | 92.56 | 0.0856738505 | 92.84
Make Opinion GmbH~~** | 92.60 | 0.6000479010 | 92.24
MarketCube~* | 91.15 | N/A | 95.46
Mo Web~* | 92.66 | 0.3600560593 | 96.99
Neobux~~** | 93.78 | 0.3706392769 | 94.27
Opinaia Panel~* | 91.24 | 0.2713322272 | 91.84
Persona.ly~** | 93.75 | -2.6127165170 | 95.59
Point Club~** | 93.20 | -0.8985884038 | 97.59
Prodege~~** | 93.40 | 0.0695441159 | 86.80
Research for Good (SS)~** | 92.31 | 2.6227307691 | 94.98
Research on Mobile~~** | 91.35 | 0.2357269711 | 90.19
Revenue Universe~~** | 90.58 | -3.0582365124 | 91.96
Splendid Research | 92.51 | -0.5382498478 | 96.31
Tap Research~~** | 90.11 | -0.0716820196 | 90.48
TapJoy~~** | 90.81 | 0.3548830447 | 94.70
TheoremReach~~** | 89.25 | 0.1868132512 | 94.58

June 14, 2021 - October 3, 2021 Supplier Scores: Netherlands

Supplier | QScore | Acceptance | Consistency
AdGate Media~~** | 85.91 | -0.8006338893 | 92.50
Attapoll~* | 85.57 | -0.0589450766 | 89.16
Bitburst~* | 88.24 | -0.0616520336 | 87.60
ClixSense | 91.00 | 0.1357293557 | N/A
Dale Network | 85.79 | 0.3047565358 | 70.16
Dalia Research~~** | 86.03 | -0.4027989179 | 93.47
Make Opinion GmbH~* | 90.01 | 0.0108351654 | 95.70
MarketCube | 88.52 | N/A | 79.97
Pabble B.V. | 90.34 | N/A | 95.33
Panel Inzicht~~** | 89.65 | 0.6194396826 | 66.28
Persona.ly** | 86.19 | -0.6612155014 | 91.00
Point Club | 87.97 | N/A | 91.38
Prodege~~** | 91.09 | 0.1178779596 | 92.96
Research on Mobile~~** | 86.44 | -0.2550495759 | 91.84
Revenue Universe~* | 83.48 | -0.4173221809 | 86.47
Tap Research | 87.84 | N/A | 97.20
TapJoy~** | 87.63 | 0.0070111679 | 92.33
TheoremReach~~* | 89.10 | -0.0690582953 | 93.62

June 14, 2021 - October 3, 2021 Supplier Scores: Russia

Supplier | QScore | Acceptance | Consistency
A-K International~~* | 91.44 | 0.2904354987 | 92.77
AdGate Media~~** | 89.50 | -0.2125603847 | 85.15
Amry Research | 92.16 | -0.1723277805 | 97.17
Attapoll~** | 90.46 | -0.1244300902 | 94.74
Bitburst~~** | 90.74 | 0.4397057374 | N/A
ClixSense~* | 92.84 | -0.8485182045 | 95.50
Dalia Research~* | 89.75 | -0.0384918218 | 89.59
Make Opinion GmbH~ | 91.95 | 1.3140210975 | 94.63
Marketagent | 93.24 | 1.0730761204 | 97.60
MarketCube | 91.04 | N/A | 94.92
Mo Web~* | 91.77 | 0.3626179618 | 93.99
Neobux~~** | 92.65 | 2.0501801273 | 94.26
Persona.ly~* | 91.50 | -0.5126682539 | 91.91
Point Club* | 91.71 | 0.7102977103 | 92.47
Prodege~~** | 91.35 | -0.4493925003 | 93.94
Research on Mobile~~** | 90.36 | -2.7751714383 | 90.77
Survey Everyone~* | 91.66 | 0.4367709497 | 96.60
TheoremReach~~** | 90.68 | -0.6859030902 | 94.05
Tiburon Research | 91.61 | N/A | N/A
YouThink.io~* | 91.43 | 1.1285954010 | 95.20

June 14, 2021 - October 3, 2021 Supplier Scores: Spain

Supplier | QScore | Acceptance | Consistency
A-K International | 92.60 | 0.3331544189 | 96.37
AdGate Media~~** | 90.72 | 0.3495829002 | 92.14
Attapoll* | 91.75 | -0.1093808857 | 96.95
Bitburst~* | 92.33 | 0.2837674549 | 94.93
ClixSense~* | 94.28 | -0.2095949818 | 94.22
CreOnline~** | 94.32 | -0.1877428947 | 93.61
Dale Network* | 92.89 | -0.1625448918 | 94.36
Dalia Research* | 91.69 | -0.3640564406 | 97.29
Decision Analyst~~** | 92.59 | -0.1256370825 | 95.05
Hiving~** | 91.64 | 0.2605661892 | 92.94
inBrain.ai~~** | 90.99 | -0.0894150389 | 91.49
Innovative Hall~** | 94.00 | -0.1186858494 | 88.75
Make Opinion GmbH~* | 92.60 | 0.4193448940 | 96.01
MarketCube* | 91.95 | N/A | 91.15
Mo Web | 93.38 | -0.3482815140 | 91.26
Neobux~* | 93.37 | 0.9665318552 | 94.91
On Device Research~* | 94.17 | -0.3195974779 | 95.56
Persona.ly~* | 92.38 | -1.5025455269 | 95.40
Point Club~* | 93.22 | 0.1134072697 | 96.80
Prodege~* | 93.39 | -0.1312752902 | 93.88
Research on Mobile~** | 90.78 | -0.6740733302 | 95.47
Revenue Universe~* | 89.60 | -2.2172341728 | 95.33
Splendid Research | 93.58 | 0.1253350753 | 96.25
Tap Research* | 89.57 | -1.0663696034 | 97.38
TapJoy~** | 89.83 | -1.2459895274 | 94.75
TheoremReach~** | 92.01 | -0.1965936018 | 96.20

June 14, 2021 - October 3, 2021 Supplier Scores: United Kingdom


Supplier | QScore | Acceptance | Consistency
A-K International | 89.08 | 0.3206586659 | 94.52
AdGate Media | 89.62 | -1.1617302138 | 86.68
Attapoll~~** | 88.41 | 0.0816820808 | 95.05
Bitburst~~** | 88.18 | -0.1440260894 | 89.67
Branded Research~* | 94.05 | 0.3019505985 | 93.85
ClixSense | 90.40 | N/A | 96.01
Dalia Research~~** | 89.43 | -0.4505056035 | 93.86
Dayos | 93.48 | -2.5340251512 | N/A
Decision Analyst | 92.59 | -0.1809903110 | 90.50
Expresso | 88.88 | N/A | N/A
General Research Lab~~** | 90.25 | -0.2887629473 | 90.40
Hiving~~** | 89.55 | 0.2598678942 | 92.14
iAngelic Research~* | 92.02 | -0.0131843325 | 93.54
Inbox Dollars (UK) | 92.72 | 0.3289272210 | 92.45
inBrain.ai~~** | 91.56 | -0.2532588106 | 92.82
Innovative Hall | 89.20 | -0.0089442536 | N/A
Lux Surveys** | 90.64 | N/A | 85.65
Make Opinion GmbH~ | 90.62 | -0.4748237046 | 93.03
MarketCube~* | 87.18 | N/A | 93.80
MindMover~~** | 93.27 | 0.1260636902 | 83.39
Mirats Insights~** | 65.88 | -0.0802937405 | N/A
Mo Web~~** | 91.26 | 0.2721537546 | 93.75
MVUK - Kato~** | 92.33 | N/A | 94.48
On Device Research~** | 93.03 | 0.1658877199 | 96.33
Persona.ly~* | 91.71 | -0.0903728149 | 90.55
Pick Media Ltd~** | 93.64 | 0.3920803584 | 96.94
Point Club | 90.52 | -0.5678698110 | 95.98
Prodege | 93.81 | 0.3762002586 | 96.22
Qmee~* | 93.46 | -0.0507265033 | 91.07
Research for Good (SS) | 88.34 | 0.1929358657 | 92.68
Revenue Universe~* | 85.67 | -0.1420333618 | 91.18
Splendid Research | 89.52 | -0.0089442536 | 87.01
Survey Everyone | 89.45 | 0.4146009986 | N/A
Tap Research~* | 88.48 | -0.3566700456 | 95.04
TapJoy~* | 89.56 | 0.1066814242 | 94.51
TheoremReach~* | 88.29 | -0.2208704813 | 96.73

June 14, 2021 - October 3, 2021 Supplier Scores: South Korea


Supplier | QScore | Acceptance | Consistency
Dalia Research~~** | 87.52 | -0.2626628932 | N/A
DataSpring | 91.74 | 0.3806580587 | N/A
GMO Research | 91.57 | 0.1682662575 | N/A
Make Opinion GmbH~~** | 86.43 | -0.3382540562 | N/A
MarketCube** | 84.43 | N/A | N/A
Panel Marketing Interactive | 89.41 | N/A | N/A
Persona.ly~~** | 90.78 | 0.0361297676 | N/A
Research on Mobile~~** | 85.15 | -0.3357388905 | N/A
Revenue Universe~** | 83.25 | -0.4251654661 | N/A
Tap Research~~** | 89.77 | -0.0324929393 | N/A
TapJoy~~** | 85.82 | -0.8723276908 | N/A
TheoremReach~~** | 87.22 | -0.3272640109 | N/A
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight
