The Standard for Quality in Research Technology

How can we ensure that you’re getting high-quality survey responses? The Lucid Quality Program provides a unique solution for evaluating supply partners and respondents. Using independent, third-party data specialists, we measure our supply partners on the three most important aspects of survey sample: response quality, accepted completes, and consistency.

This exclusive offering allows Lucid customers to run projects with confidence, knowing they’re getting high-quality survey responses. In the spirit of transparency, we publish results each quarter – and current scores can be viewed anytime.


The Science Behind Quality Scoring

We conduct a rigorous assessment of our supply partners that is designed to provide customers (sample buyers) with insight into their sample quality. All eligible supply partners are required to participate in the program. 

After being evaluated, supply partners are issued a QScore, Acceptance Score, and Consistency Score.

QScore

QScore is a quality “grade” based on standard survey response measures, such as time spent on survey, attentiveness, and open-ended response quality.

Acceptance

Acceptance Score measures the acceptance rate of survey completes for individual supply partners. “Acceptance” refers to completes that meet buyers’ quality standards and, as a result, are accepted by those buyers. To define a benchmark for acceptance, a weighted average of all accepted completes in the Lucid Marketplace is measured. Then, to calculate Acceptance Score, individual supply partners’ completes are measured against each other, at the project level, to determine how close they are to that benchmark.
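Lucid does not publish the exact Acceptance Score formula, so the following sketch is only an illustration of the mechanics described above: compute a completes-weighted marketplace benchmark, then measure each supplier’s own acceptance rate against it. The data and function names are hypothetical.

```python
# Illustrative sketch only: the benchmark weighting and the deviation
# measure below are assumptions, not Lucid's published formula.

def acceptance_benchmark(completes):
    """Weighted average acceptance rate across the marketplace.

    `completes` maps supplier -> (accepted, total); weighting each
    supplier by its share of total completes reduces to summing the
    accepted and total counts marketplace-wide.
    """
    total = sum(t for _, t in completes.values())
    return sum(a for a, _ in completes.values()) / total

def acceptance_deviation(supplier, completes):
    """How far a supplier's acceptance rate sits from the benchmark."""
    accepted, total = completes[supplier]
    return accepted / total - acceptance_benchmark(completes)

# Hypothetical marketplace of two suppliers: (accepted, total) completes.
completes = {"SupplierA": (930, 1000), "SupplierB": (450, 500)}
```

A supplier whose deviation is close to zero tracks the marketplace benchmark; large negative deviations flag sources whose completes buyers reject more often than average.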

Consistency

Consistency Score measures a supply partner’s composition from quarter to quarter. “Composition” is the balance of respondent preferences and opinions within a given panel. These attitudinal responses are determined by respondents’ brand affinity, value-seeking behavior, technology adoption, movie and music usage, and gaming activities. Measuring composition can help researchers with tracking research.
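Lucid’s Consistency Score formula is likewise unpublished; this sketch only illustrates the idea of comparing a panel’s attitudinal composition across two quarters, here using total variation distance between segment shares (an assumption, with hypothetical segment names).

```python
# Hypothetical illustration of quarter-over-quarter composition drift.

def composition_shift(q1, q2):
    """Total variation distance between two composition profiles.

    Each profile maps an attitudinal segment (e.g. 'early_adopter')
    to its share of the panel; shares in each profile sum to 1.
    A result near 0 means the panel's composition held steady.
    """
    segments = set(q1) | set(q2)
    return 0.5 * sum(abs(q1.get(s, 0.0) - q2.get(s, 0.0)) for s in segments)

q1 = {"brand_loyal": 0.40, "value_seeker": 0.35, "early_adopter": 0.25}
q2 = {"brand_loyal": 0.38, "value_seeker": 0.37, "early_adopter": 0.25}
```

A small shift between `q1` and `q2` would indicate a stable panel, which is what tracking studies depend on.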

Minimizing Sample Bias

To get accurate data, researchers must take great care to apply quotas – helping to prevent bias from over- or under-representation of certain groups. Similar care should be taken to manage latent sample characteristics, which are inherent in sample sources (due to differences in respondent recruitment strategies).

Even when demographics are matched precisely, latent or unseen characteristics can influence data outcomes – and the degree of influence is driven by the subject matter of the survey. Our program monitors respondent values and behaviors, with regard to specific subject matter, so it can be managed by sample buyers.

Brand Preference

A common objective of survey research is to measure the performance and impact of brands. By selecting sample sources that do not skew too brand-favorable or averse, clients can get the best reads on their brands. Lucid regularly creates sample blends that can mitigate this potential source of bias.

Value Seeking

Similar to brand preference and usage, some sources may skew heavier or lighter in terms of consumers that use coupons, shop sales, or exhibit extreme bargain hunting. Selecting sample sources that lean too far in either direction can provide biased insight not reflective of the population as a whole. This otherwise latent characteristic is quantified by our Quality Program.
Technology Adoption

People who are avid, early adopters of technology tend to have different attitudes around innovation and change than the general population. Using a sample source that skews too tech savvy or laggard can bias research outcomes – especially if the study is related to technology. Scoring sample sources on tech usage helps inform buyers so pitfalls can be avoided.

Movies & Music

For entertainment research in particular, obtaining sample that skews too heavy or light in terms of media consumption can bias results. This dimension varies greatly among those who otherwise appear similar demographically – some people simply don’t listen to music or watch many movies. Understanding the tendencies of certain sample sources helps prevent research missteps.

Gaming

Since some sample sources incentivize respondents with currencies used for playing games, a quick check of game-playing proclivity can reveal whether a source looks mainstream or skewed. This is particularly relevant for research in the video game category, but a sample source where a large share of respondents spends significant time gaming could also signal other biasing characteristics.

Fully Transparent Supplier Scores

August 10, 2020 - November 29, 2020 Supplier Scores: United States


QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).


Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.


Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
Attapoll~~** | 92.75 | -1.6777948260 | 95.56
BizRate~~** | 91.69 | -0.2533972802 | N/A
Bovitz~** | 93.21 | -0.0443816662 | N/A
Branded Research~** | 91.88 | -0.1520003552 | 94.07
Consultancy Services LLC~~** | 94.48 | 0.3694022770 | N/A
Dalia Research~* | 89.44 | -0.2039338757 | 90.79
DISQO* | 91.82 | -0.0305730572 | 96.00
Embee Mobile | 88.26 | -0.1150177094 | 82.54
General Research Lab~~** | 89.83 | -0.3417078354 | N/A
InboxDollars~** | 93.54 | 0.0028969645 | 86.87
inBrain.ai~~** | 90.89 | -0.3206572465 | N/A
Make Opinion GmbH~~** | 87.50 | -0.3437605778 | N/A
MyPoints** | 91.45 | 0.1461359119 | 90.29
Prodege~~* | 93.00 | 0.1461359119 | N/A
Qmee** | 93.21 | -0.1682739693 | 93.78
Research for Good (SS)~ | 92.64 | -0.8394512247 | N/A
Screen Engine/ASI~~* | 95.37 | 0.5903180308 | N/A
SocialLoop~~** | 90.70 | -0.1301165566 | N/A
Tap Research~* | 88.01 | -0.3700334566 | N/A
TapJoy~~** | 88.97 | 0.0004805684 | N/A
TheoremReach~* | 88.14 | -0.2499213537 | 90.33

On the score sheets for each country, the following applies:

QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, with total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, with total n within 70% of target
N/A = new supplier to program or insufficient data to weight
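The tilde/asterisk notation can be decoded mechanically. Below is a hypothetical helper (not part of any Lucid tooling) that splits a supplier cell such as “Attapoll~~**” into the name and the weighting notes defined by the legend above.

```python
import re

# Legend for both marker kinds: tildes flag QScore weighting,
# asterisks flag Consistency Score weighting.
NOTES = {
    0: "no significant weighting",
    1: "weighting between 1.5 and 3.0",
    2: "weighting over 3.0 (n within 70% of target)",
}

def decode_markers(cell):
    """Split a supplier cell into (name, qscore_note, consistency_note)."""
    # Lazy name match so trailing tildes/asterisks are captured separately.
    m = re.match(r"^(.*?)(~{0,2})(\*{0,2})$", cell.strip())
    name = m.group(1).rstrip()
    return name, NOTES[len(m.group(2))], NOTES[len(m.group(3))]
```

For example, `decode_markers("Attapoll~~**")` yields the name “Attapoll” with the over-3.0 weighting note for both scores.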

August 10, 2020 - November 29, 2020 Supplier Scores: Australia

Supplier | QScore | Acceptance | Consistency
AdGate Media~~** | 88.31 | -0.6171232369 | N/A
Attapoll~~** | 90.40 | 0.3256580834 | N/A
Competition Panel~** | 92.97 | 0.4696926791 | N/A
Dale Network~* | 89.47 | 0.0236572432 | 91.26
Dalia Research~** | 90.50 | 0.2143160871 | 95.34
GMO Research | N/A | 0.1417987429 | 89.14
GrowthOps~* | 88.10 | N/A | 89.59
iGain (SayForExample)~** | 92.28 | 0.1809608056 | N/A
inBrain.ai~~** | 92.42 | -0.3095270251 | N/A
MarketCube* | 91.37 | 0.0698330459 | N/A
On Device Research~~** | 92.47 | 0.1639062948 | N/A
Persona.ly~* | 89.15 | -1.8627043612 | N/A
Point Club | 91.10 | -3.4990713319 | 91.45
Prodege~* | 94.21 | 0.4163284254 | N/A
Qmee~~** | 94.28 | -0.2334083933 | 90.20
Researchify Pty Ltd~~** | 90.39 | 0.2122385351 | N/A
Revenue Universe~** | 86.18 | -0.6798670022 | N/A
Rewardia~* | 92.45 | 0.1634635121 | N/A
Tap Research~* | 89.29 | -0.3036366639 | N/A
TapJoy~** | 90.85 | 0.2545604390 | N/A
TheoremReach~~** | 91.56 | -0.3749124413 | N/A

August 10, 2020 - November 29, 2020 Supplier Scores: Brazil

Supplier | QScore | Acceptance | Consistency
A-K International~* | 93.65 | 0.2459644123 | 94.27
AdGate Media~~** | 87.16 | -0.2385419967 | N/A
Attapoll | 92.09 | 0.0073017848 | N/A
BitBurst GmbH~~** | 90.61 | -0.2384320088 | N/A
ClixSense~* | 93.86 | 0.0015068383 | N/A
Dale Network~* | 92.54 | 0.5866162150 | N/A
Dalia Research~~** | 92.25 | 0.0392324973 | 91.08
Make Opinion GmbH~~** | 90.18 | -1.6964134887 | N/A
MarketCube~ | 91.95 | -0.1159231284 | N/A
Mo Web~* | 94.18 | 0.1554131660 | N/A
Neobux~~** | 92.77 | -0.8251054931 | N/A
Opinaia Panel* | 91.35 | 0.0375974500 | N/A
Persona.ly~* | 94.19 | 0.4758342288 | N/A
Point Club~~** | 91.57 | -1.0467971933 | N/A
Prodege~~** | 92.06 | 0.3937047080 | N/A
Research on Mobile~~** | 90.70 | -0.0964750325 | N/A
Revenue Universe~~** | 90.38 | -0.2358771224 | N/A
Tap Research* | 88.74 | -0.2471736540 | N/A
TheoremReach~* | 90.60 | -0.1320596646 | 95.86

August 10, 2020 - November 29, 2020 Supplier Scores: Canada

Supplier | QScore | Acceptance | Consistency
AdGate Media~* | 89.69 | 0.1935108453 | N/A
Attapoll~~** | 89.20 | -0.0497517875 | 80.47
Branded Research | 93.49 | 0.1973724673 | 92.95
Dale Network | 88.81 | 0.2391928403 | 88.20
Dalia Research* | 90.18 | 0.0158799290 | N/A
Decision Analyst~~** | 93.53 | 0.1865194990 | N/A
DISQO | 92.90 | 0.2280528705 | 93.71
General Research Lab~~** | 94.16 | 0.3342917743 | 89.60
iGain (SayForExample)~* | 91.87 | 0.1930779021 | N/A
InboxDollars (DR) | 92.73 | 0.1212048815 | 93.23
inBrain.ai~~* | 93.35 | 0.2530539112 | N/A
Make Opinion GmbH~~** | 89.34 | -0.0374235245 | N/A
MarketCube | 90.90 | 0.0364909595 | 94.50
MARU Group~~** | 92.60 | 0.2058261885 | 84.79
Prodege | 93.89 | 0.2187278236 | 95.85
Qmee~* | 93.45 | 0.1660280901 | 88.98
Research for Good (SS) | 91.06 | 0.0765759112 | 95.63
Revenue Universe~* | 92.96 | 0.1939182301 | 96.37
Splendid Research~* | 90.61 | -0.0177547141 | 97.11
Tap Research | 90.05 | -0.1596144891 | 96.41
TapJoy* | 88.26 | 0.2680765882 | N/A
Tellwut~** | 92.77 | 0.1803531567 | 92.40
TheoremReach | 90.18 | 0.1303891094 | 90.11

August 10, 2020 - November 29, 2020 Supplier Scores: China

Supplier | QScore | Acceptance | Consistency
Beijing Youli Technology Co., Ltd.~~** | 87.19 | -0.6736735255 | N/A
ClixSense~~** | 89.70 | -0.0230182888 | N/A
Dalia Research~~** | 89.01 | -0.3779390888 | 91.50
Data100~~** | 87.53 | -0.0699153511 | 90.37
DataSpring~~** | 91.44 | -0.1150945148 | N/A
GMO Research~~** | 90.99 | -0.0726988289 | 86.05
Ignite Vision~~** | 89.35 | -0.3657453957 | N/A
Interface Asia~~** | 87.12 | -0.7693520394 | N/A
KuRunData~~** | 90.41 | 0.0228988462 | 87.14
Maiwen China~~** | 88.05 | -0.3559480156 | 88.61
MarketCube~~** | 89.44 | -0.2311892147 | 95.41
MDQ~~** | 90.62 | -0.8909210437 | N/A
Revenue Universe~~** | 85.25 | -0.5923817230 | N/A
Tap Research~~** | 87.91 | -0.7823928957 | N/A
TheoremReach~~** | 87.22 | -0.6846868941 | N/A

August 10, 2020 - November 29, 2020 Supplier Scores: France

Supplier | QScore | Acceptance | Consistency
A-K International* | 90.90 | -0.0196192415 | 90.67
AdGate Media~~** | 84.57 | -1.3818346539 | N/A
Attapoll~* | 87.54 | -0.1148440603 | 91.93
BitBurst GmbH~~** | 89.02 | 0.1680036244 | 93.34
ClixSense~~** | 90.24 | -3.0764947218 | N/A
CreOnline | 90.40 | 0.4109670251 | 96.76
Dalia Research~* | 87.47 | -0.1668830549 | 92.53
Decision Analyst~~** | 92.39 | 0.1015365798 | 90.90
Devola~* | 90.30 | 0.1036742120 | 93.59
Gaddin.com | 91.21 | 0.2525416687 | 92.87
Gamekit~~** | 81.87 | 0.7475496363 | N/A
Hiving~~** | 92.32 | 0.0481583804 | 90.61
JS Media / Moolineo~* | 89.73 | 0.0283546443 | N/A
Make Opinion GmbH~~** | 87.91 | -0.4741255875 | N/A
Market Cube | 88.13 | -0.0002211479 | 87.57
Mo Web~* | 89.93 | 0.1311453350 | 96.28
On Device Research~** | 89.32 | -0.1849366969 | 88.21
Persona.ly~~** | 88.79 | -0.9043230835 | 86.12
Point Club~* | 88.45 | -0.2478804664 | 93.73
Prodege~~* | 91.52 | 0.2048982853 | 91.08
Revenue Universe~** | 88.31 | -0.0723878584 | 92.60
Tap Research~ | 88.05 | -0.0119116544 | 94.87
TapJoy~~** | 89.20 | 0.1231030364 | N/A
TheoremReach~* | 89.05 | -0.2262471158 | 92.75

August 10, 2020 - November 29, 2020 Supplier Scores: Germany

Supplier | QScore | Acceptance | Consistency
A-K International~~** | 89.37 | 0.1115552906 | N/A
AdGate Media~~** | 90.07 | -0.1989069204 | N/A
Appinio~~** | 92.44 | 0.3558408995 | N/A
Attapoll~** | 89.88 | 0.0743223631 | N/A
BitBurst GmbH~~** | 93.84 | 0.2701416199 | N/A
ClixSense~** | 91.08 | 0.5872769734 | N/A
Dalia Research~** | 90.68 | 0.0697456481 | N/A
Decision Analyst~~ | 90.68 | 0.5770515614 | N/A
Hiving~~** | 89.33 | 0.1831961241 | N/A
Make Opinion GmbH~* | 90.19 | 0.4634405259 | N/A
Market Agent~** | 93.25 | 0.5497743456 | N/A
MarketCube~* | 90.64 | 0.2779814134 | 91.85
Mo Web* | 93.30 | 0.3639662374 | N/A
Panel Inzicht~~ | 88.09 | 0.0733957298 | N/A
Persona.ly~** | 90.83 | -3.3236161729 | N/A
Point Club* | 90.89 | -1.9208088657 | N/A
Prodege~* | 94.42 | 0.3884040163 | 91.31
Promio~~** | 91.40 | 0.2059042434 | N/A
Research for Good (SS)~* | 88.55 | 0.3513416168 | N/A
Revenue Universe~** | 90.87 | 0.0504020002 | N/A
Tap Research~* | 88.27 | -0.2984456517 | N/A
TapJoy~~** | 84.75 | 0.2762080152 | N/A
TheoremReach~** | 89.14 | -0.0455135841 | 83.20

August 10, 2020 - November 29, 2020 Supplier Scores: India

Supplier | QScore | Acceptance | Consistency
Attapoll~** | 88.19 | -0.0082090763 | 91.79
BitBurst GmbH | 88.69 | 0.0140564267 | N/A
ClixSense~~** | 86.75 | -0.2049580214 | 86.54
Dalia Research* | 85.85 | -0.0276615628 | 94.05
Expresso~~** | 82.57 | -0.0818880856 | N/A
Make Opinion GmbH~~** | 86.30 | -0.5524036545 | N/A
MarketCube~* | 86.13 | -0.9699504076 | 96.58
Neobux~~** | 89.79 | -0.3482054759 | 94.16
Persona.ly~~** | 88.92 | 0.0651309443 | 92.86
Prodege~** | 90.74 | 0.0645712106 | 87.79
Revenue Universe~~** | 87.93 | -0.4223477899 | 89.72
Tap Research~* | 86.64 | -0.1381489293 | 91.85
TheoremReach~* | 84.32 | -0.1642162939 | N/A

August 10, 2020 - November 29, 2020 Supplier Scores: Italy

Supplier | QScore | Acceptance | Consistency
A-K International | 91.35 | 0.0510664621 | N/A
Attapoll | 90.99 | -0.3233416547 | N/A
ClixSense | 92.30 | 0.2451160838 | N/A
CreOnline~* | 92.33 | -0.1576496511 | 92.11
Dale Network~** | 91.46 | 0.3457612462 | N/A
Dalia Research~ | 89.12 | -0.1855029835 | N/A
Decision Analyst | 91.63 | 0.0545063053 | 92.03
Hiving~~** | 89.25 | 0.4358450658 | 95.76
MarketCube | 91.26 | -0.8962667789 | 96.22
Mo Web | 91.71 | -0.0057467457 | 92.39
Persona.ly~* | 91.98 | -0.1919579630 | 95.45
Prodege~** | 91.38 | 0.7631475035 | N/A
Revenue Universe~~** | 89.28 | 0.1319250618 | N/A
Splendid Research | 92.65 | 0.3618734949 | 92.30
Tap Research | 89.11 | -0.5588233178 | N/A
TapJoy~~** | 89.24 | 0.1400131252 | N/A
TheoremReach | 87.45 | -0.3092599707 | N/A

August 10, 2020 - November 29, 2020 Supplier Scores: Japan

Supplier | QScore | Acceptance | Consistency
Dalia Research~* | 86.77 | -0.3714830692 | 89.46
GMO Research* | 91.10 | 0.6442661197 | 82.15
MarketCube~** | 87.38 | -0.1199600969 | N/A
Monitas (Gain, Inc.)~ | 89.59 | 0.2904404545 | 88.77
Research for Good~* | 92.52 | -0.1283472345 | N/A
Tap Research~~** | 92.07 | -0.8793370463 | 87.69
TapJoy~~** | 87.14 | -0.2981115322 | N/A
TheoremReach* | 83.60 | -0.5180165924 | N/A

August 10, 2020 - November 29, 2020 Supplier Scores: Mexico

Supplier | QScore | Acceptance | Consistency
A-K International~~* | 92.10 | -3.0159443474 | 89.47
Attapoll~~** | 90.23 | -0.5014286033 | 84.77
BitBurst GmbH~~** | 90.39 | 0.0695907922 | N/A
ClixSense~~** | 93.63 | -0.1061583628 | 91.24
Dale Network~* | 91.58 | 0.3960699212 | 87.41
Dalia Research~~** | 91.28 | 0.0520055204 | N/A
Innovative Hall~~** | 92.25 | -0.8453102410 | 87.38
MarketCube~* | 91.60 | 0.3608600236 | 89.67
Mo Web~* | 92.17 | 0.1178242830 | 93.73
Neobux~~** | 94.24 | 0.0024780095 | 92.97
Opinaia Panel~* | 91.96 | -0.0516174423 | 90.18
Persona.ly~~** | 92.95 | -0.0648530422 | 95.51
Point Club~~** | 89.62 | -2.5488096434 | N/A
Prodege~~** | 92.68 | 0.1549579433 | 89.33
Research for Good (SS)~~** | 91.31 | -0.4815826751 | 93.83
Revenue Universe~~** | 90.40 | -0.5432308196 | N/A
Splendid Research~~** | 93.52 | 0.0540130267 | 85.88
Tap Research~~** | 89.74 | -0.6665437347 | N/A
TheoremReach~~** | 90.00 | -0.6167675544 | N/A

August 10, 2020 - November 29, 2020 Supplier Scores: Netherlands

Supplier | QScore | Acceptance | Consistency
AdGate Media~~** | 80.56 | -0.8042676628 | N/A
Attapoll~** | 88.50 | -0.1511188096 | 90.04
BitBurst GmbH~~** | 88.87 | 0.1725110482 | N/A
Dalia Research~* | 90.16 | 0.2866298514 | 89.32
MarketCube | 88.11 | -0.0648765164 | 89.81
Pabble B.V.~~** | 89.94 | 0.0885087206 | N/A
Panel Inzicht~* | 91.76 | 0.0910510213 | 93.22
Persona.ly~~** | 88.24 | -1.6014904951 | N/A
Research on Mobile~~** | 87.90 | 0.4364352929 | N/A
Revenue Universe~** | 86.01 | 0.0694320196 | 85.79
Tap Research | N/A | -0.0224829632 | 97.20
TheoremReach~** | 85.92 | -0.0937255840 | 91.21

August 10, 2020 - November 29, 2020 Supplier Scores: Russia

Supplier | QScore | Acceptance | Consistency
A-K International* | 91.81 | 0.0477445381 | 92.38
Amry Research~~* | 92.63 | 0.0408717298 | 91.58
ClixSense~* | 91.89 | 0.1081283264 | N/A
Dalia Research~~** | 90.17 | 0.0347575281 | 81.40
MarketCube | 90.37 | 0.1239539298 | 92.75
Mo Web | 91.84 | 0.2948969624 | 93.45
Neobux~** | 92.03 | 0.2368173552 | 91.52
Persona.ly~* | 91.14 | 0.2745582413 | 91.67
Point Club* | 91.74 | 1.6286447524 | N/A
Research on Mobile~~ | 91.77 | -0.4336802146 | N/A
Survey Everyone~~** | 91.87 | 0.0774836909 | 86.29
TheoremReach~* | 88.74 | 0.0271705477 | 94.68
YouThink.io* | 91.87 | 0.7061512984 | 94.01

August 10, 2020 - November 29, 2020 Supplier Scores: Spain

Supplier | QScore | Acceptance | Consistency
A-K International | 93.37 | 0.4351065395 | 95.02
Attapoll* | 92.39 | -0.8997379614 | 95.82
BitBurst GmbH~~** | 91.86 | 0.4113284504 | 94.29
ClixSense~* | 95.20 | 0.3011229992 | 93.21
CreOnline~* | 94.28 | 0.4952197295 | 92.39
Dale Network | 93.46 | 0.9307973540 | 90.03
Dalia Research~* | 90.61 | -0.2095864221 | N/A
Decision Analyst~~** | 93.26 | -0.4810450847 | N/A
Hiving~~** | 92.98 | 0.2065267607 | 91.53
Make Opinion GmbH~** | 89.77 | -1.5593636355 | N/A
MarketCube | 92.00 | -0.1406232950 | 97.31
Mo Web | 93.15 | 0.3820059751 | 94.59
Neobux~~** | 93.21 | 0.0992286563 | 95.71
On Device Research~* | 93.61 | -0.5681519730 | 94.43
Persona.ly~* | 93.19 | -0.6350901791 | 91.28
Point Club | 92.18 | -1.9208088657 | N/A
Prodege~** | 94.90 | 0.1806783164 | 92.85
Revenue Universe~~** | 89.66 | -0.3520489989 | N/A
Tap Research~* | 90.75 | -0.7699316717 | 91.95
TapJoy~~** | 91.27 | 0.0942273478 | N/A
TheoremReach | 90.82 | -0.9009668687 | 94.78

August 10, 2020 - November 29, 2020 Supplier Scores: United Kingdom

Supplier | QScore | Acceptance | Consistency
A-K International~~** | 89.08 | -0.0620370326 | N/A
AdGate Media~~** | 88.72 | -0.6045531197 | 61.30
Attapoll~** | 90.66 | -0.1355910514 | 89.48
BitBurst GmbH~~** | 89.99 | -0.3782680641 | 85.66
Branded Research | 92.98 | 0.2420759954 | 94.93
Dalia Research~** | 88.25 | -0.3448539524 | 87.48
Decision Analyst~~** | 92.62 | -0.1678910199 | N/A
General Research Lab** | N/A | -0.2719190133 | 90.40
Hiving~* | 89.60 | -0.0513440479 | 90.06
iAngelic Research~~** | 93.25 | 0.0909707311 | N/A
iGain (SayForExample) | 92.24 | -0.2471369588 | N/A
Inbox Dollars (UK) | 92.09 | 0.0853968804 | N/A
inBrain.ai~~* | 91.58 | -0.7094172039 | N/A
Lux Surveys~~** | 90.91 | N/A | N/A
Make Opinion GmbH~** | 89.47 | 0.4538462850 | N/A
MarketCube | 87.86 | -0.1861393233 | 93.90
MindMover~~** | 93.03 | 0.0506870841 | N/A
Mo Web~* | 90.45 | 0.1079492295 | 90.12
MVUK - Kato~~** | 92.67 | N/A | 93.07
On Device Research~** | 90.24 | -0.3639322021 | 93.43
Pick Media Ltd~~** | 93.00 | 0.2492651495 | N/A
Point Club | 92.30 | -0.7712523019 | 91.56
Prodege | 94.73 | 0.2468256439 | 92.93
Qmee~* | 92.98 | -0.4954766219 | 81.84
Research for Good (SS) | 90.34 | -0.2037695535 | 88.93
Revenue Universe~* | 88.17 | 0.0493472116 | 88.14
Splendid Research | 90.01 | -0.3089331651 | 97.08
Tap Research~* | 86.88 | -0.7194448748 | N/A
TapJoy~** | 87.98 | -0.1455431305 | N/A
TheoremReach* | 88.01 | -0.4173280904 | 91.79

August 10, 2020 - November 29, 2020 Supplier Scores: South Korea

Supplier | QScore | Acceptance | Consistency
Dalia Research | 87.50 | -0.2607082590 | N/A
DataSpring* | 91.44 | 0.4613930050 | N/A
GMO Research | 91.59 | 0.6236694790 | N/A
MarketCube~~* | 83.80 | -0.1441037870 | N/A
Panel Marketing Interactive* | 88.92 | -0.1674424930 | N/A
Research on Mobile~** | 86.03 | -0.3994006500 | N/A
Revenue Universe | 82.88 | -0.4170533950 | N/A
Tap Research | 88.15 | -0.3545321980 | N/A
TheoremReach | 84.79 | -0.4654167900 | N/A
