The Standard for Quality in Research Technology

How can we ensure that you’re getting high-quality survey responses? The Lucid Quality Program provides a unique solution for evaluating supply partners and respondents. Using independent, third-party data specialists, we measure our supply partners on the three most important aspects of survey sample: response quality, accepted completes, and consistency.

This exclusive offering allows Lucid customers to run projects with confidence, knowing they’re getting high-quality survey responses. In the spirit of transparency, we publish results each quarter – and current scores can be viewed anytime.

The Science Behind Quality Scoring

We conduct a rigorous assessment of our supply partners designed to give customers (sample buyers) insight into the quality of the sample they purchase. All eligible supply partners are required to participate in the program.

After being evaluated, supply partners are issued a QScore, Acceptance Score, and Consistency Score.

QScore

QScore grades respondent quality using standard survey response measures, such as time spent on the survey, attentiveness, and the quality of open-ended responses.
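
As a rough illustration only, the sketch below (in Python) shows the kinds of response-quality signals such a grade typically draws on. The thresholds, weights, and field names are illustrative assumptions, not Lucid’s actual scoring model.

```python
# Illustrative sketch of QScore-style response-quality checks.
# Thresholds, weights, and field names are assumptions for demonstration only.
from dataclasses import dataclass


@dataclass
class Response:
    seconds_on_survey: float       # total time spent completing the survey
    attention_checks_passed: int   # trap/attention questions answered correctly
    attention_checks_total: int
    open_end_text: str             # free-text answer to an open-ended question


def quality_grade(r: Response, median_seconds: float) -> float:
    """Combine three simple signals into a 0-100 grade."""
    # 1) Speeding: finishing in well under the median time is penalized.
    speed_ok = 1.0 if r.seconds_on_survey >= 0.5 * median_seconds else 0.0
    # 2) Attentiveness: share of attention checks answered correctly.
    attentive = r.attention_checks_passed / max(r.attention_checks_total, 1)
    # 3) Open-end quality: a crude proxy (at least a few words of real text).
    open_end_ok = 1.0 if len(r.open_end_text.split()) >= 3 else 0.0
    return 100.0 * (0.4 * speed_ok + 0.4 * attentive + 0.2 * open_end_ok)


if __name__ == "__main__":
    careful = Response(420, 2, 2, "The checkout flow was confusing on mobile.")
    print(quality_grade(careful, median_seconds=600))  # 100.0 for this respondent
```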

Acceptance

Acceptance Score measures the acceptance rate of survey completes for individual supply partners. “Acceptance” refers to completes that meet buyers’ quality standards and, as a result, are accepted by those buyers. To define a benchmark, a weighted average of all accepted completes in the Lucid Marketplace is calculated. Each supply partner’s accepted completes are then measured against that benchmark, at the project level, to calculate its Acceptance Score.
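
A minimal sketch of that calculation is shown below (Python), assuming hypothetical field names and a simple completes-weighted average; the actual weighting and aggregation rules are not published here.

```python
# Sketch of an acceptance benchmark and per-supplier Acceptance Score.
# Field names and the weighting scheme are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class ProjectCompletes:
    supplier: str
    project_id: str
    completes: int   # completes delivered on the project
    accepted: int    # completes the buyer accepted


def marketplace_benchmark(rows: list[ProjectCompletes]) -> float:
    """Marketplace-wide acceptance rate, weighted by project size."""
    total = sum(r.completes for r in rows)
    accepted = sum(r.accepted for r in rows)
    return accepted / total


def acceptance_scores(rows: list[ProjectCompletes]) -> dict[str, float]:
    """Average project-level deviation of each supplier from the benchmark.
    A value near 0 means the supplier tracks the marketplace average."""
    benchmark = marketplace_benchmark(rows)
    deltas: dict[str, list[float]] = {}
    for r in rows:
        deltas.setdefault(r.supplier, []).append(r.accepted / r.completes - benchmark)
    return {s: sum(d) / len(d) for s, d in deltas.items()}


if __name__ == "__main__":
    sample = [
        ProjectCompletes("Supplier A", "p1", 200, 190),
        ProjectCompletes("Supplier A", "p2", 100, 92),
        ProjectCompletes("Supplier B", "p1", 150, 120),
    ]
    print(acceptance_scores(sample))  # roughly {'Supplier A': 0.04, 'Supplier B': -0.09}
```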

Consistency

Consistency Score measures a supply partner’s composition from quarter to quarter. “Composition” is the balance of respondent preferences and opinions within a given panel. These attitudinal measures cover respondents’ brand affinity, value-seeking behavior, technology adoption, movie and music consumption, and gaming activity. Measuring composition is especially valuable for tracking research.
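
One plausible way to turn that idea into a number is sketched below (Python): represent each quarter’s panel as a distribution over attitudinal segments and score how little the distribution moved. The segments and the similarity formula are assumptions, not Lucid’s published methodology.

```python
# Sketch: compare a panel's attitudinal composition across two quarters.
# Segment names and the 0-100 formula are illustrative assumptions.

def composition_similarity(prev: dict[str, float], curr: dict[str, float]) -> float:
    """Score two composition profiles (segment -> share, each summing to 1.0)
    on a 0-100 scale; 100 means the composition did not change at all."""
    segments = set(prev) | set(curr)
    # Total variation distance between the two distributions (0 = identical).
    tvd = 0.5 * sum(abs(prev.get(s, 0.0) - curr.get(s, 0.0)) for s in segments)
    return 100.0 * (1.0 - tvd)


if __name__ == "__main__":
    q3 = {"brand_favorable": 0.42, "deal_seeker": 0.33, "early_adopter": 0.25}
    q4 = {"brand_favorable": 0.45, "deal_seeker": 0.31, "early_adopter": 0.24}
    print(round(composition_similarity(q3, q4), 2))  # 97.0, i.e., a stable panel
```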

Minimizing Sample Bias

To get accurate data, researchers must take great care to apply quotas, which help prevent bias from over- or under-representation of certain groups. Similar care should be taken to manage latent sample characteristics, which are inherent in sample sources due to differences in respondent recruitment strategies.

Even when demographics are matched precisely, latent or unseen characteristics can influence data outcomes – and the degree of influence is driven by the subject matter of the survey. Our program monitors respondent values and behaviors with regard to specific subject matter so that this influence can be managed by sample buyers.

A common objective of survey research is to measure the performance and impact of brands. By selecting sample sources that do not skew too brand-favorable or averse, clients can get the best reads on their brands. Lucid regularly creates sample blends that can mitigate this potential source of bias.

Similar to brand preference and usage, some sources may skew heavier or lighter in terms of consumers who use coupons, shop sales, or exhibit extreme bargain hunting. Selecting sample sources that lean too far in either direction can produce biased insights that are not reflective of the population as a whole. This otherwise latent characteristic is quantified by our Quality Program.

People who are avid early adopters of technology tend to have different attitudes toward innovation and change than the general population. Using a sample source that skews too tech-savvy or too laggard can bias research outcomes – especially if the study is related to technology. Scoring sample sources on tech usage helps inform buyers so pitfalls can be avoided.

For entertainment research in particular, obtaining sample that skews too heavy or light in terms of media consumption can bias results. This dimension varies greatly among those who otherwise appear similar demographically – some people simply don’t listen to music or watch many movies. Understanding the tendencies of certain sample sources helps prevent research missteps.

Since some sample sources incentivize respondents in currencies used for playing games, a quick check of game-playing proclivity can reveal whether a source looks mainstream or skewed. This is particularly compelling for research in the video game category, but sources where a large share of respondents spend significant time gaming may also signal other biasing characteristics.

Fully Transparent Supplier Scores

Supplier Scores: United States
September 12, 2022 – January 1, 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
AdGate Media | 92.82 | -1.8648732149 | 94.25
Attapoll~~** | 92.68 | 0.4129989100 | 95.95
Besitos Corporation~* | 92.25 | -0.0129840583 | 94.50
Bitburst~* | 91.09 | -0.4391616639 | 97.26
BizRate | 88.52 | 0.5250044780 |
Bovitz~~** | 93.89 | -0.5279299785 |
Branded Research~~** | 92.33 | 0.3870719295 | 95.74
Centiment~~** | 96.76 | 0.5009209662 | 74.21
Consultancy Services LLC~~** | 92.81 | 0.6405162850 | 86.86
CrowdNautics, Inc.~~ | 88.47 | -0.1047429953 |
Dalia Research | 89.05 | -3.2051544000 | 95.92
Decision Analyst | 92.24 | 0.5859194321 | 81.77
DISQO~* | 91.23 | -0.5938799603 | 94.71
Embee Mobile~** | 82.21 | 0.4874454345 | 90.29
Gamekit | 92.25 | 0.2511999014 | 95.12
General Research Lab~* | 90.63 | 0.2727970426 | 92.47
iAngelic Research~** | 92.42 | 0.0849146063 |
InboxDollars~** | 94.05 | -0.1644514028 | 92.56
inBrain.ai~* | 86.22 | -0.6795523504 | 90.94
inMarket~~ | 80.33 | 0.1755672169 |
ISA (MySoapBox)~~** | 91.17 | 0.6683953561 | 91.02
ITC~ | 91.73 | 0.8619088246 | 92.17
Logit~~* | 91.29 | 0.0058014886 |
Make Opinion GmbH~~** | 90.86 | 0.1144776973 | 96.27
MarketCube~** | 87.98 | 0.0000000000 | 92.56
MARU Group (SA) | 93.02 | 0.2968340464 |
MindField Research~~ | 90.69 | 0.0000000000 | 75.22
My Cashew~~** | 88.29 | -0.1068499244 |
MyPoints~~ | 94.23 | 0.3357477462 | 28.87
On Device Research~ | 91.68 | 0.3975351364 | 91.19
Opinion Capital~~ | 91.63 | 0.0000000000 |
Point Club~ | 93.73 | -2.7167774565 | 91.76
Prodege~~ | 94.67 | 0.3357477462 | 94.16
Publishers Clearing House~~** | 85.50 | 0.4498320628 | 89.60
PureSpectrum~ | 91.95 | 0.0000000000 |
Qmee~* | 90.99 | 0.5294028850 | 95.34
Research for Good (SS) | 92.64 | -1.1925044528 |
Revenue Universe | 89.65 | -0.7321737606 | 87.95
Screen Engine/ASI~~ | 94.53 | -0.2831163464 | 96.28
SocialLoop~* | 90.01 | 0.0874100257 | 95.38
Tap Research | 84.80 | 0.0903241703 | 90.61
TapJoy~* | 90.39 | 0.6839788849 | 90.62
Tellwut~~** | 92.42 | 0.3008375144 | 72.32
TheoremReach | 89.60 | -1.0548918963 | 96.07
Three Hyphens~ | 93.17 | 0.5827342812 |
URWelcome Technologies~* | 92.08 | 0.6394464646 | 96.18
Vetri Foundation~* | 92.48 | 0.2776411067 | 97.14
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: Australia
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
AdGate Media~~ | 87.48 | -0.4027022605 | 91.97
Attapoll~~** | 94.55 | 0.3140861817 | 93.39
Bitburst~* | 93.00 | -0.5396835539 | 96.76
ClixSense~~ | 90.05 | 0.2889958523 | 90.41
Competition Panel~** | 93.01 | 0.4465553565 | 93.42
Dale Network~* | 83.88 | 0.1252095936 | 91.32
Dalia Research~~** | 90.44 | -0.5907423311 | 87.75
DISQO | 89.75 | -0.4925820977 | 96.52
GMO Research* | 93.04 | 0.2513383711 | 68.20
GrowthOps~~** | 90.85 | 0.0000000000 | 87.70
iAngelic Research | 91.49 | -0.2763176772 |
inBrain.ai~~* | 91.67 | -1.0082423072 | 92.70
Make Opinion GmbH~* | 91.52 | 0.1698529006 | 93.31
MarketCube~* | 90.73 | 0.0000000000 | 91.01
On Device Research~~** | 92.48 | -0.1927534377 | 94.24
Persona.ly | 92.29 | 0.0532039343 | 93.46
Point Club | 91.39 | | 92.36
Prodege | 94.36 | 0.2821409413 | 97.03
Qmee | 93.68 | 0.1118691367 | 95.40
Research for Good (SS) | 90.27 | -0.6636781185 |
Research on Mobile~~ | 89.28 | 0.0454996492 |
Researchify Pty Ltd~~ | 90.17 | 3.0204639159 | 80.47
Revenue Universe~* | 87.32 | -0.3382710637 | 95.43
Rewardia~* | 91.81 | 0.2373257569 | 95.15
Tap Research~* | 87.28 | -0.2106047019 | 90.87
TapJoy~* | 89.45 | 0.0765127350 | 92.00
TheoremReach~* | 89.65 | 0.0269235583 | 93.60
Titan~~ | 84.08 | 0.0000000000 |
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: Brazil
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
A-K International* | 92.75 | 0.9104173156 | 95.87
AdGate Media~* | 91.92 | 0.0426104297 | 96.76
Attapoll | 92.57 | -0.2049485433 | 94.75
Bitburst | 92.13 | 0.2811743086 | 95.62
ClixSense | 94.00 | -0.0039037425 | 97.35
Dale Network | 89.42 | -0.4644152843 | 96.98
Dalia Research | 93.10 | -0.8560103509 | 96.18
Dayos~~ | 91.31 | 0.0731514218 | 97.87
inBrain.ai | 92.73 | 0.3205134651 | 97.06
Innovative Hall~** | 92.95 | -0.8320830332 |
Make Opinion GmbH | 93.37 | 0.3879525408 | 94.64
Market Agent | 92.88 | 0.8291060043 | 95.91
MarketCube | 92.32 | 0.0000000000 | 92.48
Mo Web | 92.66 | 2.0924389003 | 92.28
Neobux~ | 92.56 | 1.4908380167 | 95.77
On Device Research | 93.33 | 0.4759065627 | 93.15
Opinaia Panel | 91.20 | -0.4602979041 | 95.56
Persona.ly | 93.80 | 0.4483462746 | 95.46
Point Club~ | 94.33 | | 96.38
Prodege | 93.55 | -0.2635660352 | 93.85
Research for Good (SS) | 91.25 | |
Research on Mobile~~ | 89.93 | -0.1717621822 | 93.50
Revenue Universe~* | 91.15 | -1.0456624654 | 97.91
Tap Research | 89.82 | -1.9926629187 | 91.69
TapJoy~~ | 91.75 | 0.0857745164 | 87.62
TheoremReach | 91.66 | -0.2987301263 | 94.60
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: Canada
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
A-K International~~** | 93.18 | -0.0320400706 |
AdGate Media~* | 89.99 | -1.3141709873 | 95.19
Attapoll* | 90.63 | 0.2081068193 | 89.45
Bitburst | 89.77 | -0.1151456574 | 95.39
Branded Research | 95.15 | 0.3476170331 | 96.63
ClixSense~** | 92.80 | 0.0020110247 | 93.30
Dale Network | 82.92 | 0.1481491396 | 95.16
Dalia Research | 86.72 | -3.6085313189 | 90.16
Decision Analyst~~ | 92.86 | 0.4983581633 | 91.95
DISQO | 92.94 | 0.2868939362 | 95.71
FeedSurvey~~ | 86.54 | 0.0000000000 |
General Research Lab~~ | 88.89 | 0.0000000000 | 89.60
iAngelic Research | 91.21 | -0.3987109032 | 94.81
InboxDollars (DR) | 93.39 | 0.0871389898 | 94.27
inBrain.ai | 91.95 | -0.0892921957 | 89.88
ITC | 94.19 | 1.0537243976 | 93.63
Juyanba~~ | 81.09 | 0.0000000000 | 92.23
Logit | 93.81 | 0.3696689843 |
Make Opinion GmbH | 92.05 | 0.1435808753 | 96.54
MarketCube | 89.58 | 0.0000000000 | 97.18
MARU Group~* | 92.04 | 0.3112925589 | 85.76
MyPoints~~ | 92.07 | 0.1672269548 |
On Device Research | 90.53 | 0.0000000000 |
Persona.ly* | 91.83 | 0.3761553964 |
Point Club | 91.91 | | 92.43
Prodege | 94.28 | 0.1672269548 | 96.98
Qmee | 93.93 | 0.1727810105 | 95.84
Research for Good (SS)~ | 92.42 | 0.8216198090 | 91.09
Research on Mobile~~ | 83.99 | -0.0804304631 | 83.34
Revenue Universe | 90.33 | -0.1365143806 | 93.85
Splendid Research | 91.05 | 0.3032366759 | 92.26
Survey Everyone | 88.94 | 0.6445645552 |
Tap Research | 89.39 | -0.0709662878 | 95.67
TapJoy | 90.33 | 0.2174338945 | 95.60
Tellwut | 92.21 | 0.2808275330 | 91.44
TheoremReach | 91.27 | -0.1133409720 | 94.53
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: China
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
Beijing Youli Technology Co., Ltd.* | 84.04 | -0.3964505740 | 97.09
ClixSense~~** | 88.55 | -0.3676457404 | 94.52
Dalia Research~~* | 88.50 | 0.0301376471 | 90.65
Data100~** | 91.03 | -0.2244441068 | 95.83
DataSpring* | 91.31 | -0.0450808709 | 90.97
GMO Research | 90.91 | 0.0725305889 | 98.45
Ignite Vision~~** | 91.90 | 0.0709334824 | 84.29
Insight Works* | 89.34 | -0.0987863037 | 92.14
Interface Asia~~ | 87.82 | | 86.60
Juyanba~ | 84.77 | 0.0766048564 | 98.34
KuRunData~* | 90.34 | 0.0678962821 | 93.64
Maiwen China* | 88.88 | -0.0431210679 | 93.57
Make Opinion GmbH~~** | 87.44 | -0.5710995573 | 77.63
MarketCube~** | 88.80 | 0.0000000000 | 83.39
MDQ~~** | 89.32 | -0.5664994693 | 88.42
Prodege~~** | 90.91 | -0.5577875926 | 84.83
Research for Good (SS) | 91.12 | 0.0000000000 | 92.52
Revenue Universe~~ | 82.23 | -0.1074739293 | 78.54
SampleBus Market Research Service Ltd | 87.17 | -0.1779837843 | 92.25
Tap Research~~** | 84.99 | -0.1704281753 | 83.73
TapJoy~~** | 84.31 | -0.3625028320 | 91.36
TheoremReach~~** | 86.94 | -0.1894225636 | 82.18
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: France
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
A-K International~~ | 88.22 | 0.1077784505 | 81.70
AdGate Media~* | 87.04 | -0.2925965548 | 95.59
Attapoll~* | 87.81 | 0.3038431378 | 93.27
Bitburst~* | 89.20 | 0.1049329258 | 93.89
ClixSense~** | 90.33 | 0.5947993899 | 92.20
CreOnline~* | 90.37 | 0.2678672892 | 93.63
Dalia Research | 88.80 | 0.2680799991 | 96.41
Decision Analyst~~** | 90.38 | 0.2152769489 | 79.21
Devola | 88.43 | 0.0000000000 | 93.99
Gaddin.com | 91.88 | 0.1104269939 | 95.65
Gamekit | 84.51 | 0.4218522062 | 92.31
General Research Lab | 88.44 | 0.1354581040 |
Hiving~* | 90.15 | 0.3458940487 | 72.08
inBrain.ai | 88.41 | 0.2270187776 | 93.30
JS Media / Moolineo | 89.91 | 0.1977422985 | 95.74
Make Opinion GmbH | 88.02 | 0.1874388914 | 90.78
Market Cube~~* | 90.33 | 0.0000000000 | 90.57
Mo Web~* | 90.18 | 0.2953237127 | 91.38
Neobux~~ | 88.05 | -0.4805051300 | 97.93
On Device Research~* | 88.73 | -0.4135999070 | 93.97
Persona.ly~* | 87.65 | -0.1130037656 | 95.06
Point Club~~ | 87.60 | | 91.98
Prodege~ | 90.29 | 0.3135041985 | 93.41
Qmee | 87.28 | 0.1046180424 |
Research on Mobile~~ | 85.48 | 0.0788482223 | 92.05
Revenue Universe~* | 84.51 | -0.0491826195 | 90.89
Surveoo | 91.15 | 0.4582421340 | 93.46
Tap Research | 88.53 | 0.0588169241 | 88.68
TapJoy~~** | 86.76 | -0.0632934681 | 91.09
TheoremReach~* | 89.95 | 0.2084334627 | 94.28
Trial Panel~~** | 89.05 | 0.2551225466 | 91.88
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: Germany
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
A-K International~~** | 90.04 | 0.3042972153 | 85.42
AdGate Media~ | 89.59 | -2.0675946581 | 91.95
Almedia AG~~ | 87.99 | 0.0000000000 |
Appinio~ | 90.23 | 0.0000000000 | 92.85
Attapoll~~** | 93.39 | 0.4764244142 | 96.63
Bitburst~* | 91.06 | -0.2747656458 | 94.74
ClixSense~* | 91.84 | 0.5792926588 | 92.64
Dale Network~~* | 82.35 | -0.5944321089 | 86.32
Dalia Research | 89.42 | -0.4682450954 | 93.51
Decision Analyst~~ | 87.36 | 0.3945808233 | 79.49
General Research Lab~~ | 89.07 | 0.0000000000 |
Hiving~~ | 89.17 | 0.0900660695 | 82.21
inBrain.ai~~** | 90.90 | -0.9424412601 | 94.32
Liidimedia oy* | 91.49 | 0.4242375165 | 96.93
Make Opinion GmbH | 93.26 | 0.1913646862 | 96.35
Market Agent* | 92.17 | 0.4783540978 | 86.60
MarketCube* | 90.54 | 0.0000000000 | 90.21
Mo Web~~** | 89.88 | 0.3999628970 | 92.52
On Device Research~* | 92.15 | 0.3935313278 | 92.64
Opinion Capital~~ | 89.73 | 0.0000000000 | 84.42
Panel Inzicht~~ | 87.80 | 0.0000000000 | 73.05
Persona.ly~* | 90.59 | 0.4990394086 | 94.63
Point Club~~ | 91.50 | | 92.85
Prodege | 93.70 | 0.4656946071 | 87.32
Promio~~** | 90.20 | 0.3947048831 | 62.58
Research for Good (SS)~ | 90.94 | 1.0583880568 | 88.78
Research on Mobile~* | 89.53 | -0.0651433723 | 95.08
Revenue Universe~ | 86.84 | | 88.93
Splendid Research~** | 92.97 | 0.2513230008 | 96.03
Tap Research | 86.79 | -0.2655126733 | 93.82
TapJoy~~** | 87.02 | -0.4531775594 | 91.35
TheoremReach | 90.59 | -1.0473472502 | 91.75
Untiedt~~** | 91.29 | 0.4142617181 | 85.70
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: India
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
A-K International | 89.49 | 1.0239058488 |
AdGate Media~~** | 84.62 | -0.3168171725 | 96.85
Attapoll~~** | 87.77 | 0.5053623196 | 95.59
Australian Clearing Pty Ltd~** | 89.67 | 0.0000000000 | 94.06
Bitburst~~** | 86.30 | 0.0105677062 | 96.97
ClixSense~~** | 90.00 | 0.1443293741 | 96.74
Crownit-Link | 86.67 | 0.0000000000 |
Dalia Research~* | 81.06 | -1.7092402735 | 94.32
Expresso | 82.57 | -1.8131537665 |
General Research Lab~~** | 86.18 | -0.5993241345 | 90.20
GMO Research~** | 89.85 | 0.9888927081 | 94.79
inBrain.ai~~** | 89.51 | 0.2617907617 | 95.80
Innovative Hall | 87.51 | 0.0000000000 | 91.48
Make Opinion GmbH~~** | 86.72 | 0.1504641226 | 97.18
MarketCube | 87.84 | 0.0000000000 | 96.45
MarketXcel~~** | 87.13 | 0.8863585900 | 90.98
Neobux~~** | 90.14 | -0.8009978436 | 92.57
Opinion Capital~~ | 86.15 | 0.0000000000 | 95.60
Persona.ly~~** | 90.64 | -1.5348335564 | 90.33
Point Club | 81.66 | 0.0000000000 | 97.47
Poll Pronto~~ | 90.45 | 0.0000000000 | 90.02
Prodege~~** | 91.22 | 0.5113306728 | 93.29
Research on Mobile~~** | 87.58 | 0.1634172500 | 93.47
Revenue Universe~~** | 85.88 | -1.0347850236 | 95.57
Tap Research~~** | 88.01 | 0.0704474966 | 95.36
TheoremReach~* | 87.92 | 0.2590427980 | 94.18
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: Italy
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
A-K International~~** | 90.50 | 0.1045064767 | 89.28
AdGate Media~~ | 86.35 | -0.8960386635 | 79.50
Attapoll | 92.59 | 0.1613038546 | 97.88
Bitburst | 90.80 | 0.0430733359 | 92.84
ClixSense | 92.23 | 0.1745398465 | 86.31
CreOnline | 93.23 | 0.1743873663 | 95.02
Dale Network~ | 87.42 | 0.8107478133 | 91.65
Dalia Research~ | 91.74 | -0.4088931836 | 95.85
Decision Analyst | 92.05 | 0.0897463438 | 92.79
Hiving~~** | 91.30 | 0.3952464080 | 95.32
Make Opinion GmbH | 92.22 | 0.1945473265 | 96.77
Market Agent~ | 89.67 | 0.1619257990 |
MarketCube | 92.10 | 0.0000000000 | 93.19
Mo Web~* | 91.83 | 0.0333310437 | 92.32
Neobux~ | 91.35 | 0.9181082171 | 94.20
On Device Research~* | 89.76 | -0.0979219713 | 91.90
Persona.ly | 91.35 | 0.2820779860 | 95.70
Point Club | 91.03 | | 96.38
Poll Pronto~ | 91.47 | 0.0000000000 |
Prodege | 91.32 | 0.1552109552 | 90.09
Research on Mobile~~ | 87.19 | -0.5198871473 | 94.77
Revenue Universe | 88.28 | -3.0221468989 | 93.38
Splendid Research~~ | 92.34 | 0.2619634914 | 94.66
Surveyeah | 92.12 | 0.0417321763 | 94.76
Tap Research | 89.81 | -0.0598010654 | 88.87
TapJoy~ | 91.07 | -0.3129393584 | 94.25
TheoremReach | 91.58 | -0.0048996503 | 92.75
Untiedt~~** | 89.66 | -0.3055658042 | 84.83
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: Japan
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
Bitburst~* | 88.83 | -0.7611848977 |
Cross Marketing Account | 88.57 | 0.2341929403 | 85.56
Dalia Research* | 90.71 | -0.3140001321 | 90.07
GMO Research | 91.31 | 0.1050359000 | 88.40
Make Opinion GmbH~~** | 91.01 | -0.6223962947 | 94.73
MarketCube~* | 90.57 | 0.0000000000 | 93.36
Monitas (Gain, Inc.)* | 92.19 | 0.0648632217 | 82.56
Research for Good | 92.85 | 0.0000000000 | 86.07
Research on Mobile~~ | 89.39 | -0.4829193559 | 96.27
Revenue Universe~~** | 88.01 | -0.3458328598 | 93.82
Tap Research~~** | 90.13 | -0.1937064896 | 89.55
TapJoy | 90.91 | -0.0555923910 | 96.18
TheoremReach~~* | 89.65 | -2.5106886726 | 89.69
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: Mexico
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
A-K International | 92.56 | 0.3189603899 | 96.44
Attapoll~* | 93.24 | 0.3352522616 | 95.59
Bitburst~* | 91.81 | 0.0556437200 | 94.85
ClixSense | 93.90 | 0.3115387523 | 95.47
Dale Network | 89.73 | 0.5321238005 | 91.55
Dalia Research~** | 92.04 | -0.8701170441 | 93.80
inBrain.ai~* | 91.80 | 0.3101825936 | 91.10
Innovative Hall~~ | 92.65 | 0.7253690347 | 94.41
Make Opinion GmbH~* | 93.19 | 0.1961081100 | 92.44
MarketCube | 90.82 | 0.0000000000 | 95.54
Mo Web | 93.08 | 0.4948949964 | 93.52
Neobux~~** | 94.25 | 0.7269666475 | 92.77
Opinaia Panel | 91.90 | -2.8345068114 | 94.16
Persona.ly | 93.75 | 0.7353794224 | 97.76
Point Club~~ | 92.67 | 0.0000000000 | 94.18
Prodege~* | 92.61 | 0.2629793855 | 92.81
Research for Good (SS)~ | 92.31 | | 94.98
Research on Mobile~~ | 89.12 | -1.7319448083 | 92.17
Revenue Universe~** | 87.38 | -0.9769692686 | 87.66
Splendid Research | 92.51 | 0.3062897523 | 96.31
Tap Research~* | 89.15 | -0.0634159556 | 95.81
TapJoy~** | 89.76 | 0.0078375547 | 90.70
TheoremReach~* | 91.88 | 0.2229734511 | 96.25
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: Netherlands
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
AdGate Media | 85.85 | -1.2527660640 |
Attapoll~~** | 91.46 | 0.1267580534 | 94.21
Bitburst~** | 90.04 | -0.7000524465 | 96.26
Dalia Research | 90.42 | -0.3013128896 | 92.62
GMO Research~~** | 91.39 | 2.4180163351 | 95.64
Make Opinion GmbH | 91.23 | 0.1670126649 | 96.36
Prodege | 89.29 | 0.1637710187 |
Rewardia | 91.81 | 0.8496347056 | 88.31
Tap Research | 88.14 | -2.0569872012 | 94.48
TheoremReach~~** | 89.38 | 0.4379867271 | 94.62
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: Russia
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
A-K International~~** | 92.27 | 0.0390157772 | 94.36
AdGate Media~** | 87.17 | 0.3366411080 | 90.99
Amry Research~ | 92.00 | 0.0000000000 | 94.15
Attapoll~~ | 90.94 | 0.0000000000 | 93.86
Bitburst~~** | 89.35 | 0.0596143804 | 90.93
ClixSense~~ | 91.83 | 0.2033009573 | 95.79
Dalia Research~* | 88.88 | 0.5342417843 | 91.42
Make Opinion GmbH~* | 91.72 | 0.5023102276 | 97.01
Marketagent | 93.24 | -0.0881571582 | 97.60
MarketCube~ | 90.96 | 0.0000000000 | 95.38
Mo Web~* | 91.09 | 0.0061554031 | 95.03
Neobux~~ | 93.15 | 0.2238729538 | 91.49
Persona.ly~~** | 91.21 | 0.2747861059 | 89.24
Point Club~ | 92.35 | 0.0000000000 | 93.93
Prodege~ | 91.02 | | 93.78
Research on Mobile~~** | 87.97 | 0.6370659158 | 90.08
Survey Everyone~~ | 91.36 | 0.0000000000 | 93.20
TheoremReach~* | 90.73 | -0.2540860628 | 93.70
Tiburon Research | 91.61 | 0.0000000000 |
YouThink.io~ | 92.45 | 0.2349456854 | 88.62
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: Spain
12 September 2022 – 1 January 2023 

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
A-K International~ | 93.11 | -0.2506942283 | 88.33
AdGate Media~~ | 90.03 | 0.2350081111 | 92.08
Attapoll~* | 93.25 | 0.1046816746 | 93.55
Bitburst* | 91.72 | -0.2836153369 | 92.48
ClixSense~* | 93.92 | 0.0395712004 | 93.91
CreOnline~~** | 93.12 | 0.0840761450 | 87.02
Dale Network | 92.54 | 0.8952816563 | 88.83
Dalia Research* | 93.17 | -0.1498748588 | 96.02
Decision Analyst~~ | 93.19 | -0.6300484038 | 95.05
Gaddin.com | 93.91 | -0.2359337537 | 95.52
General Research Lab | 93.32 | 0.1906834638 |
Hiving** | 92.55 | -0.0101033749 | 96.35
inBrain.ai~* | 92.98 | -0.0389027505 | 92.27
Innovative Hall~ | 93.26 | -1.3985577072 | 91.54
Make Opinion GmbH~* | 93.27 | 0.0653318790 | 94.04
Market Agent | 94.32 | 4.3957862496 | 92.40
MarketCube~* | 91.84 | 0.0000000000 | 94.67
Mo Web~* | 92.45 | 0.2286423350 | 93.13
Neobux~* | 93.77 | -0.0886895311 | 95.76
On Device Research~* | 94.32 | -0.2529620857 | 94.73
Persona.ly~* | 93.30 | 0.4377650120 | 96.07
Point Club~ | 94.80 | | 92.54
Prodege~ | 93.56 | -0.1887897869 | 92.70
Research on Mobile~~ | 90.78 | -2.8882180972 | 92.96
Revenue Universe~~** | 89.83 | -1.6200621699 | 94.92
Splendid Research~ | 93.91 | 0.1685470649 | 92.52
Surveoo | 93.84 | 0.0286541660 | 94.36
Surveyeah~ | 95.25 | 0.3190657322 |
Tap Research~* | 89.98 | -0.5668139961 | 86.18
TapJoy~~** | 90.21 | -0.5712604556 | 89.41
TheoremReach | 91.87 | -0.0369470460 | 93.75
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: United Kingdom
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
A-K International | 89.08 | 0.3382599381 | 94.52
AdGate Media | 88.82 | -0.4488025019 | 95.24
Attapoll~** | 93.10 | 0.3857454425 | 95.11
Bitburst~~* | 90.57 | -0.2888566930 | 91.44
Branded Research~* | 95.19 | 0.5623186350 | 96.57
ClixSense | 90.40 | -0.8681843522 | 96.01
Dalia Research~~** | 87.58 | -1.8783689762 | 95.24
Dayos | 93.48 | 0.2626934168 |
Decision Analyst~~** | 91.28 | -0.1111979106 | 90.50
Expresso | 88.88 | |
General Research Lab~~ | 90.09 | 0.0000000000 | 93.29
Hiving~~ | 87.71 | | 81.63
iAngelic Research~~ | 90.57 | 0.1248949608 | 90.63
Inbox Dollars (UK)~ | 91.24 | 0.5427707140 | 91.87
inBrain.ai~* | 90.83 | -0.3894651768 | 96.25
Innovative Hall | 89.20 | 0.0000000000 |
Liidimedia oy | 91.64 | 0.2877384434 | 90.16
Logit~ | 91.81 | -0.0794419916 |
Lux Surveys~~** | 92.31 | 0.0000000000 | 86.85
Make Opinion GmbH~~** | 89.29 | -0.0565939404 | 96.94
MarketCube~ | 91.56 | 0.0000000000 | 96.27
MindMover~~** | 91.04 | 0.8090611806 | 70.27
Mirats Insights~ | 65.88 | 0.0950230162 |
Mo Web~* | 90.08 | 0.5437721847 | 93.04
MVUK - Kato~~** | 90.46 | 0.0000000000 | 87.87
On Device Research~* | 93.44 | 0.3027762831 | 96.29
Persona.ly* | 91.70 | 0.4884354821 | 91.41
Pick Media Ltd~~** | 91.45 | 0.5480470831 | 92.25
Point Club~ | 89.96 | | 94.35
Prodege~* | 94.22 | 0.4534941286 | 95.65
Qmee | 90.91 | 0.1815080938 | 92.20
Research for Good (SS)~ | 90.17 | 0.1836806553 | 97.08
Revenue Universe~* | 86.75 | 0.4189171897 | 94.77
Splendid Research | 90.82 | 0.3401393007 | 87.01
Survey Everyone | 89.45 | |
Tap Research~* | 89.11 | -0.2761345120 | 93.35
TapJoy~* | 89.00 | 0.2697306854 | 94.75
TheoremReach | 90.43 | -2.3815616015 | 93.21
Vetri Foundation~* | 90.90 | 0.3196717386 |
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight

Supplier Scores: South Korea
12 September 2022 – 1 January 2023

QScore

Quality “grades” based on standard survey response measures (e.g., time spent on survey, attentiveness, open-ended response quality).

Acceptance

Identifies the number of buyer-accepted survey completes for each individual supplier, then compares that number to the marketplace average of buyer-accepted survey completes.

Consistency

Measures the composition of respondent attitudes/opinions over time, helping to ensure that composition bias is minimized for tracking research.

Supplier | QScore | Acceptance | Consistency
Bitburst~~** | 88.64 | -0.0063211503 | 91.66
ClixSense~~ | 94.63 | 0.0869867505 |
Dalia Research~** | 86.42 | -1.4933677074 | 90.20
DataSpring | 89.33 | 1.2794428364 | 93.22
GMO Research | 90.06 | 1.0542508820 | 89.28
Make Opinion GmbH* | 89.47 | 0.8656739315 | 94.97
MarketCube~~ | 88.28 | 0.0000000000 |
MarketLink* | 90.13 | 0.4095907524 | 93.49
Neobux~~** | 91.94 | 0.4908714865 | 94.73
Panel Marketing Interactive | 89.41 | 0.0000000000 |
Persona.ly~~** | 92.64 | 1.1485782896 | 95.02
Prodege~~ | 92.09 | 0.2700678086 | 95.37
Research on Mobile~~ | 83.61 | -2.0807761431 | 91.12
Revenue Universe~~** | 81.71 | -0.2227513419 | 88.17
Tap Research~~ | 87.53 | -0.6352073048 | 88.81
TapJoy~** | 86.24 | -0.0350130656 | 98.18
TheoremReach~~** | 86.79 | 0.3118063852 | 85.72
On the Score sheets for each country, the following applies:
QScore:
No tilde = no significant weighting
Single tilde (~) = weighting between 1.5 and 3.0
Double tilde (~~) = weighting over 3.0, however total n within 70% of target

Consistency Score:
No asterisk = no significant weighting
Single asterisk (*) = weighting between 1.5 and 3.0
Double asterisk (**) = weighting over 3.0, however total n within 70% of target
N/A = new supplier to program or insufficient data to weight