By: Lindsay DiLeo, Communications Manager
Market research trackers, or wave studies, are an effective way to keep a finger on a brand’s pulse – if they’re run well. Like newborn babies, trackers must be protected and nourished, given stability and our utmost attention to ensure their success. Those of us in market research go to great lengths to deliver consistent and reliable insights – so we at Lucid took a minute to consult some internal experts and client contacts for tips to make trackers a little less scary.
They’re called tracker or wave studies because they’re run repeatedly over time to track or gauge things like:
- Brand image: how is the brand perceived?
- Brand awareness: how many people know the brand?
- Customer segments: who makes up the customer/client base and why?
- Ad effectiveness: is the campaign gaining traction for the brand?
In the tracker world, danger lurks around every corner. Project setup errors, inconsistent sampling, or deviations from standard practices all threaten data validity. And this type of research is a large investment – all eyes are fixed on this precious project. So how do we ensure a harmonious tracker experience?
Here are a few tips for running a successful market research tracker/wave study.
- Be thoughtful about the behaviors and attitudes you measure: When you ask vague questions, you’ll get vague answers. Don’t ask for college-level brand analysis from respondents – they don’t think about brands like we do. As market researchers, our job is to interpret what the data says about the brand, rather than asking respondents to do that analysis for us.
- Consider timing: Tracker data isn’t worth much if respondents are hazy on the answers. Ask respondents to think about recent experiences (not a shopping trip three months ago). And when asking about the frequency of a behavior, consider whether you’d even know the answer (e.g., “How many breaths have you taken in the past month?”).
- Keep it short: The data’s not worth much if respondents completely check out of the survey halfway through. Research on research shows respondents become fatigued and disengaged during long and involved surveys. Instead, break long surveys into smaller ones. Perhaps pair methodologies to get a more in-depth view of respondent attitudes and behaviors.
- Control the blend: To prevent sampling-related data shifts, sampling expert Chuck Miller (President, DM2) recommends consistently sampling from the same sources wave to wave. The percentage allocated to each source can vary somewhat – up to about 10% (relative) per source per tracking period. For example, if you use Panel X for 30% of your sample one month, 27%–33% may be acceptable the next wave – although the more stable, the better. It is also very important to understand any latent sample characteristics that might influence research outcomes when blending sources. For example, knowing whether a sample source skews toward technology early adopters would be critically important when running a tech company brand tracker.
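The blend-tolerance guideline above can be sketched as a simple check. This is an illustrative snippet, not part of any Lucid tooling; the function name and the dict-of-shares format are assumptions for the example.

```python
# Hypothetical sketch: flag wave-to-wave blend drift beyond a relative
# tolerance (e.g., 10% of each source's prior-wave allocation).
def blend_within_tolerance(previous, current, rel_tol=0.10):
    """previous/current: dicts mapping source name -> share of sample (0-1).
    Returns (ok, first_flagged_source)."""
    for source, prev_share in previous.items():
        low = prev_share * (1 - rel_tol)
        high = prev_share * (1 + rel_tol)
        cur_share = current.get(source, 0.0)
        if not (low <= cur_share <= high):
            return False, source
    return True, None

# A 30% source drifting to 27% next wave is within tolerance;
# dropping to 25% would be flagged.
print(blend_within_tolerance({"Panel X": 0.30}, {"Panel X": 0.27}))  # (True, None)
print(blend_within_tolerance({"Panel X": 0.30}, {"Panel X": 0.25}))  # (False, 'Panel X')
```

A relative tolerance keeps the rule proportional: a source supplying 10% of sample can drift only a point, while a 30% source gets the 27%–33% band from the example above.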
- Prioritize quality and costs: High costs don’t mean high quality. Fulcrum Quality validates supplier quality for smarter sample shopping. Compare scores on traditional quality metrics (e.g., time spent, attention to detail, open-ends), acceptance (how much of the supplier’s data is thrown out of the final data set for poor quality), and consistency (does the supplier maintain control over the sample composition?). Choose high-quality, cost-competitive suppliers that will generate large returns on the large amount of sample used over the course of the year.
- Communicate clearly: A study this large has many moving parts. Make sure nothing slips through the cracks due to miscommunication. Formalize and document all responsibilities, timings and deliverables between all parties (client, vendor, etc.). Take extra caution when communicating the specs of your study. Small uncommunicated changes in sample demographics or time in field can carry large costs and efficiency implications.
- Expand your universe: New technologies like Fulcrum’s programmatic sampling put millions of respondents across hundreds of panels at your fingertips. The question is not, “Should you blend sources?” but rather, “Why wouldn’t you?” Ensure fresh sample is brought to your tracker each wave by tapping into a large pool of respondents. Also, use the survey groups function to keep previous respondents out of future waves.
- Be strategic about devices: Smartphones and tablets offer new insight into respondents’ worlds. The added mobility and functionality allow them to share photos and videos in real time. There are many reasons to factor mobile and tablet devices into your research. But give special consideration to survey design when you do. Have you seen a matrix table on a smartphone?
- Don’t shy away from technology: Taking the human element out of your tracker study can be scary at first. But it saves a lot of headaches in the long-run. Optimize security settings to de-duplicate, validate respondents and prevent fraudulent activity. Use random data generation to test your survey logic. And auto-charting eliminates manual work and increases accuracy.
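To make the de-duplication idea concrete, here is a generic sketch. Fulcrum’s security settings handle this at the platform level; the fingerprint fields and function below are assumptions for illustration only.

```python
# Illustrative only: de-duplicate respondents by a hashed "fingerprint"
# (here, IP address + user agent). Real platforms combine many more signals.
import hashlib

def deduplicate(responses):
    """responses: list of dicts with 'ip' and 'user_agent' keys.
    Keeps the first occurrence of each fingerprint."""
    seen = set()
    unique = []
    for r in responses:
        fingerprint = hashlib.sha256(
            (r["ip"] + "|" + r["user_agent"]).encode()
        ).hexdigest()
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(r)
    return unique
```

Hashing the fingerprint rather than storing raw IP/user-agent pairs is a small privacy-friendly touch; the de-duplication logic itself is just a first-seen-wins pass.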
- Validate the results: When you receive your data, test any changes. Ideally, changes in survey response trends would only reflect the consumer landscape. When data shifts noticeably or unexpectedly, rule out causes related to sampling methodology or survey technology before attributing the change to the market.
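One simple way to decide whether a wave-to-wave shift is worth investigating is a two-proportion z-test. This is a minimal sketch under standard assumptions (independent waves, large samples), not a description of Lucid’s validation process.

```python
# Assumed sketch: test whether a tracked proportion (e.g., brand awareness)
# shifted significantly between two waves, using a two-proportion z-test.
import math

def shift_is_significant(hits1, n1, hits2, n2, z_crit=1.96):
    """hits = respondents aware of the brand; n = wave sample size.
    Returns True if |z| exceeds z_crit (~95% confidence)."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return abs(z) > z_crit

# 40% -> 46% awareness on n=1000 waves is a real shift; 40% -> 42% is noise.
print(shift_is_significant(400, 1000, 460, 1000))  # True
print(shift_is_significant(400, 1000, 420, 1000))  # False
```

A flagged shift doesn’t tell you *why* the number moved – only that it likely isn’t sampling noise, so it’s worth checking blend, field timing, and survey changes before crediting the market.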
Lucid provides many of the efficiencies that ensure success on your next tracker study – from the Fulcrum platform for setup and automation, to Federated Sample, our team of experts who can inform your sampling and fieldwork decisions. We can take the uncertainty out of your tracker project experience. Contact us for more information.