Takeaways from AAPOR 2015: Innovative Techniques to Improve Accuracy of Online Surveys
Last month, we attended the American Association for Public Opinion Research (AAPOR) annual conference in Hollywood, Florida. Public opinion researchers from academia, the media, market research, analytics, and other fields came together to present new methods and discuss ways to improve how we conduct surveys. In a post last week, we discussed advances in the selection of likely voters for electoral surveys.
Another topic on the minds of AAPOR participants was how to improve the accuracy of online surveys. Online surveys have some important benefits over telephone surveys, such as the ability to test specific language, include visuals, and provide greater accuracy when asking sensitive questions. But they generally suffer from the drawback of using nonprobability samples: respondents are usually drawn from opt-in panels that do not give everyone in the population a chance of participating. This not only introduces inherent biases, but also makes it inappropriate to calculate traditional measures of precision like a margin of error.
Much of the most interesting research presented at AAPOR revolved around the approaches that different researchers have begun testing to correct these biases. Some of these are similar to approaches we have been exploring internally at GSG, where we have been deploying online surveys with great success for many years.
- Using non-traditional demographic questions to calibrate online surveys can reduce bias. We know that traditional demographic weighting of online surveys (based on age, race, education, etc.) is not enough to correct the biases inherent in samples drawn from opt-in panels. Mansour Fahimi of GfK showed that such weighting still leaves data that are biased on topics ranging from social engagement, community, and altruism to happiness and politics. Fahimi finds that including, and weighting on, questions about Internet use, survey completion, early adoption of consumer products, and time spent watching TV can reduce bias further than simple demographic weighting alone.
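The calibration idea above is commonly implemented with raking (iterative proportional fitting): survey weights are adjusted until the weighted sample margins match population targets on several variables at once, which can include non-traditional ones like heavy Internet use alongside demographics. The sketch below is illustrative only; the variable names, target proportions, and simulated data are our own assumptions, not figures from Fahimi's presentation.

```python
import numpy as np

def rake(sample, targets, max_iter=50, tol=1e-6):
    """Raking (iterative proportional fitting).

    sample : dict mapping variable name -> array of integer category codes
    targets: dict mapping variable name -> list of target population
             proportions, one per category, summing to 1
    Returns weights (mean 1) whose weighted margins match the targets.
    """
    n = len(next(iter(sample.values())))
    w = np.ones(n)
    for _ in range(max_iter):
        max_change = 0.0
        for var, props in targets.items():
            codes = sample[var]
            total = w.sum()
            factors = np.ones(n)
            for cat, target in enumerate(props):
                mask = codes == cat
                current = w[mask].sum() / total
                if current > 0:
                    # Scale this category's weights toward its target share
                    factors[mask] = target / current
            max_change = max(max_change, np.abs(factors - 1).max())
            w *= factors
        if max_change < tol:  # all margins essentially on target
            break
    return w * n / w.sum()

# Illustrative use: an opt-in panel that over-represents heavy Internet
# users (70% in the simulated sample vs. an assumed 40% in the population).
rng = np.random.default_rng(0)
n = 2000
heavy_internet = (rng.random(n) < 0.7).astype(int)
under_50 = (rng.random(n) < 0.5).astype(int)
weights = rake(
    {"heavy_internet": heavy_internet, "under_50": under_50},
    {"heavy_internet": [0.6, 0.4], "under_50": [0.55, 0.45]},
)
```

After raking, weighted analyses of any outcome (happiness, politics, etc.) use `weights`, so groups the panel over-recruits count proportionally less.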
- Matching online results to probability-based surveys can improve accuracy even further. Several researchers looked into a technique that goes one step beyond weighting: sample matching. This approach uses an algorithm to match each member of an online panel sample to a respondent in a parallel probability-based survey (usually a phone survey). Charles DiSogra and his colleagues from Abt SRBI and SSI developed a method to match an online and phone sample using an Internet-use propensity score. While this approach produces results that still carry significant bias, it allows the researchers to compute a standard error for the survey. Meanwhile, Michael Brick from Westat and his coauthors explored five different approaches to weight or match responses from a nonprobability online survey to a mail-based probability survey, in order to import some of the strengths of the probability sample to the nonprobability online panel. They find that sample matching using a “nearest neighbor” algorithm is the best approach and reduces bias significantly (though it does not eliminate it).
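As a rough sketch of the nearest-neighbor idea, one can standardize the covariates shared by the two samples and, for each online panelist, keep the closest respondent from the probability sample (or run the match in the other direction). The covariates and data below are invented for illustration, and the published methods involve additional propensity-modeling and weighting steps not shown here.

```python
import numpy as np

def nearest_neighbor_match(source, reference):
    """For each row of `source`, return the index of the closest row of
    `reference` by Euclidean distance on standardized covariates."""
    stacked = np.vstack([source, reference])
    mu = stacked.mean(axis=0)
    sd = stacked.std(axis=0)
    sd[sd == 0] = 1.0  # guard against constant covariates
    s = (source - mu) / sd
    r = (reference - mu) / sd
    # Pairwise squared distances, shape (len(source), len(reference))
    d2 = ((s[:, None, :] - r[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Illustrative covariates: age, education level (coded 1-3), hours online/day.
online_panel = np.array([[34, 2, 6.0],
                         [61, 1, 1.5],
                         [45, 3, 3.0]])
phone_sample = np.array([[60, 1, 1.0],
                         [35, 2, 5.5]])
matches = nearest_neighbor_match(online_panel, phone_sample)
```

Each online panelist is thereby tied to a probability-sample "donor," whose design-based properties can then be carried over to the matched online data.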
- New voter-file matched online panels may offer significant gains in quality. Importantly for pollsters like GSG, some vendors of online panels are now offering their panels matched to a voter file. This means pollsters have access to a wealth of information about each respondent before asking any questions (such as models predicting the respondent’s party affiliation, voting history, and Internet use). Adapting some of the techniques discussed at AAPOR to use this wealth of data would likely yield an even more dramatic reduction in bias for online surveys, particularly on political or partisan questions.
At GSG we are excited to be on the cutting edge of these developments as we continue to explore and develop methods such as these to make all of our surveys as accurate as possible in an ever-changing research landscape.