Prospective Sampling Bias in COVID-19 Recruitment Methods: Experimental Evidence from a National Randomized Survey Testing Recruitment Materials

Surveys can be an invaluable tool for gathering public opinion and experiences during emerging crises like COVID-19. They are, however, vulnerable to biases arising from the methodological design of the study. Our study advances this literature by identifying and documenting a subtle manifestation of bias in COVID-19 research: how the recruitment materials themselves may shape who chooses to respond and how they choose to participate.

Although this experiment documents the results of this bias, there are multiple possible interpretations of the mechanisms by which it emerges. The most direct possibility is that the COVID-19 postcards created a sampling bias, in which the specificity of the topic, and the massive public attention it received, more strongly motivated participation among those with more worried attitudes (as opposed to, say, an uninterested recipient simply throwing the postcard away). However, alternative explanations are also possible. For example, researcher demand bias (i.e., respondents perceive and try to provide what they believe the researcher hopes to hear; in this case, the postcard could signal that the researchers were concerned about COVID-19) and priming (i.e., an initial stimulus that raises the salience of particular topics in subsequent responses; in this case, the postcard highlighting COVID-19) are two alternative explanatory frameworks. A reviewer also pointed to the possible role of ‘Malmquist bias’ [18], given the more visually appealing image on the COVID-19 postcards (a graphical depiction of the virus versus a drab operating room). Further research could help differentiate these causal mechanisms under experimental conditions.

For public health practitioners and researchers, these findings have several practical implications. While sampling bias is often discussed in terms of “obvious” examples (e.g., failing to reach key demographics during recruitment), our results illustrate that important biases can arise in subtler ways, such as through the design of recruitment messages. Researchers should therefore think carefully about the recruitment tools they use, test them empirically (for example, by randomizing candidate framings and comparing response rates, as sketched below), and transparently share those tools with reviewers and readers. Similarly, practitioners should critically evaluate survey recruitment strategies before relying on results, lest sampling bias produce unrepresentative findings. These lessons are especially critical in the context of the COVID-19 pandemic, as much survey research has explicitly used COVID-19 recruiting messages to achieve higher audience engagement, potentially introducing the biases we have identified here.
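As one way to make such testing concrete, the following is a minimal sketch of comparing response rates between two randomized recruitment framings using a standard chi-square test. The counts are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are recruitment framings, columns are
# (responded, did not respond). These numbers are illustrative only.
observed = np.array([
    [120, 9880],  # COVID-specific postcard
    [80, 9920],   # generic "health" postcard
])

# Chi-square test of independence between framing and response.
chi2, p, dof, expected = chi2_contingency(observed)

rates = observed[:, 0] / observed.sum(axis=1)
print(f"response rates: {rates[0]:.3%} vs {rates[1]:.3%}, p = {p:.4f}")
```

A pilot comparison of this kind can flag large framing effects before full fielding, though it cannot by itself distinguish who is being drawn in by each framing.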

We also find that targeted messages can help increase response rates. However, these gains are offset by the possibility of oversampling people with higher levels of concern. As such, researchers should consider recruitment instruments with more generic framings to minimize the chance of sampling bias, or analytical strategies, such as post-stratification weighting, to calibrate for context-specific biases (a minimal sketch follows).
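For illustration, the sketch below applies simple post-stratification weights to a toy sample whose distribution of concern is skewed relative to an assumed population benchmark. All variable names, categories, and shares are hypothetical; real calibration would draw the benchmark from an external source such as census or panel data.

```python
import pandas as pd

# Toy survey responses: each row is one respondent. "concern" is the
# stratification variable; "support_policy" is an outcome of interest.
sample = pd.DataFrame({
    "concern": ["high", "high", "high", "medium", "low"],
    "support_policy": [1, 1, 1, 0, 0],
})

# Assumed population distribution of concern (illustrative placeholder).
population_share = pd.Series({"high": 0.30, "medium": 0.40, "low": 0.30})

# Observed sample distribution of concern.
sample_share = sample["concern"].value_counts(normalize=True)

# Post-stratification weight per stratum: population share / sample share.
weights = population_share / sample_share
sample["weight"] = sample["concern"].map(weights)

# The weighted estimate corrects for oversampling highly concerned people.
unweighted = sample["support_policy"].mean()
weighted = (sample["support_policy"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")
```

Weighting of this kind can only adjust for variables that are both measured and benchmarked; it cannot correct for selection on unmeasured attitudes, which is why more generic recruitment framings remain the safer first line of defense.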

There are, of course, limitations to this study. For example, the response rate for both recruitment methods was remarkably low. Although low response rates are not uncommon in mail-based recruitment, other studies during the pandemic using similar methods achieved higher rates [19, 20]. Further investigation could isolate possible COVID-specific effects (e.g., early fears about transmission via the surface of mail), design details of this study (e.g., whether the size, material, or design of the postcards affected results), or features of mail solicitation in general. Moreover, the “health” framing does not represent a truly “neutral” option: while it was certainly a more generic recruitment framing than a COVID-specific advertisement, it likely carries its own biases relative to other possible recruitment media. As noted above, it is also very difficult to find a theoretically justifiable way to control for “local risk”, a highly subjective variable, and more work is needed to develop techniques that account for such perceptions. Finally, as one reviewer helpfully pointed out, several other correlates, such as anxiety, depression, and physical health, would be interesting to explore for their potential impact on response bias. These are important, albeit complex, topics (for example, disentangling pre-existing mental health conditions from those induced or exacerbated by COVID-19, and relating both to response biases) that warrant further investigation in future research.
