Often my clients are worried that a few “rogue actors” who are unhappy with the company’s service or simply racing through the survey to get to the incentive are going to torpedo the reliability of the research results. Can this actually happen?
Can you avoid that happening to your data?
There will always be “outliers” in your research: people who feel very differently from others about something, who have an ax to grind, who want to shower you with unwarranted praise because they think they’ll benefit as a result, or who simply don’t care about your questions and will answer randomly just to reach the “prize” at the end. While you can’t necessarily prevent these respondents from participating in your research, there are a number of things you can do to ensure their feedback doesn’t skew your results away from what is “true” in a larger sense.
Appropriate Sample Size
The first and most important approach is to ensure there are enough people in your research that a small number can’t overwhelm the feedback of the group. This is what statistical reliability, confidence intervals and margins of error are all about. It’s much more difficult to be confident that a study of 20 respondents is representative of a larger population than it is that a study of 500 accurately reflects the larger group.
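The intuition behind those confidence numbers can be made concrete. A minimal sketch, using the standard worst-case margin-of-error formula for a proportion (p = 0.5, z ≈ 1.96 for 95% confidence); the function name and thresholds here are illustrative, not from any particular survey tool:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion at ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=20:  ±{margin_of_error(20):.1%}")   # roughly ±22 points
print(f"n=500: ±{margin_of_error(500):.1%}")  # roughly ±4.4 points
```

With only 20 respondents, a handful of rogue answers can swing a result well outside any reasonable error band; at 500, the same handful is statistical noise.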
Preventing Unqualified Respondents
In addition to the standard screening questions that determine whether a respondent fits your survey target, consider including verification questions that confirm they really are who they claim to be. These are questions that would be much more difficult for someone to “get right” without actually being in your target group.
You might ask in an open-ended question which provider a respondent uses for a particular service. Instead of asking whether they live in a given state, you can provide a full list of states when you’re only interested in respondents from a specific geography, so respondents can’t guess which answer qualifies them. In one study of private aircraft owners, we asked for plane tail numbers that could then be verified when we were assessing whether to include certain responses.
Eliminating the Loudest Voice
Ensuring that respondents are able to provide their own opinions without the influence of someone else is an important consideration when you’re conducting focus groups or other group discussions (in person or online). Often someone will dominate those discussions, making other participants’ perspectives hard to capture or effectively “shutting down” expression of different opinions.
Online focus groups can be an outstanding solution to the problem, but other forms of moderation can also serve to ensure that all voices are heard.
Eliminating the Pleaser Motivation
Some respondents want to provide positive feedback about organizations either because it makes them feel good or because they think doing so will benefit them in some other way. Alternatively, respondents may be unwilling to provide honest feedback if they think you can see who they are.
The easiest cure for this is, of course, anonymity. If your respondents know you can’t see their identity, they are less likely to provide positively inflated feedback and more likely to provide an honest opinion that is negative or might not reflect well on them. While providing anonymity is straightforward in a survey, it can even be accomplished in qualitative work through online focus groups, anonymous discussion portals, etc.
Cleaning Your Data
There are a number of things you can do with your data once it’s been collected and before you begin analysis. Data cleaning is an important step in ensuring that you’re analyzing quality data, and many popular online survey software tools incorporate optional data cleaning as part of their functionality.
You may need to weight your responses on certain characteristics so that your analysis better represents the population you’re interested in. For example, if your survey receives responses from 300 men and 100 women but the population you want to understand is closer to a 50/50 split, you can give each male respondent’s data a weight of less than one response, and each female respondent’s data a weight of more than one response.
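The weighting arithmetic above can be sketched in a few lines. This is a minimal illustration of simple cell weighting (weight = target share ÷ observed share); the function name and group labels are assumptions for the example, not part of any survey tool’s API:

```python
def balance_weights(counts, targets):
    """Per-respondent weights so weighted group shares match target shares.

    counts:  observed respondents per group, e.g. {"men": 300, "women": 100}
    targets: desired population share per group, e.g. {"men": 0.5, "women": 0.5}
    """
    total = sum(counts.values())
    return {g: targets[g] / (counts[g] / total) for g in counts}

weights = balance_weights({"men": 300, "women": 100},
                          {"men": 0.5, "women": 0.5})
# Each man counts as about 0.67 of a response, each woman as 2.0,
# and the weighted total still sums to the original 400 respondents.
```

Note that heavy weights amplify individual respondents, so extreme imbalances are usually better fixed by fielding more of the underrepresented group than by weighting alone.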
Respondents who answer willy-nilly to get through your survey as quickly as possible degrade data quality. Eliminating responses with the shortest time per question, or applying similar speed-based checks, can improve the overall quality of your data.
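A speed check like this is straightforward to implement. A minimal sketch, assuming you have each respondent’s total completion time in seconds; the 2-seconds-per-question floor is an illustrative threshold, not an industry standard:

```python
def flag_speeders(durations_sec, n_questions, min_sec_per_q=2.0):
    """Return indices of respondents whose average seconds-per-question
    falls below a plausibility floor (illustrative threshold)."""
    return [i for i, d in enumerate(durations_sec)
            if d / n_questions < min_sec_per_q]

# Respondent 0 finished a 30-question survey in 45 seconds (1.5 s/question).
print(flag_speeders([45, 300, 180], n_questions=30))  # [0]
```

In practice, a relative cutoff (e.g., flagging the fastest few percent, or anyone under half the median duration) is often safer than a fixed one, since reasonable completion times vary by survey.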
Patterning or straightlining is another indicator of poor data quality: respondents answering all “4” on a set of rating items, or following a pattern such as 1-2-3-4-5-6-5-4-3-2-1.
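Both behaviors can be flagged programmatically. A minimal sketch of two checks, assuming each respondent’s grid answers arrive as a list of integers; the function names and the 0.25 standard-deviation floor are assumptions for illustration:

```python
from statistics import pstdev

def is_straightliner(answers, min_sd=0.25):
    """Flag grids answered with (nearly) no variation, e.g. all 4s."""
    return pstdev(answers) < min_sd

def is_patterned(answers):
    """Flag grids where every step moves exactly one point up or down,
    catching zigzags like 1-2-3-4-5-6-5-4-3-2-1."""
    diffs = [b - a for a, b in zip(answers, answers[1:])]
    return bool(diffs) and all(abs(d) == 1 for d in diffs)

print(is_straightliner([4, 4, 4, 4, 4]))                    # True
print(is_patterned([1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]))     # True
print(is_straightliner([1, 5, 2, 4, 3]))                    # False
```

A flag on one grid isn’t proof of bad faith, so such checks are best treated as one signal among several before discarding a response.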
While open-ended questions are more taxing on respondents and often less favored, they can provide helpful information if used strategically. Respondents who regularly enter gibberish or other irrelevant text in these questions should have the rest of their answers carefully scrutinized for other signs of poor data quality.