Interviewers affect answers and thereby make answers unreliable. As indicated in the newsletter excerpt below from a university survey research center, survey researchers acknowledge as much and, moreover, admit they do not examine these effects. This newsletter is part of what I call the Counter Literature to Survey Research, most of which is provided by survey researchers themselves. They know asking does not produce reliable information, but they continue to ask because they’re addicted to asking and to answers.
“Interviewers . . . influence respondent behaviors [answers] in systematic ways . . . . For example, . . . an interviewer’s observable characteristics — such as gender, age and race/ethnicity — may cue respondents to relevant social norms that then become integrated into their answers. This . . . most likely . . . happen[s] when interviewer characteristics are directly relevant to the questions being asked. For example, interviewer gender may become relevant when respondents are answering questions about gender-related topics. . . .
[I]nterviewer variance represents generalized differences across interviewers that are more idiosyncratic in nature, for example, how they phrase questions or probe responses. These differences may account for measurable amounts of unique variance across individual interviewers.
In most survey data analyses, both interviewer effects and interviewer variance remain unexamined, despite the fact that they . . . have significant influence on statistical estimates.”
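The quoted claim about interviewer variance can be made concrete with a small simulation. This is a minimal sketch, not anyone’s actual analysis: all numbers (interviewer count, cluster size, variance parameters) are invented for illustration. Each interviewer adds an idiosyncratic shift to every answer they collect, so answers cluster by interviewer, and the variance of the overall estimate is inflated by a “design effect” that goes unexamined if interviewer effects are ignored.

```python
import random
import statistics

# Illustrative simulation of interviewer variance (all parameters invented).
random.seed(0)

N_INTERVIEWERS = 20        # assumed number of interviewers
RESP_PER_INTERVIEWER = 50  # assumed respondents per interviewer
INTERVIEWER_SD = 0.5       # assumed spread of interviewer-level effects
RESPONDENT_SD = 1.0        # assumed respondent-level noise

answers = []
means_by_interviewer = []
for _ in range(N_INTERVIEWERS):
    # Each interviewer shifts all of their respondents' answers by a
    # random, idiosyncratic amount (e.g., how they phrase or probe).
    shift = random.gauss(0, INTERVIEWER_SD)
    cluster = [random.gauss(shift, RESPONDENT_SD)
               for _ in range(RESP_PER_INTERVIEWER)]
    answers.extend(cluster)
    means_by_interviewer.append(statistics.mean(cluster))

# Rough share of total variance attributable to interviewers
# (an intraclass-correlation-style ratio; sampling noise in the
# cluster means is subtracted out).
between = statistics.variance(means_by_interviewer)
total = statistics.variance(answers)
rho = max(0.0, between - RESPONDENT_SD**2 / RESP_PER_INTERVIEWER) / total

# Design effect: how much interviewer clustering inflates the variance
# of the overall mean relative to a simple random sample.
deff = 1 + (RESP_PER_INTERVIEWER - 1) * rho
print(f"share of variance due to interviewers: {rho:.2f}")
print(f"design effect (variance inflation): {deff:.1f}")
```

With these invented parameters roughly a fifth of the total variance is interviewer-driven, which multiplies the variance of the survey estimate several times over; an analysis that ignores interviewers reports standard errors that are far too small.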
If you want to find out what’s really going on, don’t ask. That is the theme of my book, The Problem with Survey Research.