Interviewers affect interviewees. As I point out in my book, The Problem with Survey Research (p. 199), interviewers (and all others who ask questions) are stimuli and reinforcers who, like the asking instruments and the settings in which questions are asked and answers given, bias answers. Interviewers’ styles of behavior and personal attributes (e.g., ethnicity, experience, gender) cue and induce the answers they receive. Here’s my review of Joseph D. Matarazzo and Arthur N. Wiens, The Interview: Research on Its Anatomy and Structure (Aldine Transaction, 1972), the only study of interviewer effects I’m aware of that’s based on systematic observational data.
Matarazzo and Wiens (M&W) integrate their systematic observational research with similar efforts by others and demonstrate interviewer effects on interviewees. True to their scientific behavioral approach, M&W define the basic units of investigation, which include duration of utterance, interruption, reaction time latency (duration of silence), and so on. The data show that interviewers’ behaviors, i.e., the content of their questions and their mannerisms/”tactics” while asking (e.g., how long the interviewer waits for a response), affect interviewees’ behaviors, i.e., the content of answers and their mannerisms while responding (e.g., reaction time). In addition, the data show that interviewers’ characteristics (e.g., socioeconomic status, organizational position/function, gender) affect interviewees’ answers and mannerisms. (Research in The Interview also indicates that settings affect interviewees.)
By providing scientific, observational evidence of the biasing/skewing effects of interviewers on interviewees, M&W prove to all who accept systematic observational research that interviews produce unreliable information. It’s not that every answer generated by every interview is incorrect, but when all you have are answers it’s impossible to know which, if any, are correct. The only way to know is to check answers against data from one or more non-asking sources, such as experiments and content analysis of documents. Interviewers do not have data from these other sources; all they have are answers to their questions; all they have is unreliable information.
Although interviewer effects on interviewees have been extensively acknowledged in the academic literature, M&W’s book (and the research cited therein) is, as far as I know, the only examination of interviewer effects based on systematic observational research.