In Newsletter No. 95, copied below, which I received from a university survey research center in late September 2017, survey researchers acknowledge that survey research results/answers might not be accurate and that answers must therefore be checked against data from non-asking sources. That is, survey researchers are acknowledging what I call “The Problem with survey research”. The Problem is that answers to questions are unreliable. When all you have are answers to questions, it is impossible to know whether any answer is correct or incorrect. The only way to know is to check or verify answers with information/data from two or more non-asking sources, such as observation (e.g., observing and measuring biological activity; i.e., biomeasures), experiments, and content analysis of documents.
Newsletter No. 95 identifies how respondents contribute to The Problem and calls for non-asking data to check their answers/responses. I have put my clarifying and/or correcting comments in brackets [ ] and in bold.
One limitation of data from surveys is that they . . . rely on respondent self-reports. As a result, the accuracy of such data relies on the assumptions that respondents are able and willing to provide . . . accurate responses, and these assumptions may not always be true. First, survey respondents may not always be able to provide all types of information. They may not always know [do not always have relevant and correct information] . . . or be able to provide detailed medical or financial information from memory. [Asking for information means that askers are dependent upon the memory of respondents. To ask is to probe memory, and memory is innately elusive and discontinuous.] Respondents’ memory of a specific event may not be accurate (particularly for events that are frequent or regular, in which memories from similar events may be confounded), and respondents may have difficulty remembering when an event occurred (sometimes called event dating) even if they accurately recall the event itself. This is particularly problematic when respondents are asked to report on behaviors or experiences that occurred within a specific time frame (e.g., In the past 12 months, how many times did you go to see a doctor?). Second, respondents may not always be willing to answer honestly and completely. [Everyone knows that everyone, including respondents, lies and, depending on circumstances (e.g., who’s asking whom about what and when), quite often. Everything is a topic of asking, and lies are told about everything.] Survey questions sometimes ask respondents about topics that are sensitive (e.g., sexual history) [The extent to which topics are considered sensitive by respondents also makes answers unreliable. The greater the sensitivity of question topics, the greater the effects on rates of response and on reports of the behaviors and attitudes investigated.
Moreover, what’s considered sensitive varies by respondents’ childhood experiences, peer and professional socialization, present and past socioeconomic positions and functions, plans for the future, and so on. Consequently, any question topic can be sensitive.] or that respondents may want to answer in particular ways to give a more positive impression of themselves (e.g., turning out to vote in an election, attending church, or having egalitarian beliefs) or to avoid reporting negative opinions or behaviors (e.g., prejudice toward racial minorities, illegal drug use, or eating unhealthy foods). [Respondents make answers to questions unreliable because they tend to skew their replies, regardless of question topic, to correspond to commonly held social and organizational values and norms. Answerers, like everyone else, are not inclined to be witnesses against themselves; they tend not to say things, or hold opinions or beliefs, that will harm them, either in the eyes of others or legally.] Despite these limitations, there is strong evidence that survey responses are typically quite . . . accurate, [“Quite” accurate? This statement should read: there is evidence that some survey responses are accurate and that some are inaccurate. Survey researchers need to acknowledge that there’s extensive documentation that survey responses are inaccurate, for the reasons mentioned in Newsletter No. 95 (i.e., because of respondents’ memory, lying, etc.) and also (not mentioned in No. 95) because answers are skewed by asking instruments, by the settings in which questions are asked and answers given, and by the characteristics (e.g., gender, age, etc.) and behaviors of askers themselves.] but researchers are increasingly combining survey data with data from other sources [They’re “increasingly” using data from non-asking sources not because there is strong evidence that survey responses are . . . accurate, but because there’s extensive evidence that many survey responses are inaccurate]”.
For a complete statement of The Problem with survey research and how each of the four components of survey research (respondents, asking instruments, settings in which questions are asked and answers given, as well as askers themselves) contribute to The Problem, see my book, The Problem with Survey Research.
See also my blog post: Counter Literature to Survey Research.