Unreliability of Answers to Sensitive Questions

Periodically, I receive a Survey News Bulletin from a university survey research unit; with rare exception, each acknowledges the unreliability of answers to questions. The following is my edited version of a Survey News Bulletin concerning the unreliability of answers to sensitive questions. (The complete Bulletin, No. 34, is reproduced below.)

Sensitive questions are “for example . . . whether a respondent has engaged in risky sexual behavior or used illegal drugs, the extent to which a respondent holds negative racial attitudes, or whether a student has cheated on an exam. . . . Sensitive questions . . . are intrusive, questions where there is a threat of disclosure, or questions for which there are answers that might make the respondent appear socially undesirable. People may deal with surveys that contain sensitive questions by
not participating in the survey (unit nonresponse)!,
not answering specific questions (item nonresponse)!, or
not answering them honestly! . . . . [lying!]

There are many factors that may affect responses to sensitive questions including
respondent’s tendency to engage in socially desirable responding [another form of lying!],
mode of data collection!, and
interviewer characteristics and behavior!”

There “are strategies used in surveys” to counter effects of these factors but “these methods are sometimes
quite complex [or impossible] to implement!,
often result in reduced statistical power!, and may
not always work as intended!.”

My response to Bulletin No. 34—and I believe it’s the only reasonable response—is Don’t Ask if you want reliable information about sex, illegal drug use, racism, or any other sensitive matter. Instead, observe behavior and/or trace behavior, use content analysis, conduct randomized experiments, etc. For a complete statement of the limitations of survey research, as well as brief descriptions of what I call “proper” methods of social science research, see my book, The Problem with Survey Research.

No. 34:
Asking Sensitive Questions:
One challenge of using surveys to collect data is that they rely almost exclusively on self-report data. As such, they rely on respondents to be both able and willing to honestly and completely answer survey questions. One type of question that is a particular challenge in surveys is the sensitive question. Such questions measure, for example, constructs like whether a respondent has engaged in risky sexual behavior or used illegal drugs, the extent to which a respondent holds negative racial attitudes, or whether a student has cheated on an exam. Tourangeau and Yan (2007) define sensitive questions as those that are “intrusive,” questions where there is a “threat of disclosure,” or questions for which there are answers that might make the respondent appear “socially undesirable” (p. 860). People may deal with surveys that contain sensitive questions by not participating in the survey (unit nonresponse), not answering specific questions (item nonresponse), or not answering them honestly (socially desirable responding). There are many factors that may affect responses to sensitive questions including the respondent’s tendency to engage in socially desirable responding, the mode of data collection, and interviewer characteristics and behavior. Indirect questioning techniques like the randomized response technique (RRT) and list technique (aka item count technique or unmatched count technique) are strategies used in surveys that allow respondents to answer a question to an interviewer in a way that protects their anonymity. Unfortunately, these methods are sometimes quite complex to implement, often result in reduced statistical power, and may not always work as intended. Another strategy in asking survey questions is to try to normalize the undesirable behavior or opinion being asked about or providing reassurances about the confidentiality of responses.
See: Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133, 859-883.
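The randomized response technique (RRT) mentioned in the Bulletin can be illustrated with a short simulation. The sketch below follows Warner's classic spinner design (the variant Tourangeau and Yan discuss is a family of such designs); the function name, the 75% truth-probability, and the 20% prevalence are illustrative choices, not figures from the Bulletin.

```python
import random

def simulate_warner_rrt(true_prev, p, n, seed=0):
    """Simulate Warner's randomized response technique (RRT).

    Each respondent privately spins a randomizer: with probability p they
    answer the sensitive question ("Do you have trait X?"), and otherwise
    they answer its negation ("Do you NOT have trait X?"). The interviewer
    sees only yes/no and never learns which question was answered, so no
    individual answer is incriminating.
    """
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        has_trait = rng.random() < true_prev   # respondent's true status
        asked_direct = rng.random() < p        # outcome of the private spinner
        answer = has_trait if asked_direct else not has_trait
        yes += answer
    lam = yes / n  # observed "yes" rate: lam = p*pi + (1-p)*(1-pi)
    # Solving for pi gives Warner's unbiased estimator (requires p != 0.5):
    return (lam - (1 - p)) / (2 * p - 1)

# With 100,000 simulated respondents and a 20% true prevalence, the
# estimate lands near 0.20 even though no single answer reveals anything.
estimate = simulate_warner_rrt(true_prev=0.20, p=0.75, n=100_000)
```

Note that recovering the prevalence requires dividing the observed rate by (2p − 1), which is less than 1; that division inflates the estimator's variance relative to a direct question, which is the "reduced statistical power" the Bulletin refers to.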

About georgebeam

George Beam is an educator and author. The perspectives that inform his interpretations of the topics of this blog, as well as his other writings and university courses, are system analysis, behaviorism, and Internet effects. Specific interests include quality management, methodology, and politics. He is Associate Professor Emeritus, Department of Public Administration; Affiliated Faculty, Department of Political Science; and, previously, Head, Department of Public Administration, University of Illinois at Chicago.
