Efforts To Make Answers Reliable Fail

Updated February 10, 2023

Survey researchers/askers are always trying to make their always-unreliable answers reliable, and they always fail. Answers are always unreliable because they are always affected/skewed by questions (e.g., their wording), by askers (e.g., an asker's gender affects answers), by respondents (e.g., their memory), and by the settings in which questions are asked and answers given (e.g., people give different answers to online pollsters than to friends at the bar or on the street). Efforts to make always-unreliable answers reliable, e.g., by inserting attention checks (see No. 106 below), always fail because those efforts also affect/skew, or to use No. 106's word, "influence" answers. To ask, with or without attention checks, is to affect/influence answers; that's just the way it is!

In the newsletter below, which I received from a university research center, I have placed my comments in brackets [ ].

“No. 106
Using Attention Checks to Identify Poor Quality Data

With the ever-increasing popularity of self-administered modes of survey data collection, particularly online, attention checks have become a common approach to verifying that respondents are in fact giving due attention to the survey response task. Also known as "instructional manipulation checks" (IMCs) or "screeners," attention checks are intended to identify individuals [respondents] who satisfice when responding, typically by not reading questions carefully and hence failing to correctly follow instructions. Respondents unable to "pass" attention check questions are believed to provide poor quality data that is less reliable, [BELIEVED; survey researchers don't KNOW, because the only way to KNOW whether answers are or are not reliable is to check or verify them against data from non-asking sources. Survey researchers do not have such data; therefore, they DON'T KNOW.] and those respondents are often excluded when conducting data analyses.

However, more recent empirical research is inconclusive regarding these assumptions about attention checks and the value of excluding those who "fail" attention checks when analyzing the study data. There is concern that, because failure of attention checks may be correlated with some sociodemographic variables, deleting these cases may have a detrimental effect on the composition of final samples, which may also affect data quality. There are additional concerns that attention check questions may influence subsequent respondent behavior in ways that can also damage data quality, by increasing respondent mistrust of researchers and by decreasing motivation to carefully answer subsequent questions. Consequently, recent research now advises against using attention checks and removing these respondents."  [HOWEVER, by NOT using attention checks, answers remain unreliable because there's no way to know if respondents are "giving due attention to the survey response task"; no way to know if they're "reading questions carefully"; no way to know if they're "correctly follow[ing] instructions." TO ASK, WITH OR WITHOUT ATTENTION CHECKS, IS TO AFFECT/INFLUENCE ANSWERS; THAT'S JUST THE WAY IT IS!]
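For readers unfamiliar with how the exclusion step the newsletter describes is carried out in practice, here is a minimal sketch of that kind of filtering, assuming a simple instructed-response item ("Select 'Strongly agree' for this question"). The column names, the pass rule, and the toy data are hypothetical illustrations, not drawn from any actual survey.

```python
# Minimal sketch of the exclusion practice the newsletter describes:
# flag respondents who fail an attention check and drop them before analysis.
# All names and values here are assumptions for illustration only.
import pandas as pd

# Hypothetical responses: "attn_check" holds each respondent's answer to an
# instructed-response item such as "Select 'Strongly agree' for this question."
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "attn_check":    ["Strongly agree", "Agree", "Strongly agree", "Disagree"],
    "q1":            [5, 3, 4, 2],
})

# Respondents "pass" only if they followed the instruction exactly.
responses["passed_check"] = responses["attn_check"] == "Strongly agree"

# The contested step: excluding failures before computing any statistics.
analytic_sample = responses[responses["passed_check"]]

print(f"Kept {len(analytic_sample)} of {len(responses)} respondents")
print("Mean of q1 (passers only):", analytic_sample["q1"].mean())
```

This filtering step is precisely what the newsletter now advises against; and, as argued above, no pass/fail rule applied to answers can make the answers themselves reliable, because checking reliability requires data from non-asking sources.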

If you want to find out what’s really going on, don’t ask.  That’s the theme of my book, The Problem with Survey Research.
