Updated February 21, 2022
Nonresponse (that is, not answering when asked) is a fatal flaw in survey research. Regardless of the type of asking instrument (poll, survey, self-administered questionnaire, computer-assisted telephone interview, etc.) and regardless of whether the asking is done face-to-face, on the telephone, online, at home, at the office, or at school, every instance of asking produces nonresponse, and usually a lot of it.
Indeed, nonresponse is as ubiquitous as asking. Response rates for all types of asking have been falling, domestically and internationally, in both commercial and university-based endeavors, at least since declines were first documented in the 1950s.
Even a small amount of nonresponse (say, 10-14 percent) can, as askers themselves admit, bias results and produce what they, according to their own professional standards (AAPOR), call “unacceptable” answers. Yet nonresponse rates typically run at least 50 percent, and rates in the 75-80 percent range are not uncommon. By the askers’ own standards such results are unacceptable, but they are nevertheless accepted: there are many published studies with response rates of 10 and even 5 percent. If survey researchers refused to use “unacceptably” low response rates, they’d be out of business.
Some of the material in this post is from my book, The Problem with Survey Research, pp. 140-41.