Counter Literature to Survey Research

The Counter Literature to Survey Research, which I’m developing in this post, is needed because survey researchers, as well as those who rely on and promote survey research–and that includes a lot of people!–are addicted to asking and answering, and, as we all know, there’s no sense arguing with the addicted!  However, it does make sense to develop the Counter Literature to Survey Research that appeals to those not addicted to asking and helps educate the next generation of social science researchers, popular-media investigators and reporters, as well as the general public, so that confidence in survey research is eroded, which, in turn, makes observation, experimentation, and other “proper” (as I name them) methods more attractive and more extensively used.  As proper methods become more attractive and more extensively used, reliable knowledge is thereby attained and problem-solving optimized.

My book, The Problem with Survey Research, is, as far as I know, the only complete, 100% demolition of survey research.  In the words of one reviewer, it’s “[a] full-throated, high decibel, root and branch assault on surveys”.

However, the Counter Literature to Survey Research also includes articles, parts of books, blog posts, and other sources, many written by survey researchers themselves and by those relying on survey research who, to varying extents, acknowledge and/or demonstrate the extensive and fundamental weaknesses of the asking method.  For example, the 60 contributors to Survey Nonresponse acknowledge and document the growing extent of nonresponse: e.g., “participation in surveys is declining over time”; “all sectors of the survey industry–academic, government, business, and media–are suffering from falling response rates” (p. 41); “there is ample empirical evidence that response rates are declining internationally”; “Nonresponse is indeed an increasing problem in the developed world” (p. 52); etc.  Because these 60 contributors are addicted to survey research, they can’t abandon their “fight against nonresponse” (p. 53), even though it’s a losing campaign given that nonresponse is increasing in spite of all their efforts to “control” (p. xv), let alone reduce, it.  All of this “suffering” in exertions to “control” increasing nonresponse (p. xv) is willingly endured “to improve” asking procedures (p. xiii).  Falling Response Rates! Nonresponse Is An Increasing Problem!

*****

An early contribution to the Counter Literature to Survey Research is an article by survey researcher Daniel Katz, “Do Interviewers Bias Poll Results?” [Public Opinion Quarterly, Vol. 6, No. 2 (Summer 1942), pp. 248-68].  He presents information and analysis that support his affirmative answer: “Interviewers . . . [are] a source of bias in public opinion studies. . . . [W]hite-collar interviewers . . . [and] working-class interviewers, [t]hough both . . . worked under the same instructions, . . . did not find the same public sentiment [i.e., they did not obtain the same answers] on labor and war issues” (p. 248).  Katz demonstrates “the influence of the social status of the interviewer upon the findings he reports” (ibid.).  Interviewers A Source Of Bias!

*****

Another early contribution to the Counter Literature to Survey Research is Hyman’s 1944 article assessing respondents, “Do They Tell the Truth?”, in which he answers: not always and–depending upon who’s asking whom about what–not often.  “The distortions” between what people say and what they do, Hyman writes, “are significant” (p. 559). [Herbert Hyman, “Do They Tell the Truth?” Public Opinion Quarterly, Vol. 8, No. 4 (Winter, 1944-45), pp. 557-59.]   Distortions Significant!

*****

Speaking of respondents, as well as truth and not telling it, consider “Little White Lies and Social Science Models: Correlated Response Errors in a Panel Study of Voting” by Stanley Presser and Michael Traugott [Public Opinion Quarterly, Vol. 56, No. 1 (Spring 1992), pp. 77-86].  At the outset, Presser and Traugott (P&T) write that “respondent self-reports may [my emphasis] bias survey data” (p. 77), a conditional eliminated on the next page when respondents’ answers/words are checked against records of actual behavior: “A number of studies using administrative records show that retrospective reports of voting contain considerable error” (p. 78).  In this article about “misreporting” (p. 78)–a nicer word than lying–P&T also identify and describe those doing the misreporting; viz., the “misreporters” (ibid.), also named “`biased respondents'” and “`those incorrectly claiming that they had voted'” (ibid.).  [It’s also the case, as P&T mention, that The American Voter discussed misreporting–so The American Voter is another contributor to the Counter Literature to Survey Research.  How much so requires further investigation.]  P&T also refer to “inaccuracy . . . [in] three surveys” (p. 80) and the need for “[i]ncreased attention to systematic measurement error in . . . [s]urveys about subject matters that are seen as socially desirable or undesirable” (p. 86).  Lies!     Considerable Error!     Misreporting!     Biased Respondents!     Inaccuracy In Three Surveys!     Systematic Measurement Error!

*****

Joseph Matarazzo and Arthur Wiens, in their book The Interview, use observational/behavioral data to demonstrate interviewer effects on answers.  Interviewer Effects!

*****

Steve Martin contributes to the Counter Literature to Survey Research with his Harvard Business Review blog post, Stop Listening to Your Customers.  “[T]here is”, he writes, “a fundamental problem with asking people what will persuade them to change: Most of the time they won’t know the answer.  It’s not that they won’t give an answer.  They’ll often provide plenty.  It’s just that the answers they provide will have a high likelihood of being wrong”.  Answers Wrong!

*****

Robert Weissberg says that his book, Polling, Policy, and Public Opinion, “is not an assault on . . . polling itself” (pp. 6-7), but it is!  Polls in general are slammed as “misleading” (p. vii); unrealistic because they do not force respondents to “select . . . among harsh reality-imposed options” (p. 14); because they are “manipulative public relations ventures” (p. 145) and “entertainment” (p. 176); and so on.  Moreover, and more specifically, he shows how poll results are skewed by question wording, asking instruments, respondents, pollsters themselves, and other components of the asking method.  Misleading!     Unrealistic!

*****

Lindsay Rogers writes in The Pollsters that he’s not against all forms of asking but, in arguing against asking about public opinion, he nevertheless does a first-rate demolition of survey research in general.  Here are two comments from the dust jacket: “A pungent demonstration that pollsters do not know what public opinion is and hence cannot measure it. . . . A forceful warning to those who rashly assume that the only shortcoming of the polls is their lack of accuracy”.  Chapter titles describe his assessments of aspects of survey research–e.g., Chapter 11: “Discusses the Framing of Questions and Difficulties with Interviewers: Bias and Cheating”; and Chapter 14: “Stresses the Importance of Those Who Reply `no opinion’, and Shows how Inattention to these Groups Makes the Results of Polls Misleading”.  On pp. 109-12 he discusses, with examples, “loaded” questions and how the meaning of words in questions varies by respondent.  Bias!     Cheating!     Misleading!     Loaded Questions!

*****

Shulamit Reinharz, in “The Ritual of Survey Empiricism” [On Becoming a Social Scientist, pp. 50-125] acknowledges that she “became completely disillusioned” with attitudinal surveys/polls (p. 69).  At one point she writes: “While working on the questionnaire, pondering its probable absurdity and remembering all the other questionnaires and forced-choice questions I have endured, I vowed never again to participate in or accept the results of an attitudinal questionnaire” (p. 76).    Probable Absurdity!     Never Again!

*****

Andrew Hacker, in his 29-paragraph review of five writings on surveys (including my book, The Problem with Survey Research), refers 20 times to survey “limitations”–about one limitation every paragraph and a half.  As a result, almost every one of his assertions that a particular survey helps us understand this or that is followed by a minifying or nullifying qualifier (“but”, “it remains to consider”, “seems too good to be true”, or some such) generated by a limitation of the very same survey that produced the initial-now-questionable-and-perhaps-negated understanding.  What a survey gives, a survey takes away.   20 Survey Limitations!

Even so, Hacker’s confidence in surveys is steadfast: “many of the Pew Center’s findings shed light on where the country is going”.
He’s aware of one of the most fundamental limitations of surveys; viz., surveys rely on what people say, but what people say does not necessarily–and, depending on who’s asking whom about what, often does not–correspond to what they actually do or think. Commenting on the declining number of whites “willing to say [my emphasis] their own race is natively more intelligent”, he writes: “In part, this may be cautiousness about what one says [my emphasis] aloud, even to anonymous interviewers”; i.e., what these respondents say does not correspond to what they’re actually thinking.    Respondents Cautious About What [They] Say!

Nevertheless, Hacker trusts surveys: “The General Social Survey . . . [produces] interesting findings”.

He mentions additional limitations of surveys, including: question wording biases answers, respondents lie, respondents are not informed, respondents “understate” and “exaggerate”, and survey answers are ambiguous. Also, he points out that in some instances, survey answers are conflicting or inconsistent and do not “meld” into a consistent or meaningful view about “where the country is moving”.  Question Wording Biases Answers!     Respondents Lie!     Respondents Are Not Informed!     Respondents Understate And Exaggerate!     Survey Answers Are Ambiguous!     Survey Answers Are Conflicting Or Inconsistent!

Still, Hacker accepts results of surveys, evident, for instance, when he writes that a survey-based chapter in one of the reviewed books is “revealing”.
Although he only hints at the limitation of nonresponse in such phrases as, “not all who pick up [telephone calls] are willing to talk”, I’m sure he knows that increasing nonresponse rates for most surveys make nonresponse an increasingly detrimental limitation. He also calls attention to the “hazards” of online surveys; e.g., it’s not possible to know who is answering: “It might be that someone with dementia is responding, it might be a teenager in Riga”.  Limitation Of Nonresponse!     Hazards Of Online Surveys!

In seeming disregard of recognized reasons and evidence against the asking method, Hacker’s faith remains firm: “surveys . . . [can] yield new, even unexpected information”.

Hacker also knows that question format is another limitation. Closed-ended survey questions, for instance, don’t allow “people [to] `speak’ on surveys . . . [because] they are choosing from options others have framed”. In addition, he acknowledges that survey researchers, themselves, make answers unreliable when they word questions to obtain the answers they want. For example, in discussing a result of questions concerning trends in views of marriage–namely, “that 39 percent of Americans agree that `marriage is becoming obsolete’”–he writes: “it’s hard to believe that so many were calling it `obsolete’ before an interviewer introduced the word”.  Question Format Is A Limitation!     Survey Researchers, Themselves, Make Answers Unreliable!

*****

Many fatal flaws of survey research are acknowledged in Overcoming Survey Research Problems: low response, errors in Web-based surveys, biases induced by incentives/bribes and by sensitive question topics, and so on.  Contrary to the title, the authors of this edited collection do not show how to “overcome” any survey research weakness.  Rather, they identify some, but not all, of the weaknesses; describe some, but not all, efforts/procedures to counter them; and assert, not that the obstacles have been “overcome” but, rather, that survey researchers “must constantly innovate. . . . change . . . methods” (p. 18).  In one instance the authors “offer an additional cautionary note” (p. 36) and, in the case of surveys on sensitive topics, insist “that even greater care be taken” (p. 49).  Low Response!   Errors!   Biases!

*****

Herbert Asher’s Polling and the Public is mostly identifications, discussions, and demonstrations of the deficiencies of the asking method.  In essence, he demolishes public opinion polling in particular, and survey research in general, with the stated aim “to help readers become wiser consumer[s] of public opinion polls” (p. xii).  In other words, Asher knows that asking is a fundamentally flawed procedure for finding out what’s really going on, but he remains committed to it.  Talk about addiction to asking!

In the Preface–on the very first page of the book!–he calls attention to the growing plight of pollsters: “In the past decade polling has faced new technical and methodological challenges.  Traditional telephone polling has become more difficult [and there’s been] a dramatic increase in Web surveys, some . . . simply contemporary versions of the pseudopolls often conducted by news media and other organizations” (p. xi).  Four lines later–and we’re still on the first page!!–he mentions again “the methodological challenges facing pollsters” (ibid.).  On the next page–that’s the second page of the book!–he mentions the “frequent misuse” of public opinion polls; e.g., the use of polls by candidates for public office, 501(c)s, and many other groups and organizations “to advance their own objectives”; and he also mentions “factors that . . . influence poll results” (p. xii).  New Technical And Methodological Challenges!   More Difficult!   Pseudopolls!   Frequent Misuse!   Influence Poll Results!

On the second page of the first chapter he writes that in “a number of instances . . . claims were made that a poll was representative . . . when, in fact [it was not]”; that “polls are increasingly used . . . to convince and even manipulate. . . . with [users of polls] trying to advance their cases by citing supportive poll results”; and he specifies some of “the factors that can [and do] affect poll results–such as question wording, sampling techniques, and interviewing procedures” (p. 2).  Still on page two but continuing to page three, he refers to “citizens . . . being manipulated by those who use polls . . . to promote their own ends”.  And at the bottom of page three, he again mentions pseudopolls, calling attention to “the huge growth in the use of pseudopolls–nonscientific and often biased polls”.  Not Representative!   Factors That Affect Poll Results[:] Question Wording, Sampling Techniques, And Interviewing Procedures!   Huge Growth In Pseudopolls!   Nonscientific Polls!   Biased Polls!

All of these weaknesses in polling and we’re only on page 3!  Even so, Asher remains a supporter of polling.  Talk about addiction to asking!

Beginning on page 8 and continuing through page 18, Asher discusses “FRUGging” (“fund-raising under the guise of surveying”), “SUGging” (“selling under the guise of [survey] research”), and pseudopolls (“nonscientific and often biased”).  He also points out that “[i]n recent years, respondent participation rates have declined” (p. 11).  In addition, he writes that “the use of a public opinion poll as a ploy to raise money is . . . widespread” (p. 12); that in these types of surveys, questionnaire “items [are] carefully constructed to generate responses sympathetic to the sponsors’ objectives” (p. 13); that the “questions are loaded” (p. 14); that “the people who actually complete the questionnaires may not be representative” (p. 15); that CNN said that its “`Quick Vote [poll] is not scientific'” (p. 16); that these types of polls “are highly flawed and may give misleading portraits of public opinion” (p. 16); that they incorporate “unfair question wording” (p. 16); that “these unscientific enterprises . . . are becoming more prevalent in the United States” (p. 17); and that “bad polling practices and results can mislead many Americans” (p. 17).  He also mentions “the shortcomings of scientific polls” (p. 18).  Sensing that his case against asking might persuade his readers to abandon the asking method–and for sure he doesn’t want that to happen!–he states on page 18: “Readers should not construe this book as a condemnation of . . . polls”.  But it is!   FRUGging!   SUGging!   Participation Rates Decline!   Poll As A Ploy [Is] Widespread!   Questionnaire Constructed To Generate Responses Sympathetic To The Sponsors’ Objectives!   Not Representative!   Not Scientific!   Highly Flawed!   Misleading!   Unfair Question Wording!   These Unscientific Enterprises More Prevalent!   Mislead!   Shortcomings!

Referring on p. 18 to Americans as “major consumers of public opinion research on a wide variety of topics”, and probably not “smart consumers”, he writes: “Americans should be aware of the problems and limits of polls before they `buy’ anything from them.  Often someone is actively promoting the poll results to generate support for his or her objectives.  It might be the president, citing polls to argue that the American people support administration policies.  It might be a local builder, waving the results of a neighborhood poll purporting to show local support for a rezoning ordinance to permit his commercial construction project to go through.  It might be a regional transportation commission, citing poll results to justify the establishment of bus lanes on freeways.  Or it might be a friend or neighbor selectively using poll results to win an argument”.   Problems And Limits Of Polls!      Promot[e] Poll Results To Generate Support!

On page 19, he mentions “the increased challenges faced by traditional telephone polling”.   Increased Challenges Faced By Traditional Telephone Polling!

A few pages later (p. 24) Asher describes the commonly used “push” poll: “In a push poll, a campaign contacts a large number of voters under the guise of conducting a public opinion poll, presents some negative information about the other candidate, and then ask[s] some questions about that candidate. . . . The aim of push polls is not to acquire . . . information about the election contest but to push potential voters away from a particular candidate”.   Push Polls!   [See also my posts, Push Polls by Interest Groups, and All Polls Push]

Essentially, all of Chapter 2 (pp. 37-61) is a description, with examples, of how results of polling are biased and made unreliable because respondents are uninformed about question topics.  As Asher words it at the outset of this chapter: “Too often in a survey context, people respond to questions about which they have no genuine attitudes or opinions”.  Here are other comments in Chapter 2 about how respondents (and askers) skew answers and make them unreliable:  “[P]ollsters . . . include questions on topics that people know very little about” (p. 38).  “Few people in [interviews] . . . admit they are uninformed. . . .  So most . . . answer the questions, and the interviewer duly records their response.  Even on a self-administered mail questionnaire or on a Web-based survey, respondents may feel the need to show they are informed, and therefore they answer questions about which they have little information” (p. 39).  On page 41, Asher refers to “responses that are superficial responses to the interview stimulus”.  “[A]nother problem in attitude and opinion measurement [is]: What does it mean when a person replies to a survey question, `I don’t know’ or `I can’t decide’ or `It depends’?  Do these responses represent a genuinely neutral stance or something else?” (p. 47); maybe the refusal to state an opinion arises from “a strong sense of privacy” or a desire to “not . . . offend anybody” (p. 48).  “[C]haracteristics of respondents, as well as the properties of survey questions . . . affect the frequency of `no opinion’ and `don’t know’ answers” (p. 49).  “[R]esponse alternatives and their ordering affect survey responses” (p. 49).  “If a survey is measuring genuine attitudes, the responses should show some degree of stability over time.  Yet often survey responses fluctuate radically over a relatively short period” (p. 51).  Because the American citizenry is largely uninformed about many social, economic, and political issues, some people “wonder whether poll results can really tell us anything meaningful about citizens’ policy preferences on some matters” (p. 56).  “Americans express opinions on many things, even when they have little information, and . . . the very act of polling and asking questions often creates opinions that might not otherwise be evident” (p. 56).   Respondents Are Uninformed!   People Respond To Questions About Which They Have No Genuine Attitudes Or Opinions!   Few Interviewees . . . Admit They Are Uninformed!   What Does It Mean When A Person Replies, `I Don’t Know’?!   Characteristics Of Respondents Affect Frequency Of `No Opinion’ And `Don’t Know’ Answers!   Properties Of Survey Questions Affect Frequency Of `No Opinion’ And `Don’t Know’ Answers!   Response Alternatives Affect Responses!   Response Ordering Affects Responses!   Responses Fluctuate!   Wonder Whether Poll Results Can Tell Us Anything Meaningful!    Polling Creates Opinions!

Mostly all of Chapter 3, “Wording and Context of Question”, contributes to the Counter Literature to Survey Research; for example:  “Of all the pitfalls associated with public opinion polling, question wording is probably the one most familiar to consumers of public opinion research. . . . Individuals with an ax to grind can easily construct questions that will generate the responses they want” (p. 63).  “Even when the sponsor [of the poll] has no obvious ax to grind, question wording choices greatly influence the results obtained. . . . Less obvious than the impact of question wording is the effect on responses of the order and context in which specific questions are placed” (p. 64).  “[Q]uestions . . . can seem ambiguous to respondents” (p. 66). . . . “response alternatives that a question provides can affect survey results” (p. 70). . . . “visual design of self-administered questionnaires can affect responses” (p. 86). . . . “Personal circumstances, recent societal events, and the content of media coverage can alter the meaning of a survey question for respondents” (p. 87). . . . “snowball samples may not be generalizable” (p. 93). . . . “growing difficulty in contacting citizens by telephone” (p. 102). . . . “A growing concern among pollsters is the problem of nonresponse” (p. 107). . . . “An ongoing problem in polling is the tendency of selected samples to overrepresent females and underrepresent males” (p. 114). . . . “weighting has not been particularly effective in adjusting for the undercoverage of cell phone users in surveys” (p. 115).   Pitfall Associated With Public Opinion Polling: Question Wording!   Construct Questions That Will Generate The Responses They Want!   Question Wording Choices Greatly Influence The Results Obtained!   Effect On Responses Of The Order And Context In Which Specific Questions Are Placed!   Questions Can Seem Ambiguous To Respondents!   Response Alternatives That A Question Provides Can Affect Survey Results!   Visual Design Of Self-Administered Questionnaires Can Affect Responses!   Personal Circumstances, Recent Societal Events, And The Content Of Media Coverage Can Alter The Meaning Of A Survey Question For Respondents!   Snowball Samples May Not Be Generalizable!   Growing Difficulty In Contacting Citizens By Telephone!   A Growing Concern Among Pollsters Is Nonresponse!   An Ongoing Problem In Polling Is The Tendency Of Selected Samples To Overrepresent Females And Underrepresent Males!   Weighting Has Not Been Particularly Effective In Adjusting For The Undercoverage Of Cell Phone Users In Surveys!

(More from Asher, Polling and the Public, later.)

*****

For fun and facts, check out Pollster, a book of cartoons slamming both askers and answerers.  (Out of print; if Amazon doesn’t have it, try AbeBooks or your local library.)

*****

Benjamin Ginsberg contributes to the Counter Literature to Survey Research when he discusses, in Chapter 3 of his book The Captive Public (pp. 58-85), the effects of asking instruments on answers.  “The data reported by opinion polls”, he writes, “are actually the product of an interplay between opinion and the survey instrument.  As they measure, the polls interact with opinion, producing changes in the character and identity of the views receiving public expression” (p. 60).  Polls Produc[e] . . . Changes!  In the next sentence he refers again to “The changes induced by polling” (p. 60).  The Changes Induced By Polling!

“polling can affect . . . the beliefs of individuals asked to respond to survey questions” (p. 62).  Affect . . . Beliefs Of . . . Respond[ents]!

“polling has come to be one of the important factors that help determine how, whose, which, and when private beliefs will become public matters.  Indeed, . . . polling has done much to change the aggregation, cumulation, and public expression of citizens’ beliefs” (p. 62).  Polling . . . Change[s] The Aggregation, Cumulation, And Public Expression Of Citizens’ Beliefs!

“polls elicit subjects’ views on questions that have been selected by an external agency–the survey’s sponsors–rather than by the respondents themselves.  Polling thus erodes individuals’ control over the agenda of their own expressions of opinion.  With the use of surveys, publicly expressed opinion becomes less clearly an assertion of individuals’ own concerns and more nearly a response to the interests of others.  The most obvious consequence of this change is that polling can create a misleading picture of the agenda of public concerns, for what appears significant to the agencies sponsoring polls may be quite different from the concerns of the general public” (pp. 80-81).  Polling . . . Erodes Individuals’ Control Over . . . Their Own Expressions Of Opinion!   Polling . . . Create[s] A Misleading Picture Of . . . Public Concerns!

“Given the commercial character of the polling industry, differences between the polls’ concern and those of the general public are probably inevitable. . . . Because they seldom pose questions about the foundations of the existing order, while constantly asking respondents to choose from among the alternatives defined by that order–candidates and consumer products, for example–polls may help narrow what the public perceives to be realistic and social possibilities. . . . [P]olling fundamentally alters the character of the public agenda of opinion” (p. 82). Differences Between The Polls’ Concern And Those Of The General Public Are . . .  Inevitable!  Polls . . . Narrow What The Public Perceives To Be Realistic And Social Possibilities!  [P]olling Fundamentally Alters . . . Public . . . Opinion!

*****

Another contribution to the Counter Literature is a website post that I’ve titled, using words from the post itself:

Given The Limitations Of Surveys . . . One Might Ask Why Surveys Are Conducted At All.

“Surveys obtain information by asking people questions. Those questions are designed to measure some topic of interest. We want those measurements to be as reliable and valid as possible, in order to have confidence in the findings and in our ability to generalize beyond the current sample and setting. . . .
At the root of these measurement issues is how the survey questions are asked. Careful crafting of survey questions is essential, and even slight variations in wording can produce rather different results. Consider one of the most commonly studied issues in aging: activities of daily living (ADLs). ADLs refer to the basic tasks of everyday life such as eating, dressing, bathing, and toileting. ADL questions are presented in a staged fashion asking first whether the respondent has any difficulties in performing the task by themselves and without the use of aids. If any difficulty is reported, the respondent is then asked how much difficulty he or she experiences, whether any help is provided by another person or by an assisting device, how much help is received or how often the assisting device is used, and who is that person and what is that device”. Even Slight Variations In Wording Can Produce Rather Different Results!

“[P]revalence estimates of the number of older adults who have ADL difficulties vary by as much as 60 percent from one national study to another. In addition to variations in sampling design, . . . differences in the prevalence estimates result from the selection of which specific ADLs the respondents are asked about, how long the respondent had to have the ADL difficulty before it counts, how much difficulty the respondent had to have, and whether the respondent had to receive help to perform the ADL. . . .” Estimates . . . Vary By As Much As 60 Percent From One National Study To Another!
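To see how much such design choices can move the numbers, here is a toy Python simulation–all distributions are invented for illustration, and the names are my own–in which the very same population yields sharply different “ADL disability” prevalence estimates depending on how much difficulty counts and on whether receiving help is required:

```python
# Toy simulation (invented numbers): one population, two study
# definitions of "has ADL difficulty", two very different estimates.
import random

random.seed(3)
LEVELS = ["none", "some", "a lot", "unable"]

def person() -> dict:
    """Draw one hypothetical older adult."""
    level = random.choices(LEVELS, weights=[60, 20, 12, 8])[0]
    gets_help = level != "none" and random.random() < 0.5
    return {"difficulty": level, "gets_help": gets_help}

population = [person() for _ in range(100_000)]

# Study A counts any reported difficulty at all.
study_a = sum(p["difficulty"] != "none" for p in population)
# Study B counts only substantial difficulty AND receiving help.
study_b = sum(p["difficulty"] in ("a lot", "unable") and p["gets_help"]
              for p in population)

print(f"Study A prevalence: {study_a / len(population):.3f}")  # ~0.40
print(f"Study B prevalence: {study_b / len(population):.3f}")  # ~0.10
```

Nothing about the respondents changed between the two “studies”; only the definitional choices did.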

“A related concern is the correspondence between self-reported ADL abilities and actual performance levels”.  Concern [About] The Correspondence Between Self-Reported ADL Abilities And Actual Performance Levels!

“Even when reliable and valid questions are asked, there can still be serious problems due to missing data. Missing data comes in three varieties: people who refuse to participate (the issue of response rates), questions that are left unanswered (the issue of item missing values), and (in longitudinal studies) respondents who are lost to follow-up (the issue of attrition). The problem is that missing data results in (1) biased findings if the people for whom data is missing are systematically different, [and] (2) inefficient statistical estimates due to the loss of information”.  Serious Problems Due To Missing Data! Biased Findings! Inefficient Statistical Estimates!
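The first of those problems–bias when nonrespondents differ systematically from respondents–is easy to exhibit.  Here is a small, self-contained Python simulation (all rates are invented for illustration) in which people who engage in a behavior are less likely to respond, so the survey’s estimate of that behavior’s prevalence lands well below the truth:

```python
# Toy simulation (invented rates): nonrespondents who differ
# systematically from respondents bias the survey estimate.
import random

random.seed(0)
# 30% of this hypothetical population engages in the behavior.
population = [random.random() < 0.30 for _ in range(100_000)]

def responds(engages: bool) -> bool:
    # Assumed response rates: 40% if they engage, 70% if not.
    return random.random() < (0.40 if engages else 0.70)

sample = [x for x in population if responds(x)]
print(f"True prevalence:  {sum(population) / len(population):.3f}")  # ~0.300
print(f"Survey estimate:  {sum(sample) / len(sample):.3f}")          # ~0.197
```

No amount of extra sample size fixes this: the estimate converges to the wrong number.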

“The most important limitation of surveys has to do with internal validity, or the establishment of causal relationships between an independent variable (the cause, denoted by X) and a dependent variable (the effect, denoted by Y). . . . [Only] experimental designs meet the criteria for probabilistic causation. In survey research, however, this is not the case because assignment to the experimental versus control group has not been randomized and the time sequence has not been manipulated”.  Surveys [Do Not] . . . Establish . . . Causal Relationships!
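A toy simulation makes the point about randomization concrete.  In the sketch below (all numbers invented for illustration), a hidden confounder drives both X and Y, so survey-style observational data show a large X-Y “effect” even though X has no causal effect at all; randomly assigning X makes the spurious effect vanish:

```python
# Toy illustration (invented numbers): a hidden confounder C raises both
# X and Y, so X and Y are associated in survey data even though X has
# no causal effect on Y. Randomizing X breaks the spurious link.
import random

random.seed(1)
N = 100_000

def apparent_effect(randomize_x: bool) -> float:
    rows = []
    for _ in range(N):
        c = random.random() < 0.5                      # hidden confounder
        if randomize_x:
            x = random.random() < 0.5                  # experimenter assigns X
        else:
            x = random.random() < (0.8 if c else 0.2)  # C drives X in a survey
        y = random.random() < (0.8 if c else 0.2)      # C alone drives Y
        rows.append((x, y))
    def p_y_given(x_val: bool) -> float:
        matching = [y for x, y in rows if x == x_val]
        return sum(matching) / len(matching)
    return p_y_given(True) - p_y_given(False)

print(f"Survey (observational) 'effect': {apparent_effect(False):.3f}")  # ~0.36
print(f"Randomized experiment effect:    {apparent_effect(True):.3f}")   # ~0.00
```

Same data-generating world, opposite conclusions; the only difference is who decides the value of X.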

“Given the limitations of surveys . . . one might ask why surveys are conducted at all”.   And the answer is: survey researchers are addicted to asking; they can’t stop doing what they know they shouldn’t.

*****

Contributions to the Counter Literature to Survey Research are provided by a university survey research unit’s Survey News Bulletin that I periodically receive.  With rare exception, the Bulletins acknowledge the unreliability of answers to questions.  The following is my edited version of a Bulletin concerning the unreliability of answers to sensitive questions.  (Just so you know I’m not making this up, the complete Bulletin, No. 34, is reproduced below.)

Sensitive questions are “for example . . . whether a respondent has engaged in risky sexual behavior or used illegal drugs, the extent to which a respondent holds negative racial attitudes, or whether a student has cheated on an exam. . . . Sensitive questions . . . are intrusive, questions where there is a threat of disclosure, or questions for which there are answers that might make the respondent appear socially undesirable. People may deal with surveys that contain sensitive questions by
not participating in the survey (unit nonresponse)!,
not answering specific questions (item nonresponse)!, or
not answering them honestly! . . . . [lying!]

There are many factors that may affect responses to sensitive questions including
respondent’s tendency to engage in socially desirable responding [another form of lying!] ,
mode of data collection!, and
interviewer characteristics and behavior!

There “are strategies used in surveys” to counter effects of these factors but “these methods are sometimes
quite complex [or impossible] to implement!,
often result in reduced statistical power!, and may
not always work as intended!”

No. 34:
Asking Sensitive Questions:
One challenge of using surveys to collect data is that they rely almost exclusively on self-report data. As such, they rely on respondents to be both able and willing to honestly and completely answer survey questions. One type of question that is a particular challenge in surveys is the sensitive question. Such questions measure, for example, constructs like whether a respondent has engaged in risky sexual behavior or used illegal drugs, the extent to which a respondent holds negative racial attitudes, or whether a student has cheated on an exam. Tourangeau and Yan (2007) define sensitive questions as those that are “intrusive,” questions where there is a “threat of disclosure,” or questions for which there are answers that might make the respondent appear “socially undesirable” (p. 860). People may deal with surveys that contain sensitive questions by not participating in the survey (unit nonresponse), not answering specific questions (item nonresponse), or not answering them honestly (socially desirable responding). There are many factors that may affect responses to sensitive questions including the respondent’s tendency to engage in socially desirable responding, the mode of data collection, and interviewer characteristics and behavior. Indirect questioning techniques like the randomized response technique (RRT) and list technique (aka item count technique or unmatched count technique) are strategies used in surveys that allow respondents to answer a question to an interviewer in a way that protects their anonymity. Unfortunately, these methods are sometimes quite complex to implement, often result in reduced statistical power, and may not always work as intended. Another strategy in asking survey questions is to try to normalize the undesirable behavior or opinion being asked about or providing reassurances about the confidentiality of responses.
See: Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133, 859-883.
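The randomized response technique the Bulletin mentions can be made concrete.  Here is a minimal Python sketch of the forced-response variant (one common form of RRT; the prevalence and randomization probability are invented for illustration): each respondent privately randomizes, answering truthfully with probability 0.75 and otherwise simply saying “yes”, so no individual “yes” is incriminating, yet the analyst can still back out the aggregate prevalence:

```python
# Sketch of the forced-response variant of the randomized response
# technique (RRT). All numbers are invented for illustration.
import random

random.seed(2)
TRUE_PREVALENCE = 0.20   # fraction who really did the sensitive thing
P_TRUTH = 0.75           # answer truthfully with this probability...
N = 100_000              # ...otherwise just say "yes", regardless

def rrt_answer(did_it: bool) -> bool:
    """Respondent privately randomizes; a 'yes' is never incriminating."""
    return did_it if random.random() < P_TRUTH else True

responses = [rrt_answer(random.random() < TRUE_PREVALENCE) for _ in range(N)]
p_yes = sum(responses) / N

# P(yes) = P_TRUTH * prevalence + (1 - P_TRUTH), so solve for prevalence:
estimate = (p_yes - (1 - P_TRUTH)) / P_TRUTH
print(f"Observed 'yes' rate:  {p_yes:.3f}")     # ~0.40
print(f"Recovered prevalence: {estimate:.3f}")  # ~0.20
```

The privacy comes at a price: the injected noise inflates the variance of the estimate, which is exactly the “reduced statistical power” the Bulletin warns about.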

*****

The development and dissemination of the Counter Literature to Survey Research is beneficial because, as confidence in survey research erodes, other methods–such as observation and experiments–become more attractive and more extensively used.  Reliable knowledge is thereby attained and problem-solving optimized.

About georgebeam

George Beam is an educator and author. The perspectives that inform his interpretations of the topics of this blog–as well as his other writings and university courses–are system analysis, behaviorism, and Internet effects. Specific interests include quality management, methodology, and politics. He is Associate Professor Emeritus, Department of Public Administration; Affiliated Faculty, Department of Political Science; and, previously, Head, Department of Public Administration, University of Illinois at Chicago.