2020 Election Polls, Like All Polls, Not Representative


Many have pointed out, including Finn McHugh in "Why the polls keep getting it wrong" and Mary Kay Ling and Doree Lewak in "Why election polls were so wrong again in 2020," that the 2020 election polls were not representative of the population queried. Trump supporters were underrepresented, as the election results demonstrated.

The more important point, however, is not that the 2020 election polls were unrepresentative but, rather, that essentially all polls are unrepresentative. Indeed, with rare exceptions, all survey research efforts, including public opinion surveys on any topic queried, are unrepresentative.
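The mechanism is easy to demonstrate. Here is a minimal simulation (in Python; the support level and response rates are hypothetical, chosen only for illustration): if supporters of one candidate answer pollsters less often than supporters of the other, the respondents who remain overstate the second candidate's support, no matter how many people are contacted.

```python
import random

random.seed(42)

# Hypothetical population: 52% support candidate A, 48% candidate B.
TRUE_SUPPORT_A = 0.52

# Assumed differential response rates (illustrative only): B supporters
# answer pollsters less often than A supporters.
RESPONSE_RATE = {"A": 0.60, "B": 0.45}

def run_poll(n_contacts):
    """Contact n_contacts random voters; return A's share among respondents."""
    responses = []
    for _ in range(n_contacts):
        voter = "A" if random.random() < TRUE_SUPPORT_A else "B"
        if random.random() < RESPONSE_RATE[voter]:  # does this voter respond?
            responses.append(voter)
    return responses.count("A") / len(responses)

print(f"True support for A:        {TRUE_SUPPORT_A:.1%}")
print(f"Poll estimate (n=10,000):  {run_poll(10_000):.1%}")  # roughly 59%
```

The respondents here are a perfectly real sample; they are simply not a representative one, and the estimate is wrong accordingly.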

Use Unrepresentative Results

Although valid conclusions or implications about whole populations cannot be deduced or inferred from unrepresentative results, survey researchers do not discard such results. Instead, they use them and present them as indicative, or at least suggestive, of what's really going on.

The Census Bureau, it's generally agreed, uses unrepresentative answers: e.g., answers that undercount by ethnicity, income status, home ownership, and geographic location. Allan Cox, in his study of American corporations, admits he doesn't have a representative sample from which "to make statistical probability statements" about corporate executives but, of course, he does use unrepresentative data to make statements about the characteristics of corporate executives. Carole Jurkiewicz and Kenneth Nichols, in their study of ethics in Master of Public Administration curricula, refer to their "significantly" unrepresentative results that "restrain the generalizability of the[se] results." But they're hardly, if at all, restrained, claiming that "fundamental findings emerged" from the unrepresentative results.

Unrepresentative results are used in asking-based studies of organizations, sex, eldercare programs, HIV infection rates, opinions about government, ethics, violence, child mental health programs, the number of civilians killed by U.S. troops, illicit drug use, alcohol consumption, the effects of corporate policies, and so on. Odds are, if it's been asked about, and essentially everything has been and still is asked about, the answers are unrepresentative. That is to say, the ubiquity of asking guarantees that there's lots of unrepresentative, which is to say incorrect, information about everything. Think about that!

Justify Use of Unrepresentative Results

The widespread use of unrepresentative results is accompanied by numerous and varied justifications. Many practitioners (e.g., consultants, marketers, and the like) defend the use of the unrepresentative results of Internet surveys by claiming that they're "useful," or that the "data [answers] . . . provid[e] important insights."

Cox says his "confidence" in "the quality of the samples . . . offsets any loss of corporate representativeness entailed in the design." Schnaiberg (author of an appendix in Cox's book) justifies using these unrepresentative results on the grounds that they're the most representative available at the date of publication. Cheryl King and Camilla Stivers (authors of Government Is Us), although admitting that their data about public sector personnel are unrepresentative, justify using them by asserting that they're "food for thought" and, as such, generalizable to "people who work in government agencies."

Jurkiewicz and Nichols justify their use of unrepresentative data differently: not, as Cox did, by asserting confidence in unrepresentative results, and not, as King and Stivers did, in terms of eating and thinking, but by asserting that they were the "first" to report the "fundamental findings" drawn from their small, unrepresentative, non-generalizable data.

John Stevens and his co-askers, in their study of information systems and productivity, justify using responses that are unrepresentative, and therefore cannot be generalized, on the grounds that the results are "sufficiently" representative for their purposes and that the results can be manipulated by the statistical tools they are using. Here are their exact (and, I might add, peer-reviewed) words: "The sample . . . is considered sufficiently representative and large enough to be authoritative for the multivariate analysis performed here and for the level of generalizability sought in basic research or construct validation." Absolutely!
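The flaw in that reasoning can also be made concrete with a small simulation (Python again; the two proportions are hypothetical). A larger sample and heavier statistical machinery shrink the margin of error, but they shrink it around the wrong number; bias from unrepresentativeness does not wash out as the sample grows.

```python
import math
import random

random.seed(0)

TRUE_MEAN = 0.52     # hypothetical true population proportion
BIASED_MEAN = 0.59   # what an unrepresentative sample centers on

def biased_poll(n):
    """Draw n responses from the biased pool; return (estimate, 95% margin of error)."""
    hits = sum(random.random() < BIASED_MEAN for _ in range(n))
    p = hits / n
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, moe

for n in (100, 1_000, 10_000, 100_000):
    p, moe = biased_poll(n)
    print(f"n={n:>6}: estimate {p:.3f} +/- {moe:.3f}   (true value: {TRUE_MEAN})")
```

Precision is not accuracy: a tighter interval around a biased estimate is just a more confident wrong answer, and no multivariate analysis performed on the same responses changes that.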

Pollsters and survey researchers of all stripes always find ways to justify their almost-always unrepresentative results. If they didn’t make these efforts and concoct justifications acceptable to others in the asking professions, they’d be out of business.

This post includes material from my book, The Problem with Survey Research, pp. 270-72, wherein sources are cited.

About George Beam

I'm an educator and author. The perspectives that inform my interpretations of the topics of this blog are behaviorism and system analysis. Specific interests include American politics, socioeconomic issues, survey research, and effects of the Internet and attendant hard- and software. I'm Associate Professor Emeritus, Department of Public Administration, Affiliated Faculty, Department of Political Science, University of Illinois at Chicago.