Unrepresentative Samples and Results: Fatal Flaws in Survey Research

Updated December 8, 2020

Unrepresentative samples, and therefore unrepresentative results, are fatal flaws in survey research. With rare exceptions, all survey research efforts (polls, surveys, interviews, etc.) are unrepresentative and thereby produce unreliable answers, that is, answers that may or may not be accurate. The only way to know whether answers are accurate is to check or verify them against data from non-asking sources, say, observations or experiments. Survey researchers do not have this type of data; all they have is unreliable information.
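
To make the problem concrete, here is a minimal simulation, a hypothetical sketch rather than data from any actual survey; the population, opinion rate, and response probabilities below are all invented for illustration. It shows how a non-representative response pattern biases an estimate, and how collecting more responses does nothing to fix it:

```python
# A hypothetical sketch: an unrepresentative sample gives a biased
# estimate, and a larger sample does not repair the bias.
import random

random.seed(42)

# Invented population: 1,000,000 people, 30% of whom hold opinion X.
population = [1] * 300_000 + [0] * 700_000
true_rate = sum(population) / len(population)

def biased_sample(pop, n):
    # Unrepresentative responding: people who hold opinion X are
    # assumed (for illustration) to be far more likely to answer.
    sample = []
    while len(sample) < n:
        person = random.choice(pop)
        respond_prob = 0.9 if person == 1 else 0.3
        if random.random() < respond_prob:
            sample.append(person)
    return sample

for n in (100, 1_000, 10_000):
    est = sum(biased_sample(population, n)) / n
    print(f"n={n:>6}: estimated rate = {est:.3f} (true rate = {true_rate:.3f})")

# Typical output: estimates cluster near 0.56, not 0.30. A larger n
# shrinks random error but leaves the bias untouched; only checking
# against non-survey data would reveal the gap.
```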

Use Unrepresentative Results

Although valid conclusions about whole populations cannot be deduced or inferred from unrepresentative results, survey researchers do not discard them. Instead, they use those results and present them as indicative of what’s really going on.

The Census Bureau, it’s generally agreed, uses unrepresentative answers: e.g., answers that undercount by ethnicity, income status, home ownership, and geographic location. Allan Cox, in his study of American corporations, admits he doesn’t have a representative sample from which “to make statistical probability statements” about corporate executives but, of course, he does use unrepresentative data to make statements about the characteristics of corporate executives. Carole Jurkiewicz and Kenneth Nichols, in their study of ethics in Master of Public Administration curricula, refer to their “significantly” unrepresentative results that “restrain the generalizability of [these] results.” But they’re hardly, if at all, restrained, claiming that “fundamental findings emerged” from the unrepresentative results.

Unrepresentative results are used in asking studies of organizations, sex, eldercare programs, HIV infection rates, opinions about government, ethics, violence, child mental health programs, number of civilians killed by U.S. troops, illicit drug use, alcohol consumption, effects of corporate policies, and so on. Odds are, if it’s been asked about—and essentially everything has been, and still is, asked about—the answers are unrepresentative. This is to say, the ubiquity of asking guarantees that there’s lots of unrepresentative—that is, incorrect—information about almost, if not absolutely, everything. Think about that!

Justify Use of Unrepresentative Results

The widespread use of unrepresentative results is accompanied by numerous and varied justifications. Many practitioners (consultants, marketers, and the like) defend the use of the unrepresentative results of Internet surveys by claiming that they’re “useful,” or that the “data [answers] . . . provid[e] important insights.”

Cox says his “confidence” in “the quality of the samples . . . offsets any loss of corporate representativeness entailed in the design.” Schnaiberg (author of an Appendix in Cox’s book) justifies using these unrepresentative results because they were the most representative available when the Report was published. Cheryl King and Camilla Stivers (authors of Government Is Us), although admitting their data about public sector personnel is unrepresentative, justify its use by asserting that it’s “food for thought” and, as such, generalizable to “people who work in government agencies.”

Jurkiewicz and Nichols justify their use of unrepresentative data not, as Cox did, by asserting confidence in unrepresentative results, and not, as King and Stivers did, in terms of eating and thinking, but by asserting that they were the “first” to derive their “fundamental findings” from their small, unrepresentative, non-generalizable data.

John Stevens and co-askers, in their study of information systems and productivity, justify the use of responses that are unrepresentative, and therefore cannot be generalized, on the grounds that the results are “sufficiently” representative for their purposes and that the results can be manipulated by the statistical tools they are using. Here are their exact (and, I might add, peer-reviewed) words: “The sample . . . is considered sufficiently representative and large enough to be authoritative for the multivariate analysis performed here and for the level of generalizability sought in basic research or construct validation.” Absolutely!
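
For a sense of why statistical machinery cannot rescue an unrepresentative sample, here is a hedged sketch with invented data (mine, not Stevens et al.’s): a regression run on a sample selected on the outcome, a classic form of non-response bias, recovers the wrong coefficient no matter how sophisticated the later analysis:

```python
# A hypothetical sketch: analysis applied to a selection-biased
# sample stays biased. The data-generating process is invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented truth: productivity = 2.0 * it_spending + noise.
it_spending = rng.normal(0, 1, n)
productivity = 2.0 * it_spending + rng.normal(0, 1, n)

def ols_slope(x, y):
    # Ordinary least squares slope for a single predictor.
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

# The full population recovers the true coefficient (about 2.0).
print(f"population slope:    {ols_slope(it_spending, productivity):.2f}")

# Unrepresentative sample: only high performers return the survey,
# i.e., selection on the outcome variable.
mask = productivity > 1.0
print(f"biased-sample slope: {ols_slope(it_spending[mask], productivity[mask]):.2f}")

# The biased-sample slope is noticeably attenuated; no statistical
# tool applied afterward restores the cases that never responded.
```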

Survey researchers of all stripes always find ways to justify their almost-always unrepresentative results. If they didn’t make these efforts and concoct justifications acceptable to others in the asking professions, they’d be out of business.

This post includes material from my book, The Problem with Survey Research, pp. 270-72, wherein additional sources are provided.


About George Beam

I'm an educator and author. The perspectives that inform my interpretations of the topics of this blog are behaviorism and system analysis. Specific interests include American politics, socioeconomic issues, survey research, and effects of the Internet and attendant hard- and software. I'm Associate Professor Emeritus, Department of Public Administration, Affiliated Faculty, Department of Political Science, University of Illinois at Chicago.