Pre-Election Polling Unreliable

Pre-election polling is unreliable, and survey researchers acknowledge as much. Here are some comments from a newsletter published by a university survey research center, followed by a copy of the newsletter:

“many media polls showing a variety of results”

“many aspects of survey design . . . influence the results of a poll”

“five different pollsters/organizations took different approaches to adjusting and analyzing the data to estimate support for Trump and Clinton. . . . The findings for the presidential election question varied quite a bit, from one analysis showing Clinton up by 4 points to one showing Trump up by 1 point.”

(The unreliability of pre-election polls, and of all other forms of survey research, including interviews, is the theme of my book, The Problem with Survey Research.)

“No. 69

Methods for Analyzing Polling Data and Poll Results

Election polling represents one of the most visible examples of survey research. Especially during the campaign leading up to presidential elections, there are many media polls showing a variety of results. There are many aspects of survey design that can influence the results of a poll and one of these is the approach taken to analyze the data. To illustrate this point, Nate Cohn of the New York Times Upshot recently gave raw data from a pre-election poll conducted by Siena College to four pollsters. The data were also analyzed by researchers at the NYT Upshot.

These five different pollsters/organizations took different approaches to adjusting and analyzing the data to estimate support for Trump and Clinton. First, they took different approaches to making the survey sample representative of the population, using different estimates of the population (e.g., the Census or voter registration files) and different approaches for doing so (e.g., traditional weighting versus statistical modeling). Pre-election polls are unique in that their accuracy also is dependent on predicting who votes. The five analysts used different definitions of who is a likely voter (using self-report, voter history, or a combination of the two) and therefore included different subsets of the respondents when estimating support for the two presidential candidates. The findings for the presidential election question varied quite a bit, from one analysis showing Clinton up by 4 points to one showing Trump up by 1 point.”
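The effect the newsletter describes can be sketched with a toy example. Everything below is invented for illustration (it is not the Siena College data): a handful of hypothetical respondents, made-up demographic weights, and two likely-voter screens. The point is only that the same raw responses can yield different margins depending on which subset of respondents an analyst counts as likely voters.

```python
# Hypothetical respondents: (candidate, demographic weight,
# self-reported intent to vote, voted in the previous election).
# All values are invented for illustration.
respondents = [
    ("Clinton", 1.1, True,  True),
    ("Trump",   0.9, True,  True),
    ("Clinton", 1.0, True,  False),
    ("Trump",   1.0, True,  True),
    ("Clinton", 0.8, False, True),
    ("Trump",   1.2, True,  False),
    ("Clinton", 1.0, True,  True),
    ("Trump",   1.0, False, True),
]

def margin(rows):
    """Weighted Clinton-minus-Trump margin, in percentage points."""
    total = sum(w for c, w, *_ in rows)
    clinton = sum(w for c, w, *_ in rows if c == "Clinton")
    trump = sum(w for c, w, *_ in rows if c == "Trump")
    return 100 * (clinton - trump) / total

# Screen 1: likely voter = self-reported intent only.
self_report = [r for r in respondents if r[2]]

# Screen 2: likely voter = self-reported intent AND vote history.
history = [r for r in respondents if r[2] and r[3]]

print(f"Self-report screen: {margin(self_report):+.1f}")
print(f"History screen:     {margin(history):+.1f}")
```

With these invented numbers, the self-report screen produces a tied race while the stricter history screen shows one candidate ahead by several points, even though both analyses start from identical interviews. This is the same mechanism the newsletter credits for the spread from Clinton +4 to Trump +1 among the five analysts.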

About georgebeam

George Beam is an educator and author. The perspectives that inform his interpretations of the topics of this blog, as well as his other writings and university courses, are system analysis, behaviorism, and Internet effects. Specific interests include quality management, methodology, and politics. He is Associate Professor Emeritus, Department of Public Administration; Affiliated Faculty, Department of Political Science; and, previously, Head, Department of Public Administration, University of Illinois at Chicago.
This entry was posted in Survey Research.
