INTERNET EFFECTS: Facebook Collects, Follows, Stalks, Buys

“Facebook collects ninety-eight data points on each of its nearly two billion users.”

“Facebook also follows users across the Internet, disregarding their ‘do not track’ settings as it stalks them.”

“[Facebook] also buys personal information from some of the five thousand data brokers worldwide. . . .”

These quotes are from Sue Halpern, “They Have, Right Now, Another You”, New York Review of Books, 12/22/2016, p. 32.


Internet Effects: Defined by the Internet

We are in the Internet Age and that means the Internet shapes everything, including humans and their relationships with each other and with everything else.  Here’s Bill McKibben’s comment on this point: “Our accelerating disappearance into the digital ether now defines us–we are the mediated people, whose contact with one another and the world around us is now mostly veiled by a screen.  We threaten to rebel, just as we threaten to move to Canada after an election.  But we don’t; the current is too fierce to swim to shore.”

Don’t swim against the Internet current!  Accept the Internet and its effects!  Find solutions to problems via the Internet, specifically by building open Internet problem-solving networks.  Scientists are doing this, and so did the US Patent Office.


Music, Listeners, and Silence

This seems to me an insightful comment about music: “[John] Cage reminded us that what music communicates ‘is always going to be largely dependent on the subjectivity of the listener irrespective of the presentation and intention of the composer.  That’s where the beauty of music/sound lies.’  Perhaps this is the hardest lesson that Cage taught classical music.  Strip away the mythology of great composers and the stories their music told and all that’s left is sound.  Then listening becomes a proactive responsibility.  Music is no longer entertainment.  You must sit, sometimes in silence, and listen hard.” (London Review of Books, 12/15/16)


Unreliability of Polls Acknowledged

The unreliability of polls is acknowledged by more people with each passing day.  A case in point is a Chicago Tribune article by art and art history professor Eddie Chambers.  Here are a few of his comments:

“What we witnessed on the night of the [2016 presidential] election was a lesson in the unreliability of polls”,

“the media seemed . . . intent on peddling the quack science of polling”

For more information about the extent of the acknowledgement of the unreliability of polls, see my book, The Problem with Survey Research.

Posted in Survey Research

Pre-Election Polling Unreliable

Pre-election polling is unreliable and acknowledged as such by survey researchers.  Here are some comments from a newsletter published by a university survey research center, followed by a copy of the newsletter:

“many media polls showing a variety of results”

“many aspects of survey design . . . influence the results of a poll”

“five different pollsters/organizations took different approaches to adjusting and analyzing the data to estimate support for Trump and Clinton. . . . The findings for the presidential election question varied quite a bit, from one analysis showing Clinton up by 4 points to one showing Trump up by 1 point.”

(The unreliability of pre-election polls–and of all other forms of survey research, including interviews–is the theme of my book, The Problem with Survey Research.)

“No. 69

Methods for Analyzing Polling Data and Poll Results

Election polling represents one of the most visible examples of survey research. Especially during the campaign leading up to presidential elections, there are many media polls showing a variety of results. There are many aspects of survey design that can influence the results of a poll and one of these is the approach taken to analyze the data. To illustrate this point, Nate Cohn of the New York Times Upshot recently gave raw data from a pre-election poll conducted by Siena College to four pollsters. The data were also analyzed by researchers at the NYT Upshot.

These five different pollsters/organizations took different approaches to adjusting and analyzing the data to estimate support for Trump and Clinton. First, they took different approaches to making the survey sample representative of the population, using different estimates of the population (e.g., the Census or voter registration files) and different approaches for doing so (e.g., traditional weighting versus statistical modeling). Pre-election polls are unique in that their accuracy also is dependent on predicting who votes. The five analysts used different definitions of who is a likely voter (using self-report, voter history, or a combination of the two) and therefore included different subsets of the respondents when estimating support for the two presidential candidates. The findings for the presidential election question varied quite a bit, from one analysis showing Clinton up by 4 points to one showing Trump up by 1 point.”
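The mechanism the newsletter describes can be made concrete with a small sketch.  Everything below is invented for illustration (the respondents, screens, and weights are hypothetical, chosen so the two analyses reproduce the quoted spread of roughly Clinton +4 versus Trump +1); real analysts work with far larger samples and richer adjustment models, but the principle is the same: the same raw answers, screened and weighted differently, yield different margins.

```python
# Hypothetical illustration of the newsletter's point: identical raw poll
# data can produce different margins depending on the likely-voter screen
# and the weighting scheme.  Every number below is invented.

# (candidate, says_will_vote, voted_last_time, census_weight, voter_file_weight)
respondents = [
    ("Clinton", True,  True,  2.6, 2.00),
    ("Clinton", True,  False, 2.6, 1.50),
    ("Clinton", False, True,  1.0, 2.95),
    ("Trump",   True,  True,  2.4, 2.50),
    ("Trump",   True,  False, 2.4, 1.00),
    ("Trump",   False, True,  1.0, 2.55),
]

def margin(data, screen, weight_index):
    """Clinton-minus-Trump margin, in points, among screened respondents."""
    kept = [r for r in data if screen(r)]
    total = sum(r[weight_index] for r in kept)
    clinton = sum(r[weight_index] for r in kept if r[0] == "Clinton")
    trump = sum(r[weight_index] for r in kept if r[0] == "Trump")
    return round(100 * (clinton - trump) / total, 1)

# Analyst A: keeps self-reported likely voters, weights to census targets.
a = margin(respondents, lambda r: r[1], 3)
# Analyst B: keeps respondents with a vote history, weights to a voter file.
b = margin(respondents, lambda r: r[2], 4)
print(a, b)  # two different answers from the same raw data
```

Neither analyst's choice is obviously wrong, which is precisely why “the approach taken to analyze the data” moves the result.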

Posted in Survey Research

Confidence in Polls Eroding

The election of The Donald, contrary to predictions of pollsters, has helped erode confidence in polls.  The pollsters haven’t given up trying to do what can’t be done–that is, trying to produce reliable information from answers to questions–but more and more people are recognizing that the asking method is fundamentally flawed.  A case in point is an article by Eddie Chambers, “Only one poll matters; drop the rest” (Chicago Tribune, 11/22/16), in which he calls attention to “the unreliability of polls” and “the quack science of polling”, and observes that “time after time, polling has been exposed as a sham”.

For a complete assessment of polling and all other forms of survey research, see my book, The Problem with Survey Research.

Posted in Survey Research

Interviewer Effects Unexamined

Interviewers affect answers and, thereby, make answers unreliable.  As indicated in the newsletter below from a university survey research center, survey researchers acknowledge as much and, moreover, admit they do not examine these effects.  This newsletter is part of what I call the Counter Literature to Survey Research, most of which is provided by survey researchers themselves.  They know asking does not produce reliable information but they continue to ask because they’re addicted to asking and to answers.

“Interviewers . . .  influence respondent behaviors [answers] in systematic ways . . . . For example, . . . an interviewer’s observable characteristics — such as gender, age and race/ethnicity — may cue respondents to relevant social norms that then become integrated into their answers. This . . . most likely . . . happen[s] when interviewer characteristics are directly relevant to the questions being asked. For example, interviewer gender may become relevant when respondents are answering questions about gender-related topics. . . .

interviewer variance represents generalized differences across interviewers that are more idiosyncratic in nature, for example, how they phrase questions or probe responses. These differences may account for measurable amounts of unique variance across individual interviewers.

In most survey data analyses, both interviewer effects and interviewer variance remain unexamined, despite the fact that they . . . have significant influence on statistical estimates.”
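To make the quoted notion of “interviewer variance” concrete, here is a minimal sketch on invented data: it splits the variance of answers into a between-interviewer component, i.e., the share of the total spread attributable to which interviewer asked the questions.  The interviewers and scores below are all hypothetical.

```python
# Hypothetical data: the same questionnaire administered by three
# interviewers; each list holds answers on a 1-5 scale.
answers_by_interviewer = {
    "interviewer_1": [5, 4, 5, 4],  # tends to elicit high scores
    "interviewer_2": [2, 3, 2, 3],  # tends to elicit low scores
    "interviewer_3": [4, 3, 4, 3],
}

all_answers = [a for scores in answers_by_interviewer.values() for a in scores]
grand_mean = sum(all_answers) / len(all_answers)

# Between-interviewer variance: spread of each interviewer's mean answer
# around the grand mean, weighted by how many answers each collected.
means = {k: sum(v) / len(v) for k, v in answers_by_interviewer.items()}
between = sum(len(v) * (means[k] - grand_mean) ** 2
              for k, v in answers_by_interviewer.items()) / len(all_answers)

# Total variance of all answers around the grand mean.
total = sum((a - grand_mean) ** 2 for a in all_answers) / len(all_answers)

share = between / total  # fraction of answer variance tied to the interviewer
print(round(share, 2))
```

If interviewers had no systematic effect, `share` would be near zero; in this invented example most of the variation in answers tracks the interviewer rather than the respondent, which is the kind of effect the newsletter says usually goes unexamined.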

If you want to find out what’s really going on, don’t ask.  That’s the theme of my book, The Problem with Survey Research.

Posted in Survey Research