Government And Politics Superseded By Internet+

In Usual Politics: A Critique and Some Suggestions for an Alternative, I described the bankruptcy/collapse of governments dubbed “democratic”. (A few people then, and more now, agree.) I described–correctly, I believe–the decline of legislatures, voting, and elected officials and the rise of what’s referred to today as “the administrative state”: a situation, beginning in earnest during Delano’s days in the house-white-built-by-blacks, in which administrative structures, procedures, and personnel came to pervade and dominate all aspects of government and politics. However, since early in the 21st Century (2007, to be exact), the administrative state, as well as almost all the rest of government and politics (usual, unusual, constitutional, left, right, center, Republican, Democrat, courts, elections, and so on), has been superseded in relevancy, efficiency, effectiveness–dare I say power?–by Internet+; that is, by the Internet plus attendant hardware (e.g., computers, smartphones, sensors) and software (e.g., Google, algorithms, the cloud). Internet+ is now the dominant reality.


Survey Researchers Acknowledge Need For Data From Non-Asking Sources

In Newsletter No. 95, copied below, which I received from a university survey research center in late September 2017, survey researchers acknowledge that survey research results/answers might not be accurate and that, therefore, answers must be checked with data from non-asking sources. That is to say, survey researchers are acknowledging what I call “The Problem with survey research”. The Problem is that answers to questions are unreliable. When all you have are answers to questions, it’s impossible to know whether any answer is correct or incorrect. The only way to know is to check or verify answers with information/data from two or more non-asking sources, such as observation (e.g., observing and measuring biological activity; i.e., biomeasures), experiments, and content analysis of documents.

Newsletter No. 95 identifies how respondents contribute to The Problem and calls for non-asking data to check their answers/responses. I have put my clarifying and/or correcting comments in brackets [ ] and bold.

“No. 95

One limitation of data from surveys is that they . . . rely on respondent self-reports. As a result, the accuracy of such data rely on the assumptions that respondents are able and willing to provide . . . accurate responses, and these assumptions may not always be true. First, survey respondents may not always be able to provide all types of information. They may not always know [do not always have relevant and correct information] . . . or be able to provide detailed medical or financial information from memory. [Asking for information means that askers are dependent upon the memory of respondents. To ask is to probe memory, and memory is innately elusive and discontinuous.] Respondents’ memory of a specific event may not be accurate (particularly for events that are frequent or regular, in which memories from similar events may be confounded) and respondents may have difficulty remembering when an event occurred (sometimes called event dating) even if they accurately recall the event itself. This is particularly problematic when respondents are asked to report on behaviors or experiences that occurred within a specific time frame (e.g., In the past 12 months, how many times did you go to see a doctor?). Second, respondents may not always be willing to answer honestly and completely. [Everyone knows that everyone, including respondents, lies and, depending on circumstances (e.g., who’s asking whom about what when), quite often. Everything is a topic of asking, and lies are told about everything.] Survey questions sometimes ask respondents about topics that are sensitive (e.g., sexual history) [The extent to which topics are considered sensitive by respondents also makes answers unreliable. The greater the sensitivity of question topics, the greater the effects on rates of response and on reports of the behaviors and attitudes investigated. Moreover, what’s considered sensitive varies by respondents’ childhood experiences, peer and professional socialization, present and past socioeconomic positions and functions, plans for the future, and so on. Consequently, any question topic can be sensitive.] or that respondents may want to answer in particular ways to give a more positive impression of themselves (e.g., turning out to vote in an election, attending church, or having egalitarian beliefs) or to avoid reporting negative opinions or behaviors (e.g., prejudice toward racial minorities, illegal drug use, or eating unhealthy foods). [Respondents make answers to questions unreliable because they tend to skew their replies, regardless of question topic, to correspond to commonly held social and organizational values and norms. Answerers, like everyone else, are not inclined to be witnesses against themselves; they tend not to say things, or hold opinions or beliefs, that will harm them, either in the eyes of others or legally.] Despite these limitations, there is strong evidence that survey responses are typically quite . . . accurate, [“Quite” accurate? This statement should read: there is evidence that some survey responses are accurate and that some are inaccurate. Survey researchers need to acknowledge that there’s extensive documentation that survey responses are inaccurate–for the reasons mentioned in Newsletter No. 95 (i.e., respondents’ memory, lying, etc.) and also (not mentioned in No. 95) because answers are skewed by asking instruments, by the settings in which questions are asked and answers given, and by the characteristics (e.g., gender, age) and behaviors of askers themselves.] but researchers are increasingly combining survey data with data from other sources [They’re “increasingly” using data from non-asking sources not because there is strong evidence that survey responses are . . . accurate, but because there’s extensive evidence that many survey responses are inaccurate]”.

For a complete statement of The Problem with survey research and how each of the four components of survey research (respondents, asking instruments, settings in which questions are asked and answers given, as well as askers themselves) contribute to The Problem, see my book, The Problem with Survey Research.

See also my blog post: Counter Literature to Survey Research.

Posted in Survey Research

Merge with AI

A recent development that might help us keep up with advances in the Internet and Internet-related hard- and software (computers, smartphones, sensors, the cloud, Google, etc.) comes from Elon Musk’s “newest venture, Neuralink, a California company that plans to develop a device that can be implanted into the brain. . . . The device would allow a person’s brain to connect wirelessly with the cloud, as well as computers and other brains with the implant. . . . ‘We’re going to have the choice’, says Musk, ‘of either being left behind and being effectively useless or like a pet–you know, like a house cat or something–or eventually figuring some way to be symbiotic and merge with AI’”. (Chicago Tribune, 8/23/17)

Posted in Internet Effects

Deceptive AAPOR Evaluation of 2016 Election Polls

The 2016 election polls were inaccurate, predicting a win for Email-Server-Hillary, whereas Mussolini-Arpaio-Trump prevailed and is now the Oval Office One. But pollsters, because they’re addicted to asking, are seldom able to admit their mistakes. Instead, they try to deceive by putting a positive gloss on their failures. In the newsletter below, “AAPOR Releases Evaluation of 2016 Election Polls”, which I received from a university survey research center, I indicate the deception in bold and give my comments on it in brackets [ ], bold, and italics.

No. 92

AAPOR Releases Evaluation of 2016 Election Polls

On May 4, the American Association for Public Opinion Research (AAPOR) released its much anticipated report concerning the accuracy of 2016 national and state election polls in the U.S. Key conclusions from that report include:

“National polls were generally correct and accurate [generally? “Generally” is not accepted in social SCIENCE. In science we need to know which specific polls, and how many of them, were correct and accurate, and which ones, and how many, were not] by historical standards” [The standard for scientific correctness and accuracy is correspondence with reality, not what pollsters accepted historically, in the past, for correctness and accuracy.]

“State-level polls showed a competitive, uncertain contest [this is a positive gloss on state-level polls that attempts to mute the following comment that these polls under-estimated Mussolini-Trump’s support] but clearly under-estimated Trump’s support in the Upper Midwest”  [A non-deceptive statement would read: State-level polls clearly under-estimated Trump’s support in the Upper Midwest.]

There were multiple reasons why the polls under-estimated support for Trump, [Yeah! Non-deceptive statement] including:

“Real late change in voter preference during the final week of the campaign”

Adjusting for over-representation of college graduates was necessary, but many polls failed to do so [Yeah! Non-deceptive statement]

“Some Trump voters who participated in pre-election polls did not reveal themselves as Trump voters until after the election, and they out-numbered late-revealing Clinton supporters” [Yeah! Non-deceptive statement]

“Ballot order effects may have played a role in some state contests, but they do not go far in explaining the polling errors” [Yeah! Non-deceptive statement]

Predictions that Clinton had a very high probability of winning “helped crystalize the erroneous belief that Clinton was a shoo-in for president, with unknown consequences for turnout” [Yeah! Non-deceptive statement]

“A spotty year for election polls . . . is not an indictment of all survey research or even all polling” [Failure IS an indictment! A non-deceptive statement would read: The failure of the 2016 presidential election polls to predict the winner is another one of the many examples of the unreliability of polling.]

For a complete assessment of polling and other forms of survey research, see my book, THE PROBLEM WITH SURVEY RESEARCH, available at Amazon and Google Books.

Posted in Survey Research

My Open Internet Problem-Solving Networks Similar to Facebook Groups/”communities”

I am working on a manuscript, Problem-Solving via the Internet: An Alternative to Nation-States, Governments, and Politics, in which open Internet problem-solving networks (e.g., Wikipedia, Peer-to-Patent) are presented as the most efficient and effective institutions for solving problems, now that we’re in what I call the Internet+ Age. (By “Internet+” I mean the Internet plus (+) attendant hardware (e.g., smartphones) and software (e.g., the cloud, Google).)

In the Chicago Tribune article below, Facebook Groups/”communities”, as described by Mark Zuckerberg, are similar to my open Internet problem-solving networks.  (I have bolded the most appropriate sentences.)

++++++++++

Facebook mission: Society building
CEO Zuckerberg aims to fight ills with virtual communities

Robert Reed
Dressed in his signature solid-color T-shirt and jeans, Facebook CEO Mark Zuckerberg came to Chicago on Thursday and outlined the start of a new chapter in the social network’s life.
Before describing his plan during a West Loop conference for a few hundred invited Facebook devotees, Zuckerberg disarmingly addressed the crowd with a couple of personal asides.
“Before we get started, I want to introduce myself. I’m Mark,” he said, prompting a chorus of chuckles and cheers from the attendees, who seemed to get a kick out of the tech billionaire’s smiling self-effacement.
That warm reception continued as Zuckerberg’s keynote speech went on to include big-screen Facebook photos of his young daughter, the family’s pet puli dog and his dad, who is recovering from heart surgery.
Having attended my share of CEO presentations, I can attest that Zuckerberg’s speaking style is unexpectedly open and welcoming. A young man of medium height and build, Zuckerberg comes across as conversational and extemporaneous — traits that are too rarely found among other senior-level corporate executives.
Of course, it would have been fun to see if he was the same during the give-and-take of a news conference. But Zuckerberg’s handlers kept him at arm’s length from the media, stressing that the CEO would not be taking reporters’ questions.
That’s too bad, because the flip side to Zuckerberg’s warm and fuzzy comments is a new, hard-nosed business strategy that deserves examination.
At the event, his narrative was about building communities and expanding Facebook’s basic user experience and approach.

In the next decade, the network will strive to build an untold number of virtual Facebook “communities” that rally groups of people locally and globally. Already there are ones that include new mothers, disabled veterans and even locksmiths.

There are many more to come.

Although Facebook community members may not know each other personally, they’ll increasingly opt to gather around a common interest, belief or problem that needs to be solved, Zuckerberg contends.

“In the next generation, our greatest opportunities and challenges we can only take on together — ending poverty, curing disease, stopping climate change, . . . stopping terrorism, ” Zuckerberg said.

To expedite this process, Facebook is providing a new virtual “toolbox” to help leaders of current and new “communities” manage posts, accept new members and get rid of people who are disruptive to a community site.

From there it gets a little fuzzy.
Still to be determined is the related business course of action for Facebook, which became a publicly traded company nearly five years ago and has a market capitalization of $446 billion.
Facebook declined to discuss with me the business side of the new community mission, opening the way for outside speculation.
Here goes:
From a public relations standpoint, this new approach may help Facebook beat back criticism of not acting quickly or decisively enough during the last election cycle to curb fake news or extremist posts.
It could also help Facebook tee up some new advertising opportunities. The formation of these highly targeted groups could prove attractive to major advertisers looking to connect with the likes of working parents, sports fans or folks coping with certain medical conditions or habits.
“This strategy could help advertisers target the consumers possibly more accurately, which could increase ad revenues,” Ali Mogharabi, equity analyst at Morningstar, wrote to me in an email after the Facebook event.
There’s also industry talk of Facebook being interested in backing some long-form programming, similar to the shows being produced by Amazon and Netflix.
Perhaps these communities can help advance that programming approach?
This year, Zuckerberg has been traveling the country — notably the Midwest — seeking out the counsel of community, business and political leaders.
I surmise it’s a fact-finding tour away from the inevitable insulation of Facebook’s headquarters campus in Silicon Valley.
Where will all this travel ultimately take Facebook? That’s not clear yet.
Still, this week’s visit to Chicago shows the casually dressed but hard-charging Zuckerberg is definitely a man on the go.
roreed@chicagotribune.com
Twitter @reedtribbiz


Effects of World Wide Web

“[P]assing laws changes little, or takes a generation or longer to have effect. Deep changes to people’s lives can be made almost instantly, however, by the introduction of a new technology that everyone wants. . . . [T]he world wide web . . . has had far greater influence on the world than Theresa May, Vladimir Putin or Angela Merkel ever will. The engine of history is engines”. Adrian Bowyer.

Posted in Internet Effects

Question Wording, Change in Wording, and Rosa’s Law

Question wording affects answers. The first sentence in the comment below (No. 85, Rosa’s Law and Surveys) is an acknowledgement of this fundamental flaw in survey research: “Question wording plays a critical role in [affecting] how respondents . . . answer . . . questions.”

However, the main point of No. 85 is about a change in wording–“Survey researchers . . . should be aware of this change [in wording, from “mental retardation” before Rosa’s Law to “intellectual disability” after the Law was passed in 2010] and the possible implications for prevalence estimates, particularly if data from before and after 2010 are being compared or combined”–rather than the fundamental issue of question wording. Regardless of whether or not changes are made in question wording–actually, regardless of how questions are worded–QUESTION WORDING AFFECTS ANSWERS. And there’s no way–no way!–to word questions so that questions don’t affect/skew/bias answers.

Answers obtained are results of questions asked. That is, words in questions “manufacture” answers; they can “create”, actually bring into existence, “opinions [and other objects of investigation] that might not otherwise be evident.” Answers aren’t “out there”, so to speak, waiting for questions to find them; rather, questions make answers.

See also:

Question Wording Makes Answers Unreliable

Question Wording Affects/Biases/Skews Answers

Question Wording Skews Answers

Question Wording and Stated Opinions

The Problem with Survey Research

and Counter Literature to Survey Research

 

“No. 85
Rosa’s Law and Surveys about Disabilities

Question wording plays a critical role in how respondents interpret and answer survey questions. Question wording in surveys can change over time in response to advances in questionnaire design, changes in society or culture, or changes in definitions. This is particularly true when survey researchers are using terminology that is associated with a medical diagnosis or legal definition.

One such change occurred in October 2010, when President Obama signed what is known as Rosa’s Law. This legislation required the federal government to replace the term “mental retardation” with “intellectual disability.” The law is named after Rosa Marcellino, a girl with Down syndrome who was nine years old when it became law, and who, according to President Barack Obama, “worked with her parents and her siblings to have the words ‘mentally retarded’ officially removed from the health and education code in her home state of Maryland.” Rosa’s Law is part of a series of modifications to terminology – beginning in the early 1990s – that have been used to describe persons with what we now refer to as intellectual disabilities.

One result of this law is that federal surveys such as the National Health Interview Survey changed the terminology used in survey questions from asking about “mental retardation” to asking about “intellectual disability, also known as mental retardation.” Survey researchers using the NHIS data on intellectual disabilities should be aware of this change and the possible implications for prevalence estimates, particularly if data from before and after 2010 are being compared or combined. In addition, researchers who are designing surveys that measure intellectual disabilities may want to use terminology and question wording that is consistent with federal guidelines.”

Posted in Survey Research