How Green Is My EV?

Updated March 2023

As I’ve suspected, it takes a lot of carbon to produce an EV. Benjamin Svetkey agrees. He writes in The Wrap:  “some reports suggest that the huge amounts of carbon released in manufacturing its [Tesla’s] lithium batteries means you’d have to drive a Tesla for three to seven years before breaking even, carbon-wise, with an internal combustion engine.”


And that’s just the battery! What about the carbon released in producing the rest of this 3,648-to-4,722 lb. car: fenders, tires, wheels, and all the rest? Answer: “manufacturing an electric vehicle generates more carbon emissions than building a conventional car.”

Green/sustainable production won’t do. Reduced consumption will.

I’m not proposing we reduce consumption via government edict/socialism, and certainly not by buying an EV rather than a gasoline-powered car, but, rather, by not buying a car at all. (Many people don’t need a car, but most do need transportation.)

Reduced consumption is the goal. We have to consume some things: water, electricity, food, shelter, etc.—in these instances, the aim is less: less water, less electricity, and so on. The focus should always be on consumption, on stopping or reducing it.

We can’t produce our way out of environmental destruction. It can’t be done. This isn’t Smokey and the Bandit.  


The Truth about Donald Trump and Other Notable Liars: Post-Truth as Enhanced Expression

Updated April 14, 2023

Our post-truth situation began around 2016, during the US presidential election and the UK’s European referendum (Brexit). That’s when a large minority of top-level public officials, their appointees, and their followers used the Internet and its attendant hardware and software, as well as other media, especially talk radio, to enhance their expressions: in this instance, to affirm what most people reject (there was a larger crowd at Trump’s inaugural than at Obama’s, Britain sends £350 million a week to Brussels, Trump won the 2020 election, etc.) and to deny what most people accept (science, global warming, Trump lost the 2020 election, etc.).

Although post-truthers talk as if they deny the reality accepted by essentially everyone else, or as if they believe in a different reality, the fact is, they don’t; they’re not schizophrenics; they accept and believe in the same reality as essentially everyone else. Their behavior proves it: stopping at red lights, taking their meds, and so on. They behave like the rest of us because they believe like the rest of us. They, too, believe in the laws of physics (though few if any of us can name even one of them), and in science, medicine, and research. For all intents and purposes, it’s only what post-truthers say—their talk—that distinguishes them from the rest of us. We don’t say what they say.

Indeed, the widespread conviction that tens of millions of people actually believe what the facts don’t support—for instance, that Trump won the 2020 election—is based, as the Columbia University sociologist Musa al-Gharbi points out, on faulty analysis of their talk: on “survey responses, followed by overly credulous interpretations of those results [responses] by academics and pundits,” and on the wide distribution and consumption of those results/responses made inevitable by Internet+ Age technologies.

When post-truthers talk this way, when they respond to questions with obvious lies about what has occurred, when they assert conspiracy theories (Dominion voting machines were rigged in Venezuela), they’re not saying what they actually believe but, rather, expressing their emotional commitments, their beliefs, their interests, their ideological and/or partisan political preferences. Their subjectivities override their knowledge of the facts.[1] “What looks like a disagreement over political facts,” writes Michael Hannon, a philosophy professor at the University of Nottingham, “is often just partisan cheerleading or party bad-mouthing.” Or as al-Gharbi puts it, “the big lie seems to be more about social posturing than making sincere truth claims.”

Effects of Post-Truthers’ Expressions

Post-truthers’ subjectively-driven expressions about political figures, policies, and past and present events have significant effects because what post-truthers say is, or can be, heard (and seen) by everyone who’s connected to the electronic planetary network. Many within the network, Martin Gurri’s “public,” the disaffected and the disenchanted, take these expressions as their own and send them, most often via social media, to friends and contacts. The lies, denials, fake news, and absurdities proliferate–expression/talk is enhanced–and the ranks of post-truthers grow.

A significant effect of the magnification of post-truth expressions, and of the consequent growing numbers of post-truthers, is increasing attacks on the elites/establishment, furthering the erosion of their authority. Al-Gharbi’s explanation of Trump’s Big Lie is an example of how this happens: “Within contemporary rightwing circles, a rhetorical embrace of the big lie is perceived as an act of defiance against prevailing elites. It is recognized as a surefire means to `trigger’ people on the other team. A demonstrated willingness to endure blowback (from Democrats, media, academics, social media companies et al) for publicly striking this `defiant’ position is interpreted as evidence of solidarity with, and commitment to, `the people’ instead of special interests; it’s taken as a sign that one is not beholden to `the establishment’ and its rules.”

Amplified post-truth expressions also significantly alter public/political discourse about numerous issues (climate change, public health, guns, abortion, and so on). With more post-truth information, as well as increasing numbers of post-truthers themselves, in the mix, public and political debates, conversations, and discussions, on- and offline, are shaped less by facts than by expressions of personal belief and interest, emotion (fear, affection, anger, and so on), ideology, and identity (gender, ethnicity, nationality, etc.). Uncertainty about everything is spawned. Whereas fact-based discourse is communication that lends itself to agreement at the outset and to resolution of disagreements through negotiation and compromise—all of which is necessary for so-called “democracies”[2]—subjective-based discourse is hardly, if at all, communication in the sense of imparting or exchanging information but, rather, a continuous clash of conflicting personal persuasions. Democracy thereby becomes frozen, deadlocked, less and less able to be an effective and efficient form of government.

Footnotes

[1] This is not to deny the inherent biases in everyone’s interpretation and presentation of the facts. Ben Sasse makes this point by calling attention to “the centrality of intellectual traditions, intellectual frameworks, and intellectual communities. It turns out that no people—not even scientists—are disembodied automatons. We have beliefs, and jobs and mortgages. Scientists have deadlines and a schedule and scholarly discussion partners and buddies in the breakroom. We all bring self-justifying biases to bear at the beginning of every day. . . . We come from communities and places, and we have passions and experiences and investments. . . . [O]ur views are shaped by a whole lot more than `just the facts, ma’am.’” Ben Sasse, Them: Why We Hate Each Other—and How to Heal (St. Martin’s Press, 2018), pp. 85-86.

[2] The people never rule, nor do the officials they elect. When I use the word “democracy,” I mean a government in which voters elect a chief executive and/or legislators and, in some instances, judges, all of whom exercise some power concerning some matters sometimes. 



Respondents Make Answers to Sensitive Questions Unreliable

One of the numerous ways respondents make answers unreliable is that they skew their responses to the extent they consider question topics sensitive. The greater the sensitivity of a topic, the greater the effect on rates of response and on reports of the behaviors and attitudes investigated. For example, mental health is a sensitive matter for most people, and when mental-health-related questions are put to parents, the accuracy of parents’ reports of their children’s hospital and outpatient experiences—including “characteristics of the child reported on, characteristics of the illness that may be associated with the health [programs, and] characteristics of the [program] being reported about”—is adversely affected.

Moreover, sensitive question topics contribute to the unreliability of answers because what’s considered sensitive varies by respondents’ childhood experiences, peer and professional socialization, present and past socioeconomic statuses, organizational position and functions, plans for the future, and so on. Consequently, any question topic can be sensitive. Managers and executives involved in a business organization’s budget making, for instance, usually consider expenditure costs, projected salaries, and other financial numbers sensitive. Also, budget numbers are sensitive matters for elected officials because expenditures for programs, tax rates and revenues, debts, and surpluses are related to their present position and reelection. But this is not the case for most citizens caught up in immediate personal concerns.

When researchers only have answers to questions, they’re not able to identify which, if any, answers have been skewed, or to what extent, by the possible sensitivity of the question topic. The only way to know is to check or verify answers with information from two or more non-asking sources, such as observations, experiments, and documents. Askers/survey researchers don’t have information from these sources; all they have are answers; all they have is unreliable information. 
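
To make this concrete, here is a minimal Python sketch of the kind of check the paragraph above describes: comparing an answer against two non-asking sources and flagging disagreement. The figures, field name, and source names are invented for illustration, not drawn from the book.

```python
# Invented example: a parent's reported count of a child's hospital visits,
# checked against two non-asking sources (documents and observation).
survey_answer = {"visits_last_year": 2}      # what the respondent said
hospital_records = {"visits_last_year": 5}   # document source
observation_log = {"visits_last_year": 5}    # observation source

def verify(answer, *sources, key="visits_last_year"):
    """Return True only when every non-asking source matches the answer."""
    return all(src[key] == answer[key] for src in sources)

if not verify(survey_answer, hospital_records, observation_log):
    print("Answer conflicts with non-asking sources; treat it as unreliable.")
```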

When you want to find out what’s really going on, don’t ask. That’s the theme of my book, The Problem with Survey Research, wherein on pp. 92-93, there’s a version of this post with supporting references.


Our Environment Has Us Boxed In–and What to Do about It

In a previous post, We Can’t Think Outside the Box—and What to Do about It, I discussed how we’re boxed in by our language and can’t think of anything that isn’t already designated by that language. Auguste Comte (1798-1857) furthers my point when he discusses the environment—which, of course, includes our language—as an even more inclusive/extensive restraint on our thinking:

“[One] . . . reason why the constitution of [a] new system cannot take place before the destruction of the old [is] that without that destruction no adequate conception could be formed of what must be done. Short as is our life, and feeble as is our reason, we cannot emancipate ourselves from the influence of our environment. Even the wildest dreamers reflect in their dreams the contemporary state: and much more impossible is it to form a conception of a true political system, radically different from that amidst which we live.” Social Physics: From the Positive Philosophy of Auguste Comte, p. 407. 

What to Do? Focus on Our Present Situation/Environment

Since we’re boxed in by our environment, by our present situation, we should not waste our energies trying to understand the past or predict the future. Instead, we should focus our attention on the present situation/environment—which I name the Internet+ Age—and seek to understand its effects on us. Thereby we become best equipped to deal with the present and with what lies ahead.


Centrality of Code

Code is central to life in the Internet+ Age. The Internet, itself, is built out of code (e.g., the principal protocols/codes known as TCP/IP and DNS). Moreover, everything on the Internet, all of the information on it, all of its content, is made from code. All programs or software are made out of code; everything that appears on our screens, all apps, websites, icons, .com(s), .org(s), .gov(s), .net(s), pictures, videos, music, texts, colors, social networks, . . . everything! If it’s on the Internet, it’s code! Even some things not visible on our screens, such as the cloud, machine learning, artificial intelligence, facial recognition, and all malware (including computer viruses, worms, and spyware) are made of code. 
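
As a small illustration of this point, here is a hedged Python sketch showing that even the Internet’s basic plumbing, DNS and TCP/IP, is exercised entirely through code. The host name is a stand-in, and the snippet assumes network access; it uses only the standard library.

```python
import socket

host = "example.com"  # hypothetical host, for illustration only

# DNS: code translates a human-readable name into a numeric address.
addr_info = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
ip_address = addr_info[0][4][0]
print(f"DNS resolved {host} to {ip_address}")

# TCP/IP: code opens a reliable byte-stream connection to that address.
with socket.create_connection((host, 80), timeout=5) as conn:
    print(f"TCP connection established to {conn.getpeername()}")
```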

Code, by making all that’s online, makes, as Douglas Rushkoff points out, “[t]he . . . environments in which we all spend . . . so much of our time these days . . . where we do our work and play.” By making all that’s online and thus our environments, code controls online behavior. Code, says William Mitchell in City of Bits, “control[s] when you can act, what kinds of actions you can take, and who or what you can affect by your actions.”  

Another indicator of the centrality of code is that code controls access to the information/content on the Internet. Barriers that limit access, such as separate chat rooms and digital envelopes, are built out of code. “Programming [coding],” Lawrence Lessig points out in Code Version 2.0, “determines which people can access which digital objects and which digital objects can interact with other digital objects.”
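
A toy sketch of Lessig’s point might look like the following; the users, objects, and permission table are invented for illustration, not taken from any real system.

```python
# A permission table, written in code, decides who may access which objects.
permissions = {
    "alice": {"report.pdf", "photos/"},
    "bob": {"photos/"},
}

def can_access(user: str, obj: str) -> bool:
    """Return True only if the permission table grants user access to obj."""
    return obj in permissions.get(user, set())

print(can_access("alice", "report.pdf"))  # True: the code grants access
print(can_access("bob", "report.pdf"))    # False: the code blocks access
```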

Code is also used to overcome blocks to access; VPNs, for instance, are code built to get around such blocks. This is to say, access to content, as well as its regulation or blocking, is a matter of code. With appropriate software/code, any content/code can not only be blocked or regulated but also accessed. When the topic is access, it’s all a matter of code.

In addition, and with great consequence, code controls—via the code-made filter bubbles in which we all live—which news stories appear on our screens and which ones do not. As Eli Pariser writes in The Filter Bubble, “the power to shape the news rests in the hands of bits of code, not professional human editors.” Code—as written by coders and programmers—is the news editor of the Internet+ Age, blocking some stories from our news feed while bringing others to our attention.
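
A deliberately simple sketch of the kind of logic Pariser describes might look like this; the scoring rule, click counts, and headlines are invented, and real recommender systems are vastly more elaborate.

```python
# Code, not a human editor, decides which stories reach the screen.
user_clicks = {"politics": 9, "sports": 1, "science": 0}

stories = [
    ("Election results contested", "politics"),
    ("New exoplanet discovered", "science"),
    ("Cup final tonight", "sports"),
]

def personalized_feed(stories, clicks, top_n=2):
    """Rank stories by the user's past clicks and drop the rest."""
    ranked = sorted(stories, key=lambda s: clicks.get(s[1], 0), reverse=True)
    return ranked[:top_n]

for headline, topic in personalized_feed(stories, user_clicks):
    print(headline)  # the science story never appears: filtered out by code
```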

The centrality of code in the Internet+ Age is also demonstrated in its increasing role in offline life. Although code is not the totality of offline life, as it is online, code is, nevertheless, the language that increasingly runs the offline world. As evident in specifically designed software—and when joined with other defining components of the Internet+ Age, such as radical connectivity, smartphones, and big data—code erodes traditional institutions and fosters breakthroughs to significantly different ones, such as Airbnb and Uber.

More specifically, code—as Jeremy Keeshin points out in Read Write Code—is common in a number of professions, including financial trading, economics, and throughout the sciences. Moreover, since the early 2000s, code has played a larger role in other professions and businesses, such as advertising, marketing, sales, public relations, and operations.

With each passing day, more and more offline commercial, governmental, and social activities are governed by code. The thousands of rules and instructions encoded in software and connected to the Internet run traffic lights, guide airplanes, manage automobile functions (such as gasoline-air ratios and ignition), run police databases, and control energy grids. 
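
For instance, the encoded rules running a traffic light can be sketched, in grossly simplified form, as a small state machine; the phases and durations below are illustrative, not drawn from any real controller.

```python
import itertools

# Each phase encodes a rule: (light color, duration in seconds).
PHASES = [("green", 30), ("yellow", 4), ("red", 30)]

def run_signal(cycles=1):
    """Step through the encoded phases. A real controller adds sensor
    inputs, timing hardware, and fail-safes; this only shows the idea
    of behavior governed by rules written in code."""
    steps = itertools.islice(itertools.cycle(PHASES), cycles * len(PHASES))
    for color, duration in steps:
        print(f"Light is {color} for {duration}s")
        # a live controller would hold each phase, e.g. time.sleep(duration)

run_signal()
```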

From the most important matters to the mundane: even a workaday shopping trip, as a BBC Teach post points out, “now relies on code to make it run smoothly.” Today, it’s practically impossible for anyone, in Nikhil Abraham’s words, to “make it through the day without interacting with something built with code.”

 From the totality of what’s online to more and more of what’s offline, it’s mostly, when it’s not completely, a matter of code.


“America’s Greatest Philosopher, Charles Sanders Peirce”

Updated January 25, 2022

George Gilder, in his book, Life after Google: The Fall of Big Data and the Rise of the Blockchain Economy, p. 104, says Charles Sanders Peirce is America’s greatest philosopher. Here’s the context (pp. 104-05) in which he makes that assessment: “America’s greatest philosopher, Charles Sanders Peirce, expounded this underlying reality—namely, that artificial intelligence systems achieve their utility from human languages and other symbol systems, including the computer languages and mathematical reasoning that program them—when he developed his theory of signs and symbols, objects and interpreters. Although he wrote some 150 years ago, Peirce’s insights remain relevant to the latest software package or machine learning claim. In words that Turing would echo in describing his `oracle,’ Peirce showed that symbols and objects are sterile without `interpretants,’ who open the symbols to the reaches of imagination. Peirce’s `sign relation’ binds object, sign, and interpreter into an irreducible triad. It is fundamental to any coherent theory of information that every symbol be linked inexorably to its object by an interpreter, a human mind. An uninterpreted symbol is meaningless by definition, and any philosophy that deals in such vacuities is sure to succumb to hidden assumptions and interpretive judgments.”
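
For readers who think in code, Peirce’s triad can be modeled, very loosely, as a toy data structure; this is my construction, not Peirce’s notation, and the example strings are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignRelation:
    obj: str                     # the thing referred to
    sign: str                    # the symbol standing for it
    interpreter: Optional[str]   # the mind that links sign to object

    def meaning(self) -> str:
        # Without an interpreter, the triad is broken and nothing is meant.
        if self.interpreter is None:
            return "uninterpreted symbol: meaningless by definition"
        return f"{self.interpreter} takes '{self.sign}' to mean {self.obj}"

print(SignRelation("smoke over the hill", "smoke", "a hiker").meaning())
print(SignRelation("smoke over the hill", "smoke", None).meaning())
```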

I also am partial to Peirce. My MA thesis (University of Pittsburgh, 1957) is titled “Empiricism and Certainty in C. S. Peirce and A. J. Ayer.”


Respondents’ Interest in Question Topics Makes Answers Unreliable

Updated February 17, 2022

The interest respondents have in question topics is one of the many ways they make answers unreliable. Those who appear to be interested in a topic respond at higher rates and provide different answers than those who, presumably, have less, or no, interest. Issues or topics investigated can generate, in asker terminology, “selection bias,” affecting answers.
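
A small simulation can make the mechanism vivid; the response rates and opinion levels below are invented for illustration, not taken from any survey.

```python
import random

random.seed(0)  # reproducible illustration

population = []
for _ in range(100_000):
    interested = random.random() < 0.3             # 30% care about the topic
    approval = 0.8 if interested else 0.4          # interest correlates with the answer
    responds = random.random() < (0.6 if interested else 0.1)
    population.append((approval, responds))

true_mean = sum(a for a, _ in population) / len(population)
respondents = [a for a, r in population if r]
survey_mean = sum(respondents) / len(respondents)

print(f"True population average: {true_mean:.2f}")    # ~0.52
print(f"Survey average:          {survey_mean:.2f}")  # ~0.69, skewed by who responded
```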

It’s not that every answer affected by respondents’ interest in the question topic is necessarily incorrect or inaccurate; some affected answers can, nevertheless, correspond to what’s really going on. But since survey researchers/askers only have answers, they’re not able to determine which, if any, are correct or incorrect. The only way to know is to check or verify answers with information from two, preferably three or more, non-asking sources, such as observation, experiments, and other “proper”—as I call them in my book, The Problem with Survey Research—methods of data collection and research designs. Askers don’t have information from these sources; all they have is unreliable information.

If you want to find out what’s really going on, don’t ask. That’s the theme of The Problem with Survey Research, wherein you can find a version of this post on page 92 with supporting references. 


Respondents Make Answers Unreliable by Skewing Them to Correspond to Commonly Held Values and Norms

Updated February 17, 2022

One of the many ways respondents make answers unreliable is that they skew their answers, regardless of question topic, to correspond to commonly held social and organizational values and norms. When asked about their behavior (objective phenomena), respondents say they perform socially desirable acts, and when asked to state their opinions, values, and the like (subjective phenomena), they answer consistent with prevailing preferences, priorities, and rules. This is to say, respondents present, as Cook and Selltiz phrase it, “socially accepted picture[s]” of themselves. And when there are a number of options on a questionnaire or poll, they usually select, in Ericsson and Simon’s words, “the socially desirable alternative”: the option that puts them in a good light. For example, some respondents to national election polls say they are “undecided” about whom they are going to vote for because they think that’s a socially desirable answer: an answer indicating they’re open-minded and (in contrast to respondents who say they have decided) more deliberative in obtaining all possible information about candidates before deciding how to vote. Answerers, like everyone else, are not inclined to be witnesses against themselves; they tend not to say they did things, or hold opinions or beliefs, that will harm them, either in the eyes of others or legally.

Of course, it’s possible that some respondents’ answers will not be skewed by commonly held values and norms, but because survey researchers/askers only have answers, they can’t determine which, if any, answers are skewed or not skewed. The only way to know is to check or verify answers with information from two or more non-asking sources, such as observation, experiments, and documents. Askers don’t have information from these sources; all they have is unreliable information. 

If you want to find out what’s really going on, don’t ask. That’s the theme of my book, The Problem with Survey Research, where on p. 92, you can find a version of this post with supporting references.   


One Reason Respondents Make Answers Unreliable Is They Don’t Have Relevant and Correct Information

Updated December 21, 2021

One of the many ways respondents make answers unreliable is that many don’t have relevant and correct information. Rather than admit ignorance, the tendency is to guess. Others devoid of appropriate information, but thinking otherwise, give answers that may be off the mark.

A major reason respondents do not have information that would allow them to answer correctly, accurately, or completely is that—although they are informed about many matters—they are misinformed about many important, and not-so-important, topics, including politics, organizations, public policies, government expenditures, religions, the environment, information technology, drugs, sex, and economics.

 Respondents are misinformed, in part, because governments, corporations, schools, religious institutions, trade unions, and other organized interests—via print and electronic media—affect their understandings, perspectives, values, and opinions so that they usually think—and give answers to questions—that reflect the values, priorities, and perspectives of these groups, associations, businesses, and government agencies. 

Respondents are also misinformed by so-called “intellectuals,” professors, schoolteachers, and writers who are, in many instances, to use Eric Griffiths’ words, “peddlers of ideas[;]. . . . cogs in the global machine which processes information and misinformation.” More than a few history books contain misinformation about important topics and events, polluting the minds of readers, at least some of whom become respondents who pass on the distortions and falsities. Many elementary and high school textbooks are out of date, and this misinformation can stay in the mind and, in later years, drive answers to questions.

Also, numerous incomplete descriptions and explanations of various events, as well as countless illusions, lies, and errors of fact, including those produced by professional and academic survey researchers, and by bloggers, Tweeters, Facebook posters, YouTubers, et al. are broadcast in the media and skew at least some, probably many, answers. 

Because survey researchers only have answers, they don’t know which, if any, answers they receive are correct or incorrect. The only way to know is to check or verify answers with information from two or more non-asking sources, such as observation, experiments, and documents. Survey researchers do not have information from these sources; all they have are answers; all they have is unreliable information.

For a discussion of all the weaknesses of survey research, as well as brief descriptions of the methods of data collection and research designs that produce reliable information, see my book, The Problem with Survey Research.


Lying Is One of the Many Ways Respondents Make Answers Unreliable

Updated February 21, 2022

In a previous post, I identified many ways respondents make answers unreliable. Here, I want to focus on one of them: lying.

Everyone knows that everyone, including respondents, lies and, depending on circumstances (e.g., who’s asking whom about what where), quite often. Everything is a topic of asking (Chapter 3, “Ask about Everything,” The Problem with Survey Research), so it’s reasonable to conclude that lies are told about everything.

U.S. Presidents—and other high-level politicos, including diplomats and members of Congress—lie when asked about war and peace, and untold other matters, domestic and foreign. These are government’s “official lies.” Candidates for elected offices lie. Lying, of course, is not limited to American government, but rather is the norm in all regimes.

Corporate executives lie; regularly they deceive citizens, stockholders, and competitors. Hospital officials lie. Organizational personnel in all functions and at all levels lie. Suppliers lie to purchasers, stating on questionnaires they have appropriate accreditations when they do not. Clergy lie, and not only Roman Catholic Jesus-let-the-little-children-come-to-me-pedophile-buggering priests and abusive (and abused) nuns. Mormons, each and every one a saint, lie.

Actually, lying is a daily affair that permeates all of life. It’s not an incidental, rare, or exceptional activity. Moreover, lying is not dispensable; it’s necessary and, at times, desirable and morally justified.

Because everyone’s lying, at least sometimes, survey researchers, more than likely, are receiving some answers that are lies. But since survey researchers/askers only have answers, they’re unable to determine which answers, if any, are lies and which, if any, are not. The only way to know is to check or verify answers with information from two or more non-asking sources, such as observations, documents, and other “proper” methods of data collection and research designs (Part Six, “Proper Methods and Research Designs,” The Problem with Survey Research). Askers don’t have information from non-asking sources; all they have are answers; all they have is unreliable information.

A version of this post with supporting documentation can be found on p. 87 of The Problem with Survey Research.
