Survey Research Feedback: Assessing and Moving Beyond

Everyone’s asking for feedback: manufacturers, motel operators, professional associations, professors, physicians, hospitals, cancer centers, newspapers, government agencies, Facebook, Twitter, TV personalities; indeed, it’s probably impossible to find a single individual, group, or organization that doesn’t. And everyone is asked for feedback: consumers, professionals, students, patients, readers, citizens, Facebookers, Twitterers; is there anyone who hasn’t been asked? Asking and answering feedback questions has become ubiquitous, an integral part of modern life, unconsciously and, thus, unmindfully accepted as a way to generate reliable and useful information. We need to bring into our rational discourse, and reassess, feedback that’s produced by asking for it. I call such feedback survey research feedback, and I begin my evaluation of it by contrasting survey research feedback with naturally occurring feedback.

Feedback, as a natural phenomenon (in biology, for example), is the part of the output of any living thing that returns as input to the living thing to regulate its further output. Natural feedback (excluding major and sudden disruptions to the system in which the feedback occurs) always works; i.e., via positive and negative feedback, natural systems operate/perform optimally.
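
To make the contrast concrete, here is a minimal sketch (in Python, with invented numbers) of a negative feedback loop of the kind found in natural systems: part of the output is measured, returned as input, and used to correct the next output. The set point, gain, and starting value are illustrative assumptions, not drawn from any particular biological system.

```python
# Minimal sketch of a negative feedback loop: output is measured and
# returned as input, and the deviation from a set point regulates the
# next output. All values are invented for illustration.

SET_POINT = 37.0   # desired output level (e.g., body temperature in C)
GAIN = 0.5         # how strongly the system corrects deviations

def regulate(output: float) -> float:
    """Return a corrected output: the deviation feeds back as input."""
    error = SET_POINT - output       # the feedback signal
    return output + GAIN * error     # negative feedback correction

level = 40.0  # start away from the set point
for step in range(10):
    level = regulate(level)
    print(f"step {step}: output = {level:.3f}")
# The output converges toward SET_POINT: the system self-regulates.
```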

In contrast, survey research feedback (concerning a program, policy, or any other human-made object) consists of answers to questions and is, as I’ll demonstrate later, unreliable information; i.e., information that may, or may not, be correct or accurate. Because survey research feedback is unreliable, it may not be able to keep the system of which it is a part operating optimally. (Also, as I’ll cover in a bit more detail below, in many instances survey research feedback does not keep the system operating optimally because it is not used to regulate output at all; instead, it is used to provide opportunities for respondents to express themselves, and for other purposes.)

Answers to questions are unreliable whether they’re produced by feedback requests or by any other instrument or procedure that asks questions of respondents. That’s the way it is because when all you have are answers to questions, it’s impossible to know whether the answers are correct or incorrect. It’s not that every answer is incorrect; obviously, some are correct, but when all you have are answers, it’s impossible to determine which are which. The only way to know if answers are correct or incorrect is to check or verify them against information from one or, preferably, two or more non-asking sources; say, from observation, experimentation, and documents. Those who rely on survey research feedback do not have information from non-asking sources and, therefore, are not able to discern which, if any, answers are correct or incorrect. All survey researchers/askers have is unreliable information, because each of the four components of survey research, singly and in combination, makes answers unreliable: (1) respondents, (2) asking instruments, (3) settings in which questions are asked and answers given, and (4) survey researchers/askers themselves.
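
As a hypothetical illustration of this verification procedure, the sketch below marks a survey answer as reliable only when two independent non-asking records agree with it. The respondents, records, and field names are all invented for the example.

```python
# Hypothetical sketch: treat a survey answer as reliable only when it is
# corroborated by at least two non-asking sources (here, an observation
# log and a sign-in sheet). All data are invented.

survey_answers = {"alice": "attended", "bob": "attended"}
observation_log = {"alice": "attended", "bob": "absent"}  # direct observation
sign_in_sheet = {"alice": "attended", "bob": "absent"}    # document

def verify(respondent: str) -> str:
    answer = survey_answers[respondent]
    corroborations = sum(
        source.get(respondent) == answer
        for source in (observation_log, sign_in_sheet)
    )
    # Require agreement from both non-asking sources before trusting it.
    return "verified" if corroborations == 2 else "unverified"

for person in survey_answers:
    print(person, verify(person))
# alice is verified; bob is not: his answer conflicts with both records.
```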

Respondents make answers unreliable, i.e., give answers that may, or may not, correspond to what’s really going on, so the information in their answers may, or may not, be sufficient to regulate output optimally. They do so because they (1) sometimes lie and (2) often do not have relevant and correct information; because (3) their values and norms affect answers, as do their (4) interest in, and (5) sensitivity to, question topics; because (6) their memories bias responses; because (7) they are not always who they say they are; because (8) those dissatisfied with a product or program are more likely to respond/give feedback than those satisfied (a bias illustrated by the short simulation below); and because (9) there are many other ways respondents make answers unreliable, e.g., by improperly marking Likert scales, by not following questionnaire branching instructions, and so on.
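
The simulation is a minimal sketch of point (8). The population share of dissatisfied customers and the two response rates are invented assumptions; under them, the feedback sample sharply overstates dissatisfaction.

```python
import random

# Illustrative simulation of point (8): if the dissatisfied respond more
# often than the satisfied, feedback overstates dissatisfaction.
# The population share and response rates below are invented.

random.seed(1)
POPULATION = 10_000
TRUE_DISSATISFIED_SHARE = 0.20           # 20% actually dissatisfied
RESPONSE_RATE = {"dissatisfied": 0.50,   # the unhappy are eager to answer
                 "satisfied": 0.10}

responses = []
for _ in range(POPULATION):
    status = ("dissatisfied"
              if random.random() < TRUE_DISSATISFIED_SHARE
              else "satisfied")
    if random.random() < RESPONSE_RATE[status]:
        responses.append(status)

observed = responses.count("dissatisfied") / len(responses)
print(f"true dissatisfied share:     {TRUE_DISSATISFIED_SHARE:.0%}")
print(f"share in feedback responses: {observed:.0%}")
# Roughly 55% of the feedback is dissatisfied versus the true 20%,
# under these assumed rates.
```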

Asking instruments (surveys, interviews, focus groups, and other types of asking) skew answers, producing information that may, or may not, be accurate: unreliable information that may not regulate output at all, or may not regulate it so that the system of which it is a part operates/performs optimally.

Evidence that asking instruments produce unreliable information is provided by numerous studies demonstrating that asking instruments (1) produce symbolic and unrealistic answers, (2) each produce different results, (3) often generate inconsistent or conflicting answers, (4) suffer much nonresponse, and, more often than not, (5) yield unrepresentative results.

Settings in which questions are asked and answers given (e.g., culture, third parties, workplace, school, or home) are stimuli and reinforcers that skew answers, making them unreliable. Virtually every component of every setting pushes respondents to say what’s compatible with that particular setting. Askers for feedback, having only answers to their questions, cannot identify which, if any, are correct or incorrect; thus, they’re not able to regulate output for optimum benefit.

Askers, like instruments and settings, affect answers, thereby making them unreliable and unfit to regulate output for optimal operation. Characteristics of askers that cue and induce the answers they receive include their styles of behavior (e.g., asking questions rapidly, pausing, voice intonation, and so on) as well as their personal attributes, such as their judgments when coding responses, experiences, competencies, ethnicity, socioeconomic features, gender, and age.

Survey Research Feedback for Self Expression and Other Purposes

Of course, people should be asked for feedback, because both askers and answerers benefit. Answerers feel better than those who’ve not been asked. Askers, as a result of answerers feeling better, are in a more pleasant and productive relationship with answerers, and correct information might be obtained.

Survey research feedback is also used by organizations, groups, and individuals for promotion and sales. For example, university alumni associations survey members for feedback in order to promote the association and induce renewals. However, those who ask for feedback for sales, promotion, and the like need to keep in mind that the only way to know if the information in feedback answers is correct (and, thus, what action is required to regulate output for optimum results in sales, promotion, or anything else) is to check answers against data from two or more non-asking sources. Ask for feedback, but don’t rely on it.

Moving Beyond Survey Research Feedback

Moving beyond survey research feedback requires (1) obtaining information from observation, documents, experiments, formal/predictive models, and comparison of non-asking phenomena. (When two or more of these sources or procedures are used and the data from each are compatible, or moving in the same direction, the reliability of the acquired information is enhanced.) It also requires (2) antecedent control and consequence control, procedures that effectively regulate output for optimal operation.

The greater value of data produced by observation (direct observation and observation of behavioral traces) over answers produced by questions, and thus the greater value of observation-based information for regulating output, is captured in common speech; e.g., actions speak louder than words, and do as I say, not as I do. As political scientist Arthur Bentley puts it, we should observe “‘something doing’”, “actually performed . . . activities”, to find out what’s really going on. Sociologist C. Wright Mills points out that answers to questions are unreliable because many times respondents do not say what they have done, or intend to do: “[O]ften there is a disparity between lingual and social-motor types of behavior. . . . [i.e.,] between talk and action”.
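
Mills’s disparity between talk and action can be illustrated with an invented comparison of stated behavior against a behavioral trace; the people, counts, and the idea of a usage log are assumptions for the example.

```python
# Invented illustration of the talk/action gap: compare what respondents
# say they did with what a behavioral trace (a usage log) records.

stated_visits = {"carol": 10, "dan": 2, "erin": 5}   # survey answers
logged_visits = {"carol": 3, "dan": 2, "erin": 0}    # observed traces

for person in stated_visits:
    gap = stated_visits[person] - logged_visits[person]
    flag = "matches log" if gap == 0 else f"overstates by {gap}"
    print(f"{person}: said {stated_visits[person]}, "
          f"log shows {logged_visits[person]} ({flag})")
# Only dan's answer survives the check; observation, not asking,
# reveals the disparity between talk and action.
```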

Experimentation is another procedure for acquiring reliable information about output that can be used as input to regulate further output: experiment with the inputs of a system to identify their effects on outputs. An experiment is defined as “a test or a series of tests in which purposeful changes are made to the input variables of a process or system so that we may observe the reasons [causes] for changes that may be observed in the output”. Experiment-based studies have led to improvements in organizational personnel retention and productivity, as well as in educational and other social programs. When you want to improve policy and program outputs, don’t ask for feedback; experiment.
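
A minimal sketch of that logic, with invented data: purposefully change an input for a randomly assigned half of the units, then compare mean outputs. The numbers, including the assumed “true effect” of +2.0, and the simple two-group design are illustrative assumptions, not a full experimental protocol.

```python
import random
from statistics import mean

# Minimal sketch of an experiment: purposefully change an input for a
# randomly assigned half of the units, then compare mean outputs.
# All numbers, including the +2.0 "true effect", are invented.

random.seed(7)

def measure_output(new_input: bool) -> float:
    """Stand-in for one unit's measured output."""
    base = random.gauss(10.0, 2.0)           # natural variation
    return base + (2.0 if new_input else 0.0)

units = list(range(100))
random.shuffle(units)                         # random assignment
treated, control = units[:50], units[50:]

treated_outputs = [measure_output(True) for _ in treated]
control_outputs = [measure_output(False) for _ in control]

effect = mean(treated_outputs) - mean(control_outputs)
print(f"estimated effect of the input change: {effect:.2f}")
# The difference in means estimates what the input change does to
# output, with no questions asked of anyone.
```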

Building and testing models is a research design for generating reliable data that can be used as input to improve further output.  A formal, or logical, model is a simplified description of the object of investigation (e.g., a policy, program, or institution) from which hypotheses are deduced and, then, tested.  Support for the deduced hypotheses is support for the model’s assumptions, which are inputs for further output.

When a computer is used for modeling, it’s called a simulation, a simulation model, or a computer simulation. An example of computer simulation, and of its application to social, political, and organizational problems and issues, is the research design named “system dynamics”. As stated on the System Dynamics Society website: “System dynamics is a methodology for studying and managing . . . feedback systems, such as one finds in business and other social systems. . . . The methodology: identifies problem, develops a hypothesis explaining the cause of the problem, builds a computer simulation model of the system at the root of the problem, tests the model to be certain that it reproduces the behavior seen in the real world, devises and tests in the model alternative policies that alleviate the problem, and implements this solution”. The “alternative policies” are the inputs for further outputs. Rather than using survey research feedback as input, generate input for further output by building and testing formal models. This is, writes Taagepera, “essential” if the social sciences are to have greater and positive impacts on “the real world”.
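
As a hypothetical illustration of the stock-and-flow style of model the Society describes, the sketch below simulates a single stock (a service backlog) regulated by a policy that sets the outflow. The equations and parameters are invented for illustration, not taken from any published system dynamics model.

```python
# Hypothetical stock-and-flow sketch in the spirit of system dynamics:
# one stock (a service backlog) with a constant inflow and a policy-set
# outflow. All equations and parameters are invented.

TARGET_BACKLOG = 100.0   # desired stock level
ADJUSTMENT_TIME = 4.0    # how quickly the policy closes the gap (weeks)
INFLOW = 25.0            # new cases arriving per week
DT = 1.0                 # simulation time step (weeks)

backlog = 300.0          # initial stock, well above target
for week in range(12):
    # Policy: process enough cases to close the gap over ADJUSTMENT_TIME,
    # while keeping up with the inflow.
    outflow = INFLOW + (backlog - TARGET_BACKLOG) / ADJUSTMENT_TIME
    backlog += (INFLOW - outflow) * DT
    print(f"week {week + 1}: backlog = {backlog:.1f}")
# Alternative policies are tested by changing ADJUSTMENT_TIME or the
# outflow rule and re-running the model; the policy that performs best
# in the model becomes the input for further output.
```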

Document analysis (or content analysis, when maps, Internet sites, social media, and so on are included) is another method for collecting reliable information to be used as input for further output. Analyses of budgets and other documents, for example, provide reliable information about organizations (and, more specifically, about organizational decision making, budget decision-making processes, structures, and personnel) as well as about individual performance, organizational change, and strategy formation. When you want to improve policy and program outputs, don’t ask for feedback; analyze documents and other media content.
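
A minimal content-analysis sketch, assuming invented budget-line text: count occurrences of category terms across documents to see where an organization’s money, and thus its attention, actually goes. The documents and categories are assumptions for the example.

```python
from collections import Counter
import re

# Minimal content-analysis sketch: count category terms across documents.
# The documents and categories below are invented for illustration.

documents = [
    "FY24 budget: training 120k, outreach 40k, training materials 15k",
    "FY25 budget: outreach 90k, training 60k, evaluation 30k",
]
categories = ["training", "outreach", "evaluation"]

counts = Counter()
for doc in documents:
    words = re.findall(r"[a-z]+", doc.lower())
    for category in categories:
        counts[category] += words.count(category)

for category, n in counts.most_common():
    print(f"{category}: mentioned {n} time(s)")
# Shifts in category frequency across budget years are evidence of
# changing priorities, with no questions asked of anyone.
```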

Comparison is a social science procedure that generates reliable information that can be used as input to enhance further output. In one form of comparative research, an ideal type of the phenomenon under investigation (e.g., bureaucracy, or a particular policy, program, or process) is constructed. The ideal type is then compared or contrasted with actual instances of the phenomenon and, on the basis of the similarities and differences, hypotheses that postulate explanations (causes) for those similarities and differences are constructed and tested. Hypotheses supported by empirical evidence become inputs for the generation of further outputs.

Another variety of the comparative approach begins with identification of similarities and differences between two or more actual instances of the phenomenon being investigated; hypotheses are then formed and tested and, when empirically substantiated, become inputs for further output. Instead of asking for feedback, compare to generate input data for further output.
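
A hypothetical sketch of the ideal-type variant: define the ideal type’s attributes, compare an actual instance against them, and record the differences that the tested hypotheses must then explain. The attributes and the agency are invented examples.

```python
# Hypothetical sketch of ideal-type comparison: record where an actual
# instance matches or departs from the ideal type. Attributes invented.

ideal_bureaucracy = {
    "written_rules": True,
    "merit_hiring": True,
    "fixed_jurisdictions": True,
}
actual_agency = {
    "written_rules": True,
    "merit_hiring": False,       # patronage hiring observed instead
    "fixed_jurisdictions": True,
}

similarities = [k for k in ideal_bureaucracy
                if actual_agency[k] == ideal_bureaucracy[k]]
differences = [k for k in ideal_bureaucracy
               if actual_agency[k] != ideal_bureaucracy[k]]

print("matches ideal type on:", ", ".join(similarities))
print("departs from ideal type on:", ", ".join(differences))
# Each departure (here, merit_hiring) is what a tested hypothesis must
# explain; supported hypotheses become inputs for further output.
```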

Antecedent Control and Consequence Control

Antecedent control and consequence control are, in contrast to survey research feedback, better able to improve the further output of the policies, programs, institutions, groups, or individuals being investigated. Because outputs are functions of their antecedents and consequences, these procedures enhance outputs by controlling the antecedents (stimuli) that cue outputs and the consequences (reinforcers) that sustain them.

Although outputs are stimulated or cued by antecedents, they cannot be sustained without reinforcing consequences. Consequence control sustains the antecedent-cued output via positive or negative consequences of that output; thus, consequence control is essential to keep the system at optimal output.
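
A minimal behavioral sketch of this two-part mechanism, with invented numbers: an antecedent (a prompt) cues the output, and a reinforcing consequence raises the probability that the output recurs; withhold the consequence and the output decays. The probabilities and step sizes are assumptions for illustration.

```python
import random

# Minimal sketch of antecedent and consequence control, with invented
# numbers: a prompt (antecedent) cues the output, and reinforcement
# (consequence) sustains it; without reinforcement the output decays.

random.seed(3)

def trial(p: float, reinforce: bool) -> float:
    """Present the antecedent; adjust p by the consequence that follows."""
    occurred = random.random() < p       # antecedent cues the output
    if occurred and reinforce:
        p = min(1.0, p + 0.05)           # reinforcer strengthens the output
    elif occurred:
        p = max(0.0, p - 0.05)           # no reinforcer: output extinguishes
    return p

p_output = 0.5                           # initial probability of the output
for _ in range(30):
    p_output = trial(p_output, reinforce=True)
print(f"with consequence control:    p(output) = {p_output:.2f}")

p_output = 0.5
for _ in range(30):
    p_output = trial(p_output, reinforce=False)
print(f"without consequence control: p(output) = {p_output:.2f}")
```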

Conclusion

Survey research feedback produces unreliable information and, therefore, should not be used to evaluate outputs of policies, programs, institutions, or individuals.  Reliable information for assessments and betterment is acquired from observation, experiments, logical/predictive models, documents, and comparison of non-asking phenomena.  Also, outputs are enhanced by antecedent control and consequence control.

***An earlier draft with endnotes is available on my website.

 
