Quality Management’s Impact Can’t Be Assessed by Survey Research

In a 2012 Quality Progress article, “Lasting Impressions”, Stauffer and Owens say that a report, “The Contribution of Quality Management to the UK Economy”, demonstrates that “quality initiatives have made significant contributions . . . to the . . . bottom lines of the organizations that invested in them, . . . gross domestic product (GDP), corporate tax revenues, and employment”. This may or may not be true, because virtually all of the information in the Report, including that gleaned from “relevant business and economic literature”, is based on surveys, polls, and interviews; that is, on survey research, on answers to questions. Information produced by survey research (any procedure or instrument that asks questions of respondents) is unreliable because, when all you have are answers to questions, it’s impossible to know which answers, if any, are correct or incorrect. That’s what I call The Problem with survey research, and that’s why we shouldn’t ask when we want to estimate quality management’s impact.

Answers to questions are inherently unreliable because there are no answers without respondents, and respondents skew or bias every answer they give. For instance, many of the people interviewed about quality initiatives were responsible for quality initiatives in their organizations. They held such titles as Quality Manager, Head of Quality, Head of Operations and In-Service Assurance, Business Improvement Director, General Manager, and the like. These respondents, like everyone else, do not testify against themselves; rather, they are likely to give positive responses about the success of quality initiatives they’ve promoted. It’s not that all their answers about quality initiatives are biased, skewed, or incorrect. Some answers may be correct, but when all you have are their answers, it’s impossible to distinguish between answers that may be correct and those that may be incorrect.

The only way to know if answers are correct is to check or verify them with data from observation, records, or other non-asking sources. This has not been done by Stauffer and Owens or by the authors of the Report (Centre for Economics and Business Research). All Stauffer and Owens, as well as the Centre, have are answers to questions; all they have is unreliable information.

Reliable information about quality management’s impact (or anything else) can be obtained only by observation, experimentation, predictive modeling, document/content analysis, and other non-asking tools and procedures.

Don’t ask when you want to find out what’s really going on. That’s the theme of my book, The Problem with Survey Research.

About georgebeam

George Beam is an educator and author. The perspectives that inform his interpretations of the topics of this blog, as well as his other writings and university courses, are system analysis, behaviorism, and Internet effects. Specific interests include quality management, methodology, and politics. He is Associate Professor Emeritus, Department of Public Administration; Affiliated Faculty, Department of Political Science; and, previously, Head, Department of Public Administration, University of Illinois at Chicago.
