Survey research focuses on correlation rather than causality, and, as James A. Davis pointed out long ago, “any [statistically] significant correlation . . . is generally publishable” [“Great Books and Small Groups: An Informal History of a National Survey”, in Phillip E. Hammond, ed., Sociologists at Work: Essays on the Craft of Social Research (Anchor, 1964), p. 246].
Davis’s comment is significant on three counts:
First, “science”, according to the dictionary meaning of the term, establishes causality; thus, to the extent that survey research doesn’t (and can’t) establish causality, it is not science. (Causality is established by experimentation, not by asking.)
Second, rather than abandoning survey research because it is not science, Davis continued to ask questions of respondents and became a renowned asker, directing an asking unit at the University of Chicago. Davis exemplifies the addiction to asking: he knows the error of his ways but can’t stop doing what he knows he shouldn’t. (I discuss the addiction to asking (and answers) in my book, The Problem with Survey Research, Ch. 11, Addicted Askers, pp. 199-237.)
Third, because any statistically significant correlation is publishable, the academic and popular media are full of meaningless correlations that are not presented as such and are therefore usually misunderstood as causal connections. Given the ubiquity of survey research, and the consequent ubiquity of meaningless correlations treated as reliable information, most people’s understandings of themselves and the world are considerably deficient.
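The arithmetic behind publishable-but-meaningless correlations is easy to demonstrate. The following sketch (my own hypothetical illustration, not an example from Davis or from the book) generates twenty variables of pure random noise and correlates every pair; at the conventional p < 0.05 threshold, a handful of the 190 pairs come out “statistically significant” even though, by construction, no real relationship exists among any of them.

```python
import numpy as np
from scipy import stats

# Twenty independent random variables, 100 observations each:
# by construction, there are NO true relationships here.
rng = np.random.default_rng(0)
n_vars, n_obs = 20, 100
data = rng.normal(size=(n_vars, n_obs))

# Test every pairwise correlation at the conventional 0.05 level.
significant, pairs = 0, 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[i], data[j])
        pairs += 1
        if p < 0.05:
            significant += 1

# Roughly 5% of pairs will be "significant" by chance alone.
print(f"{significant} of {pairs} pairs 'significant' at p < 0.05")
```

Each of those chance “findings” would, on Davis’s account, be publishable; none of them means anything.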
If you want to find out what’s really going on, don’t ask. Instead, observe, experiment, and use the other procedures I call “proper” (The Problem with Survey Research, Part Six, Proper Methods and Research Designs, pp. 279-320).