information on how to spot this and other problems with surveys.)
Here is a sampling of some questions Mr. Perot used in his questionnaire. I paraphrased them, but the original intent is intact. (And this is not to pick on Mr. Perot; many political candidates and their supporters do the same type of thing.)
Should the line-item veto be able to be used by the president to eliminate waste?
Should Congress exclude itself from legislation it passes for us?
Should major new programs be first presented to the American people in detail?
The people who knew about the survey and chose to participate in it were more likely to be people who already agreed with Mr. Perot. This is one example where the conclusions of a study went beyond the scope of the study, because the results didn't represent the opinions of "all Americans," as some voters were led to believe. How can you get the opinions of all Americans? You need to conduct a well-designed and well-implemented survey based on a random sample of individuals. (See Chapter 16 for more information about conducting a survey.)
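The key idea behind a random sample is that every individual in the population has the same chance of being selected, which is what a volunteer questionnaire fails to guarantee. As a minimal sketch (the population list and sample size here are made up purely for illustration), drawing a simple random sample looks like this:

```python
import random

# Hypothetical population: everyone whose opinion you want to represent.
population = [f"person_{i}" for i in range(10_000)]

random.seed(42)  # fixed seed so the example is reproducible

# random.sample draws without replacement; each person is equally
# likely to be chosen, unlike a self-selected group of volunteers.
sample = random.sample(population, k=1_000)

print(len(sample))       # 1000 people selected
print(len(set(sample)))  # 1000 -- no one is picked twice
```

A real survey still has to reach and get answers from the sampled people, of course; random selection is only the first step.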
HEADS UP
When examining the conclusions of any study, look closely at both the group that was actually studied (or the group that actually participated) and the larger group of people (or lab mice, or fleas, depending on the study) that the studied group is supposed to represent. Then look at the conclusions that are made. See whether they match. If not, make sure you understand what the real conclusions are, and be realistic about the claims being made before you make any decisions for yourself.
Looking for lies in all the right places
You've seen examples of honest errors that lead to problems and how stretching, inflating, or exaggerating the truth can lead to trouble. Occasionally, you may also encounter situations in which statistics are simply made up, fabricated, or faked. This doesn't happen very often, thanks to peer-reviewed journals, oversight committees, and government rules and regulations.
But every once in a while, you hear about someone who faked his or her data, or "fudged the numbers." Probably the most common lie involving statistics and data occurs when people throw out data that don't fit their hypothesis, don't fit the pattern, or appear to be "outliers." In cases when someone has clearly made an error (for example, someone's age is recorded as 200 years), it makes sense to clean up the data by either removing the erroneous data point or trying to correct the error. But you can't throw out a portion of the data simply because it doesn't go your way. Eliminating data (except in the case of a documented error) is ethically wrong; yet, it happens.
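The distinction matters numerically as well as ethically. Here's a minimal sketch (with made-up ages) of the difference between removing a documented error and "fudging" away a legitimate outlier:

```python
import statistics

# Hypothetical survey ages: 200 is a clearly impossible recording error,
# while 85 is a legitimate (if inconvenient) high value.
ages = [23, 31, 28, 45, 85, 200, 34, 29]

# Legitimate cleaning: drop only values that are physically impossible.
cleaned = [a for a in ages if 0 <= a <= 120]

# Unethical "cleaning": also dropping the valid 85 because it hurts the story.
fudged = [a for a in cleaned if a < 80]

print(statistics.mean(cleaned))  # the honest average, outlier included
print(statistics.mean(fudged))   # noticeably lower -- the data were made to "go your way"
```

The second average isn't wrong arithmetic; it's an answer to a question nobody asked, about a group that doesn't exist.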
Regarding missing data from experiments, a commonly used phrase is "Among those who completed the study … ." What about those who didn't complete the study, especially a medical one? Did they die? Did they get tired of the side effects of the experimental drug and quit? Did they feel pressure to give certain answers or to conform to the researcher's hypothesis? Did they grow frustrated with the length of the study, feel they weren't getting any better, and give up?
Not everyone responds to surveys, and even people who generally try to take part in surveys sometimes find that they don't have the time or interest to respond to every single survey that they're bombarded with. American society today is survey-crazy, and hardly a month goes by when you aren't asked to do a phone survey, an Internet survey, or a mail survey on topics ranging from your product preferences to your opinion on the new dog-barking ordinance for the neighborhood. Survey results are only reported for the people who actually responded, and the opinions of those who chose to respond may be very different from the opinions of those who chose not to respond. Whether the researchers make a point of telling you this, though, is another matter.
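You can see how nonresponse distorts results with a small simulation. This is a hedged sketch, not a model of any real survey: the opinion scores and the assumption that stronger agreement makes someone more likely to respond are both invented for illustration.

```python
import random

random.seed(0)

# Hypothetical opinions on a 0-to-10 agreement scale, centered at 5.
population = [random.gauss(5, 2) for _ in range(100_000)]

def responds(opinion):
    # Made-up response model: the more strongly someone agrees,
    # the more likely they are to bother answering the survey.
    return random.random() < (0.1 + 0.05 * max(opinion, 0))

respondents = [op for op in population if responds(op)]

pop_mean = sum(population) / len(population)
resp_mean = sum(respondents) / len(respondents)

# The respondent average sits above the true population average,
# even though every individual answered honestly.
print(round(pop_mean, 2), round(resp_mean, 2))
```

Nobody lied, and the arithmetic is correct; the bias comes entirely from who chose to answer.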
For example, someone can