Public opinion surveys:
What is Margin of Error?
We use a public opinion survey to gain an understanding of the notions or convictions held by a population of people.
Why survey people? A survey can show us in a general way what the public thinks about candidates running for office or political issues of the day. A survey can help us gauge public reaction to current events.
A survey is more than just a questionnaire.
- Formal surveys are scientifically assembled and administered. For example, in a series of surveys in Cincinnati, a scientifically selected random sample of voters was asked how they would vote on a regional transit system. Those voters were chosen carefully to be statistically representative of all voters.
- Informal surveys solicit comments in a more casual or offhand way. For example, the Department of Transportation attached a questionnaire to drafts of a statewide transportation plan handed out at public meetings. In effect, the people attending the sessions selected themselves.
- It is an evaluation tool in which a sample of a population is selected and information is collected from the people in the sample.
- The people surveyed are the respondents.
- The respondents must be chosen at random so results can be generalized to the whole population.
- The information collected from the sample, the data, is generalized to a larger population.
A pundit once said, "Conducting a public opinion survey is like tasting a few spoonfuls to see what an entire pot of chili tastes like. You have to stir the chili a bit so the spoonfuls you sample will represent the whole pot."
How do you conduct a survey?
A questionnaire is prepared carefully. A random sample of respondents is selected scientifically from a group, or universe of people. Then the questionnaire is presented individually to the people in the sample.
The questionnaire is presented in written form, in person or by mail or e-mail, or else it is administered verbally in an interview, in person or by phone.
The small but statistically valid sample of people in a group is believed to represent the complete group of people.
Researchers use both quantitative and qualitative methods.
- Quantitative methods include personal interviews at home, the office or some central location; online surveys; telephone interviews; and hybrid mixes of these.
- Qualitative methods include the use of focus groups to contemplate questions, longer in-depth interviews, and observation of actual behavior.
Why are they useful?
Surveys bring the public into decision making. They can tell us about a community's perceptions and preferences. For instance:
- We can identify public concerns.
- We can learn what people know or want to know.
- We can test whether a plan is acceptable to the public.
- We can gain a sense of what people know or think about programs.
- We can see whether opinions change, if we repeat the survey periodically.
Is it valid?
The key to the validity of a survey is its randomness.
How well the sample represents the population is gauged by two important statistics – the survey's margin of error and its confidence level. Those statistics tell us how well the spoonfuls of chili represent the entire pot.
Why not interview everyone?
Usually, it is not feasible to administer a questionnaire to everyone in a universe. It is too cumbersome and too expensive.
Instead, the sample of people responding to a formal survey provides us with a composite view of the larger population. In other words, the answers found by a statistically valid scientific survey reflect what the population as a whole might have answered if everyone had been asked the survey questions.
Surveys of a statistically valid random sample of the population can be stratified to include only people within a specific geographic area, income group, or other category. That is, the population can be divided by its special characteristics. For instance, a stratified sample of North Carolina residents might include the same proportion of Charlotte residents as is found in the actual population of North Carolina. While such a survey does not replicate the overall population precisely, it remains statistically valid.
Who conducts public opinion surveys?
On behalf of their clients, professional survey firms conduct most of the surveys we see reported in media. However, anyone with a modest understanding of research and statistics can construct a questionnaire, select a sample, and administer it.
We sometimes refer to the professionals as pollsters. During the 2004 presidential election year, pollsters with survey results reported in the media included:
- American Research Group
- Gallup Poll
- Louis Harris
- Mason-Dixon
- Rasmussen
- Research 2000
- Strategic Vision
- Survey USA
- Zogby
How much does it cost?
The work required to construct a questionnaire, the complexity of drawing a sample population, and the time spent collecting the survey data make scientific surveys expensive. The more questions asked and the larger the number of respondents, the more expensive it is to collect, transcribe and summarize the data. A sample selected to reflect many types of interests within a population requires additional time and money.
For example, the company SurveyUSA sells media organizations, for about $5,000, a true random-sample survey of any geography, on a topic of local, national or international interest, at a moment's notice, with results returned the same day, in time for that night's news deadline. That has made it possible for millions of Americans to see survey research results each night on the news.
Who composes the questions?
The customer generally determines the questions to be asked. Because it is important to frame questions in a clear, unambiguous manner, the polling firm polishes them as it constructs the questionnaire. Sometimes questions need to be in languages other than English. They also need to be accessible to persons with disabilities.
Information mentioned while administering a questionnaire should be neutral to allow respondents to make up their own minds about a question or concern. Surveys can spread misinformation if drafted poorly or ambiguously.
What is Margin of Error?
Margin of error is a measure of how precisely the results of a survey reflect the whole population.
For instance, consider a margin of error listed as plus or minus 3% (or ± 3%) with a 95% level of confidence. That means there is a 95% chance that the responses of the target population as a whole would fall within 3 percentage points above or below the responses of the sample, a 6-point spread. For any one specific question, the margin of error could be greater or less than 3%.
SOURCES: UNIVERSITY OF MICHIGAN AND UNIVERSITY OF CALIFORNIA AT SAN DIEGO
To put it another way...
When a survey is reported to have a margin of error of ± 3% at a 95% level of confidence, it simply means this: if the same survey were conducted 100 times, then in 95 of those 100 surveys the result would fall within 3 percentage points above or below the percentage reported.
Suppose you are a political candidate and you pay a polling firm to survey potential voters. You want to know whether people will vote for you.
The polling firm conducts a survey. Analysis of the resulting data shows 50 percent of the respondents will vote for you. The polling firm tells you the margin of error is ± 3% with a confidence level of 95%.
This tells you that, if the same survey were conducted 100 times, most of the time the percentage of people who say they will vote for you would range between 47 and 53 percent. How much is most of the time? In this example, 95 percent of the time.
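The interval in this example can be reproduced with the standard formula for the margin of error of a sample proportion. A minimal sketch in Python; the sample size of 1,067 is an assumption, chosen because it yields roughly a ± 3% margin at 95% confidence:

```python
import math

support = 0.50   # fraction of respondents who say they will vote for you
n = 1067         # assumed sample size, roughly what a +/- 3% margin requires
z = 1.96         # critical value for a 95% confidence level

# Margin of error for a sample proportion: z * sqrt(p * (1 - p) / n)
moe = z * math.sqrt(support * (1 - support) / n)
low, high = support - moe, support + moe

print(f"{low:.0%} to {high:.0%}")  # → 47% to 53%
```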
Margin of error decreases as the sample size increases.
As the number of people surveyed goes up, the margin of error goes down.

Sample size   Margin of error
    2,000         ± 2%
    1,500         ± 3%
    1,000         ± 3%
      900         ± 3%
      800         ± 3%
      700         ± 4%
      600         ± 4%
      500         ± 4%
      400         ± 5%
      300         ± 6%
      200         ± 7%
      100         ± 10%
       50         ± 14%
(Assuming a 95% confidence level)

As you can see in the table above, a very small sample of 50 respondents has a ± 14% margin of error, while a large sample of 2,000 has a margin of error of only ± 2%.
The size of the entire group being surveyed – the population – does not matter, assuming the population is larger than the sample.
Notice that, by doubling the sample from 1,000 to 2,000, the margin of error only decreases from ± 3% to ± 2%. Of course, doubling the sample increases the time and cost of a survey.
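The figures in the table can be checked directly. A sketch, assuming the conventional formula for a proportion at a 95% confidence level with the most conservative value p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a sample proportion.
    z = 1.96 is the critical value for 95% confidence;
    p = 0.5 gives the widest, most conservative margin."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (2000, 1000, 500, 100, 50):
    print(f"n = {n:4d}: ± {margin_of_error(n):.0%}")
# → n = 2000 gives ± 2%, n = 1000 gives ± 3%, n = 50 gives ± 14%,
#   matching the table above
```

Doubling the sample from 1,000 to 2,000 shrinks the margin by only about one point because the margin falls with the square root of n, not with n itself.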
Level of confidence
You see it only infrequently, but media organizations should report the confidence level along with the margin of error so consumers can better judge the results of a survey.
A 95% level of confidence is the polling industry standard.
What would happen if the survey had only a 90% level of confidence? That level could be obtained less expensively, because it requires a smaller sample than a 95% level of confidence does.
While the 90% survey would cost less to conduct, you would have less confidence in the results.
- To obtain a ± 3% margin of error at a 90% level of confidence, the required sample size would be 750.
- To obtain a ± 3% margin of error at a 95% level of confidence, the required sample size would be 1,000.
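The two sample sizes above follow from solving the margin-of-error formula for n. A sketch, assuming standard normal critical values (z = 1.645 for 90% confidence, z = 1.96 for 95%); the exact results land near the rounded figures quoted above:

```python
import math

def required_sample_size(moe, p=0.5, z=1.96):
    """Solve moe = z * sqrt(p * (1 - p) / n) for n, rounding up."""
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

print(required_sample_size(0.03, z=1.645))  # 90% confidence → 752 (about 750)
print(required_sample_size(0.03, z=1.96))   # 95% confidence → 1068 (about 1,000)
```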
At this point, you may be able to see how media consumers can become confused about the validity of a survey when only the margin of error is reported.
Validity is the extent to which a survey measures the opinions it sets out to measure and the extent to which inferences made on the basis of the survey are appropriate and accurate.
Determining the margin of error at a particular confidence level is relatively simple; the most advanced math involved is a square root. For a sample proportion p and a sample size n, the margin of error is z × √(p(1 − p)/n), where z is the critical value for the chosen confidence level (1.96 for 95%).
The margin of error reveals that survey data is not precise. Remember, data from a survey provides a range, not a specific number.