- History of opinion polls
- Sampling techniques
- Sampling error and margin of error
- The difficulty of obtaining accurate results
- Weighting the sample
- Misleading poll questions and unscientific polls

[SLIDE 1] To get an idea of how America feels about a certain issue or candidate, opinion polls are used. An Opinion Poll is a method of systematically questioning a small, selected sample of respondents who are deemed representative of the total population.

[SLIDE 2] When polls first began in the 1800s, newspapers and magazines conducted them among their readers, either in person or by mail. Early in the 1900s, publications polled their subscriber bases, which covered a larger portion of the population, and for a while they managed to make some accurate predictions. But over time, as their subscriber bases became less representative of the American population, their accuracy suffered. Polls eventually achieved a greater degree of accuracy when George Gallup, Elmo Roper, and Archibald Crossley developed modern polling methods using samples of fewer than 2,000 voters.

[SLIDE 3] Of course, the question arises, "How can a sample of 2,000 voters accurately represent millions of people?" When the Literary Digest inaccurately predicted that Republican Alfred Landon would win the presidential race over Democrat Franklin D. Roosevelt, it was because the subscribers it polled were demographically similar to one another. Therefore, one principle of accurate polling is to choose voters totally at random.

[SLIDE 4] It is always best to report a poll's result as a range of numbers. Rather than reporting that the result was exactly 10%, it is better to state a range, such as 7% to 13%. The actual result will likely fall between 7 and 13 but will only rarely hit exactly 10. The range accounts for what is known as Sampling Error, the difference between a sample result and the true result if the entire population had been interviewed.
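The range described above comes from a standard formula: for a sample proportion p from n respondents, the 95% margin of error is roughly 1.96 times the square root of p(1-p)/n. A minimal Python sketch (the sample size and the 10% result are illustrative, not taken from any actual poll):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative numbers: a poll of 2,000 voters in which 10% favor a candidate.
p, n = 0.10, 2000
moe = margin_of_error(p, n)
print(f"{p:.0%} +/- {moe:.1%}")  # prints "10% +/- 1.3%"
```

Note that 2,000 respondents give a margin of only about 1.3 points here; a wider range like the 7% to 13% example corresponds to a smaller sample.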
Of course, polling 2,000 voters will produce a margin of error. A reputable polling firm will report the industry standard: a 95% chance that the actual result falls within a certain margin of error. In the earlier example, that means there is a 5% chance that the actual result will fall below 7 or above 13 percent.

[SLIDE 5] The president's approval rating is an example of a tracking poll. A Tracking Poll is a poll that is taken continuously, sometimes every day, to determine how support for an issue or candidate changes over time.

[SLIDE 6] Reputable polling firms take substantial measures to get accurate results, but they face challenges stemming from uneven cooperation with pollsters. Typically, women are more likely to answer random phone calls, while college students and the poor are hard to reach. Finally, a huge share of the people contacted (around 85%) do not want to be polled in the first place.

[SLIDE 7] Polling firms have a practice that corrects for underrepresented groups in their polls. Based on demographics, they add "weight" to particular responses to make them more representative of that group. This practice is safe when applied to demographics, but it is not a good practice when used to try to represent political ideology, partisan preference, or likelihood of voting. This weighting of samples is what causes different polling firms to report different results. In public opinion polling it is known as a "House Effect," an effect in which one polling organization's results consistently differ from those reported by other poll takers. The difference is often the result of the secret formulas firms use to weight their samples; they keep these trade secrets in-house, and the variation in weighting techniques and formulas produces a lack of consistency among polling firms.

[SLIDE 8] For the most part, polls are relatively accurate.
A majority of them accurately predicted the outcome of the 2012 presidential race and most of the races for the U.S. Senate. However, Gallup and Rasmussen Reports both predicted Romney would win, and to their dismay, he did not. One problem polls present is WHEN they are taken. If they are not up-to-date, they can miss a voting climate that shifts in the final day, even the final minutes. That happened when polls predicted President Carter would win the 1980 election over Ronald Reagan. It also happened when Harry Truman beat Thomas E. Dewey in 1948 and the Chicago Daily Tribune, relying on inaccurate polls, reported it the other way around.

[SLIDE 9] A question asked by the Roper Center in 1992 exemplified a problem inherent in some poll questions. The poll asked, "Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?" Many respondents did not know how to answer the confusing double negative: at first, 20% seemed to doubt the Holocaust ever took place, but when the question was reworded, that number dropped to below 1%. Poll questions can mislead or confuse respondents and produce inaccurate results. There is also a history of publications polling their own subscribers and then publishing the results as if they were scientific. Newspapers or magazines looking for certain data might republish those results and mislead their own readers. Going a step further, some pollsters engage in the shady practice of push polls, in which the question itself misleads respondents to produce the desired result rather than an honest sample.
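The demographic weighting described under [SLIDE 7] can be sketched as simple post-stratification: each response is weighted by the group's share of the population divided by its share of the sample. All shares below are made up for illustration, assuming only the women-answer-more-often skew mentioned earlier:

```python
# Post-stratification sketch: weight = population share / sample share.
# These shares are hypothetical, chosen only to illustrate the mechanism.
population_share = {"women": 0.51, "men": 0.49}
sample_share     = {"women": 0.62, "men": 0.38}  # women over-answer phone polls

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Illustrative raw approval within each group, then the reweighted total.
# By construction, sample_share * weight equals population_share.
approval = {"women": 0.55, "men": 0.45}
raw      = sum(sample_share[g] * approval[g] for g in approval)
weighted = sum(sample_share[g] * weights[g] * approval[g] for g in approval)
print(f"raw {raw:.1%} vs weighted {weighted:.1%}")  # prints "raw 51.2% vs weighted 50.1%"
```

Because each firm chooses its own groups and weighting formulas, two firms can turn identical raw responses into different headline numbers, which is the house effect in miniature.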