
Polling Methodology


Many people have questions about polls and polling methodology. Here is a brief description of the process and how we handle the polls.

How are Polls Conducted?

Several organizations conduct state political polls, usually when commissioned to do so by their customers, which include local and national newspapers and television stations, as well as political consultants and candidates. Different polling organizations use different methodologies, and there is considerable controversy about which method is most accurate. To conduct a poll, the company first develops a questionnaire (together with the client). The phrasing of the questions is known to influence the results. Consider the following options:

  • If the election were held today, would you vote for George Bush or John Kerry?
  • If the election were held today, would you vote for John Kerry or George Bush?
  • If the election were held today, would you vote for Bush, Kerry, or Nader?
  • If the election were held today, who would you vote for?

The questions are then read over the telephone to 500 to 1000 randomly chosen people. Usually additional questions are added about age, gender, ethnicity, political affiliation, education, income, and other factors to allow breakdowns by these categories. Often there are questions designed to determine whether the person is likely to vote. These may include:

  • Are you currently registered to vote?
  • Did you vote in 2000?
  • Did you vote in 1996?
  • Do you believe that it is every citizen's duty to vote?
  • Do you think your vote matters?

Some polling companies give the results based on all adults they survey. Others include only registered voters. Yet others include only likely voters, using their proprietary formula for determining who is likely to vote based on questions like those above. Depending on exactly how the voting questions are phrased and which category of respondents is included in the poll, some systematic bias may be introduced. Some pollsters publish the results for both likely voters and all registered voters. Until Sept. 18, this site used the likely-voter numbers when there was a choice (which was rare). Starting Sept. 18, the registered-voter numbers were used when there was a choice, on the grounds that there is increasing evidence the old formulas for screening likely voters will not work in 2004.
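To make the distinction concrete, here is a toy Python sketch of how answers to screening questions like those above might be turned into registered-voter and likely-voter subsets. The field names and the screening rule are invented for illustration; real likely-voter models are proprietary:

```python
# Hypothetical respondent records keyed by the screening questions above.
respondents = [
    {"registered": True, "voted_2000": True, "voted_1996": False,
     "vote_matters": True, "choice": "Kerry"},
    {"registered": True, "voted_2000": False, "voted_1996": False,
     "vote_matters": False, "choice": "Bush"},
    {"registered": False, "choice": "Nader"},
]

def is_registered(r):
    return r.get("registered", False)

def is_likely_voter(r):
    # Invented screen: registered, voted in 2000 or 1996, and says
    # their vote matters. Real formulas are trade secrets.
    voted_before = r.get("voted_2000", False) or r.get("voted_1996", False)
    return is_registered(r) and voted_before and r.get("vote_matters", False)

all_adults = respondents
registered = [r for r in respondents if is_registered(r)]
likely = [r for r in respondents if is_likely_voter(r)]
print(len(all_adults), len(registered), len(likely))  # 3 2 1
```

Depending on which of these three pools the pollster reports, the same raw calls can yield noticeably different toplines.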

A recent development is the use of automated polls. With this technology, the company's computer dials telephone numbers at random and plays a recorded message that asks whoever answers the demographic and political questions; respondents answer by pressing buttons on the telephone. The percentage of people who hang up quickly when this technique is used is much higher than when a human being conducts the poll. Nevertheless, Survey USA and Rasmussen rely heavily on this technique because it is fast and cheap, allowing them to charge less than their competitors in the polling business. Traditional polling companies criticize the methodology on the grounds that it does not adequately filter out teenagers too young to vote but definitely old enough to play games with the system. Chuck Todd, editor of the Hotline, a daily political tipsheet, was once called by Survey USA and was effortlessly able to pass himself off as a 19-year-old Republican Latina, something he could never have done with a human pollster. In response, the companies using automated polling point to numerous studies comparing their polls to traditional ones and showing that they get the same results as their nonautomated competitors. But the issue of automated polling remains controversial.

Yet another factor is the day of the week on which the calls are made. Calls made Monday through Friday have a larger probability of reaching a woman than a man, because there are more housewives than househusbands. Since women are generally more favorable to the Democrats than men are, this effect can introduce bias, as illustrated below. Also, calls made Friday evening may miss younger voters, who may be out partying, and thus underweight them in the results. To counteract this effect, some polling companies call for an entire week instead of the usual three days, but this approach results in polls that do not respond as quickly to events in the news. The most extreme example of this approach is Rasmussen, which polls people in the key battleground states every day and summarizes the results for the previous month at the start of each new month. More information about the polling process is provided by this tutorial on polling written by the Gallup Poll.
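A back-of-the-envelope calculation shows how a skewed gender mix moves the topline. The splits below are invented for illustration:

```python
# Suppose (hypothetically) women split 53-47 for Kerry and men 47-53,
# and the true electorate is half women, half men.
kerry_among_women = 0.53
kerry_among_men = 0.47

def topline(share_women):
    """Kerry's overall share given the fraction of women in the sample."""
    return share_women * kerry_among_women + (1 - share_women) * kerry_among_men

print(f"balanced sample (50% women): Kerry {topline(0.50):.1%}")  # 50.0%
print(f"weekday sample (60% women):  Kerry {topline(0.60):.1%}")  # 50.6%
```

Even a modest oversample of one group can shift a reported result by more than half a point, which matters in a close race.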

Who Conducts Polls?

The websites of some of the major polling organizations are listed below. Note that most of them do not give much useful data for free; to get the numbers, you have to buy a subscription, in which case a wealth of data is provided. Also note that a few of the polling companies keep track of the state-by-state electoral vote, but all of them use only their own data. Since no polling company polls every state every week, using only one company's data means that their maps are often based on obsolete data. For this site, we base the maps on the results of four paid subscriptions, some of which, like www.pollingreport.com, themselves subscribe to multiple polling companies. Other sources, such as polls published by major media outlets, are also used.

Which Polls Do You Use?

The methodology of polling is not simple. There are numerous subtle issues involved. But given a collection of polls, which ones should we use? To avoid arbitrary judgments, there has to be an unambiguous rule. This site originally had a simple policy: use the most recent major poll. There are two key words here: 'recent' and 'major.' Most polls take 2-3 days to conduct. For these polls, the poll ending most recently wins.

However, Rasmussen conducts monthly tracking polls in which they poll people every day for an entire month. These polls are treated differently because they cover such a long time span. If some other poll has a middle date on or after the 15th of the month, it is considered more recent than the Rasmussen poll, otherwise not. For example, a poll taken July 14-16 is more recent than the July Rasmussen poll, but a poll taken July 13-15 is not.
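In code, the rule might look something like this minimal Python sketch (not the site's actual implementation; the midpoint calculation is just one way to define a poll's middle date):

```python
from datetime import date

def middle_date(start: date, end: date) -> date:
    """Midpoint of a poll's field period."""
    return start + (end - start) // 2

def beats_monthly_tracker(start: date, end: date) -> bool:
    """True if a short poll counts as more recent than that month's
    month-long tracking poll: its middle date must fall on or after
    the 15th of the month."""
    return middle_date(start, end).day >= 15

# The examples from the text: July 14-16 (middle date July 15) beats the
# July tracking poll, but July 13-15 (middle date July 14) does not.
assert beats_monthly_tracker(date(2004, 7, 14), date(2004, 7, 16))
assert not beats_monthly_tracker(date(2004, 7, 13), date(2004, 7, 15))
```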

It has been suggested to average the last three polls instead of using only the most recent one. Obviously this is possible, and some election websites do this. However, suppose the most recent one ended yesterday, the next most recent ended 59 days ago, and the one before that ended 89 days ago. Should these really be averaged?

The next improvement is to let polls expire: only count 'recent' ones. But what is recent? Less than 30 days? 60 days? 90 days? It begins to get arbitrary here. Even worse, this formula leads to strange effects. Consider the case cited above, with polls yesterday, 59 days ago, and 89 days ago. Somebody is ahead. The next day, even in the absence of any new polls, the oldest poll expires and a new average is computed. If one of the candidates has made great progress in the past three months in some state, then all of a sudden the old poll weighing him down magically vanishes and he unexpectedly leaps ahead. The score can thus change dramatically even with no new poll, simply because some poll taken three months ago has just fallen outside the 90-day look-back window. Consequently, different websites, newspapers, pollsters, etc. may come to different conclusions about the horse race, even using exactly the same polling reports. And you thought you could understand the election campaign without a Ph.D. in statistics?

Starting Oct. 4 the methodology was changed: there were so many polls, and their results were so far apart, that the most recent three polls per state were averaged.
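As a rough illustration, here is what the average-the-last-three rule could look like in Python. The poll records and field layout are invented for the example; the site's real data pipeline is surely different:

```python
from datetime import date

# Hypothetical poll records: (state, end date, Kerry %, Bush %).
polls = [
    ("OH", date(2004, 9, 20), 48, 47),
    ("OH", date(2004, 9, 28), 46, 49),
    ("OH", date(2004, 10, 2), 47, 48),
    ("OH", date(2004, 8, 15), 50, 45),  # older poll, dropped by the rule
]

def state_average(polls, state, n=3):
    """Average the n most recent polls (by end date) for one state."""
    recent = sorted((p for p in polls if p[0] == state),
                    key=lambda p: p[1], reverse=True)[:n]
    kerry = sum(p[2] for p in recent) / len(recent)
    bush = sum(p[3] for p in recent) / len(recent)
    return kerry, bush

print(state_average(polls, "OH"))  # (47.0, 48.0)
```

Note that averaging by count rather than by age avoids the expiration cliff described above: a state's average changes only when a new poll arrives.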

The second key word is 'major.' What is a major poll? Is asking your grandmother in Michigan to survey all her friends about whom they are going to vote for a major poll? Probably not, even if she is a wonderful lady with many, many friends. Well-designed scientific polls are not easy to conduct. The primary issue is selecting participants at random from the correct pool, the definition of which is itself arguable (all adults? all registered voters? all likely voters? what about voters living overseas who are registered in the state?). For purposes of this site, a major poll is one conducted by a professional polling organization and published by a mainstream media outlet. These organizations include half a dozen or so national polling companies and some regional ones. However, an increasing number of universities are also getting into the act, because they have the key ingredients in abundance, namely professors who understand political science and statistics and lots of cheap labor, sometimes referred to as students.

What Does Margin of Error Mean Exactly?

There is no concept as confusing as 'Margin of Error.' It is used a lot, but few people understand it. Suppose a polling company calls 1000 randomly selected people in a state that is truly divided 50-50. They may, simply by accident, happen to reach 520 Democrats and 480 Republicans and announce that Kerry is ahead 52% to 48%. But another company on the same day may happen to get 510 Republicans and 490 Democrats and announce that Bush is ahead 51% to 49%. The variation caused by having such a small sample is called the margin of error and is usually between 2% and 4% for the sample sizes used in state polling. This means that with a margin of error of, say, 3%, a reported 51% really means that there is a 95% chance that the correct number is between 48% and 54% (and a 5% chance that it is outside this range).

In the first example above, with a 3% MoE, the 95% confidence interval for Kerry is 49% to 55% and for Bush 45% to 51%. Since these overlap, we cannot be 95% certain that Kerry is really ahead, so this is called a statistical tie. Nevertheless, the probability that Kerry is ahead is greater than the probability that Bush is ahead; we just cannot be very sure of the conclusion. When the ranges of the candidates do not overlap (i.e., the difference between them is at least twice the margin of error), then we can be 95% certain the leader is really ahead.
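For the curious, the quoted margins of error come from elementary sampling statistics. Here is a back-of-the-envelope sketch, ignoring the weighting, design effects, and other real-world corrections pollsters apply:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion p from a simple random sample
    of size n: z * sqrt(p*(1-p)/n), with z = 1.96 for 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# The worst case is p = 0.5, which gives the figures usually quoted.
for n in (500, 1000):
    print(f"n = {n}: MoE = {margin_of_error(0.5, n):.1%}")
# n = 500:  MoE = 4.4%
# n = 1000: MoE = 3.1%
```

This is why samples of 500 to 1000 respondents yield the 2% to 4% margins mentioned above, and why halving the margin of error requires roughly quadrupling the sample size.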

For this reason, the white states in our maps should be regarded as tossups no matter who is currently slightly ahead; the results could easily flip in the next poll without a single voter changing his or her mind. Of course, the margin of error can be reduced by using a bigger sample, but that takes longer and costs more money, so most clients opt for 500 to 1000 respondents.
