All pollsters are aware of the difficulty of getting a representative sample these days because, as we noted above, some Republicans don't trust the media (except Fox) and won't talk to pollsters. They also suspect that if the first sentences they uttered after "Hello" were: "Hi, I'm a pollster from [X]. By any chance are you one of those Republicans who refuse to talk to pollsters?" the results might not be so helpful. Some pollsters try to correct for this nonresponse bias by simply weighting the Republicans who are willing to take the poll more heavily. Again, though, adding more people who love Lisa Murkowski doesn't compensate for not having enough people who love Blake Masters. These voters are not interchangeable. What to do?
We had a Pennsylvania poll Monday from a company called Wick. We had never heard of them before, so we checked them out to see if they were really campaign consultants. They don't appear to be. It looks like they are a small market-research company that is trying some innovative things to at least better understand the nonresponse bias plaguing all (political) pollsters. On the company's website, there are some articles about what they are trying to do. Maybe they are barking up the wrong tree. We don't know (yet). But we do know that random-digit dialing isn't working well anymore due to nonresponse bias, so explicit attempts to deal with it are worth looking at.
A key question is: "Why are people willing to spend 15 minutes talking to a stranger for no benefit to themselves?" For many people it is out of a feeling of civic duty, but not everyone has this. Equally important is "Why do some people refuse to do the poll?" Often it is time. If the pollster calls a mother with three young children on a Wednesday evening at 7 p.m., all the tea in China isn't going to get her to spend 15 minutes talking to a stranger about how she feels about J.D. Vance. Wick has three theories about the nonresponse:
Any and all of these lead to sampling bias. One thing Wick discovered early on is that setting a single weight for everyone with a college degree or more is a bad idea. People with postgraduate degrees are much more likely to be willing to talk to pollsters than people with only a bachelor's degree. So different quotas are needed for each group.
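The pitfall can be made concrete with a toy post-stratification calculation. The population and respondent shares below are invented for illustration; they are not Wick's numbers.

```python
# Invented shares of the adult population and of poll respondents, by degree.
population = {"bachelors": 0.20, "postgrad": 0.10}
sample     = {"bachelors": 0.15, "postgrad": 0.25}

# One pooled weight for everyone with "a college degree or more":
pooled = (population["bachelors"] + population["postgrad"]) / \
         (sample["bachelors"] + sample["postgrad"])
print(pooled)  # one shared downweight applied to both groups

# Separate post-stratification weights (population share / sample share):
separate = {group: population[group] / sample[group] for group in population}
print(separate)  # bachelors upweighted, postgrads downweighted
```

With these invented numbers the pooled weight (0.75) downweights everyone with a degree, even though bachelor's-only respondents are underrepresented and actually need upweighting (about 1.33); only the separate weights correct each group in the right direction.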
Another thing Wick is looking at is mixing random sampling with nonprobability sampling. So, for example, half the weight could be given to the results from calling people at random and the other half could be taken from a panel in which the same people are asked the same questions every month. The latter can show genuine change over time separate from the luck of the draw in probability sampling. It turns out the nonresponse bias is quite different for online methods vs. random phone calls, and by doing both in every poll, there is some chance of figuring out a way to calibrate the random sampling using the data from the panel or apps. Another method Wick is looking at is texting people links to websites where they can take the poll. It turns out that some people who are not willing to talk to a human interviewer are willing to follow a link and fill out a poll form online. Again, by mixing as many as three different data collection methods for the same poll, it may be possible to use data from one or more of the online methods to correct for the nonresponse bias in the phone interviews. This is still experimental, but given the problems with phone interviews using random-digit dialing, it is essential to start looking at alternative methods.
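The half-and-half mixing idea can be sketched in a few lines. The mode weights and candidate shares below are made up for illustration; Wick's actual blending rule isn't described in detail.

```python
# Hypothetical topline estimates from two data-collection modes.
phone_estimate = {"candidate_a": 0.46, "candidate_b": 0.48}  # random-digit dialing
panel_estimate = {"candidate_a": 0.50, "candidate_b": 0.44}  # monthly online panel

# Give half the weight to each mode, per the half-and-half idea above.
w_phone, w_panel = 0.5, 0.5

blended = {c: w_phone * phone_estimate[c] + w_panel * panel_estimate[c]
           for c in phone_estimate}
print(blended)
```

A real calibration scheme would presumably go further, e.g., adjusting the phone-sample weights so subgroup margins track the panel's month-over-month trend, but the blend above is the simplest version of the idea.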
Another surprising thing Wick found is that when polling the Georgia governor's race, the probability and nonprobability methods gave similar results within cities. But in rural areas, there were huge differences between the two methods. They are speculating that the kind of person in a rural area who is comfortable taking a poll on an electronic device is not a typical rural dweller. This needs further investigation.
Another thing Wick turned up is how vaccination status plays a role in polling. For example, in their Arizona poll, 62% of respondents were vaccinated and boosted. Yet CDC statistics show that only 35% of Arizona adults are vaccinated and boosted. The conclusion is that vaccinated people are eager to talk to them and unvaccinated people don't want to talk to them. This suggests that vaccination, rather than education, may be the key to dealing with the nonresponse bias, by overweighting the unvaccinated people who do take the poll. So in the case of Arizona, each unvaccinated person could be weighted 1.77 vs. 1.00 for each vaccinated person, since vaccinated respondents were overrepresented by a factor of 62/35 ≈ 1.77. That might be far better than making sure the number of Republicans was in proportion to the population and thus inadvertently including too many Lisa Murkowski Republicans and too few Blake Masters Republicans. The final word hasn't been said here by any means, but pollsters need to conduct more experiments along these lines to see if they can fix the problem. (V)
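For comparison, the textbook post-stratification correction weights each group by its population share divided by its sample share. Applied to the same Arizona figures (a generic sketch, not necessarily Wick's procedure), it implies a larger relative weight for the unvaccinated, roughly 3 to 1, than the simple 62/35 ≈ 1.77 overrepresentation factor:

```python
# Shares from the article: poll respondents vs. CDC figures for Arizona adults.
sample_vax, sample_unvax = 0.62, 0.38
pop_vax, pop_unvax = 0.35, 0.65

# Post-stratification: weight = population share / sample share.
w_vax = pop_vax / sample_vax        # ~0.56, vaccinated respondents downweighted
w_unvax = pop_unvax / sample_unvax  # ~1.71, unvaccinated respondents upweighted

# After weighting, the sample matches the CDC shares.
weighted_vax_share = sample_vax * w_vax        # 0.35
weighted_unvax_share = sample_unvax * w_unvax  # 0.65

# Relative weight, normalizing vaccinated respondents to 1.00.
relative = w_unvax / w_vax  # ~3.03
print(round(w_vax, 2), round(w_unvax, 2), round(relative, 2))
```

Either way, the underlying move is the same: treat willingness to respond, not just demographics, as the thing being corrected for.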