“Are You More Likely to Vote for A) An Evil, Lying, Anti-American Pervert, or B) Candidate X?”

Inside the dark art and weird science of opinion polling

David McKee


"To follow the polls is a fool's game best averted by deep sighing and copious amounts of wine and by ignoring them completely and by rejecting as specious and pointless nearly all stories that bring up poll numbers in hysterical and alarmist tones. Which is, you know, most all of them."



—Mark Morford, SF Gate, September 10


Those sentiments are somewhat "hysterical and alarmist" themselves ... but the man has a point. Just as the news trade cossets a fair amount of show business, polls—and the coverage thereof—provide a mixture of information and razzmatazz. Las Vegas Sun political pundit Jon Ralston says news organizations believe polls function "mostly to generate news. People love to read poll stories."


As pollsters multiply from one election cycle to the next, we find ourselves in a news culture where polls are inescapable. And if you don't like the results of the latest one, there'll be another along at the top of the hour, reporting the exact opposite. And as the polls proliferate, even modest movement in somebody's approval rating or public sentiment on the issues of the day can be hyped into cosmic significance.


With so much polling being done, it's no surprise that it has seized a dominant place in the current electoral debate. Mass media, and especially TV, have a documented propensity for "horse race" stories, as opposed to issue-driven ones. The proliferation of pollsters makes that easier. Forget that story on the restructuring of Medicare: We've got a new CNN poll to tell you about!


This year, the discussion isn't just about polls but about polling itself. All of a sudden, we're getting a crash education in the methods and nuances of polling, which, depending on your perspective, is a sophisticated scientific tool or a tricky mix of necromancy and alchemy.




How did we get here?



If anything stirred the polling pot, it was the bounce-bounce-bounce mantra of cable news talking heads. What is "bounce"? Who's got bounce? Who's lost their bounce? Will somebody please bounce these people back into serious journalism?


Much entrail-reading followed the Democratic Convention, as contradictory polls were dissected to determine if Sen. John Kerry had the mojo-like bounce. If so, how much? If not, who stole it?


What really brought the poll pot to a boil was a Gallup poll published in the aftermath of the Republican Convention. In a Peter Parker-to-Spider-Man transformation, President George W. Bush went from running in place to being—according to Gallup—14 points ahead.


To put Gallup's numbers in context, here's how they stack up against several contemporaneous polls:


• Zogby (September 19): Bush +3 percent


• Gallup (September 15): Bush +14 percent


• Democracy Corps (September 14): Bush +2 percent


• Pew Research Center (September 14): Bush +1 percent


• Harris (September 13): Kerry +1 percent


• NDN (September 12): Bush +5 percent


• ICR (September 12): Bush +7 percent


So we have six polls showing the main contenders at various points within an eight-point spread, and then there's Gallup. It's what polling professionals call an "outlier," which seems to be a nice way of saying, "I'd take those numbers with a grain of salt."


What happened? Gallup had put its foot in a big, steaming pile of ... polling data. Ignoring years of trends, Gallup proceeded on the assumption that "likely voters" (its core database) would be about 40 percent Republican, a phenomenon not seen in decades, if ever. By tailoring its "likely voter" sample to fit the preconception, Gallup ensured that its numbers would have a heavy pro-Bush skew (just as it would have achieved a lopsided pro-Kerry tilt by grossly over-sampling Democrats).


For instance, the Pew Research Center's measurements of party identification show a 28 percent Republican, 33 percent Democrat, 39 percent independent alignment in 2000, a 30-31-39 split in 2002 and a 29-33-38 one for 2004 to date. Undeterred by history, Gallup oversampled Republicans, as did a New York Times/CBS poll (which sampled 36 percent Bush voters vs. 28 percent who had voted for Gore in 2000) and Newsweek, which went for a 38 percent Republican, 31 percent Democrat, 31 percent independent split.


When poll analyst Ruy Teixeira re-weighted the CBS numbers to reflect 2000 exit polling (35 percent GOP, 39 percent Dems, 26 percent independents), CBS's eight-point Bush lead shrank to one, bringing it in line with contemporaneous Pew, Harris and Democracy Corps polls (and a re-weighted version of the Gallup numbers). Adds Zachary Roth, in the Columbia Journalism Review, "Gallup took some similar heat in the year 2000, when its poll results swung erratically from day to day, sometimes by as much as 10 points."
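To make the re-weighting arithmetic concrete, here's a minimal sketch in Python. The party-by-party support numbers below are hypothetical placeholders (the published analyses don't break them out); only the weighting step itself mirrors the technique Teixeira applied.

    # A sketch of re-weighting a poll topline by party identification.
    # NOTE: the group-level support figures are invented, for illustration;
    # only the weighting arithmetic reflects the technique described above.

    # Hypothetical share of each party group backing the incumbent:
    support = {"rep": 0.93, "dem": 0.10, "ind": 0.48}

    # Party mix as actually sampled (skewed Republican) ...
    sample_mix = {"rep": 0.40, "dem": 0.32, "ind": 0.28}
    # ... versus a target mix drawn from, say, prior exit polling.
    target_mix = {"rep": 0.35, "dem": 0.39, "ind": 0.26}

    def topline(support, mix):
        """Overall support: group-level support averaged by the party mix."""
        return sum(support[g] * mix[g] for g in mix)

    print(f"raw topline:         {topline(support, sample_mix):.1%}")   # ~53.8%
    print(f"re-weighted topline: {topline(support, target_mix):.1%}")   # ~48.9%

Run on these invented numbers, the identical respondents swing about five points on the topline from the assumed party mix alone, which is the whole dispute in a nutshell.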


Even one Bush pollster was taking the more volatile numbers cum grano salis. Fred Steeper told the Wall Street Journal that "day-to-day reports of polling will exaggerate the changes in this race," adding that "party allegiance does not change in seven days."


So what's this "sampling" business? And isn't Gallup the gold standard of polling? Well, yes, it has been, and not without reason. But first ...




A bit of history




"There is no God-given right way to do a survey."



— Stanley Presser, University of Maryland sociologist


George Gallup made his name as a pollster and revolutionized the science of polling in the 1936 presidential election. Opinion surveys—most famously the Literary Digest's—showed President Franklin Delano Roosevelt badly trailing challenger Alf Landon. Trouble was, that opinion "sample" was obtained by calling up people listed in the phone book and by placing cards in magazines, upon which readers could indicate and mail in their presidential preference.


America was still in the Great Depression, and targeting people with telephones and magazine subscriptions ensured a poll tilted toward more affluent Americans, those most ill-disposed toward FDR and his policies. Gallup sent his pollsters out to knock on doors, interview people face-to-face and obtain a survey sample that reflected the demographics, geographical distribution and party affiliation of the public at large.


Gallup's poll showed Roosevelt leading 56 percent to 44 percent, and his method was vindicated on election night, when FDR clobbered Landon 61 percent to 37 percent. Gallup called the next three elections correctly, his margin of error decreasing each time.


Then he goofed.


It was 1948, and incumbent Harry Truman was five points down to Thomas Dewey. According to Adam Clymer, director of the National Annenberg Election Survey, with two weeks to go in the race, Gallup simply stopped taking polls. By so doing, he missed Truman's last-minute surge, which reversed a deficit into a five-point margin of victory—a 10-point swing.


Even today, when polling goes almost down to the wire, 11th-hour shockers still slip under the pollsters' radar:


• "VENTURA WINS!" shrieked a gargantuan, shock-ridden headline on the Minneapolis Star Tribune, after wrestler-turned-actor-turned-politician Jesse Ventura roared past Democrat Skip Humphrey and Republican Norm Coleman and into the Minnesota governor's mansion. Polls had shown strong (and growing) support for Ventura, but not that strong.


• Four years earlier (and closer to home), Rep. Jim Bilbray seemed safe in his sinecure as congressman for Southern Nevada. "Everyone thought Bilbray would win in a walk," remembers Ralston. "The R-J had it Bilbray by 17 a few days before the election." What a rude surprise it must have been when Bilbray was knocked off by challenger John Ensign, a beneficiary of the 1994 "Republican revolution." "Why?" asks Ralston. "Hard to tell. Low-turnout elections can skew polling. But in that case, Ensign's support was just hidden."


Brad Coker, of Mason-Dixon Polling & Research (which has handled the Review-Journal's polling since 1990), thinks he knows what happened. "That was the only election race we did where the final poll numbers and the final results would look funny if you held them up side-by-side, but there were some scandalous things around Bilbray that were just starting to break when we did our poll. It all really didn't get flushed out in the laundry until after we were done."


If Bilbray's mishaps arguably cost him his seat, what explains the Ventura surprise (which Mason-Dixon was also involved in tracking)? Coker attributes that upset to a big turnout of younger voters, who could register as late as Election Day itself, and to a dominant Ventura performance in the final debate of the 1998 campaign. "A lot of the Ventura 'surge' came after that debate, at least a week after we stopped polling."


When there's a last-minute shift in the electorate's tectonic plates, pollsters seem likely to miss it, which Coker blames on newspaper deadlines. "The polls only tell you what's going on up to the day the poll is taken. Sometimes, you stop polling seven days before an election, because your newspaper wants to publish on Wednesday and Thursday, and not 'too close' to influence the vote. Things can change in the last seven days of a campaign, and sometimes a lot of people miss that."


Annenberg's Clymer uses something called a "rolling cross section." He explains that "we poll every night. Most polls are conducted for a three-, four- or five-day period. We're asking the same questions every night."
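As a rough sketch of how a rolling cross section turns nightly samples into a daily series: a fresh random sample is interviewed each night, and the figure published for any given day averages the most recent few nights. The four-night window below is an assumption; Clymer says only that most polls cover a three- to five-day period.

    # A toy rolling cross section: each night contributes a fresh sample,
    # and the published figure averages the last WINDOW nights.
    from collections import deque

    WINDOW = 4                        # nights per published figure (assumed)
    recent = deque(maxlen=WINDOW)     # automatically drops the oldest night

    def publish(nightly_share):
        """nightly_share: that night's sample proportion backing a candidate."""
        recent.append(nightly_share)
        return sum(recent) / len(recent)

    for night in [0.49, 0.51, 0.50, 0.48, 0.53]:
        print(f"{publish(night):.1%}")

The payoff is that the series updates every day, so a late surge of the Truman variety shows up as it happens rather than after the newspaper deadline.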




We have ways to make you talk



The days when George Gallup's minions went from door to door are long gone. The preferred method is now random-digit dialing: residential phone numbers (listed and unlisted) are automatically dialed until a sufficient harvest of responses is collected, from which can be winnowed a representative sample of the state or city or congressional district being surveyed. It kind of brings the art of polling full circle.
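A toy version of the random-digit-dialing idea, assuming a list of known working area-code/exchange prefixes (the prefixes below are placeholders, not a real sampling frame):

    # Random-digit dialing in miniature: append random last-four digits to
    # known working exchanges, so unlisted numbers are as reachable as listed.
    import random

    prefixes = ["702-555", "702-798"]   # placeholder exchanges, not real ones

    def random_residential_number():
        return f"{random.choice(prefixes)}-{random.randint(0, 9999):04d}"

    # Dial far more numbers than you need; most won't yield a usable respondent.
    dial_list = [random_residential_number() for _ in range(2000)]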


Assuming that the person being called is a) at home; b) inclined to answer the phone; and c) not moved to tell the pollster to "F--k off!" before hanging up, willing respondents are then "screened" via a brief questionnaire. This is to make sure they fit the parameters of the survey. (For instance, because I work in the media, I have been "screened out" of a number of polls and surveys.) According to Coker, Mason-Dixon screens roughly 2,000 respondents to obtain the 400-625 "likely voters" it bases its R-J polls upon.
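Those sample sizes aren't arbitrary. For a proportion near 50 percent, the textbook 95 percent margin of error depends only on the number of respondents; the quick check below uses the standard formula (nothing Coker supplied) to show why 400-625 likely voters yields the roughly four- to five-point margins typical of polls this size.

    # Textbook 95 percent margin of error for a sample proportion:
    # MoE = z * sqrt(p * (1 - p) / n), worst case at p = 0.5.
    import math

    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    for n in (400, 625):
        print(f"n = {n}: +/- {margin_of_error(n):.1%}")
    # n = 400: +/- 4.9%
    # n = 625: +/- 3.9%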


Are you a likely voter?


Let's find out, using the seven-point quiz employed by Gallup ...


1. How much thought have you given to the upcoming election for president: Quite a lot, some or only a little?


2. Do you happen to know where people who live in your neighborhood go to vote?


3. Have you ever voted in your precinct or election district?


4. How often would you say you vote: Always, nearly always, part of the time or seldom?


5. Do you, yourself, plan to vote in the presidential election on November 2?


6. In the last presidential election, did you vote for George W. Bush, Al Gore, Ralph Nader or did things come up to keep you from voting?


7. I'd like you to rate your chances of voting in the upcoming election for president on a scale of 1 to 10. (The higher the number, the greater the likelihood.)


Affirmative answers to all seven questions give you a 7 ranking, marking you as likeliest to vote. One negative or equivocal answer gets you bumped down to a 6, and your responses to the poll questions are given a lesser proportional value. If you score a 5 or lower, forget about it: You've been screened out. (CBS's polling doesn't screen out low-scoring respondents, relying instead on a ratio whereby respondents are weighted in direct proportion to how high they score.)
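In code, the screening logic just described might look like this sketch. The exact down-weight Gallup applies to a 6 isn't given here, so the 0.5 below is an assumption.

    # Gallup-style likely-voter screen: count affirmative answers to the
    # seven questions, keep 7s at full weight, down-weight 6s, drop the rest.

    def gallup_weight(answers):
        """answers: seven booleans, one per screening question."""
        score = sum(answers)
        if score == 7:
            return 1.0      # likeliest voter: counted at full value
        if score == 6:
            return 0.5      # "lesser proportional value" (exact figure assumed)
        return 0.0          # score of 5 or lower: screened out entirely

    # CBS's variant, per the article: no cutoff, weight in proportion to score.
    def cbs_weight(answers):
        return sum(answers) / 7.0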




Is 7 the Holy Grail of polling?



Pollsters value likely voters more highly than merely registered ones, although they keep parallel tracks of both. One of the criticisms Gallup faces in the current election campaign is that it's touting its "likely voter" results at a time when large numbers of those voters are arguably still in play. We call them "undecideds"; pollsters call them "persuadables."


"As you get closer to the election, all the various likely voter models work better," University of New Hampshire Survey Center Director Andrew Smith told the New York Times. Ray Teixeira, who crunches poll numbers for the Center for American Progress, adds (in one of a series of reports for Alter Net) that "sampling likely voters is a technique Gallup developed to measure voter sentiment on the eve of an election and predict the outcome," as opposed to tracking voter's sentiments months out from Election Day.


Even then, Teixeira argues, Gallup's registered-voter numbers have been more accurate election predictors for the 1988-2000 presidential races than its likely-voter ones. (Gallup's election-eve likely-voter polling in 2000 showed Bush up by 2 percent; others were farther off, and only CBS, along with Gallup's registered-voter sample, called it right.)


What "likely voter" samples are measuring, Teixeira contends, is the degree to which each candidate's "base" is motivated at any given point: "partisans of the mobilized party," he writes, "tend to be screened into the likely voter sample and partisans of the demobilized party ... tend to get screened out."




Disenfranchised?



Then there are all the voters who don't get polled at all. According to Gallup itself, "College students living on campus, armed-forces personnel living on military bases ... hospital patients and others living in group institutions are not represented in Gallup's 'sampling frame.'" Cell-phone numbers are also off-limits to all pollsters, and "as a result, an estimated 3 percent of mostly under-30 U.S. households have no chance of being polled," writes Wall Street Journal science correspondent Sharon Begley.


Mason-Dixon's Coker scoffs at such findings. "The first myth is that all younger voters can only be contacted by cell phone or Internet. That's not true," he says. "There are certainly a significant number of younger voters who can be contacted on a landline. They're just a little harder to get, and it takes a little bit more work to get them."


Even so, if collegiate and cell-phone-equipped younger voters are going unsurveyed, pollsters may be setting themselves up for an October surprise this year. An Internet-based poll, conducted by Britain's Economist and U.K. polling firm YouGov.com, discloses some interesting numbers among its 18- to 24-year-old participants. (Disclosure: I am one of YouGov's guinea pigs, albeit in the 25-44 age bracket.) The youngest respondents are strongly pro-Kerry (by a 20-point margin), pro-choice and pro-gay marriage. The race is deadlocked in the next age bracket, and the numbers move in Bush's favor in the 45-or-higher subsets.


"I'm not terribly worried about lots of young people who weren't terribly interested in politics suddenly showing up at the last minute and deciding they're going to vote for Bush or Kerry because they were suddenly motivated to do so," replies Mason-Dixon's Coker. "Election after election after election basically shows that the older you are, the more likely you are to vote, and the younger you are, the less likely you are to vote. There are lots of socio-economic reasons for that."



• • •



"When you gather opinions from people on subjects of which they know little or nothing, you're only collecting interesting garbage."



—Jack Shafer, Slate (June 11, 2004)


Case in point: a poll conducted by the Review-Journal on the Ray Rawson/Bob Beers race in October 2003. Rather than being confined to constituents in State Senate District 6, it sampled the length and breadth of Clark County. Not surprisingly, 72 percent were "undecided." Given the R-J's blunderbuss approach, what other result could have been expected?


Using a precision tool like a poll as a political scattergun is nothing new for the R-J, which ran 33 tax-related polls in 2003 alone, some of them heralded with inflammatory headlines and the eventual tax package characterized as "record tax increase(s)." Given the hysteria-tinged antitax tantrums that regularly issue from the paper's editorial pages, one might suspect a cause-and-effect relationship. Coker insists it was business as usual, noting that a plethora of taxes was under consideration during that tempestuous 2003 legislative session.


Not that the R-J's editors are shy about spinning poll results to reflect the paper's political agenda. When Nevadans' support for the war in Iraq ticked upward three points (within a four-point margin of error), the paper trumpeted it as a seismic shift.


"Many undecided on doctor-, lawyer-backed initiatives" howled a September 21, 2004, Review-Journal headline, even though two of the three initiatives had 50 percent-plus support in the Mason-Dixon polls and the undecideds were in the 10- to 15-percent range. By contrast, voters were characterized as merely "unsure" in three state Supreme Court races where the undecided vote ran from 38 percent to 54 percent. ("Is it 'yes' at 50 or better?", Coker asks, regarding initiative-and-referendum votes. "As long as 'yes' is under 50—it could be 48 [for] to 38 [against]—I still wouldn't bet my life savings that it's going to pass, because it's a lot easier to convince people to vote 'no' than 'yes.'")


We'd like to tell you more about the R-J's polling, but the newspaper removed many of its Mason-Dixon links a few days after our conversation with Coker.




Garbage in, garbage out



Another example of the irresponsible employment and promulgation of poll results is a table the online magazine Slate has been running: a running tally of how the electoral votes would fall if the election were held today. It shows a Bush landslide.


That's not the problem. As a caveat posted on AndrewSullivan.com noted, Slate is slopping together state-by-state polls from several different polling organizations (and, as we've seen, each pollster tends to use his or her own formula). "Likely voter" results are mish-mashed with "registered voter" ones, with the end result that there is no leveling factor, no baseline for reconciling this goulash of data.


With so many different polling instruments, time frames and samples being stirred into the recipe, the result is indecipherable without hours of research—and as bogus as predicting a Kerry landslide using the same slapdash methodology would be.


As the R-J's Steve Sebelius maintains, it's crucial to see polling instruments themselves, especially when they posit some radical shift. "There's a lot of science that goes into a legitimate poll," Sebelius says. "You have to see those questions. Did they evenly divide their people with Democrats and Republicans? Are they polling likely voters, as opposed to simply registered voters? Pollsters strive for gender-neutrality, so they have an equal balance of men and women, Republicans and Democrats, all likely voters, randomly dialed so you're not skewing the sample."


Sebelius' opposite number at the Sun, Jon Ralston, shares the skepticism. "Media often report on polls without asking to see the entire instrument. They could be using information obtained after several push questions," Ralston says, "and they rarely explain the significance of polls, usually relying on the pollster, which isn't always a good idea. Polling is a science, but too many people call themselves pollsters and polling experts."




Your neighborhood pusher



Some "polls" aren't polls at all. They fall into the realm of "push polls," designed to produce a predetermined result or—as their name suggests—to push people's buttons. The most infamous example in recent political history is the smear campaign conducted against Sen. John McCain in South Carolina, after he bested George W. Bush in the 2000 New Hampshire primary. Voters received calls from "pollsters," asking questions like, "Would you vote for John McCain if you knew he'd fathered a child by a black prostitute?" Not only is the question predicated on a completely specious premise, it is designed to elicit an emotional reaction rather than to obtain information.


"The push polling is more of a campaign tool, both to get information out and to discover what negative information is best to use," Sebelius says, going on to cite a recent local example. "In the Ann O'Connell race, I know that the Citizens for Fair Taxation—i.e., the gaming industry—did polling, and they asked the matchup question. 'If the election were held today, who would you for: Joe Heck? Ann O'Connell?'," Sebelius relates. "Then they started to ask their push questions. 'Would you vote for her if you knew she signed onto the largest tax increase in Nevada history?' They found that was going to be a very effective weapon against her, so you saw that in the advertising."




I, Robot



Most of the polling one is likely to be subjected to consists of what are called "robo-polls": pre-programmed, pre-recorded questionnaires blasted to untold phone numbers, their sponsors often either anonymous or cloaked behind euphemistic names. Here's a quickie deconstruction of some recent robo-polling, much of it of the "push" variety:


"Election Research"


"Do you believe that frivolous and abusive lawsuits end up costing us too much money?"


(The language here is a dead giveaway. Aside from the use of "us" to foster complicity, loaded terms like "frivolous and abusive" are the sort of language that self-respecting pollsters—who labor over value-neutral verbiage—would scorn.)


"On the issue of abortion, do you consider yourself pro-life?"


(No "pro-choice" corollary is offered.)


"Do you support the U.S. position on Iraq?"


(Kind of hard to answer that one when the "position" is left undefined.)


"Helping Hands Support":


"Do you support the NRA and its strong support of gun-owners' rights?"


(The framing of the question doesn't leave much room for those gun owners who don't think the NRA is doing enough on their behalf.


And, again ...)


"Are you pro-life."


These are excellent examples of framing language in action. The elephants have understood and employed "framing" rhetoric with devastating effect at least as far back as the 1994 "Contract with America," and it has been critical to Republican electoral victories ever since. The donkey party (Bill Clinton excepted) still appears not to have grasped it, which would help explain why it's had its ass fed to it in most election cycles of late.


"ITC Research"


"Considering the current state of our economy, do you think Republicans in Congress have done enough to help average Americans?"


(It probably depends on how you define "average," and there's that "our/us" language again. One could plausibly answer "yes" to this question, but the language appears to frame a "no" response.)


"Considering the health and balance of our economy, how important is it for you to restore a Democratic majority to the House of Representatives? Very important, somewhat important [or] not important."


(This question virtually presumes a Democratic respondent. Conservatives could answer "not important," but it's pretty obvious where the questioner is going with this one.)


Unidentified:


"Even though you're not supporting Ralph Nader now, what are the chances that you might support Ralph Nader in the election for president? Fair? Small? Very slight chance/no chance at all?"


(Somebody is obviously trolling for Nader votes here. Even the most negative response would allow a "very slight chance," leaving the door open for conversion.)


"Opinion Research" [paid for by Defenders of Wildlife Action]:


"In the November election, how important are wildlife and conservation issues when deciding your vote for president? Very, somewhat or not important at all?"


(DWA, which supplies a callback number at the end of the survey, is an unabashedly green organization, and the question appears designed to generate strong pro-environment numbers, given that two of the three available responses are positive, which can be spun as campaign fodder. The language here, though, is closer to value-neutrality than in any of the other push polls.)




Artistic endeavors




"This is art. This isn't science. Nobody knows."



— Pollster Peter Hart, in the Wall Street Journal


That's a mighty pessimistic viewpoint, especially coming from one of the deans of American political polling. But if any thread comes through in hours of discussion, it's that what at its best has the precision of a scalpel is too often employed with the finesse of a meat cleaver.


The most interesting aspects of a poll are also those likeliest to go unreported: the numbers inside the numbers. "I find most news organizations don't do a good job of putting polls in context and looking at the whole poll to see context," Ralston says. What's the good of slicing and dicing all that demographic data if only the crudest version of the pollster's painstaking research gets broadcast to the world?


Part of polling's vulnerability to misinterpretation and second-guessing is that, while a poll preserves data in amber, events themselves tumble forward, always threatening to render the pollster's findings obsolete. "A poll is a snapshot," says Sebelius. "It's just where the people are at that moment in time ... It is not sports-book accurate. We're not setting betting lines, and we're not saying we've got a degree of confidence approaching 100 percent metaphysical certitude that this is where it's going to end up."


"It's just a little art and a little science, and sometimes numbers get thrown by little factors here and there. I don't profess to have a crystal ball or ESP," Coker says, laughing. "If I could predict the future, I'd be a much richer man than I am right now."


Polls are the sultry concubine of journalism: alluring, but untrustworthy enough to be kept at arm's length. They read the tarot cards of public opinion and offer seductive interpretations that can make the rational person second-guess his own faculties, flinging one from elation to depression and back again. They are the sibyls whose occult knowledge feeds the maw of the news cycle, and the wilder the prophecy, the better the headline it makes. They comfort us when they reaffirm our preconceptions, enrage us when they don't.


Do we report too little and poll too much? I don't know. Maybe we should take a poll.
