Drew Linzer: The stats man who predicted Obama’s win

Nate Silver of the New York Times explains the science of presidential predictions

Pundits insisted the presidential race was a toss-up, but “polling aggregators” – who analyse polls to make predictions – were being criticised for favouring President Obama. Not any more.

In September we called Drew Linzer, an assistant professor of political science at Emory University, to ask for his predictions for the upcoming US presidential election.

Linzer runs the website Votamatic, which uses current election polls and past historical trends to predict the outcome of major elections. He gave the same prediction he had been posting on his site since 23 June.

Obama 332 electoral votes, Romney 206.

Weeks later, after the first presidential debate, when Obama’s lacklustre performance kicked off a surge of momentum for the Republican challenger Mitt Romney, Obama’s election odds had sunk like a stone in national polls, and states once considered toss-ups were being reassigned as favouring Romney.

Asked again for his updated prediction, Linzer gave the same answer.

No change, he said: Obama 332 votes, Romney 206.

Now, Obama has been elected to a second term, and election workers are still counting the votes in Florida, which is leaning ever so slightly towards the Democrats. The Romney team admitted to the Miami Herald that they had lost the state, though it has not been officially called. When it is, the final tally in this once too-close-to-call election will be:

Obama 332 votes, Romney 206.

Aside from Barack Obama himself, it is people such as Linzer – along with his contemporaries Nate Silver, who writes the FiveThirtyEight blog at the New York Times, and Sam Wang, co-founder of the Princeton Election Consortium – who may be this November’s big winners.

In a race that many old-school pundits said was too close to call, Linzer, Silver and Wang, who all run websites that use some version of poll aggregation and statistical analysis to predict elections, had Obama as a clear favourite with a slim but persistent lead.

“We really shouldn’t be all that surprised that our methods ‘worked’ on election day,” says Linzer.

“All this proves is that public opinion research is still a reliable and accurate way to learn about people’s voting preferences… as we’ve known all along. [There’s] no need to go on gut instincts or intuition or whatever else the pundits are doing, when we have actual real information,” says Linzer.

But those who have built careers on gut instinct and intuition were surprised. For weeks, journalists and pollsters were convinced that the work of Linzer, Silver and Wang was politically biased or that their maths was wrong.

These men and their statistical models have now been proven correct – and that means re-evaluating what we think we know about politics, polling, and how to win the presidency.

‘Ideologues’?

These aggregators are based on a simple premise.

“Pollsters individually make mistakes, no matter how well-constructed their polls are, but in the aggregate they are quite sound,” says Sam Wang, who in his day job is an associate professor of molecular biology and neuroscience at Princeton University.

And in the past two election cycles, the growing number of state polls being conducted – along with advances in computing technology – has allowed those polls to be aggregated, weighted and combined to produce a clear estimate of each candidate’s probability of winning.

Each of the aggregating websites uses a slightly different formula to arrive at its results, whether that means weighting in historical trends or including economic data and other outside factors alongside the raw polling. But in 2012, all of the websites ran thousands of simulations, and all pointed to a probable win for Barack Obama.
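The basic machinery is simple enough to sketch. Below is a minimal, purely illustrative Python example of the general approach described here, not any of these sites’ actual models: average the available polls in each battleground state, treat the scatter of those polls as a rough error bar, and run many simulated elections to estimate a win probability. The states, poll margins and the 237 “safe” electoral votes are hypothetical placeholders.

```python
import random
from statistics import mean, stdev

# Purely illustrative inputs: a few hypothetical battleground states with
# their electoral votes and a handful of poll margins (Obama minus Romney,
# in percentage points). Real aggregators draw on hundreds of state polls.
STATE_POLLS = {
    "Ohio":     {"ev": 18, "margins": [2.0, 3.0, 1.0, 4.0]},
    "Florida":  {"ev": 29, "margins": [-1.0, 1.0, 0.0, 2.0]},
    "Virginia": {"ev": 13, "margins": [1.0, 2.0, -1.0, 3.0]},
}

# Assumed electoral votes from states treated as safe for Obama (illustrative).
SAFE_OBAMA_EV = 237


def simulate(n_sims: int = 10_000) -> float:
    """Estimate the probability that Obama reaches 270 electoral votes."""
    obama_wins = 0
    for _ in range(n_sims):
        obama_ev = SAFE_OBAMA_EV
        for state in STATE_POLLS.values():
            avg = mean(state["margins"])              # aggregate the state's polls
            spread = stdev(state["margins"]) or 2.0   # rough error bar from poll scatter
            margin = random.gauss(avg, spread)        # draw one plausible election-day margin
            if margin > 0:
                obama_ev += state["ev"]
        if obama_ev >= 270:
            obama_wins += 1
    return obama_wins / n_sims


if __name__ == "__main__":
    print(f"Simulated probability of an Obama win: {simulate():.1%}")
```

The real sites differ in the details, such as how individual pollsters are weighted or whether economic fundamentals are folded in, but the shape of the calculation is the same: average the polls, simulate many elections, and count electoral votes.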

“The polling this year has been remarkably stable,” says Linzer. Even though Obama’s support dipped after his disastrous debate performance, the shift was not enough to radically shake the aggregate predictions, even as individual polls fluctuated.

That led to a steady stream of criticism, with Silver – the most widely read – taking the brunt of the abuse from more traditional election-watchers.

Joe Scarborough, a former Congressman and the host of MSNBC’s Morning Joe programme, said: “Anybody that thinks that this race is anything but a toss-up right now is such an ideologue they should be kept away from typewriters, computers, laptops and microphones for the next 10 days, because they’re jokes.”

Critics said the formulae each aggregator used had built-in bias. But for Linzer and his colleagues, their sites aren’t about political machination, but impartial maths.

“State polls have a very good track record, and if that track record is maintained, then what the state polls are telling us is quite clear,” Wang said before the election.

“If the election turns out a different way, then the question isn’t whether my math is wrong, because my math is quite sound, it’s what’s up with these state polls.”

That’s a very different approach from traditional punditry, where value is placed on perceived momentum, age-old political adages and gut instinct.

“One of the values in doing it our way, in which there’s a system, is it’s all in black and white,” says Linzer.

“If it turns out there’s a flaw, we can find it, spot it and we can work on addressing it as opposed to people whose commentary is based on some thoughts in their head.”

Polling is an obsession in the US, and during this campaign, schedules were organised around the 13:00 EST release of Gallup’s national tracking poll.

Much of the last few weeks of this year’s election was focused on who was really winning and what the polls really meant.

Wang originally started his site in the hopes of calming some of the polling mania by providing a clear look at what the polls really said. The time spent trying to read the tea leaves, he hoped, would be better spent discussing the issues.

A proven model that correctly predicted outcomes could transform the conversation from a discussion about who might win into one about why someone is going to win. But Wang doubts it. “People love a horse race,” he said.

But the potential power of these numbers to disrupt the typical politico patter was evident even in this election. Even as pundits were fighting about the value of aggregation, the narrative that Mitt Romney was riding a wave of momentum was tempered and in some cases walked back in the face of the unrelenting statistics.

After the results were in, journalist Dan Lyons wrote: “Nate Silver and his computers may not put Scarborough and his ilk out of business – there’s loads of airtime to fill, and windbags are still needed for that.

“But Silver has exposed those guys for what they are, which is propagandists and entertainers.”

Increasingly, those who run campaigns are putting more faith in the value of numbers instead of the conventional wisdom of pundits and polls. Witness Obama’s successful re-election campaign, based in large part on micro-targeting and data analysis.

But Linzer is convinced the two methods can co-exist. “What they do is incredibly valuable and I don’t think what I do replaces that in any way,” he says. “I feel like we’re all working towards a common goal, which is accuracy and understanding.”

It’s easy to see why the old guard would feel threatened. Their model was built on the power of spin and narrative. It valued gut feeling, and held that you can move the polls if the story is spun convincingly enough.

It’s no wonder many bristled at a system that stripped all the emotion and intuition from the process.

And yet the system was right – which Linzer could have told you in the first place.
