As pollsters, we are rightly in the firing line after the Australian election. What happened?

While progressives struggle to make sense of the weekend election result, the pollsters whose projections created the security blanket of certainty around a Labor win are rightly in the firing line.

I don’t think there is any simple answer to what appears to have gone wrong with the polls this election; rather, we need to look at the perfect storm of declining voter engagement, shifting demographics and technological change.

Let’s start with the obvious. None of the major opinion polls – Newspoll, Ipsos, Galaxy or Essential – correctly predicted a Coalition win. In fact, they all projected a two-party-preferred vote of between 51-49 and 52-48 in Labor’s favour. While the postal votes are still being counted, it appears the final result will be closer to 51-49 to the Coalition.

Technically, all these polls can claim to be within the margin of error (a 95% certainty that the poll is within 3% of the actual result), but that would be an absolute cop-out. The reality is that all the polls had been consistently tracking a Labor victory for the best part of three years, albeit narrowing over the life of the campaign.
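
For readers who want the arithmetic behind that claim, here is a rough sketch of where the 3% figure comes from, assuming a simple random sample of about 1,000 respondents (a size assumed for illustration; actual sample sizes and designs vary):

```python
import math

# Assumed inputs for illustration only; real polls differ in size and design.
n = 1000   # assumed number of respondents
p = 0.5    # worst-case proportion, where sampling variance is largest
z = 1.96   # z-score for a 95% confidence interval

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/- {margin_of_error:.1%}")  # prints roughly +/- 3.1%
```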

If we look at the Essential poll in particular, it appears the Coalition vote was about 2.5% too low and Labor’s vote about 2.5% too high. The minor parties appear about right, although it seems that, particularly for One Nation and the United Australia party, preferences have flowed more strongly to the Coalition than in previous elections. There are a number of explanations for this disparity.

The first is that the polls were actually an accurate reflection of where the public was at the start of the week, and there was a move to the government in the final days of the campaign. Essential’s poll went into the field the previous Friday, so even if this shift occurred in the final week we would have missed it.

We always knew there was a large cohort of voters with extremely light engagement. In fact, our final poll still had 8% of voters undecided (and thus removed from the sample), nearly double the number from previous elections. A further 18% told us they hadn’t been paying much attention to the campaign. That’s more than a quarter of the electorate.

While in most elections these voters have tended to break in line with the existing two-party-preferred vote, it is possible that on this occasion the most disengaged 10% turned up on election day and voted overwhelmingly for the Coalition. Given the noise of the Clive Palmer campaign, the single-minded mendacity of the Liberals’ tax assault and the relative complexity of Labor’s voter choice proposition, this is not outside the realms of possibility.

The challenge for pollsters is that if these are the people who determine an election, we need to investigate new ways of reaching them. Currently we tend to think about the electorate in terms of “rusted on” voters and “swinging” voters. These groups are relatively easy to segment and understand because they can be identified with a question like “do you always vote for the same party?”

The problem is what to do with the people who answer this question with “don’t know”. While it’s cleaner to simply remove them from the survey, doing so means the survey only ever tells part of the story. On these critical disengaged late deciders, we clearly need to come up with something better. We know, for example, that in the marriage equality plebiscite the late deciders voted “no”. If this reflects a default conservatism, then the Coalition vote is being seriously underestimated when these voters are removed from the sample.
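
To see how much this could matter, here is a back-of-the-envelope sketch with assumed numbers: a notional 51-49 poll among decided respondents, and a purely hypothetical 70-30 break towards the Coalition among the excluded 8% of undecideds:

```python
# All figures here are assumptions for illustration, not poll results.
decided_share   = 0.92   # respondents who named a party
undecided_share = 0.08   # respondents removed from the published sample

# Notional published result among decided voters: Labor 51, Coalition 49.
# Hypothetical break among the undecided: Coalition 70, Labor 30.
labor     = 0.51 * decided_share + 0.30 * undecided_share
coalition = 0.49 * decided_share + 0.70 * undecided_share

print(f"Labor {labor:.1%} vs Coalition {coalition:.1%}")
# Labor 49.3% vs Coalition 50.7% - the excluded late deciders flip the headline figure.
```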

A second line of inquiry is the quality of poll sampling. All polls seek to reach a representative sample: one that generally reflects the broader population on age, gender, income and geography. But this is getting more challenging, with traditional phone polls hampered by the decline of fixed landlines, particularly among younger voters. Not only are people harder to reach, they are also less likely to respond to surveys. Refusal rates are increasing across all forms of contact, and the lower the overall response rate, the more likely the sample is to be skewed.

Working off an online panel, Essential is able to establish a system of interlocking quotas for specific groups. If a particular bucket of voters can’t be filled, we “weight” the responses from others in the group so that their opinion has a proportional influence on the outcome. The problem is that if particular groups are systematically underrepresented, the influence of the smaller number of respondents in that cohort is amplified.
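
As a rough illustration of that amplification, here is a minimal sketch using a single hypothetical quota variable (age bands) with made-up numbers; in practice the quotas interlock across age, gender, income and geography:

```python
# Hypothetical population shares and response counts, for illustration only.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_counts    = {"18-34": 150,  "35-54": 350,  "55+": 500}

total = sum(sample_counts.values())

# Weight = population share / sample share for each group.
weights = {
    group: population_share[group] / (sample_counts[group] / total)
    for group in sample_counts
}
print(weights)  # {'18-34': 2.0, '35-54': 1.0, '55+': 0.7}
# Each underrepresented 18-34 respondent counts twice, so if that small group
# happens to be unrepresentative, its error is doubled in the headline figure.
```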

The inherent risk in weighting for pollsters is that when a result emerges as an outlier, there is a tendency to review the weighting to make the result look closer to the established norm. Essential has never altered our results and over the years has published what appear to be “outliers”, but there is a risk that “herding” results in this way creates a self-reinforcing echo chamber among the published polls.

This problem is most pronounced in the so-called “robo-polls”, in which a recorded message is dialled out to an entire electorate, a much smaller number of usually older voters punch in their answers, and the results are then weighted through algorithms to ensure they appear “robust”. The spread of these robos has been at the heart of political research’s own automation crisis: the polls are a fraction of the cost and just good enough to generate headlines. But not only are they inherently flighty, their methodology is unclear and, significantly, they drain research companies of the day-to-day quantitative work that helps us build, test and refine more robust models.

My final point of reflection, though, is not so much about the polls as the way we all, myself included, have tended to use them. For the past decade they have become the default scoreboard for the political contest. They have become the justification for internal power plays and the fodder for lazy political analysis, part of a perpetual self-reinforcing feedback loop.

This is why we need to focus a lot more of our research on broader questions about attitudes to contemporary issues through the eyes of voters who do not see their lives through a political prism. It’s why we often test awareness of facts before worrying about opinions, and why we attempt to dig deeper into demographics, particularly around gender and age. It’s also why our best work occurs when we take these “quant” findings and dig deeper through face-to-face focus groups where people get to unpack what they really mean.

All of which is to say: I don’t think the result on the weekend is a reason to stop asking questions and being curious about Australians. But it does challenge all of us to be more critical about the information we collect and dig deeper into what it really means.

• Peter Lewis is the executive director of Essential and a Guardian Australia columnist
