Polls, MMP, and the ‘Bugger Off’ Factor

By Professor Roger Bowden

It’s 7 pm and you’re either making the dinner or eating it in peace. The phone rings. You think you know why, but family is always a concern, so you have to answer it. No, it’s not an Indian call centre trying to flog off time-shares or phone switching. It’s a survey, and do you have a few moments? No, you don’t; or if you’re less polite, ‘bugger off and stop wasting my time!’ And the same for online surveys, of which I get one a week, all asking for ‘just ten minutes of your time’.

That is one reason why the response rate to political polls is at best something like 25-30%. Other delivery mechanisms (online etc.) come with their own problems. In all cases, you have to ask what sort of people might respond and why; whether the questions are framed to promote a particular view; and how the poll’s published outcome will itself drive the eventual result. One thing is for sure: the media line ‘measured with a confidence of 2.5% sampling error’ is stuff and nonsense. It might be true for some mythical statistical population (Plato’s ideal form, so to speak), but what exactly is the reference group involved, and how might their responses be affected by the poll itself?
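For the record, here is where that headline figure comes from. A minimal sketch (Python, using the textbook formula), assuming a simple random sample and honest answers, which is exactly what a 25-30% response rate wrecks:

    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """Textbook 95% margin of error for a simple random sample.

        Only valid if every voter is equally likely to respond --
        precisely the assumption a low response rate breaks.
        """
        return z * math.sqrt(p * (1.0 - p) / n)

    print(f"n=1000: +/-{margin_of_error(1000):.1%}")  # about +/-3.1%
    print(f"n=1537: +/-{margin_of_error(1537):.1%}")  # about +/-2.5%

The formula measures only the luck of the draw; it says nothing about who actually picks up the phone.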

I got interested in such things in the mid-eighties: the political economy of polls. Why did nobody come to the 1984 Los Angeles Olympics; who was behind that survey on father roles and mathematical achievement that my daughter brought home; and was this or that opinion poll really believable? As a good academic economist, I had to give the whole thing a game-theoretic twist, but there was a fair bit of statistical theory to fill it out. The resulting journal articles and book [1] evidently struck a chord. Journal reprint requests flooded in from such unlikely places as an Aerobics Research Centre in Arizona, health care centres in the UK, an Alaskan hospital, and even from behind the then Iron Curtain.

Since then, the jargon seems to have solidified into ‘non-response bias’ and ‘response bias’. The first (i.e. non-response) means that the telephone survey is limited to nice people who have lots of time, no strategic interests, and often make up their minds on the spur of the moment. The ‘amiable dimwit’ factor is one reason why telephone surveys in succeeding weeks often show such volatility.
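To see how big the effect can be, here is a toy simulation with invented numbers: suppose supporters of one party are only half as likely to pick up the phone.

    import random

    random.seed(1)  # reproducible toy example

    # Invented electorate: 40% support party A, 60% support party B.
    population = ["A"] * 40_000 + ["B"] * 60_000

    # Invented response rates: A supporters answer half as often as B's.
    answer_prob = {"A": 0.15, "B": 0.30}

    respondents = [v for v in population if random.random() < answer_prob[v]]
    poll = respondents.count("A") / len(respondents)

    print(f"True support for A: 40.0%; the poll says {poll:.1%}")  # about 25%

A fifteen-point error, and no amount of extra sample size fixes it.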

‘Response bias’ means that I do respond, but slant the reply to accord with my own views or interests. Strategic response, in other words. If enough people are like me, it works.
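And a matching sketch for strategic response, again with made-up figures: everyone answers, but a slice of one camp deliberately misreports.

    import random

    random.seed(2)  # reproducible toy example

    population = ["A"] * 40_000 + ["B"] * 60_000  # true support: 40% / 60%

    def stated_vote(true_vote: str) -> str:
        # Invented strategic rule: 20% of A supporters tell the pollster "B".
        if true_vote == "A" and random.random() < 0.20:
            return "B"
        return true_vote

    poll = sum(stated_vote(v) == "A" for v in population) / len(population)
    print(f"True support for A: 40.0%; the poll says {poll:.1%}")  # about 32%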

Even without strategic response, poll results can drive outcomes. People stayed away from the 1984 Olympics in droves, not because it had been boycotted by the Soviet Union, but because the polls predicted its popularity: would-be visitors anticipated the consequences, expensive hotels, crowds and the rest, and stayed home.

Moving into the present century and Godzone, many commentators think poll outcomes drive strategic voting in MMP elections. Under MMP, a party needs either 5% of the party vote or an electorate seat to enter Parliament; parties that poll below that threshold won’t make it on election night, because nobody wants to waste their party vote.

I should say that ACT is in that position right now. Polls show it’s well below the threshold. I suspect, just from web comments, that there are many former National Party voters who are deeply troubled by damaging outcomes like the foreshore and seabed legislation, or dismayed by the dishonesty of the political process involved. But if we are to believe the polls, a protest vote for ACT or NZ First is a vote wasted.

Or is it? Would foreshore and seabed worriers (to continue the example) be likely to go through with that telephone survey? Much less likely, if they’re anything like me. Just maybe, the poll results are very wrong. But that does not stop them distorting outcomes in an MMP world.

Should we discard poll results altogether, as some would advocate? Not necessarily. They can provide information, provided you identify the reference group. So we can believe the popular, so-called ‘representative’ polls, but only insofar as they apply to the amiable dimwits. We can believe online surveys (such as the NZCPR snap polls), but only insofar as they apply to the site’s reference group.

Reference groups do differ. Age is one important determinant: I like to think it’s because psychologists have identified 35 as the official getting of wisdom [2]. But there are others, e.g. socioeconomic status. Reference groups and media use may be linked: young people use mobiles and not landlines, so for them you have to go online. In addition, the reference group composition can vary for strategic reasons. Classical survey statisticians might say, if they ever thought about it, that this is de facto stratified sampling.

The next job is to combine the results from the different reference groups into a single estimate for the population at large. I suspect this is why simply averaging the results over different polls, and therefore over different reference groups, sometimes produces a more accurate prediction. But for enhanced accuracy, you’d want some sort of estimate of the relative numbers in each of your groups: how many baby boomers, how many young and feckless, how many amiable dimwits, and so on.
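Here is a rough sketch of that combining step, with invented groups and shares; survey statisticians would call it post-stratification, and its accuracy hinges entirely on how good your population shares are.

    # Invented reference groups: (poll result for party A, share of electorate).
    groups = {
        "landline answerers":     (0.25, 0.30),
        "online panellists":      (0.44, 0.25),
        "mobile-only young":      (0.38, 0.20),
        "non-responders (guess)": (0.42, 0.25),
    }

    shares = sum(share for _, share in groups.values())
    assert abs(shares - 1.0) < 1e-9, "group shares must cover the population"

    # Post-stratified estimate: weight each group's result by its share.
    estimate = sum(result * share for result, share in groups.values())
    print(f"Combined estimate for A: {estimate:.1%}")  # 36.6% with these numbers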

The moral, for the discriminating voter, is to believe the polls, but don’t believe the spin doctoring that goes with them. Roll your own version for that. There’s going to be plenty of practice in the months ahead: a Rugby World Cup at home, followed by an election that the All Blacks will win. The pollsters are going to have a merry Christmas.

  1. Statistical Games and Human Affairs, New York: Cambridge University Press, 1988.
  2. For a comforting account, see Barbara Strauch (2010), Secrets of the Grown-up Brain, Black Inc./Viking Penguin, ISBN 9781863954730.