19 April 2008
The problem with surveys

Some questions to ask next time you see a survey…

Is it measuring what it claims to be measuring?  Usually there is an input and an output, e.g. number of cigarettes smoked and mortality.  Are both being measured accurately?  Can both be measured accurately?  I heard a report the other day that claimed to have measured self-esteem in children.  How on earth do you measure that?

Is the thing they are measuring actually as good/bad as the surveyors claim?  For instance, if policy initiative X is supposed to have given rise to an increase in observable phenomenon Y, is Y as good/bad a thing as the surveyors think it is?  A good example is museum attendance.  Good, if you like that sort of thing.  Bad, if you’re the seven-year-old child who would far rather be playing Nintendo.

Is the sample big enough?  We’re getting into some fairly heavy-duty statistics here.  Or, at least, we could be, but an awful lot of surveys have pitiful samples.  My rule of thumb is to ignore it unless it involves at least 500 people (assuming it’s a people survey).
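To put a rough number on that rule of thumb, here’s a back-of-the-envelope sketch, assuming a simple yes/no question, a random sample and the worst-case 50/50 split:

    # Approximate 95% margin of error for a yes/no survey question.
    # Assumes a simple random sample and the worst-case split, p = 0.5.
    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        return z * sqrt(p * (1 - p) / n)

    for n in (100, 500, 1000, 10000):
        print(f"n={n:>6}: +/- {margin_of_error(n):.1%}")
    # n=   100: +/- 9.8%
    # n=   500: +/- 4.4%
    # n=  1000: +/- 3.1%
    # n= 10000: +/- 1.0%

At 500 respondents the answer is only good to within four or five percentage points either way, which is roughly where “pitiful” stops.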

Is there a control?  Was it done properly?  For instance, one of the earliest smoking surveys pitched a random group of smokers against a group of non-smoking doctors (or so I am told).  Not surprisingly, the smokers had shorter lives.
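To see why that control is no good, here’s a toy simulation, with every number invented for illustration: suppose doctors live longer for reasons that have nothing to do with smoking.

    # Toy illustration of a badly chosen control group; all effects invented.
    import random
    from statistics import fmean
    random.seed(1)

    def lifespan(smoker, doctor):
        # Assumed effects: smoking costs 5 years, being a doctor adds 3.
        base = 75 - (5 if smoker else 0) + (3 if doctor else 0)
        return random.gauss(base, 8)

    smokers         = [lifespan(True,  False) for _ in range(5000)]
    doctor_controls = [lifespan(False, True)  for _ in range(5000)]
    fair_controls   = [lifespan(False, False) for _ in range(5000)]

    print("gap vs non-smoking doctors:", fmean(doctor_controls) - fmean(smokers))  # ~8 years
    print("gap vs like-for-like:      ", fmean(fair_controls)   - fmean(smokers))  # ~5 years

The badly matched control makes smoking look eight years bad when the assumed effect is only five; the other three years come from comparing ordinary people with doctors.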

Does correlation prove causation?  If you increase X and observe an increase in Y, that does not mean that X causes Y.  Something else might have caused it.  Indeed, Y might cause X.  Look for a time lag: if X changes and then Y changes, maybe there is causation.  Also, look for other likely causes.  Has the survey factored all of these out?
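Here’s a minimal sketch of the lurking-third-cause problem (again, a toy example of my own): Z drives both X and Y, so they correlate strongly even though neither causes the other.

    # Correlation without causation: a hidden Z drives both X and Y.
    # Needs Python 3.10+ for statistics.correlation.
    import random, statistics
    random.seed(2)

    z = [random.gauss(0, 1) for _ in range(10000)]
    x = [zi + random.gauss(0, 0.5) for zi in z]   # X is caused by Z
    y = [zi + random.gauss(0, 0.5) for zi in z]   # Y is caused by Z, not by X

    print(statistics.correlation(x, y))           # strong, about 0.8

    # Strip out Z, the real cause, and the apparent X-Y link vanishes:
    x_resid = [xi - zi for xi, zi in zip(x, z)]
    y_resid = [yi - zi for yi, zi in zip(y, z)]
    print(statistics.correlation(x_resid, y_resid))  # about 0

A survey that hasn’t factored out its Zs is reporting the top number and calling it causation.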

Is it being reported correctly?  Tell-tale phrases like “up to” and “as much as” are dead giveaways that the reporters are trying to dramatise things.  Also, is what the report says was said actually what was said?  Similarly, if the combination of survey and reportage seems to be leading you to a ready-made conclusion (especially one to do with state policy), be very suspicious.  For instance, the other day I heard one claiming that children who wore ethnic dress to school had higher self-esteem.  Conclusion: let Muslim parents foist hijabs on their daughters.  Just a bit too convenient, isn’t it?

The most important thing to bear in mind is that scientists don’t always get it right and reporters certainly don’t.  Don’t ever take these things at face value.  It’s also worth bearing in mind that there’s a whole branch of the public relations industry dedicated to raising clients’ profiles, and one popular way of doing so is to release surveys purporting to demonstrate a need for the client’s product or service.

By the way, this is just a list of things that came off the top of my head.  Does anyone out there have some other examples?

Feedback (3)


  1. I saw a report on the news today.  The claim was that children who attend nursery have a lower chance of contracting leukaemia.

    What was interesting was what they didn’t say.  They didn’t tell us the size of the survey.  They didn’t tell us how much lower the chance was or what the chance was overall.

    For all we know the chance could have gone down from 1 in 100,000 to 1 in 110,000.

    Worth changing your child’s life just for that?

    Posted by Patrick Crozier on 29 April 2008 at 04:25pm

  2. Here’s the link.

    Posted by Patrick Crozier on 30 April 2008 at 02:19am

  3. If the survey says X causes Y, you might want to ask what else correlates with X (and might be the real cause).

    You might also want to keep an eye out for selection bias.

    Posted by Rob Fisher on 08 May 2008 at 04:52pm
