OPINION
22 March 2011

Online sampling concerns top of mind in quality debate

Ask a roomful of researchers to talk about the quality issues the industry faces today, and it’s a safe bet that online sampling will be one of the main topics up for discussion.

That was the case as IJMR editor-in-chief Peter Mouncey, Mesh Planning’s Fiona Blades, Ipsos Mori’s Ben Page, Heineken UK’s Frances Dobson and Real Research’s Adam Phillips gathered for a panel debate.

Dobson kicked things off by acknowledging the benefits of online sampling, but asked whether researchers really know enough about their respondents, and whether they can be sure respondents are who they say they are.

For Phillips, the concern was whether online samples were managed properly enough to be truly representative, while Page wondered whether the industry was losing a skill set because researchers are simply “doing what the computer tells them” and don’t understand the basics of things such as sampling and weighting.

“Are we having a go at online just because we can?” asked a voice from the floor. No, replied Dobson: “There are concerns about panels. A lot of agencies can’t answer when asked how representative their sample is.”

But Survey Sampling International chairman Simon Chadwick blasted the panel for “having a conversation rooted in the past and ignoring the elephant in the room”. The time to be discussing panel quality was between 2002 and 2009, Chadwick said, and the industry should now instead be focusing more on “how we treat respondents online and the garbage we put in front of them”.

Ipsos Mori boss Ben Page acknowledged survey design as an area the industry “needed to do better in”, but he maintained that there will always be concerns and questions asked about the people who take part in online surveys while there is nobody to “go and knock on the door” to find out more about them.

@RESEARCH LIVE

6 Comments

14 years ago

Leave it to Chadwick to throw oil on a fire that he himself has fed, with the same cattle feed, at conference after conference. This truly is an aged conversation that no one company is willing to address, so as to save themselves from persecution. Maybe researchers should get more personal with their data than schmoozing and feeding this chain cycle of bullsh*t marketing fluff, and tell it like it is.


14 years ago

Until research buyers enter the discussion, whatever the vendors say is moot.


14 years ago

What the "online sampling" is? Does “online sampling” mean selection of sample from the self-selected databases? Surely that these self-selected databases, called online panels, cannot “represent” anything else but itself. This implies that such self-selected databases and samples drawn from these databases cannot be used to make any general statements about larger population such as UK population. Unfortunately opposite is true. Numerous examples can be found in daily press and websites of online-only MR companies.


14 years ago

Yahoo, it's 1998 again! Nice progressive debate here. But traditionalists going on about their fears of not knowing who online respondents are shouldn't get too comfortable. All commercial research sampling is flawed, imperfect and not representative. We have only ever interviewed people who were prepared to take part in market research, and then assumed they were representative of everyone.


14 years ago

If we want to have a discussion on quality concerns, why is it that we always start by criticising online? Let's be honest with ourselves: what possible reason could a respondent have to lie in an online survey that does not also apply to a telephone or face-to-face survey? Remember, online panel members have opted in, whereas CATI/F2F respondents are being hassled over dinner or at the weekend. But more importantly, if there is a very small proportion of people not being honest, it pales in significance next to the interviewer who, because they are either paid per interview or constantly harassed to get the strike rate up and their interview time down, doesn't always enter data correctly. It appears to me that too many people involved in this argument are just too far removed from the nuts and bolts of a real phone room or field department. Do we honestly believe cheating does not occur through the 'older' methods of interviewing? One or two respondents lying in a sample of 500 is insignificant; one or two interviewers cheating in the same sample is corrupted data.


14 years ago

I am convinced by Ben Brown's comments! Everyone calls this topic an aged conversation, so why doesn't someone take a step forward and prove it with data? Here is how it could be tested. Design a short questionnaire on any topic, with demographics (age, gender, region, education and income), among the general population. Take the top five online sampling companies and ask each to deploy only 10K invitations, distributed to be nationally representative. Leave the survey in field for 24 hours; if you have more funds, achieve 200 completes per sampling company, if not 100 (though that would be too low to draw conclusions). Close the study, then review, for each sampling company, the demographics of completed interviews, the drop-outs (stopped mid-survey) and the quality of the data. Check the following with your data: compare the completed-interview demographics against the nat-rep targets, compare the data for key questions across all the sample companies, and draw conclusions accordingly. The data for the key questions should match across all the vendors, while the demographics may vary depending on each company's response rate. If one agency's data quality is very poor, you can conclude that that agency does not look after its panellists with rewards, and that the agency, not the respondents, is responsible for the poor quality. If two or four respondents are lying, that is to be expected, and they can be thrown out of the survey. Any comments are welcome.
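The benchmark comparison this comment proposes can be sketched in a few lines of code. Below is a minimal, hypothetical illustration: each panel company's completed-interview demographics are compared against national benchmarks with a chi-square goodness-of-fit test. All vendor names, counts and benchmark proportions are invented for illustration; a real test would use published census figures and the actual completes per vendor.

```python
def chi_square_gof(observed, expected_props):
    """Chi-square goodness-of-fit statistic for observed counts
    against a list of expected proportions (which should sum to 1)."""
    total = sum(observed)
    stat = 0.0
    for obs, prop in zip(observed, expected_props):
        exp = total * prop
        stat += (obs - exp) ** 2 / exp
    return stat

# National (benchmark) age-band proportions -- illustrative only.
nat_rep = [0.20, 0.25, 0.25, 0.30]

# Completed interviews per age band for two hypothetical panel vendors.
vendors = {
    "PanelCo A": [35, 50, 55, 60],   # n = 200, close to benchmark
    "PanelCo B": [70, 60, 40, 30],   # n = 200, skews young
}

for name, counts in vendors.items():
    stat = chi_square_gof(counts, nat_rep)
    # With 4 bands, df = 3; the 5% critical value is about 7.81.
    flag = "looks nat-rep" if stat < 7.81 else "skewed vs benchmark"
    print(f"{name}: chi2 = {stat:.2f} ({flag})")
```

The same per-vendor comparison could then be repeated on the key survey questions, which (per the comment's argument) should agree across vendors even where the demographics drift.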
