Wednesday, January 27, 2016

Public opinion surveys and cellphones: A tipping point?

The Pew Research Center just announced that it will conduct more of its survey interviews by cellphone in 2016: as many as 75 percent of them. And depending on how much you know about cellphones and public opinion surveys, that's either one kind of surprise--can they do that?--or another--weren't they already doing that?

Pew Research does a good job answering those questions, which more or less share an answer: You can call cellphones, but it's difficult and expensive to do so, sometimes doubling the cost of the interview process. Reaching homes without a landline has long been the biggest hurdle in accurate public opinion surveys; that's still true, except that now it's the cellphone-only household that complicates things. Here's how Pew describes that:
Cellphone-only individuals are considerably younger than people with a landline. They tend to have less education and lower incomes than people with a landline. They are also more likely to be Hispanic and to live in urban areas. For this reason, excluding cellphones from a poll – or not including enough of them – would provide a sample that is not representative of all U.S. adults.
So if your company or organization cares about audience data in those populations--or just more accurate aggregate findings for U.S. adults--this is good news.
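If you like to see the mechanics, here's a toy simulation of that coverage problem. The population shares and opinion rates below are made-up numbers of mine, not Pew's; the point is only that a group the survey frame can never reach will pull the estimate away from the truth no matter how carefully you sample within the frame.

```python
import random

random.seed(42)

# Hypothetical population: 40% cellphone-only adults who (for
# illustration only) hold an opinion at a different rate than
# adults reachable by landline.
population = (
    [{"cell_only": True, "supports": random.random() < 0.60} for _ in range(40_000)]
    + [{"cell_only": False, "supports": random.random() < 0.45} for _ in range(60_000)]
)

true_rate = sum(p["supports"] for p in population) / len(population)

# Landline-only frame: cellphone-only adults can never be selected.
landline_frame = [p for p in population if not p["cell_only"]]
landline_sample = random.sample(landline_frame, 1_000)
landline_rate = sum(p["supports"] for p in landline_sample) / len(landline_sample)

# Dual-frame sample: every adult has a chance of selection.
dual_sample = random.sample(population, 1_000)
dual_rate = sum(p["supports"] for p in dual_sample) / len(dual_sample)

print(f"True support:           {true_rate:.1%}")
print(f"Landline-only estimate: {landline_rate:.1%}")  # biased low
print(f"Dual-frame estimate:    {dual_rate:.1%}")      # near the truth
```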

I think it's essential for communications pros not only to share and use public opinion data in their work but also to understand more about how these data are gathered, so methodology changes of this type are must-knows. I also love the reminders in this recent post, A Psephologist's Lament, which explains why sample size and margins of error may not be the significant indicators they appear to be:
A word of caution. Don't be thrown by sample size and the margin of error. For example, the margin of error is a statistical concept that largely relates to the numbers of people interviewed. It is often misunderstood in that it is not really an error at all but the acceptable range that poll findings would fall within had you interviewed the entire population. Who you interview, how you interview them, and how you model your data are more significant indicators of quality than the number of people in a poll. Put it this way, if you have a badly constructed sample, the more people you interview the more inaccurate your results will be. The errors in your data will multiply while the margin of error will shrink making the poll appear more precise and rigorous.
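That last point is easy to demonstrate with the standard margin-of-error formula for a proportion. The sketch below uses the usual 95 percent confidence calculation; the fixed five-point "frame bias" is a hypothetical of mine, standing in for whatever error a badly constructed sample bakes in. Watch the margin of error shrink as interviews pile up while the real error doesn't budge.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Standard 95% margin of error for a proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

true_support = 0.50
frame_bias = 0.05  # illustrative only: error the sample design bakes in

for n in (500, 2_000, 10_000, 50_000):
    moe = margin_of_error(true_support, n)
    print(f"n={n:>6}: margin of error = +/-{moe:.1%}, "
          f"but the estimate is still off by ~{frame_bias:.0%}")
```

Run it and the reported precision improves from roughly +/-4.4 points to +/-0.4 points, which is exactly how a flawed poll can look more rigorous the bigger it gets.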
I'll be sharing that kind of thinking with two groups of clients I'm working with this month, starting with last week's workshop, Communicating with Non-Scientists: Audiences and Stakeholders, for the American Association for the Advancement of Science and its Science & Technology Policy Fellows--scientists who come from all over the U.S. to work in Washington in congressional and federal offices that need science advice. In that session, I shared lots of sources and findings from opinion research on non-scientist public audiences--and encouraged the Fellows to do the same, since so many of them work in agencies that hold treasure troves of such data.

Next, I'll be presenting a review of public opinion on human-computer interaction to a board committee of SIGCHI, a special-interest group of the Association for Computing Machinery, with a special focus on audience data that may be useful in developing public messaging. In both cases, understanding more about the methodology of public opinion research helps these groups think through how useful--or not--it can be as a tool in public communication.

(Creative Commons licensed photo by Garry Knight)