Phone survey response rate analysis

Here is an interesting summary of research from the Pew Center about phone survey responses. Some of the highlights:

  • Phone survey response rates hover around 9%, and this has been a stable rate for several years.
  • People who answer phone surveys are more likely than average to be civically and politically engaged.
  • “Low response rate phone polls still conform closely to high response rate in person surveys on measures of political and religious identification.”
  • Pew Center surveys are awesome (says the Pew Center) (but I tend to agree).

Google Surveys: An MTurk Alternative?

Maybe I’m still a bit freaked out by “The Circle,” but I’m a little cynical about Google Surveys. Please note that I haven’t tried using Google Surveys for data collection, so my cynicism comes mostly from what I read in their online materials. Here are my topline thoughts:

  1. It’s expensive. Asking 2-10 questions of a general American sample costs $1 per response, and adding gender or location targeting pushes that to $1.50 per response. Ten questions doesn’t get you very far (at all) in academic research. (See the quick cost sketch after this list.)
  2. They recommend having 1500 participants. For validity. Of course they do!
  3. Not sure who the respondents are, or how they are being compensated.
  4. I worry, frankly, that Google has access to the data—something Amazon does not.
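To make “expensive” concrete, here is a quick back-of-the-envelope sketch using the prices quoted above and Google’s recommended sample size. The only assumption I’m adding is that those prices are flat per-completed-response rates; this is an illustration, not Google’s official pricing model.

    # Back-of-the-envelope Google Surveys cost, assuming the prices above are
    # flat per-completed-response rates (my reading, not an official rate card).

    def survey_cost(n_respondents: int, price_per_response: float) -> float:
        """Total cost of a 2-10 question survey at a flat per-response price."""
        return n_respondents * price_per_response

    RECOMMENDED_N = 1500  # the sample size Google recommends "for validity"

    for label, price in [("general sample", 1.00),
                         ("with gender/location targeting", 1.50)]:
        print(f"{label}: ${survey_cost(RECOMMENDED_N, price):,.2f}")
    # general sample: $1,500.00
    # with gender/location targeting: $2,250.00

So a single run at the recommended sample size lands somewhere around $1,500 to $2,250 before you get to any follow-up questions.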

On the other hand, they do have panels which are kind of cool.

Pew has also studied Google Surveys, and there’s nothing really groundbreaking to report.

Have you used Google Surveys? What do you think?

On MTurk, iPad responses are different (sort of)

That’s the clickbait finding from a new study by Findlater and colleagues. They noted all the research showing that Turkers perform similarly to lab populations (validity-wise), but wondered whether this held true for Turkers doing work on mobile devices with touchscreens. They argued that since mobile devices are the most popular way of accessing the Internet, there may be more variability due to the respondent’s posture and the movement of the device.

So, they compared 30 people in a lab to 303 people on MTurk. Participants had to complete a number of tasks with the interface they were using, such as dragging objects across the screen. The findings show:

“(1) separately analyzing the crowd and lab data yields different study conclusions: touchscreen input was significantly less error prone than mouse input in the lab but more error prone online;

(2) age-matched crowdsourced participants were significantly faster and less accurate than their lab-based counterparts, contrasting past work;

(3) variability in mobile device movement and orientation increased as experimenter control decreased, a potential factor affecting the touchscreen error differences.”

The study itself (which you can download from the link) shows that the mobile device used was an iPad and that there were specific instructions on how and where to place the device.

I think the abstract somewhat overstates the findings: this was an interaction task, not something like a survey, which requires far fewer manipulation skills, and that distinction isn’t really clear in the abstract. But it’s kind of interesting.

Review of Amazon’s Mechanical Turk for Academics

PsycCRITIQUES

April 10, 2017, Vol. 62, No. 15, Article 3

© 2017 American Psychological Association

A Condensed yet Comprehensive Guide to Mturk

A Review of Amazon’s Mechanical Turk for Academics: The HIT Handbook for Social Science Research

by Kim Bartel Sheehan and Matthew Pittman. Irvine, CA: Melvin & Leigh, 2016. 141 pp. ISBN 978-0-97-866386-5, $29.95 (paperback). http://dx.doi.org/10.1037/a0040796

Reviewed by Richard D. Harvey and Dulce Vega

Amazon’s Mechanical Turk for Academics: The HIT Handbook for Social Science Research by Kim Bartel Sheehan and Matthew Pittman is a short but very comprehensive guide for those interested in online research and Amazon’s Mechanical Turk (Mturk) platform in particular.

As suggested by the title, the target audience for the book is “academics”; however, virtually anyone interested in collecting data online, whether they are academics or nonacademics (e.g., marketers, public opinion pollsters), will find this book helpful. Indeed, the range of activities available on Mturk goes beyond mere survey work and includes more complex tasks such as transcribing audio, writing catalog product descriptions, conducting web searches, and labeling images. Additionally, while the book focuses primarily on Mturk, the reader will learn a great deal about other crowdsourcing platforms as well. As an overarching scheme, the book covers the tripartite structure of MTurk: requesters, workers, and the vendor, Amazon.

The first few chapters focus on the workers. The workers are the “participants/subjects” in the projects, which are called Human Intelligence Tasks, or HITs. For example, Chapter 2 provides a very comprehensive demographic profile of the typical worker. Herein, the reader will learn how well Mturk samples match the general U.S. population (it turns out better than college samples do). Moreover, there is a good deal of information concerning the variance among Mturk workers regarding their levels of activity, the times and days when they are most active, sex, age, marital status, sexual orientation, political orientation, education, and income levels. With respect to the latter two demographics, it’s clear that Mturk is not just for the poor and uneducated. Over half of workers report incomes ranging from $40,000 to $300,000. Also, 50 percent report having a bachelor’s degree or higher. Beyond mere demographics, there is some good information concerning the competence of Mturk workers with respect to their capacity to complete more cognitively and technically complex HITs, as well as their general scientific fluency. In general, Mturk workers tend to be equal to or slightly above the general U.S. population. One place where there seems to be a fairly big discrepancy between the typical Mturk worker and the general population is judging the emotional expressions of characters made out of Legos (see Bartneck, Denser, Moltchanova, & Zawieska, 2015). Who knew?

Beyond demographic information on the workers, there is a good deal of validation research covered in the book. As mentioned earlier, this information will be helpful for anyone needing an apologetic for using Mturk or even crowdsourcing in general. Readers are likely to be astonished at the amount of research on Mturk workers.

The information provided for potential requesters covers everything from how to set up an Mturk account to how to decide on compensation. The authors provide a very detailed step-by-step protocol for setting up an Mturk account. They also provide some good tips on ensuring a more engaged sample. Workers are rated, and requesters can require that a worker be at a certain rating in order to participate. The pros and cons of this are discussed.
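[An aside from me, not from the book: the rating gate described above is implemented on MTurk as a “qualification requirement” attached to a HIT. Below is a minimal sketch of how a requester might set one up with boto3’s MTurk client; the 95% approval threshold, reward, timing values, and question XML file are placeholder choices for illustration, not recommendations from Sheehan and Pittman.

    # Minimal sketch: create a HIT that only workers with an approval rating of
    # at least 95% can see and accept. All specific values are placeholders.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # Well-known system qualification type for "percentage of assignments approved".
    APPROVAL_RATE_QUAL = "000000000000000000L0"

    with open("survey_question.xml") as f:  # your ExternalQuestion/QuestionForm XML
        question_xml = f.read()

    hit = mturk.create_hit(
        Title="Short academic survey (about 5 minutes)",
        Description="Answer a brief questionnaire for research purposes.",
        Reward="0.75",                        # USD, passed as a string
        MaxAssignments=100,
        AssignmentDurationInSeconds=30 * 60,
        LifetimeInSeconds=3 * 24 * 60 * 60,
        Question=question_xml,
        QualificationRequirements=[{
            "QualificationTypeId": APPROVAL_RATE_QUAL,
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [95],
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        }],
    )
    print("HIT created:", hit["HIT"]["HITId"])

Whether to add further requirements, and where to set the cutoffs, is exactly the kind of pro-and-con trade-off the book discusses.]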

Perhaps the biggest challenge to using Mturk is determining a compensation amount that is fair and motivating to workers and yet still economical for requesters. After all, Mturk is often chosen because it is seen as an option for requesters without big budgets. For the authors, this is as much an ethical issue as it is a practical one. A somewhat related ethical issue is the labeling of “participants” as “workers” within the Mturk platform. This has created some tension between MTurk and institutional review boards (IRBs). For example, Ipeirotis (2009) argued that the designation of Mturk participants as salaried workers, combined with anonymity, should render IRB approval unnecessary for MTurk studies.
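[Another aside, not from the book: one simple way to reason about compensation is to work backward from a target hourly rate and an estimated completion time, then add Amazon’s commission to get the per-response cost. The sketch below does that; the $9/hour target, the 8-minute estimate, and the 20% fee are assumptions for illustration, and the current MTurk fee schedule should always be checked.

    # Rough per-HIT compensation sketch. Work backward from a target hourly
    # wage and an estimated completion time, then add the platform commission.
    # All figures below are assumptions for illustration.

    TARGET_HOURLY_WAGE = 9.00   # assumed target rate
    ESTIMATED_MINUTES = 8       # assumed median completion time from piloting
    PLATFORM_FEE = 0.20         # assumed 20% commission; MTurk charges more for
                                # HITs with many assignments, so verify the fee

    reward = TARGET_HOURLY_WAGE * ESTIMATED_MINUTES / 60
    cost_per_response = reward * (1 + PLATFORM_FEE)

    print(f"Reward per HIT:       ${reward:.2f}")                   # $1.20
    print(f"Cost per response:    ${cost_per_response:.2f}")        # $1.44
    print(f"Cost for 300 workers: ${300 * cost_per_response:.2f}")  # $432.00

Even a modest hourly target adds up quickly at sample sizes of a few hundred, which is precisely the tension between fairness and economy the authors point at.]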

Overall, Sheehan and Pittman have written a great condensed guide to Mturk. The book is a quick read with concise chapters packed with interesting information. Also, readers are treated to a fair amount of information about crowdsourcing methods in general, including some Mturk competitors (e.g., Qualtrics Panels). Frankly, no one should be using Mturk without this book.

Bartneck, C., Denser, A., Moltchanova, E., & Zawieska, K. (2015). Comparing the similarity of responses received from experiments conducted in Amazon Mechanical Turk to experiments conducted with traditional methods. PLOS ONE, 10, Article e0121595.


Ipeirotis, P. (2009). Mechanical Turk, human subjects, and IRBs. Retrieved from http://www.behind-the-enemy-lines.com/2009/01/mechanical-turk-humans-subjectsand-irs.html

MTurk workers’ use of social media

Don’t be misled by the title of this article. It’s called “Crowdsourcing Social Media for Military Operations,” but it really isn’t about that. It’s a report of a survey of almost 800 Turkers about which social media they use.

Almost all (93.1%) use Facebook, about 60% use Twitter, and so on down the list until we learn that about 12% use Yahoo Answers, which I’d never even heard of. Interestingly, Snapchat (the kids’ choice) was left off the list.

The study makes some connections to military operations, but it really is more of a summary of how Turkers use social media. It isn’t a great article by any means (no stats, among other things), but it’s an interesting snapshot of a community.

Comparing MTurk to PA, CF and CBDR

How does MTurk compare to Prolific Academic, CrowdFlower, and the Carnegie Mellon participant pool (also known as CBDR)?

This new article tells you.

CBDR was the ‘control’, so to speak, and comparing MTurk to PA and CF shows:

“In two studies, we found that participants on both platforms (PA and CF) were more naïve and less dishonest compared to MTurk participants. Across the three platforms, CF provided the best response rate, but CF participants failed more attention-check questions and did not reproduce known effects replicated on ProA and MTurk. Moreover, ProA participants produced data quality that was higher than CF’s and comparable to MTurk’s. ProA and CF participants were also much more diverse than participants from MTurk.”

So it looks like a thumbs up to Prolific Academic.

Nice job from some very expert MTurk researchers!

Eyal Peer, Laura Brandimarte, Sonam Samat, Alessandro Acquisti, Beyond the Turk: Alternative platforms for crowdsourcing behavioral research, Journal of Experimental Social Psychology, Volume 70, May 2017, Pages 153-163, ISSN 0022-1031, http://doi.org/10.1016/j.jesp.2017.01.006.
(http://www.sciencedirect.com/science/article/pii/S0022103116303201)

Resisting exploitation of gig workers

Here’s a great piece, published in the UK, by Mark Graham and Alex Wood.

Every word in this is excellent, but here is the most important thing (IMHO):

“…because almost all large online work platforms are currently privately owned firms, they rarely have the best interests of workers at heart. They capture large rents – often 20 per cent of wages – by simply providing a platform that allows clients to meet workers. There is no reason that platforms cannot instead be run by and for workers, as cooperatives, in order to allow workers to capture more of the value that they are creating.”

New Chapter on Law and Crowdwork

This abstract came across my feed today, and it looks like it could be a great read about novel thinking on the legal protection of crowdwork. The citations (which aren’t behind a paywall) include lots of MTurk stuff (and if you’re looking for a good ‘primer’ on law and crowdwork, this bibliography does a great job of providing one).

Citation:

Prassl, Jeremias, and Martin Risak. “The Legal Protection of Crowdworkers: Four Avenues for Workers’ Rights in the Virtual Realm.” In Policy Implications of Virtual Work, pp. 273-295. Springer International Publishing, 2017.