Rochelle tweeted about this study, and it is just another one of those that doesn't really get MTurk. It promotes the 'fast and cheap' story while suggesting that paying 80 cents for a 10-minute study is OK (it isn't). Some other things I want to pick on:
Quote 1: “Specifically, if participants are all paid to take surveys, what incentivizes them to answer accurately and how do we know that they answer accurately? For example, Oppenheimer, Meyvis, & Davidenko (2009) found that online survey participants are often less attentive than those watched by experimenters in a lab, meaning they may pay less attention to the treatment and bias the experiment. Especially given that the amount of payment is small and fixed, the only way Turkers can increase payment per hour worked is working faster.”
My response: it is basically impossible to do a random survey any longer. I would suggest that the great majority of all research results in someone being paid, whether with money, with 'points' toward gift cards, or with entries into a sweepstakes. In fact, just yesterday I got an email asking me to take part in a survey for a publisher to help with course materials, and my first thought was "what am I being paid?" (answer: nothing). So compensation is getting to be a requirement, in my opinion.
At the same time, the author seems to assume that being paid implies you won't answer accurately, conflating working fast with answering inaccurately. There are many studies showing that Turkers are accurate; just read this blog. In the author's favor, they did note that the possibility of rejection is an incentive to answer correctly and take one's time.
Quote 2: "How many participants fail catch trials? It depends on the difficulty of the catch trials. Rouse (2015) found that ~5% of his population did not pass checks, while Antin & Shaw (2012) found 5.6% of theirs. These numbers can vary widely — in an experiment I personally ran, I found 10-30% of people would fail comprehension checks. More importantly, survey completion rates and catch trial pass rates have equaled or exceeded that of other online survey samples or traditional college student samples (Paolacci, Chandler, & Ipeirotis, 2010; Berinsky, Huber, & Lenz, 2012). However, care must be taken to selecting catch trials that participants do not have prior exposure to (see Kahan, 2013)."
My response: I did read the author's study, and I think the questions he used for his comprehension checks were overly difficult and not well constructed. The literature on the ability of Turkers to answer catch trials is deep and convincing. Cherry-picking literature to show otherwise is problematic.
Quote 3: (regarding the size of the MTurk population): “Therefore, completing a 10,000 person study could take months or years, which could be a substantial concern given that these samples may be necessary for animal advocacy researchers attempting to detect small treatment effects.”
My response: who needs 10,000 people? Rule-of-thumb guidance suggests that around 1,000 respondents is enough for many survey purposes, and the numbers for experiments, which is what the author is talking about, should be smaller still. 10,000 people in an experiment?
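To put rough numbers on this, here is a back-of-the-envelope power calculation (my own sketch, not from either post) using the standard normal-approximation formula for a two-sample comparison of means. At α = 0.05 and 80% power, a conventionally "small" effect (Cohen's d = 0.2) needs roughly 400 participants per group; only a tiny effect around d = 0.05 pushes the total sample into five figures.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means at standardized effect size d (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # critical value for desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# d = 0.5 (medium): 63/group; d = 0.2 (small): 393/group;
# d = 0.05 (tiny): 6280/group -- only here does the total exceed 10,000
for d in (0.5, 0.2, 0.05):
    print(d, n_per_group(d))
```

This is the usual z-approximation (it slightly undercounts relative to exact t-test calculations), but it is close enough to show that 10,000-person samples are only warranted for detecting very tiny effects.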
Quote 4: "Therefore, I recommend offering a wage of $3/hr-$5/hr, which appears close to the mean wage offered by most studies and is respectfully above the average wage on the platform. Notably, this does conflict with Casey, et al. (2016) who state "minimum acceptable pay norms of $0.10 per minute" ($6/hr or 83% FMW), but this appears to be a statement based more on ethics of justice (which are certainly important and could prevail depending on your point of view) than data accuracy"
My response: I will always side with ethics over this person's calculations, which lack validity.
Quote 5: "Avoid putting study-specific language in HIT titles or descriptions."
My response: yes, because we want to trick people into doing our work. I’m being sarcastic.
I know I seem angry in this post, but I get so fed up with people writing this type of muck that taints MTurk studies for all of us.