Google Scholar reports more than 3100 articles this year that use and/or discuss Mechanical Turk. That number seems to have fallen off a bit, perhaps because of rising costs, or perhaps because journals are raising more concerns.
It distresses me that I keep hearing about ‘blanket’ rejections of academic research using MTurk for anecdotal reasons (“my undergrad got some weird data”) and hearsay (“I hear it is all people from China who spoofed their way into the system”). There is a lot of academic research identifying best practices out there–and for the next few weeks, I’ll highlight some of this research.
Because as all researchers know, garbage in, garbage out. A poorly designed study–using MTurk or anything else–will get you bad data. While our book helps people design good studies, there are PLENTY of resources out there that do the same.
And it really kills me when people say “a good student sample is better than MTurk.” Really? In what universe is a student sample at all representative of the real world? And the data on students’ lack of engagement with academic surveys grows every day.
And while I’ve used Qualtrics in the past, my own hearsay channels tell me that Qualtrics uses MTurk to fill in when they don’t have enough panelists of their own. I haven’t seen any substantive data to support this, but I’d believe it. (I hear some people put in “how did you first hear about this survey?” and that is how the Qualtrics connection gets established.)
Anyway–I’m probably preaching to the choir here. Join me in educating our colleagues about the value of MTurk. Thanks!