Review of Amazon’s Mechanical Turk for Academics

PsycCRITIQUES

April 10, 2017, Vol. 62, No. 15, Article 3

© 2017 American Psychological Association

A Condensed yet Comprehensive Guide to Mturk

A Review of Amazon’s Mechanical Turk for Academics: The HIT Handbook for Social Science Research

by Kim Bartel Sheehan and Matthew Pittman. Irvine, CA: Melvin & Leigh, 2016. 141 pp. ISBN 978-0-97-866386-5. $29.95 (paperback). http://dx.doi.org/10.1037/a0040796

Reviewed by Richard D. Harvey and Dulce Vega

Amazon’s Mechanical Turk for Academics: The HIT Handbook for Social Science Research by Kim Bartel Sheehan and Matthew Pittman is a short but very comprehensive guide for those interested in online research and Amazon’s Mechanical Turk (Mturk) platform in particular.

As suggested by the title, the target audience for the book is “academics”; however, virtually anyone interested in collecting data online, whether academic or not (e.g., marketers, public opinion pollsters), will find this book helpful. Indeed, the range of activities available on Mturk goes beyond mere survey work and includes more complex tasks such as transcribing audio, writing catalog product descriptions, conducting web searches, and labeling images. Additionally, while the book focuses primarily on Mturk, the reader will learn a great deal about other crowdsourcing platforms as well. As an overarching scheme, the book covers the tripartite structure of Mturk: requesters, workers, and the vendor, Amazon.

The first few chapters focus on the workers. The workers are the “participants/subjects” in the projects, which are called Human Intelligence Tasks, or HITs. For example, Chapter 2 provides a very comprehensive demographic profile of the typical worker. Herein, the reader will learn how well Mturk samples match the general U.S. population (it turns out they match better than college samples do). Moreover, there is a good deal of information concerning the variance among Mturk workers regarding their levels of activity, the times and days when they are most active, sex, age, marital status, sexual orientation, political orientation, education, and income levels. With respect to the latter two demographics, it is clear that Mturk is not just for the poor and uneducated: over half of workers report incomes ranging from $40,000 to $300,000, and 50 percent report having a bachelor’s degree or higher. Beyond mere demographics, there is some good information concerning the competence of Mturk workers, including their capacity to complete more cognitively and technically complex HITs as well as their general scientific fluency. In general, Mturk workers tend to perform at or slightly above the level of the general U.S. population. One place where there does seem to be a fairly big discrepancy between the typical Mturk worker and the general population is judging the emotional expressions of characters made out of Legos (see Bartneck, Denser, Moltchanova, & Zawieska, 2015). Who knew?

Beyond demographic information on the workers, the book covers a good deal of validation research. As mentioned earlier, this information will be helpful for anyone needing to justify the use of Mturk, or of crowdsourcing in general. Readers are likely to be astonished at the amount of research on Mturk workers.

The information provided for potential requesters covers everything from how to set up an Mturk account to how to determine compensation. The authors provide a very detailed step-by-step protocol for setting up an Mturk account. They also provide some good tips on ensuring a more engaged sample: workers are rated, and requesters can require that a worker have a certain rating in order to participate. The pros and cons of doing so are discussed.

Perhaps the biggest challenge in using Mturk is determining a compensation amount that is fair and motivating to workers and yet still economical for requesters. After all, Mturk is often chosen precisely because it is seen as an option for requesters without big budgets. For the authors, this is as much an ethical issue as it is a practical one. A related ethical issue is the labeling of “participants” as “workers” within the Mturk platform, which has created tension between Mturk and institutional review boards (IRBs). For example, Ipeirotis (2009) argued that the designation of Mturk participants as paid workers, combined with their anonymity, should render IRB approval unnecessary for Mturk studies.

Overall, Sheehan and Pittman have written a great condensed guide to Mturk. The book is a quick read with concise chapters packed with interesting information. Also, readers are treated to a fair amount of information about crowdsourcing methods in general, including some Mturk competitors (e.g., Qualtrics Panels). Frankly, no one should be using Mturk without this book.

References

Bartneck, C., Denser, A., Moltchanova, E., & Zawieska, K. (2015). Comparing the similarity of responses received from experiments conducted in Amazon Mechanical Turk to experiments conducted with traditional methods. PLOS ONE, 10, Article e0121595.

Ipeirotis, P. (2009). Mechanical Turk, human subjects, and IRBs. Retrieved from http://www.behind-the-enemy-lines.com/2009/01/mechanical-turk-humans-subjectsand-irs.html
