Turkers like Mustangs, Jeeps and Teslas

This article just makes me giggle for many reasons.

  • “(MTurk)…is similar to the popular SurveyMonkey.” Well, except for all the ways it isn’t similar.
  • “Gold Eagle  carefully screens participants to meet a series of qualifying categories prior to the survey.” Would love to know more! I’m not sure who Gold Eagle is, though.
  • Gen Xers like Camaros? Are people messing with them?
  • People who identify as car experts say Mustangs are their dream cars.

The comments are very amusing.

MTurk vs Prolific: Update

Prolific Academic is changing its pricing structure: starting in the middle of next month, it will charge a flat 30% service fee. An email I received today suggests the average study will cost about 6% more. This continues to be less expensive than MTurk if you don’t use something like TurkPrime, which addresses the 40% fee issue.
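To see why a flat 30% fee can still beat MTurk, here is a minimal sketch of the arithmetic. The fee rules below are my assumptions for illustration (a flat fee on Prolific; a base commission plus a surcharge on MTurk HITs with 10 or more assignments, the "40% issue"), not either platform's official current pricing:

```python
# Illustrative fee arithmetic only -- the exact rates and thresholds are
# assumptions for this sketch, not taken from the platforms' pricing pages.

def prolific_cost(pay_per_subject, n, fee=0.30):
    """Total cost with a flat service fee on participant payments."""
    return pay_per_subject * n * (1 + fee)

def mturk_cost(pay_per_subject, n, base_fee=0.20, bulk_surcharge=0.20):
    """MTurk-style cost: a base commission, plus an assumed extra
    surcharge when a HIT has 10 or more assignments."""
    fee = base_fee + (bulk_surcharge if n >= 10 else 0)
    return pay_per_subject * n * (1 + fee)

# A hypothetical 200-person study paying $1.00 per subject:
print(round(prolific_cost(1.00, 200), 2))  # flat 30% fee
print(round(mturk_cost(1.00, 200), 2))     # 40% fee applies at this size
```

Under these assumed rates, the large study costs $260 on Prolific versus $280 on MTurk, which matches the post's point that the flat fee is still the cheaper option for batched studies.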

Prolific also updated its comparison chart between Prolific and MTurk, and that information is available here. Keep in mind that the chart was created by Prolific.

Turkers versus algorithms!

A new study shows Turkers’ predictions of recidivism are pretty much as accurate as predictions by an algorithm. Read it here!

It must be noted that this study paid workers $1, with a possible $5 bonus for accuracy. The rest of this is completely out of my wheelhouse, but interesting nonetheless.

The study is by Julia Dressel and Hany Farid and can be found in Science Advances, 17 Jan 2018: Vol. 4, no. 1, eaao5580. DOI: 10.1126/sciadv.aao5580

New population estimates

A new study uses some interesting statistical techniques developed in ecology to estimate the number of workers on MTurk. The big finding, drawn from 40,000 unique workers, is that at any given time there are about 2,450 people on the platform available for work. This is in contrast to an earlier study (Stewart et al.) that found the average lab can reach only about 7,300 people for a given study (corrected with thanks to an author from that study).
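The ecological techniques in question are capture–recapture methods. As a hedged illustration only (this is the textbook Lincoln–Petersen estimator, not a reconstruction of the authors' more sophisticated model, and the numbers are made up), the basic idea looks like this:

```python
# Sketch of the classic Lincoln-Petersen capture-recapture estimator from
# ecology. This is only an illustration of the general idea, not the
# actual model used by Difallah et al.

def lincoln_petersen(marked_first, caught_second, recaptured):
    """Estimate total population size from two sampling occasions.

    marked_first:  workers seen in the first survey batch
    caught_second: workers seen in the second survey batch
    recaptured:    workers who appeared in both batches
    """
    if recaptured == 0:
        raise ValueError("no overlap between samples; estimate undefined")
    return marked_first * caught_second / recaptured

# Hypothetical numbers: 500 workers in batch one, 400 in batch two,
# and 80 appearing in both batches.
print(lincoln_petersen(500, 400, 80))  # 2500.0
```

The intuition: the smaller the overlap between two independent samples, the larger the underlying population must be.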

Citation:

Djellel Difallah, Elena Filatova, and Panos Ipeirotis. 2018. Demographics and Dynamics of Mechanical Turk Workers. In Proceedings of WSDM 2018: The Eleventh ACM International Conference on Web Search and Data Mining, Marina Del Rey, CA, USA, February 5–9, 2018 (WSDM 2018), 9 pages.

Stewart et al study:

Neil Stewart, Christoph Ungemach, Adam J. L. Harris, Daniel M. Bartels, Ben R. Newell, Gabriele Paolacci, and Jesse Chandler. 2015. The average laboratory samples a population of 7,300 Amazon Mechanical Turk workers. Judgment and Decision Making 10, 5 (2015), 479–491.

Are Turkers healthy?

One new study says no. Well, it says they say they aren’t as healthy as others.

“Demographic, socioeconomic, and health status variables in an adult MTurk sample collected in 2016 (n=1916), the 2015 MEPS household survey component (n=21,210), and the 2015 BRFSS (n=283,502).

Results:

Our findings indicate statistically significant differences in the demographic, socioeconomic, and self-perceived health status tabulations in the MTurk sample relative to the unweighted and weighted MEPS and BRFSS. The MTurk sample is more likely to be female (65.8% in MTurk, 50.9% in MEPS, 50.2% in BRFSS), white (80.1% in MTurk, 76.9% in MEPS, and 73.9% in BRFSS), non-Hispanic (91.1%, 82.4%, and 81.4%, respectively), younger, and less likely to report excellent health status (6.8% in MTurk, 28.3% in MEPS, and 20.2% in BRFSS).”
The study concludes that researchers should be hesitant about using MTurk for health research. I think this study’s sample overrepresents women on MTurk, but perhaps women were simply more interested in an MTurk study than men were. The brief report available here does not indicate whether the study was limited to people in the US. I’m also unclear on whether self-reports of health are valid, but that might be just me.

Title: Self-reported Health Status Differs for Amazon’s Mechanical Turk Respondents Compared With Nationally Representative Surveys
Author: Karoline Mortensen, Manuel Alcalá, Michael French, et al
Publication: Medical Care
Publisher: Wolters Kluwer Health, Inc.
Date: Dec 21, 2017

The Downsides of Flexibility in Crowdwork

A new study looks at flexibility in crowdwork on platforms like MTurk and how such flexibility both supports and inhibits workers. The full text is available here.

A researcher at Oxford points out the biggest issue with MTurk:

“One finding is that workers’ ability to choose their hours of work is limited by structural constraints: the availability of work on the platform at any given moment, and the degree to which the worker depends on that work for their living. The severity of these constraints varied significantly between the three platforms. Mechanical Turk was formally the freest platform, but its design resulted in intense competition between workers. This made it necessary for workers who depended on MTurk income to remain constantly on call, lest someone else grab the decently-paying tasks.”

The paper talks about the support systems among workers that keep people engaged and motivated. Community in crowdwork is so important. Why doesn’t Amazon get that??

Lehdonvirta, V. (2018). Flexibility in the Gig Economy: Managing Time on Three Online Piecework Platforms. New Technology, Work & Employment (forthcoming).

“Ten years” of crowdsourcing

This new article (full text available for free!) gives an overview of ‘ten years’ of crowdsourcing. The article begins with Howe’s quote from 2006, so it is really 12 years, and MTurk launched to the public in 2005, so OK, ten years more or less. The article is a nice overview of crowdsourcing and its benefits, drawing on a wide range of crowdsourcing studies. It includes an overview of benefits and concerns, and is a basic and straightforward, albeit cursory, analysis of crowdsourcing.

The paragraph on ethics says “As crowdsourcing is a nascent field, there is no Review Ethics Board (REB) or Institutional Review Board (IRB) process specific to it, to the author’s knowledge, despite it being quite different from other methodologies.” This was published in a UK journal but it is important to note that many IRBs in the US are providing very specific information on addressing MTurk.

This very important paragraph is sort of buried, so let me highlight it here:

Finally, some authors reviewed gave tips for using crowdsourcing in research. Most importantly, selecting a clear and appropriate research question was emphasised. Having a big challenge, and clear, measurable goals that are communicated to participants, was seen as important, as this helps motivate the participants, along with providing options regarding levels and modes of participation. Finally, the importance of acknowledging participation was highlighted.

Citation: Wazny, K. (2017). “Crowdsourcing” ten years in: A review. Journal of Global Health, 7(2).

MTurk: Life in the Iron Mills

Some professors from UT Dallas were seeking a way to bring the book “Life in the Iron Mills” to life. “Life in the Iron Mills” is a novella about unregulated work in the 19th century, in which the protagonist creates a work of art out of the waste from the mill to symbolize the mindlessness of industrial labor.

Here’s where MTurk comes in:

“Burrough and Starnaman  (the professors) …offered Mechanical Turk workers an unusual, self-reflective task. “Each month, we ask nine workers how this virtual platform affects their bodies,” Burrough said. “They respond, and trace and measure their hands for us. The hands are laser-cut from cardboard or wood and the sentiments are embroidered on those, or if the written response is longer, it is shined through a light box.” As a socially engaged artist trying to highlight the workers’ experience, Burrough tries to remove her own input as much as possible. “I’m depicting what the workers send to me,” Burrough said. “I am trying not to speak for them — I’m a conduit for their sentiments.” Since the statue in Life in the Iron Mills is constructed from a byproduct, the cardboard hands used for “The Laboring Self” come from a modern equivalent — recycled packing boxes. “All this stuff gets shipped with so much packaging, and you remove one little thing that you ordered,” Burrough said. “These donated boxes allow us to take the workers’ voices and put it on the byproduct of our era.”

Read the whole story here.

Recruiting very specific populations on MTurk

This article is from 2016 and discusses how to recruit military veterans using MTurk. What I like a lot about this article is that it gives some specific guidelines on how to ask screening questions that can truly work to get a specific population.

Too many studies would use questions like “are you a military veteran?” to screen and—yeah. People often state they are something they are not in order to do a HIT.  This study on recruiting veterans used these screening questions:

  • What is the acronym for the locations where final physicals are taken prior to shipping off for basic training? (four letters)
  • What is the acronym for the generic term the military uses for various job fields? (three letters)
  • Please put these officer ranks in order: (participants were given visual insignia to rank order).
  • Please put these enlisted ranks in order: (contextualized branch-specific question; participants were given visual insignia to rank order)
  • In which state is your basic training base located? (contextualized branch-specific question)
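The logic behind questions like these can be sketched as a simple answer-key check: a respondent counts as eligible only if every knowledge question is answered correctly. This is my own illustrative sketch, not code from the article, and the answer key below is a deliberate placeholder (the article's actual acronym answers are not reproduced here):

```python
# Sketch of knowledge-based screening, in the spirit of the veterans
# study: responses must match an answer key before the respondent is
# treated as eligible. The keys here are placeholders only.

ANSWER_KEY = {
    "final_physical_acronym": "XXXX",   # placeholder, four letters
    "job_field_acronym": "XXX",         # placeholder, three letters
}

def passes_screener(responses, key=ANSWER_KEY):
    """Return True only if every screening question is answered correctly.

    Comparison is case-insensitive and whitespace-tolerant, since the
    point is to verify knowledge, not typing precision.
    """
    return all(
        responses.get(q, "").strip().upper() == answer.strip().upper()
        for q, answer in key.items()
    )

# A respondent guessing at random is unlikely to clear every question:
print(passes_screener({"final_physical_acronym": "ABCD",
                       "job_field_acronym": "XXX"}))   # False
print(passes_screener({"final_physical_acronym": "XXXX",
                       "job_field_acronym": "xxx"}))   # True
```

Requiring several such checks to pass at once is what makes gaming the screener costly: each extra question multiplies down a guesser's odds.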

I’m not a military veteran, and if I had tried to answer some of these questions I would clearly get them wrong. I guess I could try googling the answers, but that would take time away from doing other work, so I’m pretty sure I would just move on to another HIT.

This article is a great lesson in how to make sure you get the people you want to get in your studies.

MTurkers and motivation

A new study used Turkers to test ideas about whether ‘early’ or ‘late’ rewards had different effects on motivations. It is a series of five experiments about rewards; four use Turkers. The bottom line is that early rewards are more motivating than late rewards, and higher rewards are more motivating than lower rewards.

A few thoughts on this study:

-The rewards were (usually) paid out as bonuses. Given what I’ve heard, some (many) workers are highly skeptical that they’ll actually get the bonus they are promised. Additional research should examine whether that skepticism moderates motivation.

-These studies were low-paid to begin with; I have to think that someone who takes on these tasks has a different perspective than others. For example, one task had people reading five pages of a book and then answering questions. It took AT LEAST 5 minutes, probably more like 7 or 8, and paid $0.25. That’s just wrong, people. And it is going to have an influence on motivation.

Citation (apparently it isn’t published in a journal):

Woolley, K., & Fishbach, A. (2017). It’s About Time: Earlier Rewards Increase Intrinsic Motivation.