In which I compare MTurk to Qualtrics and Students and more

Well, it isn’t just me, and there isn’t much ‘more’, but a recent article I co-authored is available for free download through May. Just go here.


Data collection using Internet-based samples has become increasingly popular in many social science disciplines, including advertising. This research examines whether one popular Internet data source, Amazon’s Mechanical Turk (MTurk), is an appropriate substitute for other popular samples utilized in advertising research. Specifically, a five-sample between-subjects experiment was conducted to help researchers who utilize MTurk in advertising experiments understand the strengths and weaknesses of MTurk relative to student samples and professional panels. In comparisons across five samples, results show that the MTurk data outperformed panel data procured from two separate professional marketing research companies across various measures of data quality. The MTurk data were also compared to two different student samples, and results show the data were at least comparable in quality. While researchers may consider MTurk samples as a viable alternative to student samples when testing theory-driven outcomes, precautions should be taken to ensure the quality of data regardless of the source. Best practices for ensuring data quality are offered for advertising researchers who utilize MTurk for data collection.

And for your amusement, I have a colleague who hates MTurk. He wrote an anti-Turk article and it is available for free too, along with our response to him. You can find those pieces, along with several other pieces about different types of methodology, here.

I was just in Boston and I organized a panel about MTurk, where it turns out the Colleague/Hater said that he ‘used to represent Qualtrics’ at academic conferences. I’m not sure exactly what he meant by that, but it sounds like there’s a bit of—oh, I don’t know—bias? there.


Aussies look at ‘new tool’: MTurk

The Conversation has–what? An opinion piece?–about MTurk.

Not sure exactly what we should call this–it isn’t an article in an academic sense as there are no citations for some of the facts (and allegations).

And calling MTurk ‘new’ is—oh what? Just silly? Because it has been around for 11 years or so and thousands of studies–real academic studies–have been published using it.

The scant academic literature included via links is old—a few pubs from 2011, one from 2015—and an odd link to an MTurk Grind forum discussion.


MTurk results on privacy and security: as good as nationally representative samples


“In this paper, we compare the results of a survey about security and privacy knowledge, experiences, advice, and internet behavior distributed using MTurk (n=480), a nearly census-representative web-panel (n=428), and a probabilistic telephone sample (n=3,000) statistically weighted to be accurate within 2.7% of the true prevalence in the U.S. Surprisingly, we find that MTurk responses are slightly more representative of the U.S. population than are responses from the census-representative panel, except for users who hold no more than a high-school diploma or who are 50 years of age or older. Further, we find that statistical weighting of MTurk responses to balance demographics does not significantly improve generalizability. This leads us to hypothesize that differences between MTurkers and the general public are due not to demographics, but to differences in factors such as internet skill.”
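The “statistical weighting … to balance demographics” the abstract mentions is, in its simplest form, post-stratification: each respondent is weighted by their group’s population share divided by its sample share, so overrepresented groups count for less and underrepresented groups for more. A minimal sketch—the age groups and census shares below are made up for illustration, not taken from the paper:

```python
# Post-stratification weighting sketch (illustrative values only).
from collections import Counter

# Age group of each respondent in a hypothetical, skewed sample.
respondents = ["18-29", "18-29", "18-29", "30-49", "50+"]

# Hypothetical population (census) shares for the same groups.
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

counts = Counter(respondents)
n = len(respondents)

# Weight per group = population share / sample share.
weights = {g: population_share[g] / (counts[g] / n) for g in counts}

print(weights)  # older, underrepresented respondents get weights > 1
```

A quick sanity check on the arithmetic: the weighted respondent counts sum back to the sample size, since the weights only redistribute influence across groups. The paper’s finding is that even after this kind of rebalancing, MTurk results did not generalize noticeably better—suggesting the gaps are not demographic in origin.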

Read the whole paper here!


Redmiles, Elissa M., Sean Kross, Alisha Pradhan, and Michelle L. Mazurek. “How Well Do My Results Generalize? Comparing Security and Privacy Survey Results from MTurk and Web Panels to the US.” 2017.

More on defining Amazon as an ‘employer’

This article adds to the debate about whether crowdsource sites are employers or merely agents that connect employers and employees. This is a key distinction in employment law that many countries and platforms are struggling with.

The author, a professor of Law at Oxford, states in reference to Uber:

“An increasing number of online resources provide insights into the reality of the relationship between the platform and its drivers: through its app, the platform has close control over the routes drivers are to choose and the prices customers will be charged for each ride. All financial transactions take place via the app, which also sits at the core of Uber’s rating system, enlisting customers to act as the platform’s agents in monitoring worker performance. Even the supposed freedom to work when and as desired is mostly illusionary: ratings are carried from engagement to engagement, and a refusal to accept a series of offers will soon have an impact on a driver’s ratings.

In my mind, there is therefore little doubt that Uber should be classified as the employer of its drivers, who would therefore be guaranteed access to the core of fundamental worker rights in English law. Even customers will profit from such a decision: well-rested drivers will be much safer, and in the unhappy event of an accident or other problems, they too will be able to assert their claims for reparation against the employing platform.”

Looking at this from the perspective of Amazon as employer versus agent: Amazon does control which work people can complete (by issuing blocks), and all financial transactions take place through the platform, as do ratings of workers. However, Amazon isn’t involved in pricing tasks (other than charging for special demographics), and Amazon doesn’t care how much work a worker does.


MTurk and validity

This new study adds to the existing literature on the validity of MTurk, examining the validity of the platform for spatial cuing research.

“Ultimately, the present study empirically validated the use of AMT to study the symbolic control of attention by successfully replicating four hallmark effects reported throughout the visual attention literature: the left/right advantage, cue type effect, cued axis effect, and cued endpoint effect.”