Turker perceptions of Minimum Wage

This is kind of interesting: a company called Credit Loan did some research on what Americans believe the minimum wage is. They recruited through MTurk and Reddit, with some Facebook ads to fill in where needed (methodology allllll the way at the end).

- Only about 35% knew the US federal minimum wage is $7.25.

- Lower-income participants were slightly more familiar with the federal law than higher earners. Forty-two percent of participants with incomes between $25,000 and $34,999 knew the minimum wage, the best rate of any income segment.

- Oregonians are the most knowledgeable about their state minimum wage, Georgians the least:

[Screenshot: state-by-state results on knowledge of the state minimum wage]

Unfortunately, they didn’t compare Turkers to Redditors to Facebookers; I’d be interested in those data!

It is a good read, although please do note that Credit Loan is a for-profit company.

More evidence regarding the validity of MTurk samples

A new study uses MTurk to replicate experiments that originally used national samples. From the abstract:

“I provide evidence from a series of 15 replication experiments that results derived from convenience samples like Amazon’s Mechanical Turk are similar to those obtained from national samples. Either the treatments deployed in these experiments cause similar responses for many subject types or convenience and national samples do not differ much with respect to treatment effect moderators. Using evidence of limited within-experiment heterogeneity, I show that the former is likely to be the case. Despite a wide diversity of background characteristics across samples, the effects uncovered in these experiments appear to be relatively homogeneous.”

Citation:

Coppock, Alexander. “Generalizing from survey experiments conducted on mechanical Turk: A replication approach.” Political Science Research and Methods (2018): 1-16.

Big data from college campuses

This article from the Chronicle talks about the use of student labor in educational technology. It compares that student labor to MTurk labor:

“Predictive analytics, plagiarism-detection software, facial-recognition technology, chatbots — all the things we talk about lately when we talk about ed tech — are built, maintained, and improved by extracting the work of the people who use the technology: the students. In many cases, student labor is rendered invisible (and uncompensated), and student consent is not taken into account. In other words, students often provide the raw material by which ed tech is developed, improved, and instituted, but their agency is for the most part not an issue.”

Samples in Evolutionary Psychology Studies: Still WEIRD

Evolutionary psychology is a widely used paradigm, and a new study examines how diverse the study populations are in two journals devoted to the field (preprint linked below).

The bottom line: these samples (collected both online and offline, many from MTurk) are WEIRD; that is, they overrepresent people from cultures that are Western, Educated, Industrialized, Rich, and Democratic.

“Our database consists of 166 samples of humans (median sample size = 206). The majority of samples were either online or student samples (60% of samples), followed by other adult Western samples (19%). 129 of the samples were classified as ‘Western’ (78%, Europe/North America/Australia). The remaining samples were predominantly from Asia (N = 26; 16%, mostly Japan). Only a small fraction of the samples was classified as cross-cultural (5), South American (3) or African (2). The median sample size did not significantly differ between continents, but online samples (both paid and unpaid) were typically larger than samples sourced offline. While it seems that the samples used are more diverse than those that have been reported in reviews of the literature from social and developmental psychology, it is also apparent that the majority of samples remain WEIRD.”

Preprint available here: http://scholar.google.com/scholar_url?url=https://osf.io/7h24p/download%3Fformat%3Dpdf&hl=en&sa=X&scisig=AAGBfm1a79QbmwDkavfgmol4YSr2cfrCXg&nossl=1&oi=scholaralrt

Citation:

Pollet, Thomas, and Tamsin Saxton. “How diverse are the samples used in the journals ‘Evolution & Human Behavior’ and ‘Evolutionary Psychology’?” (2018).

Monopsony and MTurk

This working paper examines monopsony, a market situation in which there is effectively only one buyer (here, one buyer of labor).

The paper examines the value created by MTurk workers, starting from the premise that the flexibility of the gig economy lets workers easily switch employers to take advantage of the highest-paying offers. However, the authors found that Turkers’ wages total less than 20 percent of their productivity; in other words, for every dollar of value produced on MTurk, workers receive less than 20 cents. Compare this with standard US metrics, which suggest that workers get 50 to 80 cents of every dollar.
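For intuition, here is the textbook monopsony markdown relationship in a few lines of Python. This is only a sketch of the general mechanism, not the paper’s actual estimation (the authors work from estimated labor-supply elasticities), and the elasticity values below are purely illustrative:

```python
# Textbook monopsony markdown: if the labor supply facing an employer has
# elasticity e, the profit-maximizing wage is roughly e / (1 + e) of the
# marginal value a worker produces. Values below are illustrative only.
def wage_share(elasticity: float) -> float:
    return elasticity / (1 + elasticity)

for e in (0.1, 0.5, 1.0, 4.0):
    print(f"labor-supply elasticity {e}: "
          f"workers keep about {wage_share(e):.0%} of each dollar of value")

# A very inelastic labor supply (workers don't leave when pay falls) implies
# a share well under 20 cents on the dollar, in line with the figure quoted
# above; larger elasticities put the share in the 50-80 cent range.
```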

Why aren’t employers competing for workers by offering higher wages? It could be that there are enough workers on MTurk that, while workers can shop around for tasks, requesters can just as easily shop around for workers.

Plus, we know that everything that Amazon does is for the benefit of Requesters.

Citation:

Dube, Arindrajit, Jeff Jacobs, Suresh Naidu, and Siddharth Suri. “Monopsony in Online Labor Markets.” NBER Working Paper No. 24416, March 2018.

The value of MTurk

What has been the impact of MTurk on researchers, particularly over the past seven years in which so many have utilized the platform? Buhrmester, Talaifar, and Gosling update Buhrmester et al.’s 2011 study.

The authors discuss the benefits (range of uses, participant pool, speed, cost, and accessibility) and limitations (inattention, non-naivete, and attrition), and briefly cover best practices and alternatives.

Citation:

Buhrmester, Michael D., Sanaz Talaifar, and Samuel D. Gosling. “An Evaluation of Amazon’s Mechanical Turk, Its Rapid Rise, and Its Effective Use.” Perspectives on Psychological Science 13, no. 2 (2018): 149-154.

Accountants not sure about MTurk

I can’t find the full article, but a bunch of accountants aren’t too hot on MTurk:

“The external validity of conclusions from behavioral accounting experiments is in part dependent upon the representativeness of the sample compared to the population of interest. Researchers are beginning to leverage the availability of workers via online labor markets, such as Amazon’s Mechanical Turk (M-Turk), as proxies for the general population (e.g., investors, jurors, and taxpayers). Using over 200 values-based items from the World Values Survey (WVS), the purpose of the current study is to explore whether U.S. M-Turk workers’ values are similar to those of the U.S. population. Results show for the majority of items collected, M-Turk participants’ values are significantly different from the WVS participants (e.g., values related to trust, ethics, religious beliefs, and politics). We present select items and themes representing values shown to influence judgments in prior research and discuss how those values may affect inferences of behavioral accounting researchers.”

The WVS is a well-known survey, but it has been criticized for not replicating some other well-established measures, such as the Big Five (see Ludeke, Steven G., and Erik Gahner Larsen. “Problems with the Big Five assessment in the World Values Survey.” Personality and Individual Differences 112 (2017): 103-105). As I can’t access the actual article, I’m not sure what N the MTurk sample had. However, past research has shown that MTurk panels skew toward liberals in the US, and that might be driving some of these results as well.

Citation:

Brink, William D., Lorraine S. Lee, and Jonathan S. Pyzoha. “Values of Participants in Behavioral Accounting Research: A Comparison of the M-Turk Population to a Nationally Representative Sample.” Behavioral Research in Accounting (2018): in press.

 

Collusion on MTurk?

Several recent articles address concerns about collusion on MTurk. This isn’t something I’ve thought about to any great extent. An article by Chen et al. says three types of collusion exist:

“1) Duplicated Submission: once workers understand what answers generate the reward (e.g. are accepted HITs), they share that information so everyone gives the same answer and gets paid.
2) Group Plagiarism: one worker shares his/her answers/results with others and so people quickly provide those answers without doing the real task.
3) Spam Accounts: workers with multiple accounts.”
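As a minimal sketch of how a requester might screen results for the first two patterns, the Python below flags identical responses submitted by more than one worker. It assumes the HIT results are exported as a CSV with columns named WorkerId and Answer.response; those column names are my own assumption, so adjust them to whatever your HIT template actually produces.

```python
# Flag answers submitted verbatim by more than one worker, which can signal
# duplicated submissions or shared ("plagiarized") answers worth a manual look.
# Column names below are assumptions about the exported results file.
import csv
from collections import defaultdict

def find_shared_answers(path: str) -> dict[str, list[str]]:
    by_answer = defaultdict(list)  # normalized answer text -> worker IDs
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            key = " ".join(row["Answer.response"].lower().split())
            by_answer[key].append(row["WorkerId"])
    return {ans: ids for ans, ids in by_answer.items() if len(set(ids)) > 1}

if __name__ == "__main__":
    for answer, workers in find_shared_answers("hit_results.csv").items():
        print(f"{len(set(workers))} workers gave the same response: {answer[:60]!r}")
```

Of course, this only catches verbatim matches; near-duplicates would need fuzzier comparison, and none of it substitutes for the inference methods in the Chen et al. paper.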
I’ve never really felt concerned about any of the studies I run on MTurk, but then I do surveys where keeping track of all the answers might take more time than it’s worth for someone who wants to share them. I have looked around for evidence of collusion on MTurk but am not seeing a lot. However, I do see several articles addressing it, including:
Chen, Peng-Peng, Hai-Long Sun, Yi-Li Fang, and Jin-Peng Huai. “Collusion-Proof Result Inference in Crowdsourcing.” Journal of Computer Science and Technology 33, no. 2 (2018): 351-365.
Daniel, Florian, Pavel Kucherbaev, Cinzia Cappiello, Boualem Benatallah, and Mohammad Allahbakhsh. “Quality Control in Crowdsourcing: A Survey of Quality Attributes, Assessment Techniques, and Assurance Actions.” ACM Computing Surveys (CSUR) 51, no. 1 (2018): 7.
D’Cruz, Premilla, and Ernesto Noronha. “Abuse on online labour markets: targets’ coping, power and control.” Qualitative Research in Organizations and Management: An International Journal (2018), just accepted.
Frey, Seth, Maarten W. Bos, and Robert W. Sumner. “Can you moderate an unreadable message? ‘Blind’ content moderation via human computation.” (2017).

MTurk and the current crisis of privacy

My doctoral dissertation was on privacy, and I moved away from that area of research during my career because it seemed to become, well, not a non-issue, but rather something people had accepted: giving up privacy was seen as part of living in the digital world.

I think, dear readers, that might be changing.

The Electronic Frontier Foundation published an article about the Cambridge Analytica scandal, and here is what I learned: Cambridge Analytica solicited MTurk users to take its FB survey. So the data that was compromised came from Turkers. The whole article is interesting, mostly about how FB should give us easier ways to say third parties can’t have our data, and why FB doesn’t.

Technology Review posted the Turkopticon reviews of the person who collected the data. As you know, asking Turkers to download an app is against MTurk’s terms of service.

I think there’s a study here, and I think I’m going to do that study right now.

 

Turkers and Canadian college students label a bunch of images similarly

Turkers and Canadian students did not differ significantly in their ability to label clip art and photographic images.

It has been slow here at the Turker Analysis Center.

Citation:

Saryazdi, Raheleh, Julie Bannon, Agatha Rodrigues, Chris Klammer, and Craig G. Chambers. “Picture perfect: A stimulus set of 225 pairs of matched clipart and photographic images normed by Mechanical Turk and laboratory participants.” Behavior Research Methods (2018): 1-13.