This PBS NewsHour article argues, essentially, that psychological studies using MTurk workers are flawed because experienced workers become ‘robotic’ when they answer. As a result, workers figure out what the researcher is trying to measure and answer ‘correctly’.
I’m pretty ‘naive’ in terms of worker experience, and even I’ve encountered numerous studies that are either variations on the same study (I’ve done the one about giving something to someone at least twice) or that use the same damn scales over and over. So here are my thoughts:
1. Do innovative research that isn’t just slapping an established set of scales onto a slightly different context.
2. Encourage newer workers to participate (i.e., don’t limit your eligibility to people who have completed several thousand HITs, and don’t feel compelled to use and pay extra for Masters workers).
3. Include attention and memory checks, or one or two silly, off-the-wall questions, to keep workers engaged and out of ‘automatic response’ mode.
4. Indicate how long you expect someone to spend on the task, and then pay appropriately. On my last ‘creativity’ task, I asked people to spend five minutes (a task they could have shot through in two) and paid accordingly. The data show people spent the right amount of time, and my response set is very rich.
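Point 2 can be expressed directly when posting HITs through the MTurk API. Here is a minimal sketch, assuming boto3 and MTurk’s built-in system qualification types; the specific thresholds (5,000 HITs, 95% approval) are illustrative choices, not recommendations from the original post:

```python
# Sketch: build HIT qualification requirements that CAP prior experience
# instead of requiring thousands of completed HITs. The IDs below are
# MTurk's documented system qualification types.
def experience_cap_requirements(max_hits_approved=5000, min_approval_pct=95):
    return [
        {
            # "Number of HITs Approved" -- use LessThan to keep the
            # most experienced, 'robotic' workers from dominating.
            "QualificationTypeId": "00000000000000000040",
            "Comparator": "LessThan",
            "IntegerValues": [max_hits_approved],
            "ActionsGuarded": "Accept",
        },
        {
            # "Percent Assignments Approved" -- still screen out
            # genuinely low-quality workers.
            "QualificationTypeId": "000000000000000000L0",
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [min_approval_pct],
            "ActionsGuarded": "Accept",
        },
    ]

reqs = experience_cap_requirements()
# This list would be passed as the QualificationRequirements argument
# to a boto3 MTurk client's create_hit(...) call.
print(reqs[0]["Comparator"])  # LessThan
```

Note that this deliberately omits the Masters qualification entirely, in line with the suggestion above.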
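The time check in point 4 is easy to automate once you have per-response start and submit timestamps (MTurk’s assignment data includes accept and submit times). A minimal sketch, with hypothetical column layout and timestamp format, flagging anyone who finished in well under the expected five minutes:

```python
from datetime import datetime

# Sketch: flag responses completed faster than a plausible minimum,
# given the expected task duration. Timestamp format is an assumption.
def flag_rushed(responses, expected_secs=300, min_fraction=0.4):
    """Return worker IDs who finished in under min_fraction of the
    expected time (e.g., under 2 of an expected 5 minutes)."""
    fmt = "%Y-%m-%d %H:%M:%S"
    rushed = []
    for worker_id, start, submit in responses:
        spent = (datetime.strptime(submit, fmt)
                 - datetime.strptime(start, fmt)).total_seconds()
        if spent < expected_secs * min_fraction:
            rushed.append(worker_id)
    return rushed

rows = [
    ("W1", "2024-05-01 10:00:00", "2024-05-01 10:05:10"),  # ~5 min: fine
    ("W2", "2024-05-01 10:00:00", "2024-05-01 10:01:30"),  # 90 s: rushed
]
print(flag_rushed(rows))  # ['W2']
```

Flagged responses could then be reviewed by hand rather than rejected automatically, since a fast finish isn’t proof of low effort on its own.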