A new study looks at selective attrition on MTurk, and the authors pretty much dump all over the service. Selective attrition is when people accept a HIT, start on it, and then return it without finishing it. The authors argue that the high level of attrition in MTurk studies can create a number of problems for researchers, including spurious results.
The ‘rule of thumb’ is that attrition above 20% is problematic, particularly in experiments with random assignment to conditions using quotas. If I randomly assign people to 1 of 2 conditions, and the people in condition ‘1’ are the ones who quit without finishing, at some point the quota algorithm will start funneling new people into that condition to fill the numbers, and then assignment is no longer random. That hurts internal validity. The authors ran a series of experiments with high attrition and do a good job of describing why people dropped out (they found the task too challenging or time consuming). They state that they paid ‘the right amount’ (10 cents per minute), which is debatable.
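To make the quota problem concrete, here is a minimal simulation sketch (my own illustration, not from the study). It assumes workers vary in a hypothetical "persistence" trait, that condition 1 is the harder task so low-persistence workers return the HIT, and that a quota algorithm replaces quitters until each condition is full. The result is that condition 1 ends up with a more persistent sample than condition 2, which is exactly the selection bias that breaks random assignment:

```python
import random

random.seed(42)

N_PER_CONDITION = 100  # quota for each experimental condition

def quota_assign(workers, quit_threshold_cond1):
    """Assign workers to conditions 1 and 2 until each quota is filled.

    Workers who quit condition 1 are silently replaced by the quota
    algorithm, so the final condition-1 sample over-represents people
    willing to finish a hard task.
    """
    completed = {1: [], 2: []}
    for persistence in workers:
        open_conditions = [c for c in (1, 2) if len(completed[c]) < N_PER_CONDITION]
        if not open_conditions:
            break
        cond = random.choice(open_conditions)
        # Condition 1 is the harder task: low-persistence workers return the HIT.
        if cond == 1 and persistence < quit_threshold_cond1:
            continue  # attrition; the quota keeps recruiting replacements
        completed[cond].append(persistence)
    return completed

# Each worker gets a persistence score drawn uniformly from [0, 1).
workers = [random.random() for _ in range(1000)]
result = quota_assign(workers, quit_threshold_cond1=0.3)

avg1 = sum(result[1]) / len(result[1])
avg2 = sum(result[2]) / len(result[2])
print(f"mean persistence, condition 1: {avg1:.2f}")
print(f"mean persistence, condition 2: {avg2:.2f}")
```

Running this, condition 1's mean persistence comes out noticeably higher than condition 2's, so any outcome correlated with persistence would differ between conditions even with no real treatment effect.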
They end with tips on how to minimize attrition, such as making sure people understand up front what the task involves. They also recommend guilt-tripping workers who are considering quitting, with a statement like “Many workers don’t like answering open-ended questions. If a sizable number of people quit halfway through, our data is compromised. We depend on you for good quality data.”
I also learned that Qualtrics won’t report ‘partial’ completes until a study is closed, which is interesting, and a way to find out what your attrition rate actually was. I collected data recently and just closed my study to check. It said I had no attrition. Just sayin’.