When conducting survey research online using MTurk, researchers have an ethical obligation to let participants withdraw from a study without incurring a financial penalty. In practice, however, this obligation is often not met.
In most of the academic Qualtrics surveys I’ve seen, researchers tell participants something to the effect of: “you can withdraw at any time by exiting the survey (e.g., by closing your web browser or navigating away from the survey)”. Most academic researchers also use the Survey Completion Code mechanism, whereby participants must reach the end of the survey to receive a unique code that they submit to MTurk for compensation.
Taken together, these two elements create a situation where, if participants withdraw by closing their browser or navigating away, then they will not receive compensation for their participation. Arguably, this creates a financial penalty for withdrawing—the penalty of not receiving payment—and thus an ethical dilemma.
This dilemma can be avoided by adding a withdraw button in the survey that, once clicked, will take participants to the end of the survey (or a custom withdraw page) where they may still obtain a completion code. Qualtrics doesn’t have this functionality built in, but fortunately it’s pretty easy to implement. In this tutorial, I describe the method that I use.
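To give a feel for the approach, here is a minimal sketch, not the tutorial’s exact code: the button records the withdrawal in an Embedded Data field (the field name “withdrawn” and the button label are my placeholders) and advances the survey, and Branch Logic in the Survey Flow then routes withdrawn respondents to the completion-code block.

```javascript
// Sketch of a withdraw button for a Qualtrics question.
// Assumes: an Embedded Data field "withdrawn" declared in the Survey
// Flow, plus Branch Logic that skips withdrawn respondents ahead to
// the block that displays the completion code.

function makeWithdrawHandler(engine, questionContext) {
  return function onWithdrawClick() {
    // Record the withdrawal so Branch Logic can route this respondent.
    engine.setEmbeddedData("withdrawn", "1");
    // Advance the survey; the branch then skips to the code block.
    questionContext.clickNextButton();
  };
}

// Inside Qualtrics, this would go in the question's JavaScript editor:
if (typeof Qualtrics !== "undefined") {
  Qualtrics.SurveyEngine.addOnload(function () {
    var handler = makeWithdrawHandler(Qualtrics.SurveyEngine, this);
    var btn = document.createElement("button");
    btn.type = "button";
    btn.textContent = "Withdraw from study";
    btn.addEventListener("click", handler);
    this.getQuestionContainer().appendChild(btn);
  });
}
```

The key design point is that withdrawing still routes participants through the completion-code page, so leaving the study early carries no financial penalty.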
Continue reading Ethical Survey Withdraws: Add a Withdraw Button to Qualtrics Surveys
MTurk samples differ by time of day and day of week on “characteristics known to impact political attitudes”. This is the conclusion of a recent in-preparation article, “Intertemporal Differences Among MTurk Worker Demographics” (a preprint is available at https://osf.io/preprints/psyarxiv/8352x). The reason for this finding is simple: different kinds of people tend to use MTurk at different times of day and on different days of the week.
In light of these findings, researchers should take precautions to avoid temporal bias in their data. In this tutorial, I’ll share an approach to doing just that.
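As a rough illustration of the idea (not the tutorial’s code), the sketch below lays a target sample out as a series of small batches published on a fixed interval; the function name, batch size, and parameters are my assumptions. Keeping each batch at 9 assignments also stays under MTurk’s higher fee tier for HITs with 10 or more assignments.

```javascript
// Sketch: split a target sample into small "microbatches" and plan
// evenly spaced publish times across the day, so no single time slot
// dominates the sample.

function planMicrobatches(totalAssignments, startMs, intervalMs) {
  const BATCH_SIZE = 9; // stay under 10 assignments per HIT
  const plan = [];
  let remaining = totalAssignments;
  let publishAtMs = startMs;
  while (remaining > 0) {
    const assignments = Math.min(BATCH_SIZE, remaining);
    plan.push({ publishAtMs: publishAtMs, assignments: assignments });
    remaining -= assignments;
    publishAtMs += intervalMs;
  }
  return plan;
}
```

Each planned entry would then be posted at its scheduled time (e.g., by a cron job or a timer calling the MTurk CreateHIT operation).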
Continue reading Avoid temporal bias in MTurk samples: Publish microbatches on a fixed interval
The survey you created for Mechanical Turk has come to an end, and all of the results are in. Now you need to validate the survey codes so you can decide who to “approve” and who to “reject”. In this tutorial, I’ll describe an efficient method for validating large numbers of survey codes. It might take a bit to learn, but once you know how to do it, it will save you TONS of time and effort. Continue reading Efficient Method for Validating LOTS of Survey Codes
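The core of any such method is a bulk comparison of the codes workers submitted against the codes the survey actually issued. Here is a minimal sketch of that comparison, with input shapes and names that are my assumptions rather than the tutorial’s:

```javascript
// Sketch: compare codes submitted on MTurk against the codes Qualtrics
// recorded, producing an approve/reject decision per worker.

function validateCodes(submissions, validCodes) {
  // A Set makes each lookup O(1), so thousands of codes stay fast.
  const valid = new Set(validCodes.map((c) => String(c).trim()));
  return submissions.map(function (s) {
    const code = String(s.code || "").trim();
    return {
      workerId: s.workerId,
      decision: code !== "" && valid.has(code) ? "approve" : "reject",
    };
  });
}
```

In practice the two inputs would come from the MTurk results CSV and the Qualtrics response export.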
In this tutorial, I explain how to set up Qualtrics to generate survey codes for Mechanical Turk. Continue reading Survey Completion Codes in Qualtrics
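Qualtrics typically does this with a Random Number Embedded Data field; the same idea expressed in code (as a sketch, not the tutorial’s method) is just a random numeric code of a fixed length:

```javascript
// Sketch: generate a random numeric completion code, analogous to a
// Random Number embedded data field in Qualtrics.

function generateCompletionCode(digits) {
  let code = "";
  for (let i = 0; i < digits; i++) {
    code += Math.floor(Math.random() * 10); // one random digit at a time
  }
  return code;
}
```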
In this tutorial, I describe how to prevent blank survey code submissions. Continue reading Preventing Blank Code Submissions
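The essence of blank-code prevention is a simple check before the HIT form is allowed to submit. A minimal sketch (the wiring to the form is illustrative, not the tutorial’s exact code):

```javascript
// Sketch: treat empty or whitespace-only code submissions as blank.

function isBlankCode(value) {
  return String(value || "").trim() === "";
}

// In a HIT page, this might be wired to the form's submit event:
// form.addEventListener("submit", function (e) {
//   if (isBlankCode(codeInput.value)) {
//     e.preventDefault(); // block the submission
//     alert("Please enter your survey code before submitting.");
//   }
// });
```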
Sometimes you need to post an MTurk task multiple times, but you don’t want individuals completing the task more than once.
Other times you need to prevent individuals who’ve taken one HIT from taking a related one, because participating in the first task would bias their responses to the second.
This is a common problem in psychology studies where deception is used or naïve/inexperienced participants are desired. This is because knowledge of the study’s purpose can change responses (as with the social desirability bias in studies of social phenomena), or learning might take place (as researchers studying economic decisions are now finding with experienced MTurk samples).
Mechanical Turk doesn’t provide any user-friendly tools for solving this problem. But never fear: with a little bit of code it’s super easy to screen participants! Continue reading Screening by WorkerIDs to Prevent Retakes
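To sketch the idea (names and wiring are my assumptions, not the tutorial’s code): MTurk appends the worker’s ID to an external HIT’s URL as the `workerId` query parameter, so a small script can read it and check it against a list of earlier participants:

```javascript
// Sketch: read the workerId MTurk appends to the HIT URL and screen
// out workers who have already participated.

function getWorkerId(queryString) {
  const params = new URLSearchParams(queryString);
  return params.get("workerId") || "";
}

function isRepeatWorker(workerId, previousIds) {
  // Set membership keeps the check fast even for long ID lists.
  return new Set(previousIds).has(workerId);
}

// In a HIT page, repeat workers might then be shown a polite
// "you've already participated" message instead of the task:
// if (isRepeatWorker(getWorkerId(window.location.search), previousIds)) { ... }
```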
Please stand by while I add some tutorials. 🙂