
How (not) to Incent Crowd Workers

Payment Schemes and Feedback in Crowdsourcing

Research Paper · Published in Business & Information Systems Engineering

Abstract

Crowdsourcing is gaining momentum: digital workplaces such as Amazon Mechanical Turk, oDesk, Clickworker, 99designs, or InnoCentive make it easy to distribute human work to hundreds or thousands of freelancers. In these crowdsourcing settings, one challenge is to properly incent worker effort to create value. Common incentive schemes are piece rate payments and rank-order tournaments among workers. Tournaments might or might not disclose a worker’s current competitive position via a leaderboard. Following an exploratory approach, we derive a model of worker performance in rank-order tournaments and present a series of real effort studies using experimental techniques on an online labor market to test the model and to compare dyadic tournaments with piece rate payments. The data suggest that, on average, dyadic tournaments do not improve performance over a simple piece rate for simple and short crowdsourcing tasks. Furthermore, giving feedback on the competitive position in such tournaments tends to be negatively related to workers’ performance. This relation is partially mediated by task completion and moderated by the strength of the competition: when playing against strong competitors, feedback is associated with workers quitting the task altogether and thus showing lower performance; when the competitors are weak, workers tend to complete the task but with reduced effort. Overall, individual piece rate payments are the simplest to communicate and implement, while the performance they incent is on par with that of more complex dyadic tournaments.
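To make the contrast between the two incentive schemes concrete, here is a minimal sketch of the payout rules. All parameters (piece rate, prize spread, tie handling) and function names are illustrative assumptions for exposition, not the values or implementation used in the experiments.

```python
# Sketch of the two payment schemes compared in the paper (assumed numbers).

PIECE_RATE = 0.02     # assumed pay per completed unit, in USD
WINNER_PRIZE = 1.00   # assumed tournament prizes, in USD
LOSER_PRIZE = 0.20


def piece_rate_payout(units_completed: int) -> float:
    """Piece rate: pay grows linearly with the worker's own output."""
    return units_completed * PIECE_RATE


def dyadic_tournament_payout(own_units: int, opponent_units: int) -> float:
    """Dyadic rank-order tournament: only the rank relative to the matched
    opponent matters. Splitting ties evenly is an assumption here."""
    if own_units > opponent_units:
        return WINNER_PRIZE
    if own_units < opponent_units:
        return LOSER_PRIZE
    return (WINNER_PRIZE + LOSER_PRIZE) / 2


if __name__ == "__main__":
    # A worker who completes 40 units against an opponent who completes 55:
    print(piece_rate_payout(40))             # 0.80 -- pay tracks output
    print(dyadic_tournament_payout(40, 55))  # 0.20 -- pay tracks rank only
```

The structural difference the sketch highlights is that tournament pay depends only on relative rank. A leaderboard therefore changes what a worker knows about that rank without changing the payoff rule itself, which is exactly the lever the feedback treatments vary.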



Author information

Corresponding author: Tim Straub.

Additional information

Accepted after one revision by the editors of the special issue.


About this article


Cite this article

Straub, T., Gimpel, H., Teschner, F. et al. How (not) to Incent Crowd Workers. Bus Inf Syst Eng 57, 167–179 (2015). https://doi.org/10.1007/s12599-015-0384-2
