ABSTRACT:
Given the imperative to innovate and stay ahead of the competition, organizations—called “seekers”—increasingly rely on the “wisdom of the crowd” to gain business insight. Motivated by monetary incentives, the desire to learn, and the opportunity to enhance their reputation, “solvers” from the crowd participate in contests hosted by such seekers on crowdsourcing platforms. While the collective intelligence of the crowd is widely acknowledged to yield a diverse range of innovative solutions, the impact of the factors that evoke contest competitiveness on the quality of these solutions remains unclear. Specifically, it is not well understood how solution quality is affected by the competitiveness engendered by the interplay among situational factors of the contest, such as the nature of its economic incentives, the number of solving teams involved, their proximity to the winning position (relative position), and the task’s complexity. To bridge this knowledge gap, our research examines the direct effects of the number of solving teams and of prize inequality on performance, and clarifies the moderating effect of task complexity. We also investigate how these direct effects vary across the relative positions of the solving teams (e.g., high-performing versus low-performing teams). The study extends the boundaries of the tournament literature by demonstrating that crowdsourcing contests evoke behavioral responses distinct from those in traditional tournaments. Practically, it provides contest designers with novel insights on crafting prize structures that optimize participant performance.
Key words and phrases: Crowdsourcing contests, innovation tournaments, business competitiveness, prize structure, task complexity, crowdsourcing, innovation