Title | Fine-tuning Gold Questions in Crowdsourcing Tasks using Probabilistic and Siamese Neural Network Models |
Publication Type | Journal Article |
Year of Publication | 2019 |
Authors | Pinto, J. M. G., K. El Maarry, and W.-T. Balke |
Journal | The Journal of Web Science (JWS) |
Volume | 6 |
Date Published | 2019 |
Publisher | NOW Publishers |
ISSN | 2332-4031 |
Abstract | The economic benefits of crowdsourcing have furthered its widespread use over the past decade. However, increasing numbers of fraudulent workers threaten to undermine the emerging crowdsourcing economy: requestors face the choice of either risking low-quality results or paying extra money for quality safeguards such as gold questions or majority voting. The more safeguards are injected into the workload, the lower the risks imposed by fraudulent workers, but the higher the costs. So, how many safeguards are actually needed? Is there a generally applicable number or percentage? This paper uses deep learning techniques to identify a custom-tailored number of gold questions per worker for individually managing the cost/quality balance. Our new method follows real-life experience: the more we know about a worker before assigning a task, the clearer our belief or disbelief in that worker's reliability becomes. Employing probabilistic models, namely Bayesian belief networks and certainty factor models, our method creates worker profiles reflecting different a-priori belief values, and we prove that the actual number of gold questions per worker can indeed be assessed. Our evaluation on real-world crowdsourcing datasets demonstrates our method's efficiency in saving money while maintaining high-quality results. |
Attachment | Size |
---|---|
Crowdsourcing - JWS 6-2019.pdf | 1.45 MB |
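
The certainty-factor reasoning described in the abstract above can be pictured with a minimal sketch. The snippet below is a hypothetical illustration only: it combines MYCIN-style certainty factors into a belief about a worker's reliability and maps that belief to a gold-question budget with a simple linear rule. The function names, the evidence values, and the mapping are assumptions made for illustration and are not taken from the paper.

```python
# Hypothetical sketch: MYCIN-style certainty-factor (CF) combination for a
# worker-reliability belief, plus an illustrative mapping from belief to a
# per-worker number of gold questions. Not the paper's actual model.

def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two certainty factors in [-1, 1] using the standard MYCIN rule."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))


def worker_belief(prior_cf: float, evidence_cfs: list[float]) -> float:
    """Fold observed evidence (e.g. past task outcomes) into an a-priori belief."""
    belief = prior_cf
    for cf in evidence_cfs:
        belief = combine_cf(belief, cf)
    return belief


def gold_questions_for(belief: float, max_gold: int = 10, min_gold: int = 1) -> int:
    """Map belief in [-1, 1] to a gold-question budget: the stronger the belief
    in a worker's reliability, the fewer gold questions are injected
    (illustrative linear rule, not the paper's)."""
    trust = (belief + 1.0) / 2.0  # rescale belief from [-1, 1] to [0, 1]
    return round(max_gold - trust * (max_gold - min_gold))


if __name__ == "__main__":
    # A worker with a mildly positive prior who passed two checks and failed one.
    prior = 0.2
    evidence = [0.4, 0.3, -0.5]  # positive = passed a gold question, negative = failed
    b = worker_belief(prior, evidence)
    print(f"combined belief: {b:.2f}, gold questions: {gold_questions_for(b)}")
```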