Micro-task crowdsourcing provides a remarkable opportunity for both academia and industry by offering a large-scale, on-demand, and low-cost pool of geographically distributed workers for completing complex tasks that can be divided into sets of short and simple online tasks, such as annotation, data collection, or participation in a subjective test.
Our focus is to investigate “quality” in every aspect of the crowdsourcing process, from the design of workflows and the integration of AI systems to application domains.
Among other topics, we work on building state-of-the-art methods for conducting valid, reliable, and reproducible subjective tests using crowdsourcing for different media: speech, video, gaming, and text. These methods can be used for evaluating the output of AI models (e.g., speech enhancement, denoising, translation, or summarization), evaluating codecs, or studying the trade-offs between influencing factors and perceived quality. Our group actively participates in the standardization activities of ITU-T Study Group 12, leading and contributing to several Work Items, including P.Crowd (speech and crowdsourcing), P.CrowdV (video and crowdsourcing), P.CrowdG (gaming and crowdsourcing), and P.CrowdCon (conversation tests in crowdsourcing).
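As a simple illustration of what such a subjective test produces, the sketch below aggregates crowdsourced 1–5 opinion ratings into a Mean Opinion Score (MOS) with a 95% confidence interval per test condition. It is a minimal, hypothetical example, not our standardized tooling; the condition names and rating data are invented for illustration.

```python
# Minimal sketch: aggregate crowdsourced 1-5 ratings into a MOS with a 95% CI.
# The condition names and ratings below are hypothetical example data.
from statistics import mean, stdev
from math import sqrt

def mos_with_ci(ratings, z=1.96):
    """Return (MOS, half-width of the 95% confidence interval) for a list of ratings."""
    m = mean(ratings)
    ci = z * stdev(ratings) / sqrt(len(ratings)) if len(ratings) > 1 else 0.0
    return m, ci

# Hypothetical ratings collected from crowd workers, grouped by test condition.
ratings_per_condition = {
    "codec_A_16kbps": [4, 5, 4, 3, 4, 4, 5, 3],
    "codec_A_8kbps":  [3, 2, 3, 3, 2, 4, 3, 2],
}

for condition, ratings in ratings_per_condition.items():
    m, ci = mos_with_ci(ratings)
    print(f"{condition}: MOS = {m:.2f} +/- {ci:.2f} (n = {len(ratings)})")
```

In practice, such aggregation is preceded by the data-cleaning and reliability checks that the methods above are designed to provide, so that unreliable ratings do not distort the resulting scores.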
Our research can be categorized as follows: