This book evaluates the impact of key factors affecting the results of speech quality assessment studies carried out in crowdsourcing. The author describes how these factors relate to the test structure, the effect of environmental background noise, and the influence of language differences. He details multiple user-centered studies conducted to derive guidelines for the reliable collection of speech quality scores in crowdsourcing. Specifically, the book addresses questions such as the optimal number of speech samples to include in a listening task, the influence of environmental background noise on speech quality ratings, methods for classifying background noise in web-based audio recordings, and the impact of language proficiency on users' perception of speech quality. Ultimately, the results of these studies contributed to ITU-T Recommendation P.808, which defines guidelines for conducting speech quality studies in crowdsourcing.