Abstract: Current crowdsourcing platforms provide an attractive solution for processing high-volume tasks at low cost. However, quality control remains a major concern. In the present work, we developed a private crowdsourcing system (PCSS) running on an intranetwork, which allows us to devise quality control methods. We introduce four worker selection methods and a grade-based training method. The four worker selection methods are preprocessing filtering, real-time filtering, post-processing filtering, and guess-processing filtering. In addition to a basic approach based on initial training or gold-standard data, these methods include a novel approach utilizing collaborative filtering techniques. Using PCSS, we collected a large amount of vocabulary data for natural language processing (NLP) applications such as voice recognition and text-to-speech. The quality control methods increased accuracy by 32.4 points on vocabulary collection tasks. We also implemented the grade-based training method to avoid claims of unfair dismissal and shrinkage of the crowdsourcing market caused by excluding workers. This training method uses Bayesian networks to calculate correlations between tasks based on workers' records, and then allocates learning tasks to workers to improve their results on target tasks according to these correlations. In an experiment, the method automatically allocated learning tasks for target tasks, and after training, the workers' accuracy on the target tasks rose by 10.77 points on average. Therefore, by combining the filtering methods and the training method, task requesters in microtask crowdsourcing can obtain higher-quality results without dismissing valuable workers.
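The core idea of the training method, allocating learning tasks that are most strongly related to a target task based on workers' past records, can be sketched as follows. This is a minimal illustration only: the paper's Bayesian-network model is replaced here with a plain Pearson correlation over per-task accuracy scores, and the worker records (`records`), function names, and task names are all hypothetical.

```python
import math

# Hypothetical per-worker accuracy records: worker -> {task: accuracy}.
records = {
    "w1": {"taskA": 0.9, "taskB": 0.85, "taskC": 0.3},
    "w2": {"taskA": 0.6, "taskB": 0.55, "taskC": 0.7},
    "w3": {"taskA": 0.8, "taskB": 0.80, "taskC": 0.5},
    "w4": {"taskA": 0.4, "taskB": 0.35, "taskC": 0.6},
}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def allocate_learning_tasks(records, target, k=1):
    """Return the k tasks whose worker accuracies correlate most strongly
    with accuracy on the target task; these are allocated as learning tasks."""
    workers = list(records)
    tasks = sorted({t for r in records.values() for t in r})
    target_scores = [records[w][target] for w in workers]
    scores = {
        t: pearson(target_scores, [records[w][t] for w in workers])
        for t in tasks
        if t != target
    }
    # Highest correlation first; train workers on the top-k related tasks.
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]
```

With the data above, `allocate_learning_tasks(records, "taskA")` selects `taskB`, whose accuracies move in step with `taskA` across workers, rather than `taskC`, which is negatively correlated. A Bayesian network, as used in the paper, additionally captures conditional dependencies among several tasks rather than only pairwise correlation.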