Affiliations: [a] Key Laboratory of Dependable Service Computing in Cyber Physical Society (Chongqing University), Ministry of Education, China. E-mail: txiang@cqu.edu.cn | [b] College of Computer Science, Chongqing University, Chongqing 400044, China. E-mails: yli069@cqu.edu.cn, xguoli@cqu.edu.cn, zhongshigang@cqu.edu.cn | [c] School of Information Technology, Deakin University, Melbourne, Australia. E-mail: syu@deakin.edu.au
Abstract: Ensemble learning plays an important role in big data analysis. A major limitation is that multiple parties cannot share the knowledge extracted from their ensemble learning models with a privacy guarantee; there is therefore a strong demand for privacy-preserving collaborative ensemble learning. This paper proposes a privacy-preserving collaborative ensemble learning framework under differential privacy. In the framework, multiple parties independently build their local ensemble models with personalized privacy budgets, and collaboratively share their knowledge to obtain a stronger classifier with the help of a central agent in a privacy-preserving way. Under this framework, this paper presents differentially private versions of two widely used ensemble learning algorithms: collaborative random forests under differential privacy (CRFsDP) and collaborative adaptive boosting under differential privacy (CAdaBoostDP). Theoretical analysis and extensive experimental results show that the proposed framework achieves a good balance between privacy and utility in an efficient way.
Keywords: Ensemble learning, differential privacy, random forests, adaptive boosting