Affiliations: Division of Computer Science and Engineering, School of EECS, Louisiana State University, Baton Rouge, LA 70803-4020, USA. E-mail: jianhua@csc.lsu.edu
Abstract: Scalability has become an increasingly important issue for data mining and knowledge discovery in this “big data” era. Random sampling can be used to tackle the problem of scalable learning. Adaptive sampling is superior to traditional batch sampling methods: it typically requires a much smaller sample size, and thus offers better efficiency, while assuring a guaranteed level of estimation accuracy and confidence. In recent work, a new adaptive sampling method was developed by the author and colleagues, which has since been applied to build an efficient, scalable boosting learning algorithm. This paper presents new results on the theoretical analysis of the proposed sampling method. Specifically, for controlling absolute error, a bound is established on the probability that the new sampling method stops too early. It is also shown that the criterion function for controlling relative error is in fact convex. A new variant of the sampling method is presented as well. Empirical simulation results indicate that our methods, both the new variant and the original algorithm, often use a significantly smaller sample size (i.e., number of sampled instances) while maintaining competitive accuracy and confidence compared with batch sampling. An empirical study is also reported showing efficient classification of Web advertisements using a sampling-based ensemble learner.
Keywords: Adaptive sampling, sample size, web mining, scalable learning