Article type: Research Article
Authors: Bostanian, Zohreh [a] | Boostani, Reza [a],* | Sabeti, Malihe [b] | Mohammadi, Mokhtar [c]
Affiliations: [a] CSE & IT Department, Electrical and Computer Engineering Faculty, Shiraz University, Shiraz, Iran | [b] Department of Computer Engineering, North Tehran Branch, Islamic Azad University, Tehran, Iran | [c] Department of Information Technology, Lebanese French University, Erbil, Kurdistan Region, Iraq
Correspondence: [*] Corresponding author: Reza Boostani, CSE & IT Department, Electrical and Computer Engineering Faculty, Shiraz University, Shiraz, Iran. Tel./Fax: +98 7136474605; E-mail: boostani@shirazu.ac.ir.
Abstract: Ensemble learners and deep neural networks are state-of-the-art schemes for classification applications. However, deep networks suffer from complex structures, need large numbers of samples, and require plenty of time to converge. In contrast, ensemble learners (especially AdaBoost) are fast to train, can work with both small and large datasets, and benefit from a strong mathematical background. In this paper, we have developed a new orthogonal version of AdaBoost, termed ORBoost, in order to desensitize its performance against noisy samples and to exploit a low number of weak learners. In ORBoost, after reweighting the distribution for each learner, the Gram-Schmidt rule updates those weights so that the new sample distribution is orthogonal to the former distributions. In contrast, AdaBoost imposes no orthogonality constraint, even between two successive weak learners, so the sample distributions of different learners remain similar. To assess the performance of ORBoost, 16 UCI-Repository datasets along with six big datasets are deployed. The performance of ORBoost is compared to standard AdaBoost, LogitBoost and AveBoost-II over the selected datasets. The achieved results support the significant superiority of ORBoost over its counterparts in terms of accuracy, robustness, number of exploited weak learners, and generalization on most of the datasets.
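The orthogonal reweighting described in the abstract lends itself to a compact sketch. The Python code below is a minimal illustration of the idea, not the authors' published algorithm: it applies the standard AdaBoost update and then uses Gram-Schmidt projection to remove the components of the new weight vector that lie along the weight vectors of earlier rounds. The clipping-and-renormalization step that keeps the weights a valid distribution is our own assumption, since raw projection can produce negative entries.

# Minimal sketch of the ORBoost idea (assumptions noted in comments):
# standard AdaBoost reweighting followed by Gram-Schmidt orthogonalization
# of the new sample-weight vector against previous rounds' weight vectors.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def orboost_fit(X, y, n_rounds=10):
    """Train an AdaBoost-style ensemble with orthogonalized sample weights.
    Labels y are assumed to be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)      # current sample distribution
    history = []                 # weight vectors of earlier rounds
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        learners.append(stump)
        alphas.append(alpha)
        history.append(w.copy())
        # Standard AdaBoost reweighting ...
        w_new = w * np.exp(-alpha * y * pred)
        # ... then Gram-Schmidt: subtract the projections of w_new onto
        # the weight vectors of all previous rounds.
        for w_prev in history:
            w_new -= (w_new @ w_prev) / (w_prev @ w_prev) * w_prev
        # Assumption: clip negatives and renormalize so that w_new
        # remains a valid probability distribution over the samples.
        w_new = np.clip(w_new, 1e-12, None)
        w = w_new / w_new.sum()
    return learners, alphas

def orboost_predict(X, learners, alphas):
    """Weighted-vote prediction of the boosted ensemble."""
    votes = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
    return np.sign(votes)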
Keywords: Gram-Schmidt, orthogonal AdaBoost, AveBoost, LogitBoost
DOI: 10.3233/IDA-205705
Journal: Intelligent Data Analysis, vol. 26, no. 3, pp. 805-818, 2022