Article type: Research Article
Authors: Gaber, Mohamed Medhat* | Atwal, Harinder Singh
Affiliations: School of Computing, University of Portsmouth, Portsmouth, Hampshire, UK
Correspondence: [*] Corresponding author: Mohamed Medhat Gaber, School of Computing, University of Portsmouth, Buckingham Building, Lion Terrace, Portsmouth, Hampshire PO1 3HE, UK. E-mail: mohamed.m.gaber@gmail.com
Abstract: Data classification is a major problem in data mining and machine learning. The process involves constructing a model from a set of historical data instances in which one of the features is designated as the class. The model is then used to classify instances whose class feature is unknown. An important development in data classification has been the use of a set of classifiers built from different, but possibly overlapping, sets of instances; this approach is known as ensemble-based classification. Random Forests is an example of ensemble-based classification, in which the outputs of many trees are combined to classify an instance. Developed by Breiman in 2001, the technique has proved effective and is representative of the state of the art in data classification. In this paper we propose an enhancement that boosts the overall performance of Random Forests. Random Forests takes two parameters: the number of trees, and the number of features to be randomly drawn from the set of all features at each split in a tree. We investigate and incorporate an information-theoretic measure of the predictive power of the features in a given dataset, namely Information Gain. We show experimentally that the predictive power of the features provides a guide to setting the second parameter of Random Forests (the number of randomly drawn features to split on).
Keywords: Data classification, Random Forests, entropy, information gain
DOI: 10.3233/IDT-130171
Journal: Intelligent Decision Technologies, vol. 7, no. 4, pp. 319-327, 2013
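
The abstract rests on two computable ideas: the Information Gain of a feature and the number-of-features-per-split parameter of Random Forests. The Python sketch below computes Information Gain for each categorical feature of a toy dataset; the closing heuristic that turns gain scores into a per-split feature count is a hypothetical illustration only, not the selection rule the paper evaluates.

    import math
    from collections import Counter

    def entropy(labels):
        # Shannon entropy H(S) = -sum_i p_i * log2(p_i) over class proportions.
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(feature_values, labels):
        # IG(S, A) = H(S) - sum_v (|S_v| / |S|) * H(S_v) for a categorical feature A.
        n = len(labels)
        remainder = 0.0
        for v in set(feature_values):
            subset = [y for x, y in zip(feature_values, labels) if x == v]
            remainder += (len(subset) / n) * entropy(subset)
        return entropy(labels) - remainder

    # Toy dataset: each row holds two categorical features (outlook, windy).
    X = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes"),
         ("overcast", "no"), ("overcast", "yes")]
    y = ["play", "stay", "play", "stay", "play", "play"]

    gains = [information_gain([row[j] for row in X], y) for j in range(len(X[0]))]
    print(gains)  # per-feature predictive power, roughly [0.25, 0.46] here

    # Hypothetical heuristic (an assumption, not the paper's method): draw as
    # many features per split as there are features with above-average gain,
    # rather than the conventional sqrt(total features) default.
    mean_gain = sum(gains) / len(gains)
    num_split_features = max(1, sum(g > mean_gain for g in gains))
    print(num_split_features)

In an off-the-shelf library such as scikit-learn, the two parameters named in the abstract correspond to RandomForestClassifier's n_estimators and max_features arguments, so a score like num_split_features above would be supplied as max_features.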