Article type: Research Article
Authors: Hussain, Muhammad* | Alotaibi, Fouziah | Qazi, Emad-ul-Haq | AboAlSamh, Hatim A.
Affiliations: Department of Computer Science, Visual Computing Lab, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
Correspondence: [*] Corresponding author. Muhammad Hussain, Department of Computer Science, Visual Computing Lab, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia. E-mail: mhussain@ksu.edu.sa.
Abstract: The face is a dominant biometric for recognizing a person. However, face recognition becomes challenging when there are severe changes in lighting conditions, i.e., illumination variations, which have been shown to have a more severe effect on recognition performance than the inherent differences between individuals. Most existing methods for tackling the problem of illumination variation assume that illumination lies in the large-scale component of a facial image; as such, the large-scale component is discarded, and features are extracted from the small-scale components. Recently, it has been shown that the large-scale component is also important and that the small-scale component contains detrimental noise. Keeping this in view, we introduce a method for illumination-invariant face recognition that exploits both the large-scale and small-scale components, discarding the illumination artifacts and detrimental noise using ContourletDS. After discarding the unwanted components, local and global features are extracted using a convolutional neural network (CNN) model; we examined three widely employed CNN models: VGG-16, GoogLeNet, and ResNet152. To reduce the dimensions of the local and global features and to fuse them, we employ linear discriminant analysis (LDA). Finally, ridge regression is used for recognition. The method was evaluated on three benchmark datasets, achieving accuracies of 99.7%, 100%, and 79.76% on Extended Yale B, AR, and M-PIE, respectively; the comparison shows that it outperforms state-of-the-art methods. (A minimal, illustrative sketch of this pipeline follows the citation details below.)
Keywords: Face recognition, deep learning, convolutional neural network (CNN)
DOI: 10.3233/JIFS-212254
Journal: Journal of Intelligent & Fuzzy Systems, vol. 43, no. 1, pp. 383-396, 2022
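The abstract describes a three-stage recognition back-end: CNN feature extraction, LDA-based dimensionality reduction and fusion, and ridge-regression classification. The Python sketch below is only an illustration of that pipeline under stated assumptions, not the authors' implementation: the ContourletDS preprocessing is assumed to have been applied to the face crops beforehand, a pretrained ResNet152 (recent torchvision) stands in for any of the three CNNs examined, and all function names, placeholder tensors, and labels are hypothetical.

import torch
import torchvision.models as models
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import RidgeClassifier

def build_feature_extractor():
    """ResNet152 backbone with the classification head removed, used as a global feature extractor."""
    backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
    backbone.eval()
    return torch.nn.Sequential(*list(backbone.children())[:-1])  # drop the final fc layer

@torch.no_grad()
def extract_features(extractor, images):
    """images: (N, 3, 224, 224) tensor of (assumed) ContourletDS-processed face crops."""
    feats = extractor(images)                   # (N, 2048, 1, 1) after global average pooling
    return feats.flatten(start_dim=1).numpy()   # (N, 2048) feature matrix

# --- hypothetical usage with placeholder data (stand-ins for preprocessed face images) ---
extractor = build_feature_extractor()
train_imgs = torch.rand(40, 3, 224, 224)
test_imgs = torch.rand(10, 3, 224, 224)
train_labels = [i % 4 for i in range(40)]       # 4 identities, 10 samples each

X_train = extract_features(extractor, train_imgs)
X_test = extract_features(extractor, test_imgs)

# LDA reduces the CNN features to at most (n_classes - 1) discriminant dimensions,
# serving here as both dimensionality reduction and a compact fused representation.
lda = LinearDiscriminantAnalysis()
X_train_lda = lda.fit_transform(X_train, train_labels)
X_test_lda = lda.transform(X_test)

# A ridge classifier (regularized least squares on class indicators) gives the final identity prediction.
clf = RidgeClassifier(alpha=1.0)
clf.fit(X_train_lda, train_labels)
print(clf.predict(X_test_lda))

In this sketch the LDA step is what keeps the ridge classifier cheap: the 2048-dimensional CNN features are projected to at most three discriminant directions for the four hypothetical identities before classification.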