Article type: Research Article
Authors: Alahmadi, Amani | Hussain, Muhammad* | Aboalsamh, Hatim | Azmi, Aqil
Affiliations: Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
Correspondence: [*] Corresponding author. Muhammad Hussain, Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia. E-mail: mhussain@ksu.edu.sa.
Abstract: Smartphone-based periocular recognition (SPR) has gained significant attention because of the limitations of the face and iris biometric modalities. Most existing methods for this problem employ hand-crafted features. Deep convolutional neural networks (CNNs), which learn features automatically, have shown outstanding performance over hand-crafted features on many visual recognition tasks. In view of this paradigm shift, we propose an SPR method based on a CNN model. A CNN model needs a huge volume of data, but only a limited amount of data is available for the periocular recognition problem. One solution to this issue is to use a CNN model pre-trained on a dataset from a related domain, but this raises the question of how to extract discriminative features from a pre-trained CNN model and how to classify them. We introduce a simple, efficient and compact image representation method based on a pre-trained CNN model (VGG-Net). This method exploits the wealth of information and the sparsity present in the activations of the convolutional layers of a CNN model. For recognition, we use an efficient and robust Sparse Augmented Collaborative Representation based Classification (SA-CRC) technique. For a thorough evaluation of ConvSRC (the proposed system), experiments were carried out on the VISOB database, which was presented as the challenge dataset at ICIP2016. The results show the superiority of ConvSRC over state-of-the-art methods; it obtains a GMR of more than 99% at FMR = 10⁻³ and outperforms the first-place winner of the ICIP2016 challenge by 10%.
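The abstract outlines a two-stage pipeline: descriptors taken from the convolutional activations of a pre-trained VGG network, followed by a collaborative-representation classifier. The sketch below is only an illustration of that general idea, not the authors' implementation: the use of torchvision's VGG-16, the layer cut point, the L2 normalization, and the plain CRC-RLS classifier (without the paper's sparse-augmented step) are all assumptions made here for the example.

```python
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Illustrative sketch only: ConvSRC's actual layer choice, pooling and the
# sparse-augmented component of SA-CRC are not reproduced here.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def conv_descriptor(image_path, layer_index=22):
    """Flattened, L2-normalised activations from VGG-16's conv stack.

    layer_index=22 is the ReLU output of the fourth convolutional block in
    torchvision's `features` module -- an illustrative cut point.
    """
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        for i, layer in enumerate(vgg.features):
            x = layer(x)
            if i == layer_index:
                break
    feat = x.flatten()
    return feat / (feat.norm() + 1e-12)

def crc_classify(y, dictionary, labels, lam=1e-3):
    """Plain collaborative representation classification (CRC-RLS baseline).

    `dictionary` is a (d, n) matrix whose columns are training descriptors
    and `labels` a length-n list of class ids. The test vector is coded over
    all training samples jointly, then assigned to the class with the
    smallest reconstruction residual.
    """
    D = dictionary
    n = D.shape[1]
    # Regularised least-squares coding: alpha = (D^T D + lam*I)^-1 D^T y
    alpha = torch.linalg.solve(D.T @ D + lam * torch.eye(n), D.T @ y)
    residuals = {}
    for c in set(labels):
        idx = torch.tensor([i for i, l in enumerate(labels) if l == c])
        residuals[c] = torch.norm(y - D[:, idx] @ alpha[idx])
    return min(residuals, key=residuals.get)
```

In ConvSRC itself the coding dictionary is augmented with a sparse component (the "SA" in SA-CRC); the plain CRC shown above is only the baseline formulation that the paper builds on.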
Keywords: Periocular biometric, mobile biometrics, deep learning, convolutional neural network, sparse collaborative representation
DOI: 10.3233/JIFS-190834
Journal: Journal of Intelligent & Fuzzy Systems, vol. 38, no. 3, pp. 3041-3057, 2020