Article type: Research Article
Authors: Jia, Wanjun | Li, Changyong*
Affiliations: Institute of Modern Industries for Intelligent Manufacturing, Xinjiang University, Urumqi, Xinjiang, China
Correspondence: [*] Corresponding author. Changyong Li, Institute of Modern Industries for Intelligent Manufacturing, Xinjiang University, Urumqi, China. E-mail: lcyxjdx@qq.com.
Abstract: This study proposes a computer-vision method to help people with different degrees of hearing impairment integrate into society more fully and carry out more convenient human-to-human and human-to-robot sign language interaction. Traditional sign language recognition methods struggle to achieve good results in scenes with backgrounds close to skin color, background clutter, and partial occlusion. To enable faster real-time display, we compare standard single-target recognition algorithms, select the best-performing model, YOLOv8, and on that basis propose SLR-YOLO, a lighter and more accurate network that improves upon YOLOv8. First, the SPPF module in the backbone network is replaced with an RFB module to enhance the network's feature extraction capability; second, in the neck, a BiFPN is used to strengthen feature fusion and a Ghost module is added to make the network lighter; finally, to introduce partial masking during training and improve generalization, three data augmentation methods (Mixup, Random Erasing, and Cutout) are compared, and Cutout is selected. The accuracy of the improved SLR-YOLO model on the validation sets of the American Sign Language Letters Dataset and the Bengali Sign Language Alphabet Dataset is 90.6% and 98.5%, respectively. Compared with the original YOLOv8, accuracy on both datasets is improved by 1.3 percentage points, the number of parameters is reduced by 11.31%, and FLOPs are reduced by 11.58%.
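The Cutout augmentation selected in the abstract masks a random square patch of each training image so the network learns to cope with partial occlusion. The paper does not publish its implementation; the sketch below is a minimal, generic version of the standard Cutout technique (the function name `cutout` and the default patch size are illustrative assumptions, not the authors' code).

```python
import numpy as np

def cutout(image, size=16, rng=None):
    """Cutout augmentation: zero out one randomly placed square patch.

    image: H x W x C array; size: side length of the masked square.
    The patch may extend past the border, in which case it is clipped,
    matching the usual Cutout formulation.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    # Pick a random center, then clip the square to the image bounds.
    cy = int(rng.integers(0, h))
    cx = int(rng.integers(0, w))
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.copy()
    out[y1:y2, x1:x2] = 0
    return out

# Example: augment a dummy 64x64 RGB image.
img = np.ones((64, 64, 3), dtype=np.float32)
aug = cutout(img, size=16)
```

Because the mask simulates occlusion without altering labels, it pairs naturally with detection training, which is consistent with the abstract's motivation for choosing it over Mixup and Random Erasing.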
Keywords: Machine vision, sign language recognition, YOLO, deep learning, lightweight
DOI: 10.3233/JIFS-235132
Journal: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 1, pp. 1663-1680, 2024