Article type: Research Article
Authors: Srihari, Pasala[a],* | Harikiran, Jonnadula[b] | Sai Chandana, B.[b] | Surendra Reddy, Vinta[b]
Affiliations: [a] Department of Computer Science and Engineering- Artificial Intelligence and Machine Learning, B V Raju Institute of Technology, Narsapur, Hyderabad, India | [b] School of Computer Science Engineering, VIT-AP University, Amaravati, Andhra Pradesh, India
Correspondence: [*] Corresponding author. P. Srihari. Department of Computer Science and Engineering- Artificial Intelligence and Machine Learning, B V Raju Institute of Technology, Narsapur, Hyderabad, India. E-mail: psrihari851@gmail.com.
Abstract: Human activity recognition is the process of using sensors and algorithms to identify and classify human actions from collected data. Recognizing activity in visible-light images is challenging because lighting conditions affect image quality and, consequently, recognition accuracy; low lighting, for example, can make it difficult to distinguish between different activities. Earlier investigations have used thermal cameras to address this issue. Building on this, we propose a novel deep learning (DL) technique for predicting and classifying human actions in thermal images. First, noise is removed from the input thermal images with a mean filter, and the images are normalized using min-max normalization. A Deep Recurrent Convolutional Neural Network (DRCNN) is then used to segment the human from the thermal images and to retrieve features from the segmented image: the fully connected layer of the DRCNN serves as the segmentation layer, and the multi-scale convolutional layer of the DRCNN extracts features from the segmented images for action detection. DenseNet-169 is used to recognize human actions in the thermal pictures. Finally, a CapsNet classifies the human action types, tuned with the Elephant Herding Optimization (EHO) algorithm for better classification. Experiments on two thermal datasets, the LTIR dataset and the IITR-IAR dataset, are evaluated with accuracy, precision, recall, and F1-score. The proposed approach outperforms state-of-the-art methods for action detection and categorization on thermal images.
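The following is a minimal illustrative sketch (not the authors' code) of the preprocessing step described in the abstract: mean-filter denoising followed by min-max normalization of a thermal frame. It assumes a single-channel image loaded as a NumPy array; the function name, kernel size, and file path are hypothetical.

    # Illustrative sketch of the abstract's preprocessing stage, assuming OpenCV and NumPy.
    import cv2
    import numpy as np

    def preprocess_thermal(img: np.ndarray, kernel_size: int = 3) -> np.ndarray:
        """Denoise with a mean (box) filter, then rescale intensities to [0, 1]."""
        # Mean filter: each pixel becomes the average of its kernel_size x kernel_size neighborhood.
        denoised = cv2.blur(img, (kernel_size, kernel_size))

        # Min-max normalization: (x - min) / (max - min), guarding against a constant image.
        lo, hi = float(denoised.min()), float(denoised.max())
        if hi == lo:
            return np.zeros_like(denoised, dtype=np.float32)
        return ((denoised - lo) / (hi - lo)).astype(np.float32)

    # Example usage with a hypothetical file path:
    # frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)
    # normalized = preprocess_thermal(frame)

The normalized output would then feed the segmentation and feature-extraction stages (DRCNN, DenseNet-169, CapsNet) summarized in the abstract, whose details are given in the paper itself.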
Keywords: Human action recognition, deep recurrent convolutional neural network, thermal images, classification, CapsNet, feature extraction, DenseNet-169
DOI: 10.3233/JIFS-230505
Journal: Journal of Intelligent & Fuzzy Systems, vol. 45, no. 6, pp. 11737-11755, 2023