Article type: Research Article
Authors: Khurana, Khushboo [a], * | Deshpande, Umesh [b]
Affiliations: [a] Department of Computer Science and Engineering, Shri Ramdeobaba College of Engineering and Management, Nagpur, India | [b] Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology (VNIT), Nagpur, India
Correspondence: [*] Corresponding author. Khushboo Khurana, Department of Computer Science and Engineering, Shri Ramdeobaba College of Engineering and Management, Nagpur, India. E-mails: khuranakp@rknec.edu, khuranakpk@gmail.com.
Abstract: In this information age, visual content is growing exponentially, and video captioning can address many real-life applications. Automatically generated video captions help in comprehending a video in a short time and assist in faster information retrieval, video analysis, indexing, report generation, etc. Captioning of industrial videos is important for obtaining a visual and textual summary of the work ongoing in an industry. The generated captioned summary can assist in remote monitoring of industries, and the captions can be utilized for video question-answering, video segment extraction, productivity analysis, etc. Due to the presence of diverse events, processing industrial videos is more challenging than processing videos from other domains. In this paper, we address the real-life application of generating descriptions for the videos of a labor-intensive industry. We propose a keyframe-based approach for the generation of video captions. The framework produces a video summary by extracting keyframes, thereby reducing the video captioning task to image captioning. The keyframes are passed to an image captioning model for description generation. From these individual frame captions, a multi-caption description of the video is generated, with a unique start and end time for each caption. For image captioning, a merge encoder-decoder model with a stacked decoder is used. We performed experiments on a dataset specifically created for the small-scale industry. We also show that data augmentation on this small dataset greatly benefits the generation of remarkably good video descriptions. Results of extensive experimentation with different image encoders, language encoders, and decoders in the merge encoder-decoder model are reported.
Apart from presenting results on domain-specific data, results on domain-independent datasets are also presented to show the general applicability of the technique. Performance comparisons on existing datasets (OVSD, Flickr8k, and Flickr30k) are reported to demonstrate the scalability of our method.
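The abstract describes reducing video captioning to image captioning: each keyframe receives a caption, and consecutive keyframes sharing a caption are merged into one localized segment with a start and end time. The following is a minimal sketch of that merging step only; the per-keyframe captions here are hand-written stand-ins for the output of the paper's merge encoder-decoder image captioning model, and the function name is hypothetical.

```python
def localize_captions(keyframe_captions, fps):
    """Merge consecutive keyframes that share a caption into one
    segment with a start and end time (in seconds).

    keyframe_captions: list of (frame_index, caption) pairs,
                       sorted by frame index.
    fps: frame rate of the source video.
    """
    segments = []
    for frame_idx, caption in keyframe_captions:
        t = frame_idx / fps
        if segments and segments[-1]["caption"] == caption:
            segments[-1]["end"] = t  # same activity continues: extend segment
        else:
            segments.append({"caption": caption, "start": t, "end": t})
    return segments

# Example: illustrative captions for keyframes of a 25-fps industrial video.
frames = [(0, "worker cuts fabric"), (50, "worker cuts fabric"),
          (125, "worker stitches fabric"), (200, "worker folds garment")]
for seg in localize_captions(frames, fps=25):
    print(f'{seg["start"]:.1f}s-{seg["end"]:.1f}s: {seg["caption"]}')
```

Running the example merges the first two keyframes into one segment (0.0s-2.0s), yielding a three-caption description of the video rather than four frame-level captions.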
Keywords: Video Localized Captioning, Keyframe extraction, Video Segmentation, Image Captioning, Merge Encoder-Decoder models, Stacked Bi-LSTM Decoder, Deep Learning
DOI: 10.3233/JIFS-212381
Journal: Journal of Intelligent & Fuzzy Systems, vol. 43, no. 4, pp. 4107-4132, 2022
IOS Press, Inc.