Article type: Research Article
Authors: Javed, Hira [a, *] | Sufyan Beg, M.M. [a] | Akhtar, Nadeem [a] | Alroobaea, Roobaea [b]
Affiliations: [a] Department of Computer Engineering, Zakir Husain College of Engineering and Technology, AMU, Aligarh | [b] Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
Correspondence: [*] Corresponding author. Hira Javed, Department of Computer Engineering, Zakir Husain College of Engineering and Technology, AMU, Aligarh. E-mail: hira.javed@zhcet.ac.in.
Abstract: Vlogs, recordings, news, and sports coverage are rich sources of multimodal information that is not limited to text but extends to audio, images, and video. Applications such as summary generation, image/video captioning, multimodal sentiment analysis, and cross-modal retrieval require computer vision together with natural language processing techniques to extract relevant information. Information from different modalities must be leveraged in order to extract quality content; hence, reducing the gap between modalities is of utmost importance. Image-to-text conversion is an emerging field that employs the encoder-decoder architecture: deep CNNs extract image features, and sequence-to-sequence models generate the text description. This paper contributes to the growing body of research in multimodal information retrieval. To generate textual descriptions of images, we performed five experiments using the benchmark Flickr8k dataset, utilizing different architectures: a simple sequence-to-sequence model, an attention mechanism, and a transformer-based architecture, among others. The results were evaluated using the BLEU score and show that the best descriptions are attained with the transformer architecture. We also compared our results against the pretrained vit-gpt2 model, which incorporates a vision transformer.
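As a rough illustration of how generated captions are scored, the sketch below re-implements a minimal single-reference, sentence-level BLEU (the metric used in the abstract) from scratch with simple smoothing. This is an assumption-laden toy for clarity only; library implementations (e.g. NLTK's `sentence_bleu`) differ in smoothing and multi-reference handling.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Toy sentence-level BLEU-4 against a single reference (uniform weights)."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped n-gram matches: each candidate n-gram counts at most
        # as often as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Crude smoothing to avoid log(0) when no n-grams match
        p = overlap / total if overlap > 0 else 1.0 / (2 * total)
        log_precisions.append(math.log(p))
    # Brevity penalty: penalize candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(round(bleu("a dog runs on the grass", "a dog runs on the grass"), 3))  # identical caption -> 1.0
print(bleu("a dog runs", "a dog runs on the grass") < 1.0)  # short caption penalized -> True
```

Real evaluations average corpus-level statistics over many image-reference pairs rather than scoring single sentences, but the clipped-precision and brevity-penalty mechanics are the same.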
Keywords: Multimodal, captioning, summarization, etc
DOI: 10.3233/JIFS-219394
Journal: Journal of Intelligent & Fuzzy Systems, vol. Pre-press, no. Pre-press, pp. 1-13, 2024