Impact Factor 2024: 1.7
The purpose of the Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology is to foster the advancement of knowledge and to disseminate results concerning recent applications and case studies in the areas of fuzzy logic, intelligent systems, and web-based applications among working professionals and professionals in education and research, covering a broad cross-section of technical disciplines.
The journal will publish original articles on current and potential applications, case studies, and education in intelligent systems, fuzzy systems, and web-based systems for engineering and other technical fields in science and technology. The journal focuses on the disciplines of computer science, electrical engineering, manufacturing engineering, industrial engineering, chemical engineering, mechanical engineering, civil engineering, engineering management, bioengineering, and biomedical engineering. The scope of the journal also includes developing technologies in mathematics, operations research, technology management, the hard and soft sciences, and technical, social and environmental issues.
Article Type: Other
Citation: Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 1937-1937, 2019
Authors: Thampi, Sabu M. | El-Alfy, El-Sayed M.
Article Type: Editorial
Keywords: Soft computing, deep learning, machine learning, fuzzy systems, networked systems, image and video processing, pattern recognition, signal and speech processing, natural language processing, security
DOI: 10.3233/JIFS-169905
Citation: Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 1939-1944, 2019
Authors: Mohan, Jesna | Nair, Madhu S.
Article Type: Research Article
Abstract: The ever-growing volume of video data on the internet has raised the challenge of efficient storage and retrieval of multimedia data. Video summarization addresses this problem by extracting the interesting parts of a video. These summaries capture the essential content of the original video and enable viewers to pick out interesting videos from huge repositories by viewing a compact representation. To address the challenge of managing video data, a new method for static video summarization is proposed here which describes input videos using a precise yet meaningful subset of frames. The method utilizes HOG descriptors of Gabor maps of the input frames. A high-level representation of the HOG descriptors is created using sparse auto-encoders (SAE). The final summary of the video is formed from candidate frames obtained after K-means clustering of these feature vectors. The summarization step is preceded by redundancy elimination to reduce the computational burden. The method provides better results compared to other state-of-the-art video summarization methods.
Keywords: Video Summarization, Gabor maps, Histogram of Oriented Gradients (HOG), Sparse autoencoder (SAE), K-means clustering
DOI: 10.3233/JIFS-169906
Citation: Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 1945-1955, 2019
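The keyframe-selection step described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: random vectors stand in for the SAE-encoded HOG-of-Gabor descriptors, and the frame count, feature dimension, and `select_keyframes` helper are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_keyframes(features, n_keyframes):
    """Cluster per-frame feature vectors and return the indices of the
    frames closest to each cluster centroid (the summary candidates)."""
    km = KMeans(n_clusters=n_keyframes, n_init=10, random_state=0).fit(features)
    picks = []
    for center in km.cluster_centers_:
        # frame whose feature vector is nearest this centroid
        picks.append(int(np.argmin(np.linalg.norm(features - center, axis=1))))
    return sorted(set(picks))

rng = np.random.default_rng(0)
feats = rng.normal(size=(120, 64))   # stand-in: 120 frames, 64-d encodings
summary = select_keyframes(feats, 5)
```

In the paper's pipeline the inputs would instead be the sparse-autoencoder codes of HOG descriptors computed on Gabor maps, with redundant frames removed beforehand.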
Authors: Mohan, Vysakh S. | Vinayakumar, R. | Sowmya, V. | Soman, K.P.
Article Type: Research Article
Abstract: Deep Rectified System for High-speed Tracking in Images (DRSHTI) is a unified open-source web portal developed for object detection in images. It aims to be a platform where end users can perform object detection on images without going through the hassle of debugging countless lines of code or setting up the right environment for computer vision tasks. By making the platform open-source, this work targets beginners in computer vision, helping them form a basic understanding of object detection as an artificial intelligence task. This is made possible by releasing source code, tools and usage tutorials via GitHub. The portal offers two detection pipelines based on Faster-RCNN: a model to detect ground vehicles in aerial images and a model to detect everyday objects across 37 classes in ordinary images. The former model is trained on the VEDAI dataset, achieved 98.6% accuracy during testing, and is offered as a proof of concept showcasing the model's ability to perform small-target detection, while the latter model is trained on the PASCAL VOC dataset. Making the project open-source also aims at bringing more development and tweaking to the existing vehicle detection module. The web portal can be accessed via https://drshti.github.io, where users can upload images and receive annotations of the objects present in them. Tutorials and source code can be found at https://github.com/vyzboy92/Object-Detection-Net.
DOI: 10.3233/JIFS-169907
Citation: Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 1957-1965, 2019
Authors: Bansod, Suprit | Nandedkar, Abhijeet
Article Type: Research Article
Abstract: Anomaly detection in crowds is a widely addressed problem in the field of computer vision and an essential part of video surveillance and security. In surveillance videos, very little information about anomalous behaviors is available, so it becomes difficult to identify such activities. In this work, a transfer learning technique is used to train the network. A convolutional neural network (CNN) based on a pre-trained VGG16 model is used to learn spatial appearance features for anomalous and normal patterns. Two approaches are explored to detect anomalies: i) a homogeneous approach and ii) a hybrid approach. In the homogeneous approach, the pre-trained network is fine-tuned on each dataset separately, and a single dataset is considered during testing. In the hybrid approach, the pre-trained network is fine-tuned on one dataset and then further fine-tuned on another. The performance of the proposed system is verified on standard benchmark datasets for anomaly detection such as UCSD and UMN, and its results are compared with existing deep learning approaches.
Keywords: Anomaly detection, surveillance, transfer learning, CNN, fine-tune
DOI: 10.3233/JIFS-169908
Citation: Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 1967-1975, 2019
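The transfer-learning idea in the abstract above — a frozen pre-trained feature extractor with a small classifier trained on top — can be sketched without a deep learning framework. This is a hedged toy version, not the paper's method: a fixed random ReLU projection stands in for the frozen VGG16 convolutional stack, synthetic vectors stand in for frames, and the names `extract`, `W_frozen` are invented for the illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
W_frozen = rng.normal(size=(256, 32))           # stand-in for frozen pre-trained weights

def extract(frames):
    """Frozen forward pass: features are computed but never re-trained."""
    return np.maximum(frames @ W_frozen, 0.0)   # ReLU projection

# synthetic "normal" (0) vs "anomalous" (1) frame vectors
X = rng.normal(size=(200, 256))
y = (X[:, 0] > 0).astype(int)

# only the small classification head is trained, as in fine-tuning the top layers
head = LogisticRegression(max_iter=1000).fit(extract(X), y)
acc = head.score(extract(X), y)
```

The real system fine-tunes convolutional layers rather than freezing them entirely, but the division of labor — reuse learned features, train a small task-specific head — is the same.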
Authors: Vijayan, Vineetha | Sherly, Elizabeth
Article Type: Research Article
Abstract: One of the major causes of road accidents all over the world is the drowsiness of the driver. Measuring a driver's consciousness directly is a complex task. This work proposes three deep neural architectures for learning facial features, consisting of 68 attributes, from the RGB video input of a driver. The experiments are conducted with three different CNN models: ResNet50, VGG16 and InceptionV3. These three networks are combined for representation learning, and their features are put together to form a feature-fused architecture (FFA). The trained features, along with facial movements such as eye blinking, yawning and head swaying, are then fed to a softmax classifier to classify the drowsiness state of the driver. Of the three networks and the FFA, InceptionV3 shows 78% accuracy.
Keywords: Convolutional neural networks, drowsiness, deep learning
DOI: 10.3233/JIFS-169909
Citation: Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 1977-1985, 2019
Authors: Kumar, Aiswarya S. | Nair, Jyothisha J.
Article Type: Research Article
Abstract: Deep neural networks have gained immense traction in almost every field, including computer vision, natural language processing and biomedical informatics. Among these networks, autoencoders are popular for dimensionality reduction, learning a representation of an unlabeled dataset. The usual way of training such networks is to pre-train them in a layer-wise fashion and subsequently fine-tune the whole stack in a supervised manner. In this paper, a pair-wise training strategy is proposed to determine optimum model parameters, reducing both the training time and the complexity of training a convolutional autoencoder without compromising its accuracy. The proposed approach works in a fully unsupervised manner and has been tested on the MNIST, CIFAR10 and CIFAR100 datasets, where it improves training time by an average of 25%.
Keywords: Deep learning, convolutional autoencoder, pair-wise training
DOI: 10.3233/JIFS-169910
Citation: Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 1987-1995, 2019
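The layer-wise pre-training baseline that the abstract above contrasts against can be illustrated with linear autoencoders, where each layer's optimal encoder is given in closed form by the SVD. This is a simplified sketch, not the paper's convolutional, pair-wise scheme: the linear layers, dimensions, and helper names are assumptions made for the illustration.

```python
import numpy as np

def train_linear_ae_layer(X, k):
    """Fit one linear autoencoder layer: the optimal rank-k linear
    reconstruction is given by the top-k right singular vectors."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                      # encoder weights, shape (d, k)

def greedy_pretrain(X, layer_dims):
    """Layer-wise pre-training: each new layer is trained to encode the
    codes produced by the previously trained layer."""
    weights, H = [], X
    for k in layer_dims:
        W = train_linear_ae_layer(H, k)
        weights.append(W)
        H = (H - H.mean(axis=0)) @ W     # codes feeding the next layer
    return weights, H

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))
ws, codes = greedy_pretrain(X, [16, 8])  # 32 -> 16 -> 8 stack
```

The paper's contribution is to train such layers in pairs rather than one at a time, which is where the reported ~25% training-time saving comes from.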
Authors: Anubha Pearline, S. | Sathiesh Kumar, V. | Harini, S.
Article Type: Research Article
Abstract: Plant species recognition from images or videos is challenging due to the large diversity of plants and variations in orientation, viewpoint, background clutter, etc. In this paper, plant species recognition is carried out using two approaches: a traditional method and a deep learning approach. In the traditional method, feature extraction is carried out using Hu moments (shape features), Haralick texture and local binary patterns (LBP) (texture features), and color channel statistics (color features). The extracted features are classified using different classifiers (linear discriminant analysis, logistic regression, classification and regression trees, naïve Bayes, k-nearest neighbor, random forest and a bagging classifier). Different deep learning architectures are also tested in the context of plant species recognition. Three standard datasets (Folio, Swedish leaf and Flavia) and one real-time dataset (Leaf12) are used. In the traditional method, the feature vector obtained by combining color channel statistics+LBP+Hu+Haralick with a random forest classifier on the Leaf12 dataset resulted in a plant recognition accuracy (rank-1) of 82.38%. The VGG16 Convolutional Neural Network (CNN) architecture with logistic regression resulted in an accuracy of 97.14% on the Leaf12 dataset. Accuracies of 96.53%, 96.25% and 99.41% are obtained for the Folio, Flavia and Swedish leaf datasets using the VGG19 CNN architecture with logistic regression as the classifier. The VGG (Visual Geometry Group) CNN models provided a higher accuracy rate than the traditional methods.
Keywords: Plant species recognition, deep learning, convolutional neural network, machine learning classification
DOI: 10.3233/JIFS-169911
Citation: Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 1997-2004, 2019
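The traditional pipeline in the abstract above — handcrafted features fed to a random forest — can be sketched using just one of its feature families, color channel statistics. This is a toy illustration under stated assumptions: synthetic random "leaf images" replace the real datasets, only per-channel mean and standard deviation are computed (no Hu, Haralick, or LBP features), and the `color_stats` helper is an invention for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def color_stats(img):
    """Color channel statistics: per-channel mean and std over the image,
    one of the handcrafted feature families in the traditional pipeline."""
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

rng = np.random.default_rng(0)
# synthetic "leaf images", H x W x RGB; class 0 is made greener than class 1
imgs = rng.uniform(size=(60, 16, 16, 3))
labels = rng.integers(0, 2, size=60)
imgs[labels == 0, :, :, 1] += 0.5

X = np.stack([color_stats(im) for im in imgs])   # 6-d feature vectors
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
acc = clf.score(X, labels)
```

In the paper this vector is concatenated with shape and texture descriptors before classification; the concatenation step itself is just `np.concatenate` over the per-family feature vectors.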
Authors: Jagannathan, S. | Sathiesh Kumar, V. | Meganathan, D.
Article Type: Research Article
Abstract: Elephants play a major role in maintaining the ecosystem. When elephants move out of their corridors in search of food and water, human-elephant conflicts rise. These conflicts take different forms: destruction of fields by elephants, elephants being run over by trains, elephants getting electrocuted, etc. Preventing elephants from entering human living areas would reduce human-elephant conflict to a great extent. In this paper, the performance of conventional image processing techniques, a custom Convolutional Neural Network (CNN) architecture and transfer-learning-based CNN architectures is investigated in the context of a human-elephant conflict management system. On identifying an elephant in the acquired video feed, sounds (humming of bees, tiger growls) are generated to deter the elephant from progressing into human living areas. It is observed that the convolutional neural networks produced a higher accuracy, or prediction rate, than the conventional image processing techniques. The VGG16 CNN model produced an accuracy of 94% with an average computation time between 1.5 and 2.1 s. For real-time implementation, the SqueezeNet CNN model is used because of its lower computation time (between 0.02 and 0.05 s) and moderate accuracy of 92.67%.
Keywords: Image classification, Haralick feature, histogram of oriented gradients, support vector machines, convolutional neural networks, Transfer learning, VGG-16, SqueezeNet, Raspberry Pi
DOI: 10.3233/JIFS-169912
Citation: Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 2005-2013, 2019
Authors: Abraham, Bejoy | Nair, Madhu S.
Article Type: Research Article
Abstract: Grading of prostate cancer is usually done using Transrectal Ultrasound (TRUS) biopsy followed by microscopic examination of histological images by a pathologist. TRUS biopsy is a painful procedure that can lead to severe infections. In the recent past, Magnetic Resonance Imaging (MRI) has emerged as a modality that can be used to diagnose prostate cancer without subjecting patients to biopsies. A novel method for grading prostate cancer from MRI utilizing Convolutional Neural Networks (CNN) and a LADTree classifier is explored in this paper. T2-weighted (T2W), high B-value Diffusion Weighted (BVALDW) and Apparent Diffusion Coefficient (ADC) MRI images obtained from the training dataset of the PROSTATEx-2 2017 challenge are used for this study. A quadratic weighted Cohen's kappa score of 0.3772 is attained in predicting the different grade groups of cancer, along with a positive predictive value of 81.58% in predicting high-grade cancer. The method also attained an unweighted kappa score of 0.3993, and a weighted Area Under the Receiver Operating Characteristic Curve (AUC), accuracy and F-score of 0.74, 58.04 and 0.56, respectively. These results are better than those obtained by the winning method of the PROSTATEx-2 2017 challenge.
Keywords: Prostate cancer, CNN, LADTree, Gleason grading, Transfer learning
DOI: 10.3233/JIFS-169913
Citation: Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 2015-2024, 2019
IOS Press, Inc.
6751 Tepper Drive
Clifton, VA 20124
USA
Tel: +1 703 830 6300
Fax: +1 703 830 2300
sales@iospress.com
For editorial issues, like the status of your submitted paper or proposals, write to editorial@iospress.nl
IOS Press
Nieuwe Hemweg 6B
1013 BG Amsterdam
The Netherlands
Tel: +31 20 688 3355
Fax: +31 20 687 0091
info@iospress.nl
For editorial issues, permissions, book requests, submissions and proceedings, contact the Amsterdam office info@iospress.nl
Inspirees International (China Office)
Ciyunsi Beili 207(CapitaLand), Bld 1, 7-901
100025, Beijing
China
Free service line: 400 661 8717
Fax: +86 10 8446 7947
china@iospress.cn