Impact Factor 2024: 1.7
The purpose of the Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology is to foster advancements of knowledge and help disseminate results concerning recent applications and case studies in the areas of fuzzy logic, intelligent systems, and web-based applications among working professionals and professionals in education and research, covering a broad cross-section of technical disciplines.
The journal will publish original articles on current and potential applications, case studies, and education in intelligent systems, fuzzy systems, and web-based systems for engineering and other technical fields in science and technology. The journal focuses on the disciplines of computer science, electrical engineering, manufacturing engineering, industrial engineering, chemical engineering, mechanical engineering, civil engineering, engineering management, bioengineering, and biomedical engineering. The scope of the journal also includes developing technologies in mathematics, operations research, technology management, the hard and soft sciences, and technical, social and environmental issues.
Authors: Adar-Yazar, Elanur | Karatop, Buket | Karatop, Selim Gökcan
Article Type: Research Article
Abstract: Many factors, such as population growth, the development of industry and technology, and increasing production and consumption, disrupt the ecological balance and cause climate change, a global problem. Determining the criteria that cause climate change is very important in finding effective solutions. In this study, the criteria were determined, weighted with the Step-wise Weight Assessment Ratio Analysis (SWARA) method, and ranked according to their priorities with a two-layer fuzzy logic model. The Fuzzy SWARA method allows the evaluation process, complicated by the difficulties and factors involved in decision-making, to be carried out more effectively and realistically. The risk and effect of climate change in Turkiye were evaluated regionally; the developed model, however, also has a wide application area. The research findings revealed that the Marmara and Central Anatolia regions have the highest climate change risk/effect, while Eastern Anatolia has the lowest. Air pollution, population growth, and deforestation carry the highest weights. Suggestions are presented, especially for the priority criteria. In this way, the factors that should be prioritized in solving the environmental problem of climate change have been revealed, making it easier for researchers and managers to provide more effective management.
Keywords: Climate change, two-layer, fuzzy SWARA, Turkiye, risk
DOI: 10.3233/JIFS-236298
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10695-10711, 2024
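The step-wise weighting at the core of SWARA can be sketched in its classic crisp form (the paper's two-layer fuzzy extension is not reproduced here; criteria names and comparative-importance values below are hypothetical):

```python
def swara_weights(s):
    """Classic (crisp) SWARA.

    s[j] is the expert's comparative importance of criterion j relative
    to criterion j-1, with criteria already sorted from most to least
    important (so s[0] = 0 by convention).
    """
    k = [1.0 + sj for sj in s]          # k_1 = 1 because s[0] = 0
    q = [1.0]
    for kj in k[1:]:
        q.append(q[-1] / kj)            # recalculated weight q_j = q_{j-1} / k_j
    total = sum(q)
    return [qj / total for qj in q]     # normalised final weights

# Hypothetical ranking: air pollution > population growth > deforestation
w = swara_weights([0, 0.30, 0.20])
```

Weights come out in the same order as the criteria and sum to one, which is what the subsequent priority ranking consumes.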
Authors: Ramkumar, N. | Karthika Renuka, D.
Article Type: Research Article
Abstract: In brain-computer interface (BCI) applications, it is difficult to obtain enough well-labeled EEG data because of the expensive annotation and time-consuming data capture procedure. Conventional classification techniques that reuse EEG data across domains and subjects suffer significant decreases in silent speech recognition accuracy. This research proposes a supervised domain adaptation Convolutional Neural Network framework (SDA-CNN) to tackle this problem. The objective is to address the distribution divergence issue in speech recognition classification across domains. The framework derives deep features from raw EEG data, and the proposed feature selection method also retrieves statistical features from the corresponding channels. Moreover, it minimizes the distribution divergence caused by variations in people and settings by aligning the correlation of the source and destination EEG feature distributions. To obtain minimal feature distribution divergence and discriminative classification performance, the last stage simultaneously optimizes the classification loss and the adaptation loss. Extensive experiments on the KaraOne dataset demonstrate the method's effectiveness in reducing distribution divergence between the source and target Electroencephalography (EEG) data. The method achieves an average classification accuracy of 87.4% for single-subject classification and a noteworthy average accuracy of 88.6% in cross-subject settings, surpassing existing state-of-the-art techniques on thinking tasks. On the speaking task, the model's median classification accuracy for single-subject categorization is 86.8%, while its average classification accuracy for cross-subject classification is 87.8%. These results underscore SDA-CNN's approach to mitigating distribution discrepancies while optimizing classification performance, offering a promising avenue for enhancing accuracy and adaptability in brain-computer interface applications.
Keywords: Brain-computer interface, supervised domain adaptation, Convolutional Neural Network, Electroencephalography, distribution divergence
DOI: 10.3233/JIFS-237890
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10713-10726, 2024
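"Aligning the correlation of the source and destination feature distributions" is commonly done with a CORAL-style loss: the squared Frobenius distance between the two covariance matrices. The sketch below assumes CORAL is representative of the alignment term; the paper's exact adaptation loss may differ.

```python
import numpy as np

def coral_loss(Xs, Xt):
    """CORAL-style correlation alignment (Sun & Saenko): squared
    Frobenius distance between source and target feature covariance
    matrices, scaled by 1/(4 d^2)."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return np.sum((Cs - Ct) ** 2) / (4 * d * d)

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 8))               # stand-in source features
tgt_same = rng.normal(size=(200, 8))          # same distribution as source
tgt_shift = 3.0 * rng.normal(size=(200, 8))   # scaled: covariance differs
```

In training, this scalar is added to the classification loss so the network is pushed toward features whose second-order statistics match across subjects.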
Authors: Mohana, M. | Subashini, P. | Shukla, Diksha
Article Type: Research Article
Abstract: In recent years, face detection has emerged as a prominent research field within Computer Vision (CV) and deep learning. Detecting faces in images and video sequences remains challenging due to factors such as pose variation, varying illumination, occlusion, and scale differences. Despite the development of numerous deep learning face detection algorithms, the Viola-Jones algorithm, with its simple yet effective approach, continues to be widely used in real-time camera applications. The conventional Viola-Jones algorithm employs AdaBoost for classifying faces in images and videos; the challenge lies in working with cluttered real-time facial images. AdaBoost must search through all possible thresholds for all samples to find the minimum training error when receiving features from Haar-like detectors, and this exhaustive search consumes significant time to discover the best threshold values and optimize feature selection for building an efficient face detection classifier. In this paper, we propose enhancing the conventional Viola-Jones algorithm by incorporating Particle Swarm Optimization (PSO) to improve its predictive accuracy, particularly on complex face images. We leverage PSO in two key areas within the Viola-Jones framework. First, PSO dynamically selects optimal threshold values for feature selection, improving computational efficiency. Second, we adapt the AdaBoost feature selection process within the Viola-Jones algorithm, integrating PSO to identify the most discriminative features for constructing a robust classifier. Our approach significantly reduces feature selection time and search complexity compared to the traditional algorithm, particularly in challenging environments. We evaluated the proposed method on a comprehensive face detection benchmark dataset, achieving an average true positive rate of 98.73% and a 2.1% higher average prediction accuracy compared with both the conventional Viola-Jones approach and contemporary state-of-the-art methods.
Keywords: AdaBoost, Computer Vision (CV), face detection algorithm, particle swarm optimization, Viola-Jones
DOI: 10.3233/JIFS-238947
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10727-10741, 2024
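Replacing AdaBoost's exhaustive threshold scan with PSO means minimising a training-error curve over the threshold range. A minimal 1-D particle swarm illustrates the mechanism (the quadratic "error" function and all constants are toy stand-ins, not the paper's objective):

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=60, seed=1):
    """Minimal 1-D particle swarm: each particle keeps a personal best,
    the swarm shares a global best, and velocities blend both."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest, pval = x[:], [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            v[i] = 0.7 * v[i] + 1.5 * r1 * (pbest[i] - x[i]) + 1.5 * r2 * (gbest - x[i])
            x[i] = min(hi, max(lo, x[i] + v[i]))   # keep inside the threshold range
            fi = f(x[i])
            if fi < pval[i]:
                pbest[i], pval[i] = x[i], fi
                if fi < gval:
                    gbest, gval = x[i], fi
    return gbest, gval

# Toy "training error" curve with its minimum at threshold 0.35
best_t, best_err = pso_minimize(lambda t: (t - 0.35) ** 2, 0.0, 1.0)
```

The swarm evaluates far fewer candidate thresholds than a full scan over every sample value, which is the claimed efficiency gain.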
Authors: Dhivya, S. | Rajeswari, A.
Article Type: Research Article
Abstract: Optimizing spectrum utilization gives primary users of modern wireless communication systems a higher probability of detection. This research studies how the NI-USRP hardware platform can be used to set up greedy cooperative spectrum sensing for cognitive radio networks. It primarily deals with energy detection and eigenvalue-based detection, both widely recognized for their capacity to sense the spectrum without prior knowledge of the primary user signals. The hardware arrangement comprises one transmitter and two cognitive radio receivers. LabVIEW makes deployment simple and maximizes the detection probability across a large sample. The experiments demonstrate that cooperative spectrum sensing is superior to non-cooperative sensing, reducing the risk of detection errors. The research also found that the OR combining rule yields a higher detection probability than the AND rule under the same conditions, underscoring the value of expanding cooperative spectrum sensing to improve overall detection capability. The energy detector operates at SNRs above 10 dB, while the eigenvalue detector continues to work down to an SNR of –9 dB.
Keywords: Cognitive radio, cooperative spectrum sensing, NI-USRP hardware implementation, energy detection, eigenvalue-based detection
DOI: 10.3233/JIFS-239871
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10743-10755, 2024
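Energy detection and hard-decision fusion are simple enough to sketch end to end. The signal model and threshold below are illustrative assumptions, not the paper's calibrated NI-USRP setup:

```python
import numpy as np

def energy_detect(samples, threshold):
    """Energy detector: average signal power compared to a threshold."""
    return np.mean(np.abs(samples) ** 2) > threshold

def fuse(decisions, rule="OR"):
    """Hard-decision fusion at the fusion centre: OR flags the band
    occupied if ANY receiver detects; AND requires ALL to detect."""
    return any(decisions) if rule == "OR" else all(decisions)

rng = np.random.default_rng(7)
n = 4000
noise_only = rng.normal(size=n)                           # receiver 1: deep fade, noise only
with_signal = np.sin(0.2 * np.arange(n)) + rng.normal(size=n)  # receiver 2: tone + noise

d1 = energy_detect(noise_only, threshold=1.25)
d2 = energy_detect(with_signal, threshold=1.25)
```

With one receiver in a fade, the OR rule still declares the primary user present while the AND rule misses it, which is the behaviour the abstract reports.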
Authors: Ding, Xiaoting | Jiang, Jiuchuan | Wei, Mengting | Leng, Yue | Wang, Haixian
Article Type: Research Article
Abstract: Analyzing physiological brain signals under outdoor conditions, such as observing animal behavior, forms the normative basis for outdoor tasks and provides new insights into the cognitive neuronal mechanisms of children's functional brain systems. Here we investigated EEG data from a cohort of seventeen children (6–7 years old, 30-channel EEG) in the resting state and the animal-observation state, using the microstate method combined with source-localization analysis to identify changes in network-level functional interactions. Our study suggests that while observing animal behavior, the microstate parameters (global explained variance, occurrence, coverage, and duration) showed a regular trend, and the dynamic reorganization patterns of children's brains were associated with verbal input networks and higher-order cognitive networks; brain network activity in the frontal and temporal lobes increased, while activity in the insula decreased after observing the animals' behavioral activities. This study may be essential for understanding the effects of animal behavior on changes in healthy children's emotions and has important implications for education.
Keywords: Naturalistic observation task, healthy children, EEG microstates, brain development
DOI: 10.3233/JIFS-235533
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10757-10771, 2024
Authors: Yu, Jiamao | Yu, Ying | Qian, Jin | Han, Xing | Zhu, Feng | Zhu, Zhiliang
Article Type: Research Article
Abstract: Efficient feature representation is key to improving crowd counting performance. CNNs and Transformers are the two feature extraction frameworks commonly used in crowd counting. A CNN excels at hierarchically extracting local features to obtain a multi-scale feature representation of the image, but struggles to capture global features. A Transformer, on the other hand, captures global feature representations by using cascaded self-attention to model long-range dependencies, but often overlooks local detail. Relying solely on a CNN or a Transformer for crowd counting therefore has limitations. In this paper, we propose the TCHNet crowd counting model, which combines the CNN and Transformer frameworks. The model employs the CMT (CNNs Meet Vision Transformers) backbone as its Feature Extraction Module (FEM), hierarchically extracting local and global crowd features with a combination of convolution and self-attention mechanisms. To obtain more comprehensive local spatial information, an improved Progressive Multi-scale Learning Process (PMLP) is introduced into the FEM, guiding the network to learn at different granularity levels. Features from these three granularity levels are then fed into a Multi-scale Feature Aggregation Module (MFAM) for fusion. Finally, a Multi-Scale Regression Module (MSRM) handles the multi-scale fused features, yielding crowd features rich in high-level semantics and low-level detail. Experimental results on five benchmark datasets demonstrate that TCHNet achieves highly competitive performance compared with popular crowd counting methods.
Keywords: Crowd counting, Transformer, CNN, multi-granularity, progressive learning
DOI: 10.3233/JIFS-236370
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10773-10785, 2024
Authors: Huang, Cheng | Hou, Shuyu
Article Type: Research Article
Abstract: To address target detection in the planar grasping task, a position and attitude estimation method based on YOLO-Pose is proposed, aiming to detect the three-dimensional position of a spacecraft's center point and its planar two-dimensional attitude in real time. First, the weights are trained through transfer learning, and the number of key points is optimized by analyzing the spacecraft's shape characteristics to improve the representation of pose information. Second, the CBAM dual-channel attention mechanism is integrated into the C3 module of the backbone network to improve pose estimation accuracy. Furthermore, the Wing Loss function is used to mitigate random offsets in key points, and incorporating the bi-directional feature pyramid network (BiFPN) structure into the neck network further improves target detection accuracy. Experimental results show that the average accuracy of the optimized algorithm has increased, and the average detection speed meets the speed and accuracy requirements of the actual capture task, giving the method practical application value.
Keywords: Pose estimation, planar grasp, convolutional neural network, attention mechanism, feature fusion
DOI: 10.3233/JIFS-234351
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10787-10803, 2024
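Wing Loss, used above for key-point regression, is a published function (Feng et al., CVPR 2018) and can be stated exactly: logarithmic near zero so small key-point offsets still produce useful gradients, linear for large errors. The default `w` and `eps` below are the common choices, not necessarily the paper's settings:

```python
import math

def wing_loss(x, w=10.0, eps=2.0):
    """Wing loss on a key-point residual x:
    w * ln(1 + |x|/eps) for |x| < w, else |x| - C,
    where C is chosen so the two pieces meet at |x| = w."""
    C = w - w * math.log(1 + w / eps)
    ax = abs(x)
    if ax < w:
        return w * math.log(1 + ax / eps)
    return ax - C
```

The log branch amplifies small residuals relative to L1/L2, which is why it counters the random small offsets in predicted key points.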
Authors: Hajiloei, Mehdi | Jahromi, Alireza Fakharzadeh | Zolmani, Somayeh
Article Type: Research Article
Abstract: Density-based methods are significant approaches to outlier detection in high-dimensional datasets, and Local Correlation Integral (LOCI) is among the best of them. To extend LOCI to fuzzy datasets, suitable metrics are needed to measure the distance between two fuzzy numbers. The Euclidean distance is a classic choice in metric learning, but to overcome the curse of dimensionality we also apply a fractional distance metric. After introducing the FLOCI outlier detection algorithm for identifying fuzzy outliers, we study the efficiency of the proposed method through numerical experiments, in which the obtained results were completely successful. We also compared the results with fuzzy versions of the distance-based ABOD and SOD methods to demonstrate the robustness of this approach. A further advantage of the new approach is that it determines an outlierness factor for each data point, which the classical LOCI method does not provide.
Keywords: Outlier data, Multi-granularity deviation factor, Triangular fuzzy number, LOCI method, Fractional distance metric
DOI: 10.3233/JIFS-234448
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10805-10812, 2024
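A fractional distance between triangular fuzzy numbers could look like the sketch below: a Minkowski form with exponent f < 1 averaged over the three defining points. This is a hypothetical illustration of the metric choice, not the paper's exact FLOCI formula (note that with the outer 1/f power, fractional "distances" can violate the triangle inequality):

```python
def frac_distance(A, B, f=0.5):
    """Fractional (p = f < 1) distance between two triangular fuzzy
    numbers A = (a1, a2, a3) and B = (b1, b2, b3), averaged over the
    three defining points. Illustrative only."""
    s = sum(abs(a - b) ** f for a, b in zip(A, B))
    return (s / 3) ** (1 / f)

d = frac_distance((1.0, 2.0, 3.0), (1.5, 2.5, 3.5))
```

Fractional exponents compress large coordinate differences, which is the usual argument for their robustness in high dimensions.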
Authors: Liang, Yonghong | Ge, Xianlong | Jin, Yuanzhi | Zheng, Zhong | Zhang, Yating | Jiang, Yunyun
Article Type: Research Article
Abstract: The rapid development of modern cold chain logistics technology has greatly expanded the sales market for agricultural products from rural areas. However, because agricultural harvests are uncertain, relying on farmers' experience-based estimates for vehicle scheduling can easily lead to low vehicle capacity utilization during pickup and higher transportation costs. This article therefore adopts a non-linear improved grey prediction method based on data transformation to estimate the pickup demand for fresh agricultural products, and then establishes a mathematical model that considers the fixed vehicle usage cost, the damage cost caused by the non-linear transportation damage and decay rate of fresh fruit and vegetables, the cooling cost of refrigerated transportation, and the time window penalty cost. To solve the model, a hybrid simulated annealing algorithm integrating genetic operators was designed. This hybrid algorithm combines local search strategies, such as a selection operator without repeated strings and a crossover operator that preserves the best substring, to improve solving performance. Numerical experiments on a set of benchmark instances showed that the proposed algorithm adapts to problem instances of different scales. On 50-customer instances, the algorithm's gap from the benchmark value is 2.30%, a 7.29% improvement over C&S. Finally, the effectiveness of the grey prediction freight path optimization model was verified through a practical case simulation, achieving logistics cost savings of 9.73%.
Keywords: Pick-up routing problems, fresh logistics, gray prediction, hybrid simulated annealing
DOI: 10.3233/JIFS-235260
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10813-10832, 2024
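The grey prediction underlying the demand estimate is, in its classic form, the GM(1,1) model: fit dx¹/dt + a·x¹ = b on the accumulated series, then difference back. The sketch below is plain GM(1,1) without the paper's non-linear data transformation, and the demand series is invented:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey prediction on a short positive series x0."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                           # accumulated generating series
    z = 0.5 * (x1[1:] + x1[:-1])                 # adjacent means of x1
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response
    x0_hat = np.diff(x1_hat, prepend=0.0)        # back to original scale
    return x0_hat[n:]                            # out-of-sample forecasts

# Hypothetical weekly pickup demand growing ~10% per period
pred = gm11_forecast([100.0, 110.0, 121.0, 133.1, 146.41])[0]
```

GM(1,1) needs only four or five observations, which is why it suits sparse, farmer-reported demand data.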
Authors: Faheem Nikhat, H. | Sait, Saad Yunus
Article Type: Research Article
Abstract: To ensure a safe and pleasant user experience while watching content on YouTube, it is necessary to identify and classify inappropriate content, especially content unsuitable for children. This work concentrates on establishing an efficient system for detecting inappropriate YouTube content. Most prior work relies on manual pre-processing, which is time-consuming, requires manpower, and is not suitable for real-time problems. To address this challenge, we propose an automatic preprocessing scheme that selects appropriate frames and removes unwanted ones, such as noisy and duplicate frames, using the proposed auto-determined k-means (PADK-means) algorithm. PADK-means automatically determines the optimal cluster count instead of requiring a manual specification, solving the cluster-count specification problem of the traditional k-means algorithm. To further improve the system's performance, we use the Proposed Feature Extraction (PFE) method, in which two pre-trained models, DenseNet121 and Inception V3, extract local and global features from each frame. Finally, we employ a proposed double-branch recurrent network (PDBRNN) architecture, comprising a bi-LSTM and a GRU, to classify each video as appropriate or inappropriate. Together, the automatic preprocessing mechanism, feature extraction method, and double-branch RNN classifier yield an impressive accuracy of 97.9%.
Keywords: DenseNet121, inappropriate YouTube content detection, InceptionV3, PADK-means, PFE, PDBRNN
DOI: 10.3233/JIFS-236871
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10833-10845, 2024
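Automatically choosing the cluster count, as PADK-means does, is often done with an elbow-style criterion: increase k until the within-cluster SSE stops falling sharply. The sketch below uses a plain Lloyd's k-means with deterministic initialisation and a simple SSE-drop threshold; it is a stand-in heuristic, not the paper's PADK-means rule:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means with a deterministic spread-out init."""
    order = np.argsort(np.linalg.norm(X, axis=1))
    idx = order[np.linspace(0, len(X) - 1, k).astype(int)]
    C = X[idx].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):              # skip empty clusters
                C[j] = X[labels == j].mean(axis=0)
    sse = ((X - C[labels]) ** 2).sum()
    return labels, sse

def auto_k(X, k_max=6, drop=0.5):
    """Pick the smallest k after which SSE stops falling sharply."""
    sses = [kmeans(X, k)[1] for k in range(1, k_max + 1)]
    for k in range(1, k_max):
        if sses[k] > drop * sses[k - 1]:         # adding a cluster gains little
            return k
    return k_max

# Two well-separated synthetic "frame feature" blobs -> expect k = 2
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(10.0, 0.3, (30, 2))])
```

On frame features, each recovered cluster can then contribute one representative frame, removing near-duplicates.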
Authors: You, Miaona | Zhuang, Sumei | Luo, Ruxue
Article Type: Research Article
Abstract: This study proposes a weighted composite approach to grey relational analysis (GRA) that utilizes numerical weather prediction (NWP) and a support vector machine (SVM), optimized with an improved grey wolf optimization (IGWO) algorithm. Initially, the dimensionality of the NWP data is reduced by t-distributed stochastic neighbor embedding (t-SNE); the sample coefficient weights are then calculated by the entropy-weight method (EWM), and the weighted grey relational degree of the data points is computed for different numerical weather time series. A new weighted composite grey relational degree is formed by combining this with the weighted cosine similarity between the NWP values of the historical day and the day to be forecast. The SVM regression power prediction model is constructed from the time series data. To improve prediction accuracy, the grey relational time series data are chosen as the SVM input variables, and the optimal SVM parameters are found using the IGWO technique. Simulated prediction and analysis based on NWP show that the proposed method significantly improves prediction accuracy: evaluation metrics such as root mean squared error (RMSE), the regression correlation coefficient (R²), mean absolute error (MAE), and mean absolute percentage error (MAPE) all show corresponding improvements, while the computational burden remains relatively low.
Keywords: t-SNE, power forecasting, IGWO, NWP
DOI: 10.3233/JIFS-237333
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10847-10862, 2024
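The grey relational degree at the heart of GRA follows Deng's classic formula: relational coefficients from the absolute differences to a reference series, averaged per candidate. The sketch below is the unweighted textbook version (the paper adds entropy weights and a cosine-similarity composite on top); series values are illustrative:

```python
import numpy as np

def gra_degrees(reference, candidates, rho=0.5):
    """Deng's grey relational degree of each candidate series to the
    reference series (series assumed pre-normalised).

    xi = (d_min + rho*d_max) / (delta + rho*d_max), degree = mean(xi).
    """
    ref = np.asarray(reference, float)
    deltas = np.abs(np.asarray(candidates, float) - ref)   # (m, n)
    dmin, dmax = deltas.min(), deltas.max()
    xi = (dmin + rho * dmax) / (deltas + rho * dmax)
    return xi.mean(axis=1)

degrees = gra_degrees([1.0, 2.0, 3.0],
                      [[1.0, 2.0, 3.0],    # identical historical day
                       [3.0, 1.0, 5.0]])   # dissimilar historical day
```

Higher degrees mark historical days most similar to the target day, and those days' samples feed the SVM.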
Authors: Sundara Kumar, M.R. | Mohan, H.S.
Article Type: Research Article
Abstract: Big Data Analytics (BDA) is an unavoidable technique in today's digital world for dealing with the massive amounts of digital data generated by online and internet sources. The data are kept in repositories and processed by cluster nodes distributed across a wider network. Because of its magnitude and real-time creation, big data processing faces latency and throughput challenges. Modern systems such as Hadoop and Spark manage large amounts of data with HDFS, MapReduce, and in-memory analytics, but the migration cost is higher than usual. Genetic Algorithm-based Optimization (GABO), MapReduce Scheduling (MRS), and data replication have provided answers to this challenge: with the multi-objective solutions a genetic algorithm provides, improved resource utilization and node availability raise processing performance in big data environments. This work develops a novel strategy for enhancing data processing performance in big data analytics, called MapReduce Scheduling Based Non-Dominated Sorting Genetic Algorithm (MRSNSGA). The Hadoop-MapReduce paradigm handles the placement of data in distributed blocks as chunks and their scheduling among the cluster nodes in a wider network. Best-fit solutions with low latency and low access time are extracted from the findings of the various objective solutions. Experiments were carried out in simulation with several inputs of varied location node data and cluster racks. The results show that the speed of data processing was enhanced by 30–35% over previous methodologies, and the optimization approach located the best solutions among the multi-objective solutions at a rate of 24–30% across cluster nodes.
Keywords: Big data analytics, hadoop distributed file system, non-dominated sorting genetic algorithm, map reduce scheduling based non-dominated sorting genetic algorithm, map reduce scheduling, genetic algorithm-based optimization
DOI: 10.3233/JIFS-240069
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10863-10882, 2024
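The "non-dominated sorting" in MRSNSGA refers to extracting Pareto fronts over the competing objectives (here, latency and access time). The first front can be computed directly; the objective values below are invented schedule candidates, not the paper's data:

```python
def pareto_front(points):
    """First non-dominated front for minimisation: keep a point unless
    some other point is <= in every objective and < in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and
            any(q[i] < p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (latency, access-time) pairs for four candidate schedules
schedules = [(1, 5), (2, 2), (5, 1), (4, 4)]
front = pareto_front(schedules)
```

NSGA-II then ranks the remaining points into successive fronts and selects by rank plus crowding distance; only the first-front extraction is sketched here.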
Authors: Chiadamrong, Navee | Suthamanondh, Pisacha
Article Type: Research Article
Abstract: Competition in the global market is growing more intense. Under resource and budget constraints, firms need to achieve their expected goals and satisfy all investment constraints under uncertainty. Selecting the set of projects among candidates that yields the most efficient portfolio requires considerable attention from Decision Makers (DMs), as the decision no longer rests purely on financial terms. The problem becomes a multi-objective problem under uncertainty, in which the financial return and the risk arising from uncertainty must be traded off. Owing to the financial uncertainty, chance-constrained programming is employed in this study to defuzzify and solve the uncertain optimization problem at a confidence level specified by the DMs. Several investment and financial risk measures, the Lower Semi-Variance Index (LSVI), the absolute deviation from the expected FNPV, and the absolute mean-Conditional Value at Risk (CVaR) gap, are then provided, showing their differing characteristics and performance in the obtained results. Since such problems can involve many candidate projects and complex constraints that may grow beyond the reach of exact optimization, a meta-heuristic, the Genetic Algorithm (GA), is introduced, and a decision support tool for investment portfolio selection and optimization is designed and constructed around it. The applicability of the proposed comparative approach and the constructed tool is illustrated through examples.
Keywords: Multi-objective portfolio selection and optimization, risk of uncertainty, absolute mean-conditional value at risk, Lower Semi-Variance Index (LSVI), absolute deviation with the expected FNPV
DOI: 10.3233/JIFS-233036
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10883-10906, 2024
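Of the risk measures named above, CVaR has a simple scenario-based estimator: the expected loss in the worst (1 − α) tail. A minimal sketch (the rounding rule for the tail size is one common convention, and the loss scenarios are invented):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Scenario CVaR: mean of the worst (1 - alpha) fraction of losses."""
    losses = np.sort(np.asarray(losses, dtype=float))
    tail = max(1, int(round((1 - alpha) * len(losses))))
    return losses[-tail:].mean()

# 100 equally likely hypothetical loss scenarios: 1, 2, ..., 100
worst_expected = cvar(list(range(1, 101)), alpha=0.95)
```

Unlike VaR, CVaR looks past the quantile into the tail, which is why it is favoured as a coherent risk measure in portfolio optimization.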
Authors: Parisae, Veeraswamy | Nagakishore Bhavanam, S.
Article Type: Research Article
Abstract: The goal of speech enhancement is to restore clean speech in noisy environments. Acoustic scenarios with low signal-to-noise ratios (SNR) make it quite challenging to extract the target speech from the noise. In this study, we propose a feature-recalibration-based multi-scale convolutional encoder-decoder architecture with a squeeze temporal convolutional network (S-TCN) bottleneck to enhance noisy speech. Each multi-scale convolutional layer in the encoder and decoder is followed by a time-frequency attention (TFA) module. The recalibration-based multi-scale 2D convolution layers extract local and contextual information, and the recalibration network is equipped with a gating mechanism that controls the flow of information among the layers, weighting the scaled features for noise suppression and speech retention. The fully connected (FC) layer in the encoder-decoder bottleneck contains few neurons; it captures global information from the multi-scale 2D convolution layers while reducing parameters. An S-TCN, inspired by the popular temporal convolutional neural network (TCNN), is inserted between the encoder and the decoder to model long-term dependencies in speech. The TFA is a highly efficient network component that operates through two simultaneous attentions, one over time frames and the other over frequency channels; together they exploit positional information to create a two-dimensional attention map that effectively captures the significant time-frequency distribution of speech. On the Common Voice dataset, the proposed model consistently improves on current benchmarks as measured by two widely used objective metrics, PESQ and STOI. It shows significant improvements, with average PESQ and STOI scores increasing by 45.7% and 23.8%, respectively, for seen background noises, and by 43.5% and 21.4% for unseen background noises, compared with the quality of the noisy speech. Tests confirm that the proposed approach outperforms numerous state-of-the-art algorithms.
Keywords: TFA - time-frequency attention, S-TCN - squeeze temporal convolutional networks, MSCL - multi scale convolutional layer, FR - feature recalibration, FRMSC - feature recalibration based multi scale convolution
DOI: 10.3233/JIFS-233312
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10907-10907, 2024
Authors: Wang, Zhiwen | Zhao, Yibin | Shi, Yaoke | Ling, Guobi
Article Type: Research Article
Abstract: Because the factors influencing membrane fouling in membrane bioreactors (MBR) are complex, membrane fouling is difficult to predict accurately. This paper proposes a membrane fouling prediction method based on a multi-strategy improved Aquila Optimizer deep belief network (MAO-DBN), developed to improve the accuracy and efficiency of membrane fouling prediction. First, partial least squares (PLS) is used to reduce the dimensionality of the many membrane fouling factors and improve the algorithm's generalization ability. Second, to address drawbacks of the deep belief network (DBN) such as long training time and easy overfitting, a piecewise map is introduced into the Aquila Optimizer (AO) to improve the uniformity of the population distribution, while adaptive weighting improves convergence speed and prevents the search from falling into local optima. Finally, membrane fouling prediction is carried out using membrane fouling data as the research object. The experimental results show that the proposed method accurately predicts membrane flux, with an 88.45% reduction in RMSE and an 87.53% reduction in MAE compared with the unimproved DBN model, and achieves a prediction accuracy of 98.61%, higher than the comparison models. This can provide a theoretical basis for membrane fouling prediction in the practical operation of membrane water treatment.
Keywords: Membrane bioreactors (MBR), membrane fouling prediction, deep belief network (DBN), aquila optimizer (AO)
DOI: 10.3233/JIFS-233655
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10923-10939, 2024
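The "piecewise map" used to seed the optimizer's population is usually the piecewise linear chaotic map; iterating it yields values that cover [0, 1] more evenly than a short run of pseudo-random draws. The parameter p = 0.4 and seed below are common illustrative choices, not necessarily the paper's:

```python
def piecewise_map(x, p=0.4):
    """One step of the piecewise linear chaotic map on [0, 1]."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    if x < 1 - p:
        return (1 - p - x) / (0.5 - p)
    return (1 - x) / p

def chaotic_population(n, x0=0.123, p=0.4):
    """Generate n chaotic values to initialise an optimiser population."""
    pop, x = [], x0
    for _ in range(n):
        x = piecewise_map(x, p)
        pop.append(x)
    return pop

pop = chaotic_population(200)
```

Each chaotic value is then scaled into the decision-variable bounds to place one initial candidate solution.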
Authors: Wu, Guangli | Yang, Zhijun | Zhang, Jing
Article Type: Research Article
Abstract: Temporal sentence grounding in videos (TSGV) aims to retrieve segments from an untrimmed video that semantically match a given query. Most previous methods focus on learning either local or global query features before performing cross-modal interaction, but ignore the complementarity between the two. In this paper, we propose a novel Multi-Level Interaction Network for temporal sentence grounding in videos. The network explores query semantics at both the phrase and sentence levels: phrase-level features interact with video features to highlight video segments relevant to the query phrases, while sentence-level features interact with video features to learn global localization information. A stacked fusion gate module is designed to effectively capture the temporal relationships and semantic information among video segments. The module also introduces a gating mechanism that lets the model adaptively regulate the degree of fusion between video and query features, further improving the accuracy of target segment prediction. Extensive experiments on the ActivityNet Captions and Charades-STA benchmark datasets demonstrate that the proposed method outperforms the state-of-the-art.
Keywords: Temporal sentence grounding in videos, Multi-level cross-model interactions, Multi-level text representation
DOI: 10.3233/JIFS-234800
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10941-10953, 2024
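The gating mechanism that "adaptively regulates the degree of fusion" typically means a sigmoid gate computed from both modalities, interpolating per dimension between the video and query features. A minimal sketch with hypothetical (randomly initialised) gate weights `W`, `b` — in the real model these are learned:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(video_feat, query_feat, W, b):
    """Per-dimension gate g in (0, 1) decides how much video vs. query
    feature to keep: fused = g * video + (1 - g) * query."""
    g = sigmoid(np.concatenate([video_feat, query_feat]) @ W + b)
    return g * video_feat + (1.0 - g) * query_feat

rng = np.random.default_rng(3)
v, q = rng.normal(size=4), rng.normal(size=4)   # toy video / query features
W, b = rng.normal(size=(8, 4)), rng.normal(size=4)
fused = gated_fusion(v, q, W, b)
```

Because the gate is strictly between 0 and 1, each fused coordinate lies between the corresponding video and query values, so neither modality is ever fully discarded.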
Authors: Sujeeth, T. | Ramesh, C. | Palwe, Sushila | Ramu, Gandikota | Basha, Shaik Johny | Upadhyay, Deepak | Chanthirasekaran, K. | Sivasankari, K. | Rajaram, A.
Article Type: Research Article
Abstract: Solar power generation forecasting plays a vital role in optimizing grid management and stability, particularly in renewable energy-integrated power systems. This research paper presents a comprehensive study on solar power generation forecasting, evaluating traditional and advanced machine learning methods, including ARIMA, Exponential Smoothing, Support Vector Regression, Random Forest, Gradient Boosting, and Physics-based Models. Moreover, we propose an innovative Enhanced Artificial Neural Network (ANN) model, which incorporates Weather Modulation and Leveraging Prior Forecasts to enhance prediction accuracy. The proposed model is evaluated using real-world solar power generation data, and the results demonstrate its superior performance compared to traditional methods and other machine learning approaches. The Enhanced ANN model achieves an impressive Root Mean Square Error (RMSE) of 0.116 and a Mean Absolute Percentage Error (MAPE) of 36.26%. The integration of Weather Modulation allows the model to adapt to changing weather conditions, ensuring reliable forecasts even during adverse scenarios. Leveraging Prior Forecasts enables the model to capture short-term trends, reducing forecasting errors arising from abrupt weather changes. The proposed Enhanced ANN model showcases its potential as a promising tool for precise and reliable solar power generation forecasting, contributing to the efficient integration of solar energy into the power grid and advancing sustainable energy practices.
Keywords: Solar power generation, forecasting, artificial neural network, machine learning, renewable energy, grid management
DOI: 10.3233/JIFS-235612
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10955-10968, 2024
Authors: Wang, Yu
Article Type: Research Article
Abstract: Traditional psychological-awareness approaches to vocal music instruction often disregard the impact of earlier experiences on music learning, which can leave a gap in meeting the needs of individual students. This research focuses on and addresses conventional music learning techniques related to the psychological awareness of each individual. Technological advances in Fuzzy Logic (FL) and Big Data (BD) related to Artificial Intelligence (AI) are presented as a solution to the existing challenges and as an enhancement for personalized music education. The combined approach of a BD-assisted Radial Basis Function with the Takagi-Sugeno (RBF-TS) inference system is able to give personalized vocal music instruction recommendations and foster psychological awareness among students. Mel-Frequency Cepstral Coefficients (MFCC) are applied as a feature extraction technique, beneficial for capturing variant vocal characteristics. The BD-assisted RBF can identify the accuracy of pitch differences and the quality of tone, understand students' choices, and stimulate psychological awareness. Uncertainties are addressed by the TS fuzzy inference system, which delivers personalized vocal training depending on different student preference factors. Using multimodal data, the proposed RBF-TS approach can establish a fuzzy rule base in accordance with personalized emotional elements, enhancing self-awareness and psychological well-being. Validation of the proposed approach using an Instruction Resource Utilization Rate (IRUR) shows significant improvements in student engagement, pitch-accuracy analysis, the frequency distribution of vocal music instruction, and the Mean Square Error (MSE) loss. The proposed algorithm pioneers a novel solution using advanced AI algorithms to address the research challenges in existing personalized vocal music education, promising better student outcomes in the field of music education.
Keywords: Big data, Mel-Frequency Cepstral Coefficients, takagi-sugeno inference system, radial basis function, pitch accurateness, vocal music instruction
DOI: 10.3233/JIFS-236248
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10969-10983, 2024
Authors: Wang, Youwei | Feng, Lizhou
Article Type: Research Article
Abstract: A new bootstrap-aggregating (bagging) ensemble learning algorithm based on classification certainty and semantic correlation is proposed to improve the classification accuracy of ensemble learning. First, two predetermined thresholds are introduced to construct long- and short-text sample subsets, and different deep learning methods are compared to construct the optimal base classifier group for each sample subset. Then, the random sampling method employed in traditional bagging classification algorithms is improved, and a threshold-group-based random sampling method is proposed to obtain the long and short training sample subsets for each iteration. Finally, the sample classification certainty of the base classifiers for different categories is defined, and the semantic correlation information is integrated with the traditional weighted-voting classifier ensemble method to avoid the loss of important information during the sampling process. Experimental results on multiple datasets demonstrate that the algorithm significantly improves text classification accuracy and outperforms typical deep learning algorithms. When the F1 measure is used, the proposed algorithm achieves improvements of approximately 0.082, 0.061 and 0.019 on the CNews dataset over traditional ensemble learning algorithms such as random forest, M_ADA_A_SMV and CNN_SVM_LR. Moreover, it achieves the best F1 values of 0.995, 0.985, and 0.989 on the Spam, CNews, and SogouCS datasets, respectively, when compared with ensemble learning algorithms using different base classifiers.
Keywords: Ensemble learning, weak classifier, text classification, deep learning, random sampling
DOI: 10.3233/JIFS-236422
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 10985-11001, 2024
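The core mechanics described above — bootstrap sampling plus a certainty-weighted vote — can be sketched generically. The toy threshold learners, the 'short'/'long' labels, and the uniform certainty weights below are illustrative stand-ins, not the paper's base classifiers:

```python
import random
from collections import defaultdict

def bootstrap(samples, rng):
    """Draw a bootstrap sample (with replacement) of the same size."""
    return [rng.choice(samples) for _ in samples]

def make_stump(data):
    """A deliberately tiny base learner: threshold at the midpoint of the
    two class means (with fallbacks if a class is missing from the sample)."""
    short = [x for x, y in data if y == 'short'] or [0.0]
    long_ = [x for x, y in data if y == 'long'] or [1.0]
    t = (sum(short) / len(short) + sum(long_) / len(long_)) / 2
    return lambda x: 'short' if x < t else 'long'

def bagged_predict(x, classifiers, certainty):
    """Weighted vote: each base classifier's vote counts in proportion to
    its (externally estimated) classification certainty."""
    scores = defaultdict(float)
    for clf in classifiers:
        scores[clf(x)] += certainty[clf]
    return max(scores, key=scores.get)

rng = random.Random(0)
train = [(0.1, 'short'), (0.2, 'short'), (0.8, 'long'), (0.9, 'long')]
ensemble = [make_stump(bootstrap(train, rng)) for _ in range(5)]
weights = {clf: 1.0 for clf in ensemble}  # uniform certainty for the sketch
```

In the paper's setting the certainty weights would come from each base classifier's per-category confidence rather than being uniform.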
Authors: Ge, Pengqiang | Chen, Yiyang | Wang, Guina | Weng, Guirong | Chen, Hongtian
Article Type: Research Article
Abstract: The active contour model (ACM) is considered one of the most frequently employed models in image segmentation due to its effectiveness and efficiency. However, for images with intensity non-uniformity, the segmentation results of the majority of existing ACMs are possibly inaccurate or even wrong, in the forms of edge leakage, long convergence time and poor robustness. In addition, they usually become unstable in the presence of different initial contours and unevenly distributed intensity. To better solve these problems and improve segmentation results, this paper puts forward an ACM approach using adaptive local pre-fitting energy (ALPF) for the segmentation of images with intensity non-uniformity. Firstly, the pre-fitting functions generate fitted images inside and outside the contour line ahead of iteration, which significantly reduces the convergence time of the level set function. Next, an adaptive regularization function is designed to normalize the energy range of the data-driven term, which improves robustness and stability with respect to different initial contours and intensity non-uniformity. Lastly, an improved length constraint term is utilized to continuously smooth and shorten the zero level set, which reduces the chance of edge leakage and filters out irrelevant background noise. In contrast with newly constructed ACMs, the ALPF model not only improves segmentation accuracy (intersection over union, IOU), but also significantly reduces computation cost (CPU operating time T) while handling three types of images. Experiments also indicate that it is more robust to different initial contours and different noise, and more competent at processing images with intensity non-uniformity.
Keywords: Image segmentation, partial derivative, intensity non-uniformity, optimization
DOI: 10.3233/JIFS-237629
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11003-11024, 2024
Authors: Guan, Hao | Sadati, Seyed Hossein | Talebi, Ali Asghar | Shafi, Jana | Khan, Aysha
Article Type: Research Article
Abstract: A cubic fuzzy graph is a type of fuzzy graph that simultaneously supports two different fuzzy memberships. The study of connectivity in cubic fuzzy graphs is an interesting and challenging topic. This research generalizes the neighborhood connectivity index in a cubic fuzzy graph with the aim of investigating the connection status of nodes with respect to adjacent vertices. In this study, the neighborhood connectivity index was introduced in the form of two numerical and distance values. Some characteristics of the neighborhood connectivity index were investigated in cubic fuzzy cycles, saturated cubic fuzzy cycles, complete cubic fuzzy graphs and complementary cubic fuzzy graphs. The method of constructing a cubic fuzzy graph with an arbitrary neighborhood connectivity index was the other point of this research. The results showed that the neighborhood connectivity index depends on the potential of nodes and the number of neighboring nodes. This research was applied to the Central Bank's data on inter-bank relations, and its results were compared in terms of the neighborhood connectivity index.
Keywords: Cubic fuzzy graph, neighborhood connectivity index, saturated cubic fuzzy cycle, complement cubic fuzzy graph
DOI: 10.3233/JIFS-238021
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11025-11040, 2024
Authors: Zhang, Yu | Wang, Zilong | Zhu, Yongjian | Li, Jianxin
Article Type: Research Article
Abstract: Point cloud object detection is gradually playing a key role in autonomous driving tasks. To address the insensitivity to sparse objects in point cloud object detection, we have made improvements to the voxel encoding and 3D backbone network of PVRCNN++. We have introduced adaptive pooling operations during voxel feature encoding to expand the point cloud information within each voxel, followed by multi-layer perceptrons to extract richer point cloud features. In the 3D backbone network, we have employed adaptive sparse convolution operations to make the backbone network's channel count more flexible, allowing it to accommodate a wider range of input data types. Furthermore, we have integrated Focal Loss to tackle class imbalance in detection tasks. Experimental results on the public KITTI dataset demonstrate significant improvements over PVRCNN++, particularly in pedestrian and bicycle detection tasks. Specifically, we observed a 1% increase in detection accuracy for pedestrians and a 2.1% improvement for bicycles. Our detection performance also surpasses that of other comparative detection algorithms.
Keywords: 3D point cloud object detection, adaptive pooling, sparse convolution, focal loss
DOI: 10.3233/JIFS-238176
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11041-11054, 2024
Authors: Sun, Ling | Jiang, Rong | Wan, Wenbing
Article Type: Research Article
Abstract: In the era of digital intelligence, this paper studies a task allocation algorithm for distributed big data stream group computing, reasonably allocating group computing tasks to meet the needs of massive computing and analysis of distributed big data streams. Following the ideas of swarm intelligence perception and crowdsourcing platforms, a task allocation model for distributed big data stream group computing is constructed to realize the task allocation of group computing. A distributed big data stream group computing task model and a user model are built; user attributes are initialized using the accuracy of the answers submitted by users; the probability that a user will participate in a group computing task is predicted by a logistic regression algorithm, yielding candidate sequences of users for the computing task; and the users' real topics and the corresponding topic accuracy are grasped by capturing candidate users' real topics and evaluating them with an accuracy algorithm. Users who match the subject area are selected, the candidate user sequence is updated, and users are filtered again with full consideration of factors such as information gain, user integrity and cost, so as to obtain the final user sequence and complete the task allocation of group computing. Experiments show that this method can solve the distributed big data stream group computing task allocation problem, achieve high accuracy, reduce cost, and effectively improve information gain.
Keywords: Era of digital intelligence, distributed data stream, computing task allocation, swarm intelligence perception, crowdsourcing mode, user accuracy
DOI: 10.3233/JIFS-238427
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11055-11066, 2024
Authors: Zhang, Xiwen | Xiao, Hui
Article Type: Research Article
Abstract: Non-speech emotion recognition involves identifying emotions conveyed through non-verbal vocalizations such as laughter, crying, and other sound signals, which play a crucial role in emotional expression and transmission. This paper employs a nine-category discrete emotion model encompassing happy, sad, angry, peaceful, fearful, loving, hateful, brave, and neutral emotions. A proprietary non-speech dataset comprising 2337 instances was used, with 384-dimensional feature vectors extracted. The traditional backpropagation neural network (BPNN) algorithm achieved a recognition rate of 87.7% on the non-speech dataset. In contrast, the proposed Whale Optimization Algorithm-Backpropagation Neural Network (WOA-BPNN) algorithm, applied to the self-made non-speech dataset, demonstrated a remarkable accuracy of 98.6%. Notably, even without facial emotional cues, non-speech sounds effectively convey dynamic information, and the proposed algorithm excels at their recognition. The study underscores the importance of non-speech emotional signals in communication, especially with the continuous advancement of artificial intelligence technology, and demonstrates how AI algorithms can be leveraged for high-precision non-speech emotion recognition.
Keywords: Non-speech, emotion recognition, emotion classification, self-made data set, WOA-BPNN
DOI: 10.3233/JIFS-238700
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11067-11077, 2024
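The whale optimization algorithm named above is typically used to search for good neural-network weights before or instead of gradient descent. The stripped-down WOA loop below minimizes a sphere function as a stand-in for a BPNN's training error; the population size, iteration count, bounds, and objective are illustrative assumptions, not the paper's configuration:

```python
import math
import random

def woa_minimize(f, dim, n_whales=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal whale optimization: each whale either performs a shrinking
    encircling move toward the best solution or a logarithmic spiral around it."""
    rng = random.Random(seed)
    whales = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_whales)]
    best = min(whales, key=f)[:]
    for t in range(iters):
        a = 2.0 * (1 - t / iters)  # control parameter shrinks 2 -> 0
        for w in whales:
            for j in range(dim):
                A = 2 * a * rng.random() - a
                C = 2 * rng.random()
                if rng.random() < 0.5:  # shrinking encircling move
                    w[j] = best[j] - A * abs(C * best[j] - w[j])
                else:                   # spiral move around the best
                    l = rng.uniform(-1, 1)
                    w[j] = abs(best[j] - w[j]) * math.exp(l) * math.cos(2 * math.pi * l) + best[j]
                w[j] = min(max(w[j], lo), hi)  # clamp to the search box
            if f(w) < f(best):
                best = w[:]
    return best

# Sphere function standing in for a network's training loss.
sol = woa_minimize(lambda x: sum(v * v for v in x), dim=3)
```

In a WOA-BPNN hybrid, `dim` would be the total number of network weights and `f` the training-set error for a given weight vector.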
Authors: Ding, Xiaomei | Ding, Huaibao | Zhou, Fei
Article Type: Research Article
Abstract: Given that cloud computing is a relatively new field of study, there is an urgent need for comprehensive approaches to resource provisioning and the allocation of Internet of Things (IoT) services across cloud infrastructure. Other challenging aspects of cloud computing include IoT resource virtualization and disseminating IoT services among available cloud resources. To meet deadlines, optimize application execution times, use cloud resources efficiently, and identify the optimal service location, service placement plays a crucial role in installing services on existing virtual resources within a cloud-based environment. To achieve load balance in the fog computing infrastructure and ensure optimal resource allocation, this work proposes a meta-heuristic approach based on the cat swarm optimization method; to distinguish it from other similar works, we name the proposed technique MH-CSO. The algorithm incorporates a resource check parameter to determine the accessibility and suitability of resources in different situations. This conclusion was drawn after evaluating the proposed solution in the iFogSim environment and comparing it with particle swarm and ant colony optimization techniques. The findings demonstrate that the proposed solution successfully optimizes key parameters, including runtime and energy usage.
Keywords: Load balancing, cat swarm optimization, fog computing, resource allocation and IoT
DOI: 10.3233/JIFS-233418
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11079-11094, 2024
Authors: Zhang, Juwei | Wang, Jing | Liu, Mingjun | Li, Zhihui
Article Type: Research Article
Abstract: Assessing the effectiveness of physical education instruction, students' learning, and the feedback received from the teaching process are all vital components of the physical education teaching process in colleges and universities, and improving the quality of physical education instruction in these settings is essential. With its ability to drive the digital revolution of physical education in schools, intelligent technology is bringing about significant changes in the field of education and drawing attention from all walks of life. To assess intelligent technology's impact on physical education instruction in a scientific manner, this study utilizes the latest intelligent analysis and sensor data mining to design an intelligent physical education measurement and evaluation model. The model uses GPS positioning, built-in maps, and gravity sensing to provide real-time feedback on the trajectory, distance, and duration of movement, and then calculates real-time and average movement speed. Because students with different body postures require different speeds to achieve the same effect, this paper randomly selected students with different BMI indices for empirical analysis. The experimental results show that factor analysis extracted four principal common factors with a cumulative contribution rate of 69.5%, and the test-retest reliability of the four dimensions is 0.665–0.862.
Keywords: Intelligent analysis, sensor data mining, physical education, physical measurement and evaluation
DOI: 10.3233/JIFS-235410
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11095-11110, 2024
Authors: Zhou, Ruohan | Chen, Wei | Xie, Congjin
Article Type: Research Article
Abstract: The field of business management involves a large number of data and information sources, including market data, customer data, supply chain data, etc. In order to quantify and analyze different resources, help enterprises better plan and allocate resources, and improve resource utilization efficiency, a digital resource integration algorithm for business management based on cluster analysis is studied. A business management digital resource integration framework is built, comprising a data layer, an integration layer, and a storage layer, to integrate and store data from different business management databases, thereby facilitating unified management and utilization of digital resources by enterprises. The data layer collects data from the different business management databases and stores it according to its source. The integration layer preprocesses the collected data, repairs errors and missing information, and improves data quality; a feature extraction method based on the projection-direction uncorrelation strategy of the labeled power set conversion method effectively extracts the useful feature information of digital resources in enterprise management, and a two-step cluster analysis method then clusters the digital resources according to similar characteristics, completing their classification and integration and improving resource utilization efficiency. The storage layer adopts a secure Information Dispersal Algorithm (IDA) storage model to store the integrated and classified digital resources, ensuring data security and effectively preventing data leakage and illegal access. The experimental results show that the business management digital resource structure integrated by this algorithm is clear, with data redundancy below 8% and a difference below 11%, and data integration takes less than 2.11 minutes, indicating good resource integration ability.
Keywords: Cluster analysis, business administration, digitization, resource integration, data storage, resource sharing
DOI: 10.3233/JIFS-235573
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11111-11123, 2024
Authors: Luo, Zhenrong | Jiang, Lei
Article Type: Research Article
Abstract: In order to construct an evaluation index system suitable for tourism management classroom teaching, this article evaluates the teaching effectiveness of teachers and improves the teaching quality of tourism management courses. Based on developmental evaluation theory, it uses the Analytic Hierarchy Process, Item Response Theory, and the CIPP model to construct an indicator system suitable for tourism management classroom teaching. Then, based on collected data from 5763 students, the reliability and validity of the tool and indicator system were verified, after which the variable of teacher teaching style was introduced to construct an OLS regression model for empirical research. Teacher and student data collected through the platform were summarized and subjected to reliability analysis in SPSS 22.0, using the Cronbach's α coefficient to test the credibility of the evaluation tools: the coefficient is 0.835 for the environmental foundation, 0.735 for resource allocation, 0.747 for the implementation process, and 0.724 for teaching performance, indicating that the ratings have high reliability. The research finds that among the four specific styles, the holistic style has the greatest impact on the environmental foundation and resource allocation, while the legislative style has the greatest impact on the implementation process and teaching performance.
Keywords: Tourism management, AHP method, CIPP model, teaching style
DOI: 10.3233/JIFS-235844
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11125-11138, 2024
Authors: He, Fuyun | Feng, Huiling | Tang, Xiaohu
Article Type: Research Article
Abstract: The segmentation of neuronal morphology in electron microscopy images is crucial for the analysis and understanding of neuronal function. However, most existing segmentation methods are not suitable for challenging datasets where the neuronal structure is contaminated by noise or has interrupted parts. In this paper, we propose a segmentation method based on deep learning to determine the location information of neurons and reduce the influence of image noise in the data. Specifically, we adapt UNet to our neuron dataset by using convolution with BN fusion and multi-input feature fusion; the method is named REDAFNet. The model simplifies the model structure and enhances generalization ability by fusing the convolution and BN layers, while multi-input feature fusion reduces noise interference in the data and enhances the ability to understand and express the data. The method takes a neuron image as input and outputs its pixel segmentation map. Experimental results show that the segmentation accuracy of the proposed method is 91.96%, 93.86% and 80.25% on the ISBI2012 dataset, the U-RISC retinal neuron dataset and the N2DH-GOWT1 stem cell dataset, respectively. Compared with existing segmentation methods, the proposed method extracts more complete feature information and achieves more accurate segmentation.
Keywords: Image segmentation, convolutional neural network, UNet, neuron image
DOI: 10.3233/JIFS-236286
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11139-11151, 2024
Authors: Zhang, Dabin | Yu, Zehui | Ling, Liwen | Hu, Huanling | Lin, Ruibin
Article Type: Research Article
Abstract: As CO2 emissions continue to rise, the problem of global warming is becoming increasingly serious. Accurately predicting carbon emissions provides a robust basis for management decisions on reducing carbon emissions worldwide. However, affected by various factors, the prediction of carbon emissions is challenging due to its nonlinear and nonstationary characteristics. Thus, we propose a combination forecast model, named CEEMDAN-GWO-SVR, which incorporates multiple features to predict trends in China's carbon emissions. First, the impact of online search attention and public health emergencies is considered in carbon emissions prediction; since the impact of different variables on carbon emissions is lagged, the grey relational degree is used to identify the appropriate lag series. Second, irrelevant features are eliminated through RFECV, and to address the feature redundancy of online search attention, we propose a dimensionality reduction method based on keyword classification. Finally, to evaluate the features of the proposed framework, four evaluation indicators are tested in multiple machine learning models; the best-performing model (SVR) is optimized by CEEMDAN and GWO to enhance prediction accuracy. The empirical results indicate that the proposed framework maintains good performance in both multi-scenario and multi-step prediction.
Keywords: Carbon emissions prediction, online search attention, machine learning, time series forecasting
DOI: 10.3233/JIFS-236451
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11153-11168, 2024
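The grey relational degree used above for lag selection has a standard closed form (Deng's formulation). A minimal sketch follows; the initial-value normalization, the resolution coefficient ρ = 0.5, and the example series are conventional illustrative choices, not the paper's data:

```python
def grey_relational_degree(reference, comparison, rho=0.5):
    """Deng's grey relational degree between two equal-length series,
    each normalized by its first value; rho is the resolution coefficient."""
    ref = [x / reference[0] for x in reference]
    cmp_ = [x / comparison[0] for x in comparison]
    deltas = [abs(a - b) for a, b in zip(ref, cmp_)]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0:  # identical normalized series
        return 1.0
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)

# Hypothetical example: an emissions-like series and a lagged covariate.
base = [100, 104, 109, 113, 120]
lagged = [98, 100, 104, 109, 113]
```

Lag selection then amounts to computing the degree for each candidate lag of a covariate and keeping the lag with the highest value.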
Authors: Lin, Guangbo | Duan, Ninggui
Article Type: Research Article
Abstract: Integrating an E-commerce system with an enterprise resource planning (ERP) tool can help a firm improve performance, retain customers, and increase sales. In ERP, integration features can be provided either as developed features or as separate assignments and contributions. Problems with the online platform, improper addresses, rejected payments, and especially apparent transactions are frequent problems for online buyers. An enhanced Adaptive Ant Colony Optimization is utilized to optimize rural E-commerce express transportation: several innovative routes can lower the downlink transportation cost and reach all collecting places with a fast delivery route. Convolutional Neural Networks are utilized to increase the collective innovation of the E-commerce platform and simplify network communication. E-commerce is a mechanism used to market information, services and products; hence, ERP-AACO-CNN has been designed to integrate ERP and E-commerce so that business operations can flow smoothly from the front to the back of the business, covering statistics on sales orders, customers, stock levels, prices and essential performance measurements, as well as automated invoices, frequent communications, financial report preparation, product and service delivery, and material requirements planning. The most significant results will likely benefit businesses that employ the approach as a stimulant for wide-ranging process improvement. In addition, E-commerce is a valuable innovation that connects buyers and sellers in various corners of the globe. For the proposed method's E-commerce system, customer satisfaction is projected at 95.2% accuracy, exceeding fault detection; according to client demand, the E-commerce system is the most accurate development at a given input level, and the future ERP is 64.9% efficient. The proposed approach has a 24.5% random error rate and a 13.2% mean square error rate, and a comparison of E-commerce and enterprise ERP precision with the proposed technique yields 83.8% better results.
Keywords: Adaptive ant colony optimization, enterprise resource planning, convolutional neural networks, E-commerce system
DOI: 10.3233/JIFS-237998
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11169-11184, 2024
Authors: Nihalani, Rahul | Chouhan, Siddharth Singh | Mittal, Devansh | Vadula, Jai | Thakur, Shwetank | Chakraborty, Sandeepan | Patel, Rajneesh Kumar | Singh, Uday Pratap | Ghosh, Rajdeep | Singh, Pritpal | Saxena, Akash
Article Type: Research Article
Abstract: The human-computer interaction process is a vital task in attaining artificial intelligence, especially for a person suffering from hearing or speaking disabilities. Recognizing actions, more traditionally known as sign language, is a common way for them to interact. Computer vision and deep learning models are capable of understanding these actions and can simulate them to build up a sustainable learning process. This sign language mechanism will be helpful both for persons with disabilities and for machines, bridging the gap to achieve intelligence. Therefore, in the proposed work, a real-time sign language system is introduced that is capable of identifying numbers ranging from 0 to 9. The database is acquired from 8 different subjects and processed to obtain approximately 200k samples. Further, a deep learning model named LSTM is used for sign recognition. The results were compared with different approaches and on distinct databases, proving the supremacy of the proposed work with 91.50% accuracy. Collecting signs useful in daily life and further improving the efficiency of the LSTM model are research directions for future work. The code and data are available at https://github.com/rahuln2002/Sign-Language-Recognition-using-LSTM-model.
Keywords: Long Short-Term Memory (LSTM), sign language, computer vision (CV), image processing, deep learning (DL)
DOI: 10.3233/JIFS-233250
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11185-11203, 2024
Authors: Chen, Sijia | Wang, Qingquan | Guo, Yuan
Article Type: Research Article
Abstract: MOTIVATION: With the enhancement of people's awareness of the protection of personal privacy information, how to provide better services on the premise of protecting users' privacy has become an urgent problem to be solved. Building a network intelligent platform for privacy protection and integrated big data mining is therefore necessary. OBJECTIVE: In view of the data privacy leakage, low data mining efficiency and low user satisfaction of existing network platforms, this paper adopts advanced privacy technology to ensure the confidentiality and security of users' personal information, enhance user trust and experience, and better meet the needs of users. METHODS: In order to better protect user privacy, the network intelligent platform should adopt more advanced privacy protection technology. This paper uses a differential privacy algorithm to reduce the risk of data leakage and abuse while ensuring the accuracy and efficiency of data analysis and mining. The design of the platform fully takes its performance into account, realizing secure storage and efficient processing of data with good scalability and flexibility to meet growing user and business needs; the performance of the network intelligent platform is also analyzed by experimental simulation. RESULTS: The experiments indicated that, in a network intelligent platform based on privacy protection and integrated big data mining, the data transmission encryption score was 9.5, the data storage encryption score was 9.8, the access control mechanism score was 9.3, the privacy protection score was 9.6, the response time was 80 ms, the processing speed was 121 GB/h, and the user satisfaction rating was 6.6. CONCLUSION: The network intelligent platform achieves good performance and user-friendliness while ensuring data security and privacy protection, and can efficiently conduct data mining while ensuring data security and privacy.
Keywords: Construction of network intelligent platform, privacy protection, data mining, integrating big data
DOI: 10.3233/JIFS-236017
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11205-11217, 2024
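The abstract above names differential privacy as the core technique but does not describe the authors' implementation. As an illustration only, a minimal sketch of the classic Laplace mechanism, a standard way to satisfy ε-differential privacy when releasing a numeric statistic, might look like this (all variable names and parameter values here are hypothetical, not taken from the paper):

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a statistic with Laplace noise calibrated to sensitivity/epsilon,
    which satisfies epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return float(true_value + np.random.laplace(loc=0.0, scale=scale))

# Illustrative use: privately release the mean age of 1000 users.
rng_ages = np.random.randint(18, 90, size=1000)
true_mean = float(rng_ages.mean())
# Changing one record can shift the mean by at most (90 - 18) / 1000.
sensitivity = (90 - 18) / len(rng_ages)
private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=0.5)
```

Smaller ε gives stronger privacy at the cost of noisier output; the paper's reported accuracy/efficiency trade-off would correspond to a concrete choice of ε not stated in the abstract.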
Authors: Zhang, Yonghong | Li, Shouwei | Li, Jingwei | Tang, Xiaoyu
Article Type: Research Article
Abstract: Electricity market violations affect the overall operation of the electricity market. This paper explores the evolutionary stability strategies of electricity generation enterprises and electricity consumers under two modes, traditional regulation and blockchain regulation, to analyze the mechanism and conditions under which blockchain technology can resolve electricity market violations. The experimental results indicate that the likelihood of consumers accepting electricity and the regulatory capacity of regulatory agencies play a crucial role in determining the violation approach adopted by electricity generation enterprises. Under traditional regulatory models, information asymmetry means regulatory agencies may not detect violations promptly, while electricity consumers may choose to accept violations by power generation companies because of high appeal costs. Blockchain technology enables regulatory agencies to improve their regulatory capabilities by eliminating information asymmetry and reducing consumers' complaint costs, thereby raising the risk for enterprises engaging in market violations and driving the evolutionary game toward an optimal state.
Keywords: Blockchain technology, electricity market, violation regulation, evolutionary game
DOI: 10.3233/JIFS-238041
Citation: Journal of Intelligent & Fuzzy Systems, vol. 46, no. 4, pp. 11219-11233, 2024
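The evolutionary-game analysis described in the abstract above can be illustrated with two-population replicator dynamics. The payoff parameters below are entirely hypothetical (the paper's actual payoff matrix is not given in the abstract); the sketch only shows the general mechanism by which a higher detection probability drives the violation strategy out of the population:

```python
# Hypothetical payoffs, not from the paper: illustrative sketch only.
PROFIT_VIOLATE = 3.0   # generator's extra profit from a violation
FINE = 5.0             # penalty if the violation is caught
APPEAL_COST = 1.0      # consumer's cost of filing a complaint
COMPENSATION = 2.0     # consumer's payoff if a violation is reported
P_DETECT = 0.8         # regulator's detection probability (assumed higher under blockchain)

def replicator_step(x: float, y: float, dt: float = 0.01) -> tuple[float, float]:
    """One Euler step of two-population replicator dynamics.
    x = share of generators that violate; y = share of consumers that report."""
    # Generator expected payoffs: a violation is caught either by the
    # regulator (P_DETECT) or via a consumer report (prob. y otherwise).
    u_violate = PROFIT_VIOLATE - FINE * (P_DETECT + (1 - P_DETECT) * y)
    u_comply = 0.0
    u_gen_avg = x * u_violate + (1 - x) * u_comply
    # Consumer expected payoffs: reporting pays off only if a violation occurred.
    u_report = x * (COMPENSATION - APPEAL_COST) + (1 - x) * (-APPEAL_COST)
    u_accept = 0.0
    u_con_avg = y * u_report + (1 - y) * u_accept
    x += dt * x * (u_violate - u_gen_avg)
    y += dt * y * (u_report - u_con_avg)
    return x, y

x, y = 0.5, 0.5
for _ in range(10_000):
    x, y = replicator_step(x, y)
# With these payoffs, violating is unprofitable in expectation, so the
# violation share x decays toward 0; reporting then also loses value.
```

Lowering P_DETECT or APPEAL_COST changes which rest point the dynamics converge to, which mirrors the abstract's claim that blockchain shifts the game by raising detection capability and cutting complaint costs.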
IOS Press, Inc.
6751 Tepper Drive
Clifton, VA 20124
USA
Tel: +1 703 830 6300
Fax: +1 703 830 2300
sales@iospress.com
For editorial issues, like the status of your submitted paper or proposals, write to editorial@iospress.nl
IOS Press
Nieuwe Hemweg 6B
1013 BG Amsterdam
The Netherlands
Tel: +31 20 688 3355
Fax: +31 20 687 0091
info@iospress.nl
For editorial issues, permissions, book requests, submissions and proceedings, contact the Amsterdam office info@iospress.nl
Inspirees International (China Office)
Ciyunsi Beili 207(CapitaLand), Bld 1, 7-901
100025, Beijing
China
Free service line: 400 661 8717
Fax: +86 10 8446 7947
china@iospress.cn
For editorial issues, like the status of your submitted paper or proposals, write to editorial@iospress.nl
If you need help with publishing or have any suggestions, write to: editorial@iospress.nl