The field of intelligent decision technologies is interdisciplinary in nature, bridging computer science with its development of artificial intelligence, information systems with its development of decision support systems, and engineering with its development of systems. IDT seeks to attract research that is focused on applied technology while exploring its development at a fundamental level. IDT seeks an interchange of research on intelligent systems and intelligent technologies which enhance or improve decision-making in industry, government and academia. IDT publishes research on all aspects of intelligent decision technologies, from fundamental development to the applied system. The journal is concerned with theory, design, development, implementation, testing and evaluation of intelligent decision systems that have the potential to support decision making in the areas of management, international business, finance, accounting, marketing, healthcare, military applications, production, networks, traffic management, crisis response, human interfaces and other applied fields.
The target audience is researchers in computer science, business, commerce, health science, management, engineering and information systems who develop and apply intelligent decision technologies. Authors are invited to submit original unpublished work that is not under consideration for publication elsewhere.
Authors: Abdalla, Mohammed | AbdelAziz, Amr M.
Article Type: Research Article
Abstract: Infectious disease outbreaks such as COVID-19, the Spanish flu, and Ebola kill millions of people and threaten governments and the medical community globally. Infectious diseases also hinder economic progress, because people are afraid to go to their workplaces. It is therefore imperative to implement interventions that lessen the spread of infectious diseases, and contact tracing can be viewed as one such mitigation strategy. Meanwhile, users’ trace data sets grow exponentially larger over time, making it difficult to run contact tracing queries over them. In this paper, a novel Spark contact-tracing query processing method is proposed. This method determines whether users are suspected cases by analyzing their paths based on two variables: nearby social distance and exposure time. Additionally, the developed method makes full use of the Spark framework to address scalability issues and effectively respond to contact tracing inquiries over a wide range of trajectories.
Keywords: Big data, COVID-19, spark, contact tracing, infectious diseases, GPS traces, preventive medicine, sustainability
DOI: 10.3233/IDT-240945
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1635-1650, 2024
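The two-variable check described in the abstract above (nearby social distance plus exposure time) can be sketched in plain Python. This is a minimal illustration under assumed parameters (a 2 m distance threshold, time-aligned GPS samples every 60 s, a 15-minute exposure threshold); the function names are hypothetical and the paper's Spark implementation is not reproduced here:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in metres."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def exposure_seconds(traj_a, traj_b, dist_m=2.0, dt=60.0):
    """Seconds the two traces spend within dist_m of each other.

    Each trace is a list of (lat, lon) samples; both are assumed to be
    sampled at the same instants, one sample every dt seconds.
    """
    close = sum(
        1
        for (la0, lo0), (la1, lo1) in zip(traj_a, traj_b)
        if haversine_m(la0, lo0, la1, lo1) <= dist_m
    )
    return close * dt

def is_suspected(traj_a, traj_b, dist_m=2.0, dt=60.0, min_exposure_s=900.0):
    """Flag a contact if cumulative close time reaches the exposure threshold."""
    return exposure_seconds(traj_a, traj_b, dist_m, dt) >= min_exposure_s
```

In a Spark setting the same per-pair predicate would be applied after a spatial join of trajectory partitions; the sketch above only shows the core decision rule.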
Authors: Gedam, Ajit Narendra | Ajalkar, Deepika A. | Rumale, Aniruddha S.
Article Type: Research Article
Abstract: PROBLEM: Lung cancer is a dangerous and deadly disease with high mortality and reduced survival rates. However, lung nodule diagnosis performance is limited by the nodules’ heterogeneity in texture, shape, and intensity. Furthermore, the high degree of resemblance between lung nodules and the surrounding tissues makes building a reliable detection model more difficult. Moreover, although there are several methods for diagnosing and grading lung nodules, accurate detection under variations in intensity remains a challenging task. AIM & METHODS: For the detection and grading of lung nodules, this research proposes an Eyrie Flock Optimization-based Deep Convolutional Neural Network (Eyrie Flock-DeepCNN). The proposed Eyrie Flock Optimization integrates the fishing characteristics of Eyries and the flocking characteristics of Tuskers to accelerate the convergence speed, which in turn enhances the training process and improves the generalization performance of the DeepCNN model. In the Eyrie Flock optimization, two optimization problems are considered: (i) segmenting the lung nodule and (ii) fine-tuning the hyperparameters of the Deep CNN. RESULTS: The capability of the newly developed method is evaluated in terms of Specificity, Sensitivity, and Accuracy, attaining 98.96%, 95.21%, and 94.12%, respectively. CONCLUSION: The Deep CNN was efficiently utilized with the help of the Eyrie Flock optimization algorithm, which enhances the efficiency of the classifier and the convergence of the model.
Keywords: Nodule detection, optimization, deep learning, feature extraction, lung nodule segmentation
DOI: 10.3233/IDT-240605
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1651-1673, 2024
Authors: Borhade, Ratnaprabha Ravindra | Barekar, Sheetal Sachin | Ohatkar, Sharada N. | Mathurkar, Piyush K. | Borhade, Ravindra Honaji | Bangare, Pushpa Manoj
Article Type: Research Article
Abstract: Encephalopathy can result from epilepsy, which is defined as recurring seizures. Around the world, almost 65 million people suffer from epilepsy. Because an epileptic seizure is a critical clinical event that clearly disrupts everyday activities, it is difficult to predict. The electroencephalogram (EEG) has been the established signal for the clinical evaluation of brain activity. So far, several methodologies for the detection of epileptic seizures have been proposed but have not been effective. To bridge this gap, a powerful model for epileptic seizure prediction using ResneXt-LeNet is proposed. Here, a Kalman filter is used to preprocess the EEG signal to reduce noise levels in the signal. Then, feature extraction is performed to extract statistical and spectral features. Feature selection is done using Fuzzy information gain, which suggests appropriate choices for further processing, and finally, seizure prediction is done using hybrid ResneXt-LeNet, a combination of ResneXt and LeNet. The proposed ResneXt-LeNet achieved excellent performance with a maximum accuracy of 98.14%, a maximum sensitivity of 98.10%, and a specificity of 98.56%.
Keywords: Electroencephalogram (EEG), Kalman filter, epileptic seizure, LeNet, ResneXt
DOI: 10.3233/IDT-240923
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1675-1693, 2024
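The preprocessing step named in the abstract above, Kalman filtering of the EEG signal, can be illustrated with a scalar Kalman filter under an assumed random-walk signal model. The variances `q` and `r` below are hypothetical defaults for illustration, not values from the paper:

```python
def kalman_denoise(samples, q=1e-3, r=0.5):
    """Scalar Kalman filter for a noisy 1-D signal.

    Assumed state model: x_k = x_{k-1} + w (process variance q),
    measurement model: z_k = x_k + v (measurement variance r).
    Returns the filtered estimates, one per input sample.
    """
    x, p = samples[0], 1.0           # initial state estimate and covariance
    out = [x]
    for z in samples[1:]:
        p = p + q                    # predict: covariance grows by process noise
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the measurement residual
        p = (1 - k) * p              # posterior covariance
        out.append(x)
    return out
```

With `q` much smaller than `r`, the gain stays small and the filter smooths aggressively, which is the intent of the noise-reduction step described above.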
Authors: Fatima, Zameer | Dhaliwal, Parneeta | Gupta, Deepak
Article Type: Research Article
Abstract: The rapid advancements in deep learning algorithms and the availability of large, open-access databases of fundus and OCT (optical coherence tomography) images have contributed greatly to advancements in computer-assisted diagnostics and the localization of various disorders affecting the retina. This study offers a comprehensive examination of retinal diseases and various recent applications of deep learning strategies for categorising key retinal conditions, such as diabetic retinopathy, glaucoma, age-related macular degeneration, choroidal neovascularization, retinal detachment, media haze, myopia, and dry eyes. Open-access datasets continue to play a critical role in the advancement of digital health research and innovation within the field of ophthalmology. Thirty open-access databases containing fundus and OCT images, which are often utilised by researchers, were carefully examined in this work. A summary of these datasets was created, which includes the number of images, dataset size, and supplementary items in each dataset, as well as information on the eye disease covered and the country of origin. We also discussed the challenges and limitations of novel deep learning models. Finally, we discussed some important insights and provided directions for future research opportunities.
Keywords: Optical coherence tomography (OCT), fundus, deep learning, retinal disease, classification, image dataset
DOI: 10.3233/IDT-241007
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1695-1710, 2024
Authors: Zhao, Mei
Article Type: Research Article
Abstract: Current cold chain logistics management and distribution routes generally face problems such as environmental pollution and high costs. Therefore, this study proposes an improved fireworks-artificial bee colony algorithm for optimizing delivery paths. Firstly, the artificial bee colony algorithm was improved by using neighborhood search. Secondly, for the optimal solution, the fireworks explosion operator was used for the final result search. The benchmark function validation showed that the proposed algorithm outperformed other algorithms in terms of optimal value, average value, and standard deviation. In the simulation analysis of a fresh food enterprise in China, the total delivery cost was reduced by an average of 3.30% and 4.93% compared to other algorithms, with and without considering carbon emissions, respectively. The proposed algorithm had more advantages in reducing quality loss, pollutant emissions, and carbon emissions. Therefore, this improved fireworks-artificial bee colony algorithm can effectively reduce the economic costs and environmental pollution of cold chain logistics distribution routes. It ensures economic benefits while considering social benefits, achieving optimal path planning for logistics distribution.
Keywords: Logistics management, artificial bee colony algorithm, fireworks algorithm, neighborhood search, path optimization
DOI: 10.3233/IDT-240576
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1711-1726, 2024
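The neighborhood-search improvement to the artificial bee colony (ABC) mentioned in the abstract above builds on ABC's standard neighbourhood move, v_j = x_j + φ(x_j − x_kj) with φ drawn from [−1, 1], followed by greedy selection. A minimal sketch under assumed names (the paper's specific neighborhood operator and fireworks step are not reproduced):

```python
import random

def abc_neighbor(food, flock, rng=random):
    """One ABC neighbourhood move: perturb a single dimension of `food`
    toward/away from a randomly chosen other food source in `flock`:
    v_j = x_j + phi * (x_j - x_k_j), with phi uniform in [-1, 1]."""
    j = rng.randrange(len(food))
    other = rng.choice([f for f in flock if f is not food])
    phi = rng.uniform(-1.0, 1.0)
    v = list(food)
    v[j] = food[j] + phi * (food[j] - other[j])
    return v

def greedy_select(food, candidate, fitness):
    """Keep the candidate only if it improves the (minimised) fitness."""
    return candidate if fitness(candidate) < fitness(food) else food
```

Because greedy selection never accepts a worse solution, the best fitness in the flock is non-increasing over iterations; the fireworks explosion operator described above would then intensify the search around that best solution.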
Authors: Jin, Xin
Article Type: Research Article
Abstract: In response to the low efficiency, high cost, and serious environmental pollution of traditional logistics scheduling methods, this article introduces metaheuristic algorithms into intelligent logistics scheduling and environmentally sustainable development. Taking the metaheuristic algorithm as the research object, and based on an in-depth analysis of its core ideas and unique advantages, the article combines intelligent logistics scheduling with related theories and methods such as green environmental protection, and constructs an intelligent logistics scheduling model based on a metaheuristic algorithm. The article experimentally compares the effects of different metaheuristic algorithms on total driving distance, transportation time, fuel consumption, and carbon emissions. The experimental findings indicated that the ant colony optimization (ACO) algorithm performed best among them. The performance of traditional and metaheuristic algorithms was also compared: the computational accuracy of the metaheuristic algorithm reached 97%, better than the traditional 80%. The experimental results show that the metaheuristic algorithm is an efficient and feasible method that can improve the efficiency of logistics scheduling and environmental sustainability.
Keywords: Metaheuristic algorithm, intelligent logistics scheduling, environmental sustainability, path optimization, performance test
DOI: 10.3233/IDT-240280
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1727-1740, 2024
Authors: Marapatla, Ajay Dilip Kumar | E, Ilavarasan
Article Type: Research Article
Abstract: A secured IoT routing model against different attacks has been implemented to detect attacks such as replay attacks, version attacks, and rank attacks. These attacks cause issues such as energy depletion, reduced packet delivery, and loop creation. To mitigate these issues, an advanced attack detection approach for secured IoT routing with a deep structured scheme is proposed to attain an efficient attack detection rate over the routing network. In the first stage, data aggregation is performed with the help of IoT networks. Then, the selected weighted features are fed to the Multiscale Depthwise Separable 1-Dimensional Convolutional Neural Network (MDS-1DCNN) approach for attack detection, in which the parameters of the 1-DCNN are tuned with the aid of the Fused Grasshopper-aided Lemur Optimization Algorithm (FG-LOA). The parameter optimization by the FG-LOA algorithm is used to enlarge the efficacy of the approach. In particular, the MDS-1DCNN model is used to detect different attacks in the detection phase. The attack nodes are mitigated during the routing process using the developed FG-LOA by formulating the fitness function based on variables such as shortest distance, energy, path loss, and delay. Finally, the performance is examined through comparison with different traditional methods. From the validation of outcomes, the accuracy of the developed attack detection model is 96.87%, which is better than the comparative techniques. Also, in the delay analysis, the routing model based on FG-LOA is 17.3%, 12.24%, 10.41%, and 15.68% more efficient than classical techniques like DHOA, HBA, GOA, and LOA, respectively. Hence, the effectiveness of the offered approach exceeds that of the baseline approaches, and it has mitigated diverse attacks using secured IoT routing and different attack models.
Keywords: Attack detection, secured IoT routing, fused grasshopper-based lemur optimization algorithm, Multiscale Depthwise Separable 1-Dimensional Convolutional Neural Networks, multi-objective constraints
DOI: 10.3233/IDT-240476
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1741-1762, 2024
Authors: Liu, Meng | Liu, Xuan | Han, Xueying
Article Type: Research Article
Abstract: Faced with the rapid development and increasingly complex structure of the power grid, it is necessary to accurately locate faults in modern intelligent distribution networks in order to ensure power supply reliability and improve power supply service quality. Aiming at the low positioning accuracy, long time consumption, and limited coverage types of existing positioning algorithms, a new intelligent distribution network fault positioning algorithm is designed by combining two algorithms to complete the monitoring task. Firstly, intelligent distribution network fault location methods under different distributed power grid connection modes are analyzed. Then, considering the distributed power grid connection, a fault location algorithm is designed by combining the differential evolution algorithm, Sine chaotic mapping, and the whale algorithm. The research results indicated that the designed method had good benchmark performance, with accuracy and recall values as high as 0.98 and 0.97, respectively. During the training process, it required only 175 iterations to reach a stable state. In practical applications, the accuracy of this algorithm in testing three types of faulty power grids was 98.90%, 98.41%, and 98.25%, respectively. The method can effectively improve fault location accuracy, providing better positioning technology for fault monitoring problems in power systems.
Keywords: WOA, DE, distribution network, monitoring, power system, fault location
DOI: 10.3233/IDT-240720
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1763-1774, 2024
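One ingredient named in the abstract above, Sine chaotic mapping, is commonly used to seed a metaheuristic's initial population with low-correlation values via the iteration x_{k+1} = sin(π x_k). The sketch below shows that use only; the names, the starting seed `x0`, and the combination with differential evolution and the whale algorithm are assumptions for illustration:

```python
import math

def sine_chaotic_population(n, dims, lo, hi, x0=0.7):
    """Initialise n candidate solutions of `dims` values each using the
    sine chaotic map x_{k+1} = sin(pi * x_k), with each chaotic value in
    (0, 1) scaled into the search bounds [lo, hi]."""
    pop, x = [], x0
    for _ in range(n):
        ind = []
        for _ in range(dims):
            x = math.sin(math.pi * x)    # chaotic iteration in (0, 1)
            ind.append(lo + x * (hi - lo))
        pop.append(ind)
    return pop
```

The chaotic sequence spreads the initial candidates across the bounds without the clumping a poor pseudo-random seed can produce, which is the usual motivation for this step.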
Authors: Hu, Wei | Luo, Yipeng
Article Type: Research Article
Abstract: There is an increasing demand for high-quality translations in the realm of intelligent English translation. This paper optimized the traditional Transformer algorithm by enhancing position coding and the softmax layer. Bidirectional long short-term memory (BiLSTM) was employed to realize position encoding, capturing contextual and positional information simultaneously. Additionally, the softmax function was replaced with the sparsemax function to obtain sparser results. The translation performance of several algorithms on Chinese and English datasets was compared and analyzed. It was found that the optimized Transformer algorithm performed better than the RNNSearch, ConvS2S, and Transformer-base algorithms in terms of bilingual evaluation understudy (BLEU) score on the test set. It achieved an average BLEU score of 23.72, an improvement of 1.56 over the RNNSearch algorithm, 1.17 over the ConvS2S algorithm, and 0.73 over the Transformer-base algorithm. The parameter quantity of the optimized algorithm was 6.83 M, 0.05 M higher than the Transformer-base algorithm. Furthermore, its running time was 4,862.76 s, a marginal increase of 0.21% compared to the Transformer-base algorithm. These findings validate the reliability of the optimized Transformer algorithm for intelligent English translation and its potential practical application.
Keywords: Transformer algorithm, intelligent translation, position coding, softmax function
DOI: 10.3233/IDT-240657
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1775-1782, 2024
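The sparsemax substitution described in the abstract above has a closed-form solution: project the logits onto the probability simplex, so that small logits are truncated to exact zeros rather than receiving the small positive mass softmax would give them. A plain-Python sketch of the standard sparsemax computation (Martins & Astudillo, 2016), independent of the paper's Transformer code:

```python
def sparsemax(z):
    """Sparsemax of a list of logits: p_i = max(z_i - tau, 0), with tau
    chosen so the outputs sum to 1. Unlike softmax, the result can
    contain exact zeros, giving a sparse attention distribution."""
    zs = sorted(z, reverse=True)
    cum = 0.0            # running sum of the sorted logits
    k, cum_k = 0, 0.0    # largest k with 1 + k * z_(k) > sum of top-k
    for i, v in enumerate(zs, start=1):
        cum += v
        if 1 + i * v > cum:
            k, cum_k = i, cum
    tau = (cum_k - 1.0) / k
    return [max(v - tau, 0.0) for v in z]
```

For well-separated logits sparsemax concentrates all mass on the top entries (e.g. `sparsemax([2.0, 1.0, 0.1])` puts everything on the first), while for near-equal logits it behaves like a uniform distribution, which is the "sparser results" property the paper exploits.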
Authors: Zhang, Yanyan | Wang, Jiwei
Article Type: Research Article
Abstract: The rapid development of artificial intelligence technology is gradually penetrating multiple fields such as interior design and spatial planning. The aim of this study is to integrate artificial intelligence with interior design, enhance design artistry and user experience, and address the interactive needs of interior space design choices. An indoor space design recognition system has been designed by introducing artificial intelligence networks and attention mechanisms. This study first optimizes the CenterNet algorithm based on an attention mechanism and feature fusion to improve its accuracy in identifying complex components. Afterwards, a long short-term memory network and a convolutional neural network are trained to complete the task of spatial layout feature recognition and design. The performance test results showed that, after testing 100 images, the software could recognize indoor design space images and create corresponding vector-format space maps in about 5 minutes, providing them to the 3D modeling interface to generate 3D scenes. Compared to the approximately 25 minutes required by manual methods, design efficiency is significantly improved. The proposed method converges quickly and incurs low loss during retraining. In simulation testing, its mAP value reached 91.0%, higher than similar models. It performs better in detecting walls, doors and windows, bay windows, double doors, and two-way doors. Moreover, it handles structures such as short walls and door corners well, recognizing and creating vector-format spatial maps within 5 minutes, accurately and efficiently. The system optimizes the interaction between designers and clients in interior design, accurately capturing user intentions and helping designers improve work efficiency.
Keywords: Interior design, 3D art, space planning, deep learning, CenterNet
DOI: 10.3233/IDT-240615
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1783-1796, 2024
Authors: Rappai, Sherin | Ramasamy, Gobi
Article Type: Research Article
Abstract: Teenage suicidal ideation is on the rise, which emphasizes how crucial it is to recognize and comprehend the variables that contribute to this problem. Convolutional neural networks (CNNs), complex machine learning models capable of analysing intricate relationships within a network, are one possible strategy for addressing this issue. In our study, we employed a CNN-LSTM hybrid model to explore the complex relationships between teen suicide ideation and various risk variables, including depression, anxiety, and social support, by analysing a substantial dataset of mental health surveys and seeking patterns and risk factors associated with suicidal thoughts. Our objective was clear: identify adolescents prone to suicidal ideation. With 24 parameters and a sample size of 3075 subjects, our model achieved an F1-score of 97.8%. These findings reveal important patterns and risk variables related to suicidal thoughts and offer valuable direction for developing preventive interventions that effectively address adolescent suicidal ideation.
Keywords: Convolutional neural network, suicide ideation, suicide detection, adolescents, machine learning, neural networks
DOI: 10.3233/IDT-240790
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1797-1811, 2024
Authors: Yuan, Ye | Wang, Haiyan | Yuan, Xinping | Wu, Ruobing | Zhang, Shilei
Article Type: Research Article
Abstract: Network destruction resilience is the ability of a system to maintain good operation in the event of an external attack or internal failure. The resilience of the distribution network is crucial to guarantee the reliability of the power supply. In this paper, we design a planning algorithm that considers network destructibility for the multi-stage ant colony planning problem of distribution networks. The method establishes the multi-stage network planning objective function of the distribution network from the perspectives of total investment cost and annual operation cost of the multi-node network frame, destruction resistance, and active and reactive network loss of the distribution network. Then, based on the constraints of the model, an improved ant colony algorithm is used to solve the multi-stage network planning objective function of the distribution network, yielding the ant colony planning results for the multi-stage network frame. To verify the effectiveness of the algorithm, simulation experiments are carried out on real distribution network data. The results show that the proposed ant colony planning algorithm for multi-stage grid frames can effectively improve the destruction resistance of distribution networks, reduce the total investment cost and annual operation cost of multi-stage grid frames, and reduce the network loss rate of multi-stage grid frames of distribution networks after application. It provides an effective method for planning the distribution network.
Keywords: Network destruction resistance, distribution network, multi-stage grid, ant colony planning algorithm, heuristic functions
DOI: 10.3233/IDT-240716
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1813-1826, 2024
Authors: Shen, Lei | Zang, Zhen
Article Type: Research Article
Abstract: Demand forecasting is essential for streamlining supply chain operations in the digital economy and exceeding customer expectations. However, traditional forecasting techniques frequently cannot present real-time data or respond to dynamic changes in the supply chain network, leading to less-than-ideal decision-making and higher costs. This research aims to create a technique for supply chain network optimization based on blockchain-distributed technology (SCN-BT) to overcome these drawbacks and fully utilize the potential of the digital economy. The suggested framework uses a hybridized LSTM network and the Grey Wolf Optimization (GWO) algorithm to examine demand forecasting in the supply chain network for inventory planning. The SCN-BT framework develops a safe and productive environment, enabling precise and flexible demand forecasting by combining blockchain with optimization techniques. A thorough case study utilized information collected from an enterprise supply chain operating in the digital economy to show the efficiency of the suggested framework. Compared to conventional approaches, the results show considerable gains in demand forecasting precision, supply chain responsiveness, and cost-effectiveness. In the context of the digital economy, demand sensing and prediction enable firms to react to changes swiftly, shorten turnaround times, optimize inventory levels, and improve overall supply chain performance. The results highlight how blockchain technology can enhance collaboration, trust, and transparency inside intricate supply chain networks operating in the digital economy. The experimental results show that the proposed method achieves a demand prediction rate of 128.93, a demand forecasting accuracy ratio of 92.18%, an optimum efficiency of 94.25%, an RMSE of 1841.25, an MAE of 260.74, and an sMAPE of 0.1002 compared to other methods.
Keywords: Digital economy, demand forecasting, supply chain network optimization, blockchain technology, LSTM, grey wolf optimization
DOI: 10.3233/IDT-240680
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1827-1839, 2024
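The abstract above reports forecast quality as RMSE, MAE, and sMAPE. These standard error metrics can be computed as follows (a plain-Python sketch; the sMAPE variant shown divides by the mean of the absolute values, one of several common conventions, and may differ from the paper's exact definition):

```python
import math

def rmse(actual, pred):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def smape(actual, pred):
    """Symmetric mean absolute percentage error as a fraction:
    mean of |a - p| / ((|a| + |p|) / 2)."""
    return sum(
        abs(a - p) / ((abs(a) + abs(p)) / 2) for a, p in zip(actual, pred)
    ) / len(actual)
```

RMSE penalises large misses more heavily than MAE, while sMAPE is scale-free, which is why forecasting papers typically report all three together.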
Authors: Huang, Nannan | Wang, Xue
Article Type: Research Article
Abstract: This study proposes a new algorithm combining a gradient boosting tree algorithm with a hybrid convolutional neural network in order to design a better human resource recommendation algorithm that addresses the problems of employment difficulties and recruitment difficulties. The algorithm combines the excellent feature transformation ability of gradient boosting trees with the excellent classification ability of hybrid convolutional neural networks, each complementing the other’s shortcomings. The outcomes showed that the algorithm performed best with the learning rate set to 0.3 and the maximum tree depth set to 3, where it had the lowest loss percentage and highest F1-Score value. The maximum-median hybrid pooling approach used in the study considerably improved on the algorithm’s pooling strategy (PS), achieving the greatest recall and F1-Score values of all pooling strategies (0.8108 and 0.7418, respectively). The new algorithm outperformed the gradient boosting tree algorithm and the hybrid convolutional neural network algorithm in terms of recall and F1-Score values, and regardless of the length of the job recommendation, the recall and F1-Score values of the new algorithm were consistently higher than those of the old algorithm. With a job recommendation length of 70, the recall and F1-Score values of the new algorithm were 0.8198 and 0.7432, respectively, both greater than those of other conventional HR recommendation algorithms. The newly created algorithm increases the efficacy and precision of HR recommendations.
Keywords: Hybrid convolutional neural networks, human resource recommendation, gradient boosting trees, ReLU, ELU
DOI: 10.3233/IDT-240670
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1841-1853, 2024
Authors: Ghosh, Anay | Umer, Saiyed | Dhara, Bibhas Chandra | Rout, Ranjeet Kumar
Article Type: Research Article
Abstract: BACKGROUND: Patient sentiment analysis aids healthcare professionals in identifying issue areas, timely remediation, and improved patient care. The relationship between pain management and patient sentiment analysis is crucial to providing patients with high-quality medical care. Therefore, a self-reported pain level assessment is required for the smart healthcare framework to determine the best course of treatment. OBJECTIVE: An efficient method for a pain sentiment recognition system is proposed based on the analysis of patients’ facial emotion patterns in the smart healthcare framework. METHODS: The proposed system has been implemented in four phases: (i) in the first phase, the facial regions of the observed patient are detected using a computer vision-based face detection technique; (ii) in the second phase, the extracted facial regions are analyzed using deep learning-based feature representation techniques to extract discriminant and crucial facial features for assessing the patient’s level of pain emotion; (iii) because pain emotions range from macro to micro facial expressions, advanced feature tuning and representation techniques are built on top of the deep learning-based features in the third phase, so as to distinguish low from high pain emotions among patients; (iv) finally, the performance of the proposed system is enhanced using score fusion techniques applied to the obtained deep pain recognition models for the smart healthcare framework. RESULTS: The performance of the proposed system has been tested using two standard facial pain benchmark databases, the UNBC-McMaster shoulder pain expression archive dataset and the BioVid Heat Pain Dataset, and the results are compared with some existing state-of-the-art methods employed in this research area.
CONCLUSIONS: From extensive experiments and comparative studies, it is concluded that the proposed pain sentiment recognition system performs remarkably well compared to other pain recognition systems for the smart healthcare framework.
Keywords: Patient sentiment, pain recognition, smart healthcare, computer vision, deep learning
DOI: 10.3233/IDT-240548
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1855-1877, 2024
Authors: Gong, Weiwei | Zhou, Lingyun | Zhou, Langya | Bao, Jingjng | Chen, Cheng
Article Type: Research Article
Abstract: This study aims to explore an efficient technique for matching multisource homonymous geographical entities in railways to address the identification issues of homonymous geographical entities. Focusing on railway line vector spatial data, this research investigates the matching problem of multisource homonymous geographical entities. Building on statistical feature matching of attribute data, a curve similarity calculation method based on the DTW algorithm is designed to achieve better local elastic matching, overcoming the limitations of the Fréchet algorithm. The empirical study utilizes railway line layer data from two data sources within Beijing’s jurisdiction, fusing 6237 segment lines from source 2 with 105 long lines from source 1. The structural comparison between the two data sources is conducted through statistical methods, applying cosine similarity and the maximum TF-IDF similarity value for text similarity calculation. Finally, Python is used to implement the DTW algorithm for curve similarity. The experimental results show an average DTW distance of 3.92, a standard deviation of 4.63, and a mode of 0.005. Similarity measurement results indicate that 95.53% of records are within the predetermined threshold, demonstrating the effectiveness and applicability of the method. The findings significantly enhance the accuracy of railway data matching, promote the informatization of the railway industry, and hold substantial significance for improving railway operational efficiency and system performance.
Keywords: Railway line matching, geographical entities, dynamic time warping, vector data, python
DOI: 10.3233/IDT-240684
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1879-1891, 2024
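The DTW curve-similarity step described in the abstract above can be sketched with the textbook dynamic-time-warping recurrence. This plain-Python version works on 1-D sequences with |x − y| as the local cost; the paper applies the same idea to railway line vector data, and its exact cost function and implementation are not reproduced here:

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D sequences.

    d[i][j] = cost(a[i-1], b[j-1]) + min(d[i-1][j], d[i][j-1], d[i-1][j-1]),
    allowing elastic (one-to-many) alignment that the Fréchet distance
    does not provide. O(len(a) * len(b)) time and space.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

The elastic alignment is visible in the second test below: a sequence with a repeated sample still matches its shorter counterpart at zero cost, which is exactly the local elasticity the abstract credits DTW with.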
Authors: Sun, Lei
Article Type: Research Article
Abstract: Web service composition is crucial for creating valuable services by integrating pre-existing ones. With their service-oriented architecture (SOA), which can be used for any system design, web services can increase flexibility. Fusing the Web services architecture with Semantic Web services can better assist supply chain coordination in a distributive, autonomous, and ever-changing corporate environment than current information technology. Decisions must be made quickly and with enough information, yet many systems fail to provide real-time supply chain insight. Forecasting, inventory management, and decision-making may all be impacted by poor data quality, and modifying preexisting systems according to unique organizational needs may be challenging and expensive. Hence, this paper proposes a semantic web service-based supply chain management framework (SWS-SCMF) to analyze the web services in supply chains and examine how they interact using the Web Ontology Language for Services (OWL-S), including automated discovery, construction, and invocation. The suggested method for improving supply chain data integration uses an ontology-based multiple-agent decision support system. Different accessibility tools, data formats, management information systems, semantic web components, and databases are integrated across the five interconnected levels of the system. Businesses may find the proposed approach useful for data and information sharing when dealing with complex supply chain management. The suggested SWS-SCMF is an adaptable, accurate, and effective method for bidirectional chaining composition that uses mediators to enable the automated composition of Semantic Web services. The numerical results show that the proposed method enhances the overall performance ratio by 94%, the accuracy ratio by 98%, and the supply chain management ratio by 91% compared to other methods.
Keywords: Supply chain management (SCM), web service composition (WSC), Web Ontology Language (OWL)-S, semantic web service architecture (SWSA)
DOI: 10.3233/IDT-240411
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1893-1914, 2024
Authors: Mathur, Shruti | Shrivastava, Gourav
Article Type: Research Article
Abstract: Sentiment analysis, which involves determining the emotional polarity (positivity, negativity, or neutrality) of source texts, is a crucial task. Multilingual sentiment analysis techniques were developed to analyze data in several languages; however, a notable deficiency of resources is one of the primary issues in multilingual sentiment analysis. Furthermore, the developed methods for multilingual sentiment analysis have limitations such as data dependency, reliability, robustness, and computational complexity. To tackle these shortcomings, this research proposes a multilingual improved multi-attention Deep Learning model (M2PSC-DL), which leverages the advantages of the Bi-directional Long Short Term Memory (BiLSTM) classifier with improved attention mechanisms. The multi-metric graph embedding technique encodes the data to provide a richer contextual representation. Additionally, the combination of improved Positional Spatial Channel (PSC) attention increases the capability of the model to extract relevant features during training, which leads to accurate results in sentiment analysis tasks. Furthermore, the research proposes an improved sigmoid activation for solving the vanishing gradient issue, helping the model avoid gradient saturation. The validation results demonstrate that the M2PSC-DL model attains 96.26% accuracy, 96.06% precision, and 96.18% recall on the XED dataset, which is far better than the traditional methods.
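The abstract does not give the exact form of the improved sigmoid; one common way to keep a sigmoid's gradient from saturating is to add a small linear term, so the derivative is bounded below by a constant. The sketch below assumes that variant for illustration only; the `alpha` parameter and function names are not the authors'.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def improved_sigmoid(x, alpha=0.01):
    # Assumed variant: a small linear term keeps the activation from
    # flattening out completely at large |x|.
    return sigmoid(x) + alpha * x

def improved_sigmoid_grad(x, alpha=0.01):
    # Derivative: s*(1-s) + alpha, bounded below by alpha, so
    # backpropagated gradients never vanish entirely.
    s = sigmoid(x)
    return s * (1.0 - s) + alpha
```

With a plain sigmoid the gradient `s*(1-s)` decays to zero for large inputs; here it can never fall below `alpha`.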
Keywords: Sentiment analysis, multilingual improved multi-attention based Deep Learning model, TF-IDF based dependency features, Multi metric graph embedding, Bi-directional Long Short Term Memory
DOI: 10.3233/IDT-240773
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1915-1931, 2024
Authors: Waghmode, Santosh | Patil, Bankat M.
Article Type: Research Article
Abstract: A distributed cloud environment is characterized by the dispersion of computing resources, services, and applications across multiple locations or data centres. This distribution enhances scalability, redundancy, and resource utilization efficiency. To optimize performance and prevent any single node from becoming a bottleneck, it is imperative to implement effective load-balancing strategies, particularly as user demands vary and certain nodes experience increased processing requirements. This research introduces an Adaptive Load Balancing (ALB) approach aimed at maximizing the efficiency and reliability of distributed cloud environments. The approach employs a three-step process: Chunk Creation, Task Allocation, and Load Balancing. In the Chunk Creation step, a novel Improved Fuzzy C-means clustering (IFCMC) method categorizes similar tasks into clusters for assignment to Physical Machines (PMs). Subsequently, a hybrid optimization algorithm called the Kookaburra-Osprey Updated Optimization Algorithm (KOU), incorporating the Kookaburra Optimization Algorithm (KOA) and Osprey Optimization Algorithm (OOA), allocates tasks assigned to PMs to Virtual Machines (VMs) in the Task Allocation step, considering various constraints. The Load Balancing step ensures even distribution of tasks among VMs, considering migration cost and efficiency. By efficiently distributing tasks across VMs within the distributed cloud environment, this systematic approach contributes to enhanced efficiency and scalability, which is evaluated through analyses. At a task count of 1000, the resource utilization attained is 1189.279 for KBA, 629.240 for BES, 1017.889 for ACO, 1147.300 for Osprey, 1215.148 for SMO, 1191.014 for APDPSO, and 1095.405 for DGWO, while the proposed KOU method attains 1224.433.
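The Chunk Creation step builds on fuzzy c-means clustering. As background, the plain fuzzy c-means loop that the paper's IFCMC improves on can be sketched as follows (parameter names and the fuzzifier `m` are standard, but this is not the authors' improved variant):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate between centroid updates weighted
    by fuzzy memberships and membership updates from the distances to
    each centroid. Returns (centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))     # u_ik proportional to d^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

In the ALB setting, each row of `X` would be a task's feature vector (e.g., CPU and memory demand), and the resulting clusters are the "chunks" assigned to PMs.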
Keywords: Cloud computing, distributed cloud environment, adaptive load balancing, IFCMC and KOU algorithm
DOI: 10.3233/IDT-240672
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1933-1954, 2024
Authors: Mandal, Dipankar | Nandi, Debashis | Tudu, Bipan | Chatterjee, Arpitam
Article Type: Research Article
Abstract: Adulteration in different spices is an emerging challenge in human civilization. It is commonly detected using different analytical and instrumental techniques. Despite good accuracy and precision, many such techniques are limited by their high processing time, skilled manpower requirements, expensive machinery, and limited portability. Computer vision methodology driven by powerful convolutional neural network (CNN) architectures can be a possible way to address those limitations. This paper presents a CNN-driven computer vision model which can detect cornstarch adulteration in turmeric powder along with the degree of adulteration. The model has been optimized using a binary genetic algorithm (BGA) for improved performance and consistency. The experiments presented in this paper were conducted with an in-house database prepared for 4 levels of adulteration and found to provide about 98% overall accuracy. The inexpensive and fast detection capability of the model, along with its mobility, makes this proposal a promising addition to the existing spice adulteration screening methods.
Keywords: Turmeric adulteration, computer vision, convolutional neural network, spice adulteration detection, binary genetic algorithm
DOI: 10.3233/IDT-240656
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1955-1964, 2024
Authors: Kaftantzis, Savvas | Bousdekis, Alexandros | Theodoropoulou, Georgia | Miaoulis, Georgios
Article Type: Research Article
Abstract: Process mining is an emerging research field which deals with discovering, monitoring and improving business processes by analyzing and mining data in the form of event logs. Event logs can be extracted from most existing enterprise information systems. Predictive business process monitoring is a sub-field of process mining that builds predictive analytics models on event log data; these models incorporate Machine Learning (ML) algorithms and address various objectives of process instances, such as the next activity, remaining time, costs, and risks. Existing research works on predicting next activities are scarce. At the same time, Automated Machine Learning (AutoML) has not been investigated in the predictive business process monitoring domain. Therefore, based on its promising results in other domains and types of data, we propose an approach for next activity prediction based on AutoML, and specifically on the Tree-Based Pipeline Optimization Tool (TPOT) method for AutoML. The evaluation results demonstrate that automating the design and optimization of ML pipelines without the need for human intervention, apart from making ML accessible to non-ML experts (in this case, the process owners and the business analysts), also provides higher prediction accuracy compared to other approaches in the literature.
Keywords: Automated machine learning, process mining, business process, predictive analytics
DOI: 10.3233/IDT-240632
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1965-1980, 2024
Authors: Zhu, Tingrou
Article Type: Research Article
Abstract: Garbage sorting contributes to resource recycling, mitigates environmental pollution, and promotes sustainable development. However, traditional garbage sorting methods typically require significant human labor and time, underscoring the necessity for automated solutions. While convolutional neural networks (CNNs) have achieved significant success in garbage sorting, existing models still suffer from low computational efficiency and accuracy. In light of these challenges, this study proposes the smart recycle sort network (SRS-Net), a lightweight model with an attention mechanism aimed at enhancing the efficiency and accuracy of garbage sorting. Lightweight networks reduce computational complexity and parameter counts, improving garbage sorting efficiency. We improve the ShuffleNet unit and introduce the lightweight shuffle attention module (LSAM) as the primary module of SRS-Net. On one hand, given the diverse shapes and sizes of garbage items, we replace the depthwise convolution (DWConv) in the ShuffleNet unit with heterogeneous kernel-based convolutions (HetConv) to accommodate this diversity. On the other hand, to better focus on important features of garbage images, we introduce shuffle attention (SA), a channel-spatial attention mechanism that considers the importance of inter-channel relationships and spatial positions. To validate the performance of SRS-Net, we conduct comparative experiments on two datasets, TrashNet and the garbage dataset. The experimental results demonstrate that SRS-Net achieves an accuracy of 90.02% on TrashNet and 91.52% on the garbage dataset, with FLOPs of 1262.0 M and Params of 9.6902 M. Our approach effectively facilitates automated garbage sorting and resource recycling.
Keywords: Garbage sorting, deep learning, smart recycle sort network
DOI: 10.3233/IDT-240685
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1981-1992, 2024
Authors: Mohanty, Manas Ranjan | Mallick, Pradeep Kumar | Navandar, Rajesh Kedarnath | Chae, Gyoo-Soo | Jagadev, Alok Kumar
Article Type: Research Article
Abstract: This paper explores cognitive interface technology, aiming to tackle current challenges and shed light on the prospects of brain-computer interfaces (BCIs). It provides a comprehensive examination of their transformative impact on medical technology and patient well-being. Specifically, this study contributes to addressing challenges in classifying brain lesion images that arise from the complex nature of lesions and the limitations of traditional deep learning approaches. It introduces advanced feature fusion models that leverage deep learning algorithms, including the African vulture optimization (AVO) algorithm. These models integrate informative features from multiple pre-trained networks and employ innovative fusion techniques, including the attention-driven grid feature fusion (ADGFF) model. The ADGFF model incorporates an attention mechanism based on optimized weights obtained using AVO. The objective is to improve overall accuracy by providing fine-grained control over different regions of interest in the input image through a grid-based technique. This grid-based technique divides the image into vertical and horizontal grids, simplifying the exemplar feature generation process without compromising performance. Experimental results demonstrate that the proposed feature fusion strategies consistently outperform individual pre-trained models in terms of accuracy, sensitivity, specificity, and F1-score. The optimized feature fusion strategies, particularly the GRU-ADGFF model, further enhance classification performance, outperforming CNN and RNN classifiers. The learning progress analysis shows convergence, indicating the effectiveness of the feature fusion strategies in capturing lesion patterns. AUC-ROC curves highlight the superior discriminatory capabilities of the ADGFF-AVO strategy. Five-fold cross-validation is employed to assess the performance of the proposed models on accuracy and related measures.
The GRU-ADGFF model optimized with AVO consistently achieves high accuracy, sensitivity, and AUC values, demonstrating its effectiveness and generalization capability. The GRU-ADGFF model also outperforms the majority voting ensemble technique in terms of accuracy and discriminative ability. Additionally, execution time analysis reveals good scalability and resource utilization of the proposed models. The Friedman rank test confirms significant differences in classifier performance, with the GRU-ADGFF model emerging as the top-performing method across different feature fusion strategies and optimization algorithms.
Keywords: Cognitive-interface, brain-computer interfaces (BCIs), brain lesion classification, CNN’s pre-trained networks, attention-driven feature fusion, attention-driven grid-based feature fusion, African vulture optimization (AVO), recurrent neural network (RNN), gated recurrent unit (GRU)
DOI: 10.3233/IDT-240652
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 1993-2018, 2024
Authors: Shahnazari, Kourosh | Ayyoubzadeh, Seyed Moein
Article Type: Research Article
Abstract: This paper introduces the innovative Power Muirhead Mean K-Nearest Neighbors (PMM-KNN) algorithm, a novel data classification approach that combines the K-Nearest Neighbors method with the adaptive Power Muirhead Mean operator. The proposed methodology aims to address the limitations of traditional KNN by leveraging the Power Muirhead Mean to calculate, for each class, the local mean of the query sample's K nearest neighbors. Extensive experimentation on diverse benchmark datasets demonstrates the superiority of PMM-KNN over other classification methods. Results indicate statistically significant improvements in accuracy on various datasets, particularly those with complex and high-dimensional distributions. The adaptability of the Power Muirhead Mean empowers PMM-KNN to effectively capture underlying data structures, leading to enhanced accuracy and robustness. The findings highlight the potential of PMM-KNN as a powerful and versatile tool for data classification tasks, encouraging further research to explore its application in real-world scenarios and the automation of the Power Muirhead Mean parameters to unleash its full potential.
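The full Power Muirhead Mean aggregates over permutations of its arguments; as a simplified stand-in, the sketch below uses a per-feature power mean inside a local-mean k-NN classifier, which captures the "aggregate the K nearest neighbors per class, then classify by the closest aggregate" structure. It assumes non-negative features; all names and the exponent `p` are illustrative, not the authors' operator.

```python
import numpy as np

def pmm_knn_predict(X_train, y_train, x, k=3, p=1.0):
    """Local-mean k-NN: for each class, find the k nearest neighbors of
    the query x, aggregate them with a per-feature power mean (p=1 is
    the arithmetic mean), and assign x to the class whose aggregate
    point lies closest. Assumes non-negative feature values."""
    best_class, best_dist = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        nearest = Xc[np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]]
        local = np.mean(nearest ** p, axis=0) ** (1.0 / p)  # power mean
        d = np.linalg.norm(local - x)
        if d < best_dist:
            best_class, best_dist = c, d
    return best_class
```

Varying `p` changes how the local aggregate weighs large versus small feature values, which is the kind of adaptivity the abstract attributes to the Power Muirhead Mean.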
Keywords: Machine learning, K-Nearest neighbors, power muirhead mean
DOI: 10.3233/IDT-240890
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2019-2031, 2024
Authors: Pandey, Gaurav | Bagri, Rashika | Gupta, Rajan | Rajpal, Ankit | Agarwal, Manoj | Kumar, Naveen
Article Type: Research Article
Abstract: Traditionally, performance measures such as accuracy, recall, precision, specificity, and negative predictive value (NPV) have been used to evaluate a classification model's performance. However, these measures often fall short of capturing different classification scenarios, such as binary or multi-class, balanced or imbalanced, and noisy or noiseless data. Therefore, there is a need for a robust evaluation metric that can assist business decision-makers in selecting the most suitable model for a given scenario. Recently, a general performance score (GPS) comprising different combinations of traditional performance measures (TPMs) was proposed. However, it indiscriminately assigns equal importance to each measure, often leading to inconsistencies. To overcome the shortcomings of GPS, we introduce an enhanced metric called the Weighted General Performance Score (W-GPS) that computes each measure's coefficient of variation (CV) and assigns a weight to the measure based on its CV value. Considering consistency as a criterion, we found that W-GPS outperformed GPS in the above-mentioned classification scenarios. Further, considering W-GPS with different weighted combinations of TPMs, we observed that no single combination works best in every scenario. Thus, W-GPS offers the user the flexibility to choose the most suitable combination for a given scenario.
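The abstract does not state the exact CV-to-weight mapping; the sketch below assumes one plausible choice (less consistent measures get lower weight) and combines the per-measure means with those weights. Function and parameter names are hypothetical, not the paper's definition.

```python
import numpy as np

def w_gps(metric_samples):
    """Sketch of a CV-weighted score. metric_samples maps a measure name
    (accuracy, recall, ...) to its values across folds/runs. Each
    measure's coefficient of variation (CV = std/mean) is turned into a
    weight; more variable measures are down-weighted, and the score is
    the weighted mean of the per-measure means."""
    names = sorted(metric_samples)
    means = np.array([np.mean(metric_samples[n]) for n in names])
    stds = np.array([np.std(metric_samples[n]) for n in names])
    cv = np.divide(stds, means, out=np.zeros_like(stds), where=means > 0)
    weights = 1.0 / (1.0 + cv)      # assumed CV-to-weight mapping
    weights /= weights.sum()
    return float(np.dot(weights, means))
```

Unlike an unweighted GPS, a measure that swings wildly across folds no longer contributes as much as a stable one.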
Keywords: Performance measures, classification, imbalanced data modelling, noisy data modelling, coefficient of variation
DOI: 10.3233/IDT-240465
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2033-2054, 2024
Authors: Bennour, Akram | Boudraa, Merouane | Ghabban, Fahad
Article Type: Research Article
Abstract: Determining the script of historical manuscripts is pivotal for understanding historical narratives, providing historians with vital insights into the past. In this study, our focus lies in developing an automated system for effectively identifying the script of historical documents using a deep learning approach. Leveraging the CLaMM dataset as the foundation of our system, we initiate the system with dataset preprocessing, employing two fundamental techniques: denoising through non-local means denoising and binarization using Canny edge detection. These techniques prepare the document for keypoint detection facilitated by the Harris corner detector, a feature-detection method. Subsequently, we cluster these keypoints using the k-means algorithm and extract patches based on the identified features. The final step involves training these patches on deep learning models, with a comparative analysis between two architectures: Convolutional Neural Networks (CNN) and Vision Transformers (ViT). Given the absence of prior studies investigating the performance of vision transformers on historical manuscripts, our research fills this gap. The system undergoes a series of experiments to fine-tune its parameters for optimal performance. Our conclusive results demonstrate average accuracies of 89.2% and 91.99% for the CNN- and ViT-based frameworks respectively, surpassing the state of the art in historical script classification and affirming the effectiveness of our automated script identification system.
Keywords: Script-classification, historical-manuscripts, deep-learning, CNNs, vision-transformers, transfer-learning
DOI: 10.3233/IDT-240565
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2055-2078, 2024
Authors: Halloum, Kamal | Ez-Zahraouy, Hamid
Article Type: Research Article
Abstract: The segmentation of cancerous tumours, particularly brain tumours, is of paramount importance in medicine due to its crucial role in accurately determining the extent of tumour lesions. However, conventional segmentation approaches have proven less effective in accurately delineating the exact extent of brain tumours, in addition to being a time-consuming, laborious process for clinicians. In this study, we propose an automatic segmentation method based on convolutional neural networks (CNNs), developing a new model that uses the ResNet50 architecture for detection and the DrvU-Net architecture, derived from the U-Net model, with adjustments adapted to the characteristics of the medical imaging data, for the segmentation of the publicly available brain image datasets TCGA-LGG and TCIA. Following an in-depth comparison with other recent studies, our model has demonstrated its effectiveness in the detection and segmentation of brain tumours, with accuracy, Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and Tversky Coefficient reaching 96%, 94%, 89%, and 91.5% respectively.
Keywords: Brain tumor, classification, segmentation, CLAHE, data augmentation, resnet50, U-Net
DOI: 10.3233/IDT-240385
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2079-2096, 2024
Authors: Abdalla, Mohammed | Islam, Abdullah | Ali, Mohamed | Hendawi, Abdeltawab
Article Type: Research Article
Abstract: Many factors affect the precision and accuracy of location data. These factors include, but are not limited to, environmental obstructions (e.g., high buildings and forests), hardware issues (e.g., malfunctioning and poor calibration), and privacy concerns (e.g., users denying consent to fine-grained location tracking). These factors lead to uncertainty about users' locations, which in turn affects the quality of location-aware services. This paper proposes a novel framework called UMove, which stands for uncertain movements, to manage the trajectory of moving objects under location uncertainty. The UMove framework employs the connectivity (i.e., links between edges) and constraints (i.e., travel time and distance) of road network graphs to reduce the uncertainty of an object's past, present, and projected locations. To accomplish this, UMove incorporates (i) a set-based pruning algorithm to reduce or eliminate uncertainty from imprecise trajectories; and (ii) a wrapper that can extend user-defined probability models designed to predict future locations of moving objects under uncertainty. Intensive experimental evaluations based on real data sets of GPS trajectories collected by Didi Chuxing in China demonstrate the efficiency of the proposed UMove framework. In terms of accuracy, for past exact-location inference, UMove achieves rates from 88% to 97% for uncertain regions with sizes of 75 meters and 25 meters respectively; for future exact-location inference, accuracy rates reach up to 72% and 82% for 75-meter and 25-meter uncertain regions.
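The set-based pruning idea, discarding candidate locations that violate road-network reachability constraints between consecutive observations, can be sketched minimally as follows. Here hop count stands in for the travel-time and distance bounds (the real framework uses actual travel constraints), and all names are illustrative.

```python
from collections import deque

def shortest_hops(graph, src):
    """BFS hop distances on a road-network graph {node: [neighbors]}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def prune_candidates(graph, prev_candidates, candidates, max_hops):
    """Keep only candidate locations reachable from some previous
    candidate within max_hops edges; everything else cannot be the
    object's true location given the connectivity constraint."""
    kept = set()
    for p in prev_candidates:
        d = shortest_hops(graph, p)
        kept |= {c for c in candidates if d.get(c, float("inf")) <= max_hops}
    return kept
```

Repeating this pruning along the trajectory shrinks each observation's uncertain region, which is the effect the accuracy figures above quantify.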
Keywords: Location-based services, location uncertainty, traffic management, imprecise locations, smart transportation, sustainability, smart cities
DOI: 10.3233/IDT-240819
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2097-2113, 2024
Authors: Cao, Yuan | Zhou, Menghao
Article Type: Research Article
Abstract: In this paper, a hybrid electric vehicle (HEV) energy management optimization method is proposed based on deep learning (DL) model predictive control. Through empirical research combined with a questionnaire survey, this article not only provides a new perspective and practical basis but also improves the efficiency and accuracy of the model by improving the relevant algorithms. The study first analyzes the importance of HEV energy management and reviews the existing literature. Then, the optimization method for HEV energy management based on the deep learning model is introduced in detail, including the composition of energy management for hybrid electric vehicles; the structure and working principle of the deep learning model, especially the backpropagation neural network (BPNN) and the convolutional neural network (CNN); and the steps for applying deep learning to energy management. In the experimental part, questionnaire data from 1,500 consumers were used to design the HEV energy management optimization scheme, and consumers' attitudes and preferences towards HEV energy optimization were discussed. The experimental results show that the proposed model can predict HEV energy consumption under different road conditions (urban roads, highways, mountain areas, suburban areas, and construction sites), with the difference between the average predicted energy consumption and the actual energy consumption between 0.1 kWh and 0.3 kWh, showing high prediction accuracy. In addition, the deep learning-based energy management strategy outperforms traditional control strategies in terms of fuel consumption (6.2 L/100 km), battery charge and discharge cycles (814), battery life, and CO2 emissions, significantly improving HEV energy efficiency.
These results demonstrate the great potential and practical application value of deep learning models in optimizing HEV energy management, helping to drive the development of more sustainable and efficient transportation systems.
Keywords: Hybrid electric vehicles, energy management optimization, deep learning, backpropagation neural network, convolutional neural network
DOI: 10.3233/IDT-240298
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2115-2131, 2024
Authors: Parida, Raj Kumar | Roy, Monideepa | Parida, Ajaya Kumar | Khan, Asif Uddin
Article Type: Research Article
Abstract: Integrating renewable energy sources like solar power into the grid necessitates accurate prediction methods to optimize their utilization. This paper proposes a novel approach that combines Convolutional Neural Networks (CNN) with the Ladybug Beetle Optimization (LBO) algorithm to forecast solar power generation efficiently. Many traditional models for predicting power struggle with accuracy and computational efficiency. To overcome these challenges, we utilize the capabilities of CNNs to extract features and recognize patterns from past irradiance data. The CNN structure is skilled at capturing relationships within the input data, allowing it to detect patterns that are natural in solar irradiance changes. Additionally, we apply the LBO algorithm, inspired by how ladybug beetles search for food, to tune the parameters of the CNN model. LBO imitates how ladybug beetles explore to find solutions, making it effective in adjusting the hyperparameters of the CNN. This research utilizes a dataset of solar irradiance readings to train and test the proposed CNN-LBO framework. The performance of the model is assessed using evaluation measures such as Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), MAPE, and the R² value. The experimental outcomes indicate that our hybrid CNN-LBO method surpasses existing techniques in terms of efficiency.
Keywords: Convolutional neural networks, LBO, solar power prediction, optimization, MAE
DOI: 10.3233/IDT-240288
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2133-2144, 2024
Authors: Velayuthapandian, Karthikeyan | Veyilraj, Mathavan | Jayakumaraj, Marlin Abhishek
Article Type: Research Article
Abstract: In recent smart city innovations, parking lot allocation has garnered a great deal of focus, and the issue of where to place vehicles has been the subject of substantial literature. However, these efforts rely heavily on algorithms built on centralized servers using historical data as their basis. In this study, we propose a smart parking allocation system by fusing k-NN, decision trees, and random forests with the boosting techniques AdaBoost and CatBoost. Implementing the recommended intelligent parking distribution technique in Smart Society 5.0 offers promise as a practical means of handling parking in contemporary urban settings. Users are assigned parking spots in accordance with their preferences and present locations as recorded in a centralized database, using the proposed system's hybrid algorithms. The evaluation of performance considers the effectiveness of both the ML classifier and the boosting technique, and it finds that the combination of Random Forest and AdaBoost achieves 98% accuracy. Users and operators alike can benefit from the suggested method's optimised parking allocation and pricing structure, which in turn provides more convenient and efficient parking options.
Keywords: Parking space administration, machine learning, control scheme, hybrid-mechanism, k-nearest neighbour
DOI: 10.3233/IDT-230339
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2145-2159, 2024
Authors: Hidri, Adel | Mkhinini Gahar, Rania | Sassi Hidri, Minyar
Article Type: Research Article
Abstract: Distinguishing between roles like Data Scientist, Data Engineer, Data Analyst, and Business Intelligence Developer can be challenging, as there can be overlap in responsibilities, focuses, and skill sets across these positions. By understanding these distinctions, job seekers can better align their skills and interests with the specific requirements and factors of each role, thereby increasing their chances of finding a fulfilling career in the data field. To determine what factors distinguish these positions, we developed machine learning models that leverage relevant features extracted from the dataset to differentiate between roles accurately. Factors such as technical skills, programming languages, educational background, work experience, and certifications likely play crucial roles in distinguishing between these positions. By incorporating these features into the models, they can effectively identify patterns and characteristics unique to each role. The high accuracy (approximately 99%) achieved by these models not only validates their effectiveness but also underscores the importance of understanding the nuances and specific requirements of each role within the data field. Armed with this knowledge, both job seekers and employers can make more informed decisions when it comes to hiring, career planning, and talent acquisition.
Keywords: Machine learning, Data Scientists, Data Engineers, Data Analysts, Business Intelligence Developers
DOI: 10.3233/IDT-240509
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2161-2176, 2024
Authors: Zafeiriou, Theodoros | Kalles, Dimitris
Article Type: Research Article
Abstract: This document outlines the analysis, design, implementation, and benchmarking of various neural network architectures in a short-term frequency prediction system for the FOREX market. Our objective is to emulate the decision-making process of a human expert (technical analyst) through a system that swiftly adapts to market condition changes, thereby optimizing short-term trading strategies. We have designed and implemented a series of LSTM neural network architectures that take exchange rate values as input to generate short-term market trend forecasts. Additionally, we developed a custom ANN architecture based on simulators for technical analysis indicators. We performed a comparative analysis of the results and came to useful conclusions regarding the suitability of each architecture and the cost, in terms of time and computational power, of implementing them. The custom ANN architecture produces better prediction quality with higher sensitivity, using fewer resources and less time than the LSTM architectures. The custom ANN architecture therefore appears ideal for low-power computing systems and for use cases that need fast decisions at the least possible computational cost.
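The abstract does not specify which technical analysis indicators the simulators compute; a moving-average crossover is one classic indicator such a simulator could feed the network as an input feature. The sketch below is illustrative only, and the window sizes are assumptions.

```python
def sma(prices, n):
    """Simple moving average over the last n prices (None until the
    window is full)."""
    return [None if i + 1 < n else sum(prices[i + 1 - n:i + 1]) / n
            for i in range(len(prices))]

def crossover_signal(prices, fast=3, slow=5):
    """+1 when the fast SMA is above the slow SMA (uptrend), -1 when
    below (downtrend), 0 while either window is still warming up:
    one example of an indicator feature such a network could consume."""
    f, s = sma(prices, fast), sma(prices, slow)
    out = []
    for a, b in zip(f, s):
        if a is None or b is None:
            out.append(0)
        else:
            out.append(1 if a > b else (-1 if a < b else 0))
    return out
```

Feeding precomputed indicator signals like this, instead of raw rates, is what lets a small feed-forward ANN stand in for the technical analyst's rules.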
Keywords: Foreign exchange, technical analysis, neural networks, trend forecasting
DOI: 10.3233/IDT-240713
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2177-2190, 2024
Authors: Geetha, G. | Rajagopal, Manjula | Purnachand, K.
Article Type: Research Article
Abstract: Cyber security is evolving into a severe problem in almost all sectors of cyberspace, due to the steady increase in the number of security breaches. Numerous zero-day attacks occur continuously, due to the proliferation of protocols. Almost all of these attacks are small variants of previously known cyber attacks. Moreover, even advanced approaches like Machine Learning (ML) face difficulty in identifying those attacks' small mutants over time. Recently, Deep Learning (DL) has been utilized for multiple applications in the cybersecurity field. Making use of DL to identify cyber attacks might be a resilient mechanism against novel attacks or tiny mutations. Thereby, a novel cyber attack classification model named DCNN-Bi-LSTM-ICS is proposed in this work. The proposed DCNN-Bi-LSTM-ICS has five working stages. Firstly, in the data acquisition stage, the input data (considering the datasets) for attack classification is collected. These raw data are pre-processed in the second stage, where improved class-imbalance processing is conducted using the Improved Synthetic Minority Oversampling Technique (ISMOTE). In the third stage, along with the conventional mutual information and statistical features, improved holo-entropy-based features are extracted. To choose the appropriate features from those retrieved, Improved Chi-Square (ICS) processing is developed in the fourth stage. In the final classification stage, a hybrid classification model that combines a Deep Convolutional Neural Network (DCNN) and Bi-directional Long Short Term Memory (Bi-LSTM) is developed. The outcomes show that the proposed DCNN-Bi-LSTM-ICS offers outstanding performance in the cyber attack classification task.
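The abstract does not detail how ISMOTE improves on SMOTE; as background, the base SMOTE interpolation step it refines can be sketched as follows. Function and parameter names are illustrative, not the paper's.

```python
import random

def smote_sample(minority, k=2, n_new=4, seed=0):
    """Basic SMOTE-style oversampling: each synthetic point is a random
    interpolation between a minority-class sample and one of its k
    nearest minority-class neighbors, so new points lie inside the
    minority region rather than being exact duplicates."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbors = sorted((m for m in minority if m != x),
                           key=lambda m: dist(x, m))[:k]
        nn = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + gap * (ni - xi)
                               for xi, ni in zip(x, nn)))
    return synthetic
```

In the attack-classification pipeline, such synthetic samples would be added only for rare attack classes before training, so the classifier is not dominated by benign traffic.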
Keywords: Machine Learning (ML), Deep Learning (DL), ISMOTE, improved holoentropy, Improved Chi-Square (ICS) processing, Deep Convolutional Neural Network (DCNN)
DOI: 10.3233/IDT-240362
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2191-2212, 2024
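The ISMOTE pre-processing stage above builds on the plain SMOTE idea: synthesize minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. The sketch below shows only that baseline idea with invented toy data; the paper's "improved" variant and its neighbour-selection refinements are not reproduced here.

```python
import random
from collections import Counter

def smote_like_oversample(minority, n_new, k=2, rng=None):
    """Generate synthetic minority samples by interpolating between a point
    and one of its k nearest minority neighbours (plain SMOTE idea; the
    paper's ISMOTE refinements are not shown)."""
    rng = rng or random.Random(0)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist(p, x))[:k]
        nbr = rng.choice(neighbours)
        u = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + u * (ni - xi) for xi, ni in zip(x, nbr)))
    return synthetic

# Toy imbalanced data: 6 majority (label 0) vs 3 minority (label 1).
majority = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3), (0.3, 0.2), (0.0, 0.2), (0.2, 0.3)]
minority = [(2.0, 2.1), (2.2, 2.0), (2.1, 2.3)]
new_points = smote_like_oversample(minority, n_new=3)
balanced_labels = [0] * len(majority) + [1] * (len(minority) + len(new_points))
counts = Counter(balanced_labels)
```

After oversampling, both classes contribute six samples, and every synthetic point lies inside the convex region spanned by the minority points.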
Authors: Wu, Yunan | Zhang, Haitao
Article Type: Research Article
Abstract: In art and design, style conversion algorithms can fuse the content of one image with the style of another, thereby generating images with new artistic styles. However, traditional style conversion algorithms suffer from high computational complexity and loss of detail during image conversion. Therefore, this study introduces VGG16 multi-scale fusion feature extraction into an arbitrary style transfer algorithm and adds a compressed attention mechanism to reduce its computational complexity. It then designs an arbitrary style transformation algorithm based on multi-scale fusion and compressed attention. The results showed that the designed algorithm took 0.014 s and 0.021 s to process tasks on the COCO Stuff and WikiArt datasets, respectively, demonstrating its high computational efficiency. The loss values of the designed algorithm were 0.046 and 0.052, respectively, indicating strong fitting performance and good generalization ability. The IS and FID scores of the designed algorithm were 2.36 and 91.67, respectively, showing that the generated images have high diversity and quality. These results demonstrate the effectiveness and practicality of the designed algorithm in art and design; it has important theoretical and practical value in advancing style conversion technology and enhancing the creativity and expressiveness of art and design.
Keywords: Art and design, style conversion algorithm, multi-scale fusion, compressed attention, VGG16
DOI: 10.3233/IDT-230788
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2213-2225, 2024
Authors: Swathi, M. | Regunathan, Rajeshkannan
Article Type: Research Article
Abstract: Pharyngitis is an inflammation of the mucous membranes of the oropharynx, typically brought on by a bacterial infection. The emergence of new technologies has created the need for remote detection of diseases such as pharyngitis from throat images taken with a smart camera. In recent years, research on classifying pharyngitis has advanced with the help of deep learning, but deep learning models require at least an hour of training and a considerably large dataset to reach good accuracy. In this paper, we focus on this time constraint and propose a novel approach, PFDP, to classify pharyngitis through the detection of potential features based on a doctor's perspective. We extract the tiny portions of the image that a doctor would observe as infected, calculate the frequencies of occurrence of these portions, and feed them to custom decision rules. The classification results showed a significant improvement in the time taken to reach an average accuracy of 70%: only 5 minutes to extract the counts of infected patterns and 1 more minute to obtain classification results from if-then-else decision rules. We conducted the experiment on a set of 800 images. Although the accuracy is lower than that achieved by other works, the feature extraction time is significantly shorter than in previous works. Moreover, our approach does not require training and can be applied where datasets are scarce. We believe this approach opens a new direction of research and can compete with state-of-the-art works in the future.
Keywords: Pharyngitis, potential feature extraction, classification, machine learning
DOI: 10.3233/IDT-240495
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2227-2240, 2024
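The PFDP idea above — count occurrences of small "infected-looking" patterns and classify with if-then-else rules — can be sketched in a few lines. The pattern definition, pixel values, and thresholds below are invented for illustration; the paper derives its patterns from a doctor's observations.

```python
def count_patterns(image, patterns):
    """Count how often each small pattern (tuple of pixel values) occurs
    as a horizontal run in a 2-D grey-level image (list of rows)."""
    counts = {p: 0 for p in patterns}
    for row in image:
        for i in range(len(row)):
            for p in patterns:
                if tuple(row[i:i + len(p)]) == p:
                    counts[p] += 1
    return counts

def classify(counts, threshold=3):
    """Toy if-then-else rules: many infected patterns -> pharyngitis."""
    total = sum(counts.values())
    if total >= threshold:
        return "pharyngitis"
    elif total > 0:
        return "suspect"
    else:
        return "healthy"

# 4x6 toy image; a (9, 9) run stands in for a white/inflamed patch.
image = [
    [1, 9, 9, 2, 1, 0],
    [0, 9, 9, 1, 9, 9],
    [1, 1, 2, 0, 1, 2],
    [9, 9, 1, 0, 2, 1],
]
counts = count_patterns(image, patterns=[(9, 9)])
label = classify(counts)
```

Because counting runs and applying fixed rules needs no training loop, this mirrors the abstract's claim that the approach works without model training.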
Authors: Tiwari, Ajay | Katiyar, Alok
Article Type: Research Article
Abstract: Tongue images (the size, shape, and colour of the tongue and the thickness, colour, and moisture content of the tongue coating), which reflect the medical condition of the entire body in the model of traditional Chinese medicine (TCM), have been used extensively in China for thousands of years. Gastric cancer (GC) is a highly lethal cancer in many countries and societies. Screening and diagnosis of GC still depend on gastroscopy, but its application is significantly restricted by its invasiveness, high cost, and the requirement for expert endoscopists. Early recognition of GC and prompt treatment contribute significantly to patient health. Consequently, this study introduces a Chicken Swarm Algorithm with Deep Learning-based Tongue Image Analysis for Gastric Cancer Classification (CSADL-TIAGCC) system. The proposed CSADL-TIAGCC approach analyses input tongue images for the identification and classification of GC. To accomplish this, the system uses an improved U-Net segmentation approach together with a residual network (ResNet-34)-based feature extractor. Furthermore, a long short-term memory (LSTM) model is exploited for GC classification, with its hyperparameters selected by the CSA. The simulation outcome of the CSADL-TIAGCC algorithm was examined on a tongue image database, and the experimental results illustrate the enhanced performance of the technique with respect to different evaluation measures.
Keywords: Tongue image analysis, hyperparameter tuning, gastric cancer, deep learning, computer aided diagnosis
DOI: 10.3233/IDT-240138
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2241-2253, 2024
Authors: Danesh, Hamed | Karimi, Mohammad Bagher | Arasteh, Bahman
Article Type: Research Article
Abstract: The crypto-jacking attack is a novel type of cyber-attack that has emerged with the popularity of digital currencies. These attacks are the most common type in the cryptocurrency field because of specific features such as easy deployment, untraceability, and ease of concealment. In crypto-jacking attacks, it is common to embed malicious code inside website scripts. Different techniques have been proposed to deal with crypto-jacking attacks, but attackers bypass them by limiting resource usage. The crypto-mining services provided on the internet are legal, and due to the anonymous nature of cryptocurrencies, client identification is a challenging task. Improving the accuracy and performance of crypto-jacking attack detection methods is the main objective of this study. In this paper, a hybrid network-based method is proposed to identify these attacks and achieve better, more accurate results. The proposed solution (CMShark) is a combination of machine learning (ML) models, IP blacklisting, and payload inspection. In the ML model, packets are classified using size patterns; in IP blacklisting, attacks are detected based on known infected addresses and infected scripts; in payload inspection, the packet payload is searched for suspicious keywords. The proposed method relies solely on the network and is deployed at the network edge, making it infrastructure-independent. The proposed detection model reaches an accuracy of 97.02%, an F1-score of 96.90%, and a ROC AUC of 97.20% in input NetFlow classification, and an accuracy of 93.98%, an F1-score of 94.30%, and a ROC AUC of 97.30% in output NetFlow classification.
Keywords: Intrusion detection system, crypto-jacking, NetFlow-based detection, machine learning, blacklisting, payload inspection
DOI: 10.3233/IDT-240319
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2255-2273, 2024
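CMShark's three complementary checks — a packet-size classifier, an IP blacklist, and payload keyword inspection — compose naturally into a single detection function. The sketch below is illustrative only: the blacklist entries, keywords, size thresholds, and the stand-in for the ML size model are all invented, not the paper's implementation.

```python
# Invented examples (documentation-range IPs and common mining-protocol terms).
BLACKLIST = {"198.51.100.7", "203.0.113.9"}
MINING_KEYWORDS = ("stratum", "cryptonight", "coinhive")

def size_pattern_suspicious(packet_sizes, low=90, high=110):
    """Stand-in for the ML model: mining traffic tends to exchange steady,
    similar-sized messages, so flag a run of near-constant packet sizes."""
    return len(packet_sizes) >= 4 and all(low <= s <= high for s in packet_sizes)

def detect(flow):
    """Apply the three checks in order; the cheapest checks run first."""
    if flow["dst_ip"] in BLACKLIST:
        return "blocked: blacklisted IP"
    if any(k in flow["payload"].lower() for k in MINING_KEYWORDS):
        return "blocked: mining keyword in payload"
    if size_pattern_suspicious(flow["packet_sizes"]):
        return "blocked: suspicious size pattern"
    return "allowed"

flows = [
    {"dst_ip": "203.0.113.9", "payload": "", "packet_sizes": [500]},
    {"dst_ip": "192.0.2.1", "payload": '{"method":"login"}',
     "packet_sizes": [98, 101, 99, 100]},
    {"dst_ip": "192.0.2.2", "payload": "GET /index.html", "packet_sizes": [1400, 60]},
]
verdicts = [detect(f) for f in flows]
```

Running only on flow metadata and payload bytes keeps the checks at the network edge, matching the abstract's infrastructure-independence claim.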
Authors: Aarthi, G. | Priya, S. Sharon | Banu, W. Aisha
Article Type: Research Article
Abstract: Anomaly detection in Intrusion Detection System (IDS) data refers to identifying and flagging unusual or abnormal behaviour within a network or system. In the context of IoT, anomaly detection helps identify abnormal or unexpected behaviour in the data generated by connected devices. Existing methods often struggle to detect anomalies accurately amid massive data volumes and diverse attack patterns. This paper proposes a novel approach, KDE-KL Anomaly Detection with Random Forest Integration (KRF-AD), which combines Kernel Density Estimation (KDE) and Kullback-Leibler (KL) divergence with a Random Forest (RF) for effective anomaly detection. The RF integration enables classification of data points as anomalous or normal based on their features and anomaly scores. The combination of statistical divergence measurement and density estimation enhances detection accuracy and robustness, contributing to more effective network security. Experimental results demonstrate that KRF-AD achieves 96% accuracy and outperforms other machine learning models in detecting anomalies, offering significant potential for enhancing network security.
Keywords: Anomaly detection, intrusion detection system, Kullback-Leibler divergence, kernel density estimation, random forest, machine learning
DOI: 10.3233/IDT-240628
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2275-2287, 2024
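The two statistical ingredients named in the abstract — a kernel density estimate for "how unlikely is this point under normal traffic" and a KL divergence for "how far has the traffic distribution drifted" — can each be sketched in a few lines. This is a minimal 1-D illustration with invented data; the RF integration that consumes these scores is omitted.

```python
import math

def gaussian_kde(data, h=0.5):
    """1-D kernel density estimate with Gaussian kernels of bandwidth h."""
    norm = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return lambda x: norm * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence between two probability histograms."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Fit the density of a normal-traffic feature centred near 10.0.
train = [9.5, 9.8, 10.0, 10.1, 10.3, 9.9, 10.2]
density = gaussian_kde(train)

# Low density under the training KDE = high anomaly score.
score_normal = -math.log(density(10.0))
score_outlier = -math.log(density(25.0))

# KL divergence between a test-window histogram and the training histogram;
# identical histograms give divergence 0, drifted ones a positive value.
p = [0.7, 0.2, 0.1]
kl_same = kl_divergence(p, p)
drift = kl_divergence([0.2, 0.2, 0.6], p)
```

In a full KRF-AD-style pipeline, per-point scores like `score_outlier` and window-level divergences like `drift` would be appended to the feature vector handed to the Random Forest.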
Authors: Sharma, Ashish
Article Type: Research Article
Abstract: In the fiercely competitive landscape of modern business, establishing and maintaining strong brand relationships has become pivotal for organizations seeking sustainable success. This research explores the dynamics of brand relationships by examining the influence of three core constructs: brand commitment, brand reliability, and brand attitude. Drawing upon an extensive review of the branding and consumer behaviour literature, the study develops a comprehensive theoretical framework that elucidates the interplay among these factors. We hypothesize that brand commitment, brand reliability, and brand attitude are independent factors and that brand relationship is the dependent factor. To test our hypotheses empirically, we employ a rigorous research design incorporating quantitative data collection. Data were gathered directly from survey participants through questionnaires and analysed using SPSS and AMOS software with various statistical techniques, including SEM, CFA, descriptive analysis, regression analysis, ANOVA, reliability testing, correlation analysis, and the Kaiser-Meyer-Olkin (KMO) and Bartlett's tests. The sample consisted of 600 respondents drawn from a wide range of organizations. The analyses reveal that brand commitment, brand reliability, and brand attitude all have statistically significant impacts on brand relationship: consumers with a profound dedication to a brand are inclined to develop robust and lasting connections with it. Future investigations could explore factors that moderate or mediate these relationships, and examining these dynamics across diverse industries and cultural settings could provide a more comprehensive grasp of brand relationships.
Keywords: Brand commitment, brand reliability, brand attitude, brand relationship, consumer behaviour, branding dynamics
DOI: 10.3233/IDT-240686
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2289-2305, 2024
Authors: Abdollahi, Mahdi | Bouyer, Asgarali | Arasteh, Bahman
Article Type: Research Article
Abstract: In recent years, the Cuckoo Optimization Algorithm (COA) has been widely used to solve various optimization problems due to its simplicity, efficacy, and ability to avoid getting trapped in local optima. However, COA has limitations, such as slow convergence when solving constrained optimization problems with many constraints. This study proposes a new modified and adapted version of the Cuckoo Optimization Algorithm, referred to as MCOA, that addresses the challenge of solving constrained optimization problems. The adapted version introduces a new coefficient that reduces the egg-laying radius, thereby enabling faster convergence to the optimal solution. Unlike previous methods, the new coefficient does not require any adjustment during the iterative process, as the radius decreases automatically along the iterations. To handle constraints, we employ the penalty method, which allows constraints to be incorporated into the optimization problem without altering its formulation. To evaluate the performance of the proposed MCOA, we conduct experiments on five well-known case studies. Experimental results demonstrate that MCOA outperforms COA and other state-of-the-art optimization algorithms in terms of both efficiency and robustness. Furthermore, MCOA reliably finds the global optimal solution for all the tested problems within a reasonable number of iterations.
Keywords: Nonlinear optimization problem, constrained problems, engineering designing problems, penalty function, cuckoo optimization algorithm
DOI: 10.3233/IDT-240306
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2307-2337, 2024
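Two of the abstract's ingredients are easy to sketch: the exterior penalty method (add a large cost for each violated constraint) and a search radius that decays automatically with the iteration counter, with no manual adjustment. The toy single-variable random search below is not the cuckoo algorithm itself; the decay schedule, penalty weight, and test problem are invented for illustration.

```python
import random

def penalized(f, constraints, x, mu=1e3):
    """Exterior penalty: add mu * violation^2 for each constraint g(x) <= 0."""
    return f(x) + mu * sum(max(0.0, g(x)) ** 2 for g in constraints)

def shrinking_radius_search(f, constraints, x0, radius=2.0, iters=400, seed=1):
    """Toy single-variable search whose sampling radius decays as radius / t,
    mimicking an egg-laying radius that shrinks automatically with the
    iteration count."""
    rng = random.Random(seed)
    best = x0
    best_val = penalized(f, constraints, best)
    for t in range(1, iters + 1):
        r = radius / t                      # automatic decay, no tuning
        for _ in range(10):                 # a few candidates per iteration
            cand = best + rng.uniform(-r, r)
            val = penalized(f, constraints, cand)
            if val < best_val:
                best, best_val = cand, val
    return best

# Minimise (x - 3)^2 subject to x <= 2, written as g(x) = x - 2 <= 0.
# The constrained optimum is x = 2 (objective value 1).
f = lambda x: (x - 3.0) ** 2
g = lambda x: x - 2.0
x_star = shrinking_radius_search(f, [g], x0=0.0)
```

The penalty term steers the search toward the boundary x = 2 rather than the unconstrained minimum at x = 3, exactly the behaviour the penalty method is meant to produce.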
Authors: Anand, Sameer | Jaiswal, Ajay | Verma, Vibha | Singh, Atul
Article Type: Research Article
Abstract: The operational environment of software differs from the debugging environment. This study therefore explores the impact of irregular consumption by diverse users on software development cost, software reliability, and release decisions. To accomplish this, a release model has been formulated based on a logistic testing-coverage-based reliability growth model built as a stochastic process that includes error generation. The stochastic nature is captured by a noise factor representing irregular fluctuations during testing, while usage uncertainty is captured by a constant parameter in the cost function of the operational phase. It is assumed that the testing-phase cost is affected by the noise and the operational-phase cost is affected by the severity of the noise resulting from uncertain usage. The model was evaluated against a real failure dataset. The release model creates a trade-off between software development cost, release timing, and reliability aspirations. This study contributes to the software reliability literature and provides insights for practitioners making software release decisions. The sensitivity analysis results identify the aspects of the operational phase that affect the overall development cost.
Keywords: Testing coverage, noise-based SRGM, error generation, operational phase, release planning, environmental factor, sensitivity analysis
DOI: 10.3233/IDT-240307
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2339-2352, 2024
Authors: Yadav, Priyanka | Jain, Anshul | Pathak, Nitish | Sharma, Neelam
Article Type: Research Article
Abstract: The primary aim of the present research is to investigate the factors that precede digital payments and their effect on the behavioural intentions of consumers residing in both rural and urban areas. Different factors drawn from various theories are considered: perceived usefulness (PU), compatibility (CO), performance expectancy (PE), transaction speed (TS), trust (TR), unavailability of facilitating conditions (UFC), and operational constraints (OC). To collect data, an online survey was administered, and 557 participants responded. The model's validity and reliability were established using confirmatory factor analysis (CFA), while the hypothesized relationships were analysed using structural equation modelling (SEM) in AMOS. It is concluded that PU, CO, PE, TS, TR, UFC, and OC all affect the behavioural intentions of both urban and rural users. Moreover, multi-group analysis was applied to identify differences in behaviour between rural and urban users. The multi-group AMOS analysis revealed that the paths from TS, CO, TR, and UFC to intent to use differed between the two groups, whereas OC, PE, and PU showed the same path coefficients. Finally, some managerial implications are offered for further study.
Keywords: Digital payment system, transaction speed, trust, compatibility
DOI: 10.3233/IDT-240659
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2353-2370, 2024
Authors: Shelke, Vishakha | Jadhav, Ashish
Article Type: Research Article
Abstract: Influence maximization (IM) in dynamic social networks is an optimization problem that analyses changes in social networks over different periods. However, existing IM methods ignore the context propagation of interaction behaviours among users. Hence, context-based IM in multiplex networks is proposed here. Initially, multiplex networks along with their contextual data are taken as input. Community detection is performed on the network using the Wilcoxon Hypothesized K-Means (WH-KMA) algorithm. From the detected communities, the homogeneous network is used to extract network topological features, and the heterogeneous networks are used for influence-path analysis, based on which the node connections are weighted. The influence-path-based features along with contextual features are then extracted and passed to a link prediction model based on the Parametric Probability Theory-based Long Short-Term Memory (PPT-LSTM) model. Finally, the most influential nodes are identified from the network graph using the Linear Scaling based Clique (LS-Clique) detection algorithm. The experimental outcomes reveal that the proposed model achieves enhanced performance.
Keywords: Influence maximization (IM), social influence analysis, multiplex networks, Wilcoxon Hypothesized community detection, linear scaling based influencing nodes identification, parametric probability theory-based link prediction
DOI: 10.3233/IDT-230804
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2371-2387, 2024
Authors: Liu, Yishu | Hou, Jia
Article Type: Research Article
Abstract: Expanding and remaining competitive in the current economic environment requires companies to embrace digital transformation. In the framework of Industry 4.0, the network of interconnected machines, sensors, and software known as the IIoT plays a crucial role in transforming conventional manufacturing facilities into smart factories, notably in monitoring and optimising the manufacturing process. Issues of enormous record storage and responsiveness challenge conventional automated methods in the IIoT, whereas cognitive systems optimally adjust production settings by managing uncertainty and sensory inputs. This work uses an Internet of Things-based decision support system with cognitive automation (IoT-DSS-CA) for industrial informatics across the board, including data collection, transmission, processing, and storage. Incorporating elements frequently neglected during digital transformation, the suggested method uses the business process management (BPM) paradigm to provide a systematic approach that industrial organizations may employ on their path towards Industry 4.0. The proposed mechanism is thoroughly investigated and evaluated against an original solution using several sensing and decision-making features in industrial parameter settings determined by Simple Additive Weighting (SAW) and the Analytic Hierarchy Process (AHP).
Keywords: Businesses, Industrial Internet of Things (IIoT), cognitive automation, Industry 4.0, business process management
DOI: 10.3233/IDT-230636
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2389-2406, 2024
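Simple Additive Weighting (SAW), one of the two multi-criteria methods the abstract uses for parameter settings, normalises each criterion column (against the maximum for benefit criteria, or as min/x for cost criteria) and takes a weighted sum. The criteria, weights, and configuration values below are invented for illustration.

```python
def saw_rank(alternatives, weights, benefit):
    """Score alternatives by Simple Additive Weighting and return them
    sorted best-first. `benefit[j]` is True for benefit criteria (higher
    is better) and False for cost criteria (lower is better)."""
    n_crit = len(weights)
    cols = [[row[j] for row in alternatives.values()] for j in range(n_crit)]
    scores = {}
    for name, row in alternatives.items():
        s = 0.0
        for j, x in enumerate(row):
            # Benefit: x / max; cost: min / x, so both normalise into (0, 1].
            norm = x / max(cols[j]) if benefit[j] else min(cols[j]) / x
            s += weights[j] * norm
        scores[name] = round(s, 4)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Three machine configurations scored on throughput (benefit),
# latency (cost) and energy use (cost). All numbers are invented.
alts = {"config_A": [120.0, 30.0, 5.0],
        "config_B": [100.0, 20.0, 4.0],
        "config_C": [140.0, 45.0, 7.0]}
ranking = saw_rank(alts, weights=[0.5, 0.3, 0.2], benefit=[True, False, False])
```

Here `config_B` wins despite the lowest raw throughput, because its low latency and energy cost dominate once the criteria are normalised and weighted — the kind of trade-off SAW is designed to expose.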
Authors: Cai, Yuan
Article Type: Research Article
Abstract: A series of developments in information technology, such as the Internet of Things, big data, and digital twin technology, have emerged and gained significance. Targeting the hierarchical confusion and inadequate visualization of traditional logistics and warehousing systems, this study begins by analysing the framework structure of the warehousing system. It uses a genetic algorithm to obtain the solution set for optimizing cargo-pull objectives, and then proposes a novel intelligent IoT logistics and warehousing system that integrates digital twin technology. The experimental results indicated that the genetic algorithm could optimize up to 60% of the cargo-pull optimization objective function in this model with at least 300 iterations. The simulated and actual times for outgoing and incoming storage under this model varied between 0 and 1, with an error of at least 0.1 seconds throughout the range. The study found that the storage density reached a maximum of nearly 98%, while the storage cost ranged from approximately $3 to $9 per order. Overall, the proposed model can help enterprises optimize their operations by improving efficiency and reducing logistics and warehousing costs, ultimately promoting the digital and intelligent development of the logistics industry.
Keywords: Internet of things, logistics and warehousing systems, digital twins, genetic algorithms, cargo pull optimization
DOI: 10.3233/IDT-240324
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2407-2420, 2024
Authors: Chen, Ke | Zhang, Tingting | Zhao, Yuanxing | Qian, Taiyu
Article Type: Research Article
Abstract: The exponential expansion of information has made text feature extraction based on simple semantic information insufficient for the multidimensional recognition of textual data. In this study, we construct a text semantic structure graph from multiple perspectives and introduce co-occurrence-granularity weight coefficients and node clustering coefficients to enhance the link prediction model, in order to comprehensively capture the structural information of the text. First, we jointly build the semantic structure graph from three proposed perspectives (scene semantics, text weight, and graph structure) and propose a candidate keyword set in conjunction with an information probability retrieval model. Subsequently, we propose co-occurrence-granularity weight coefficients and node clustering coefficients to improve the link prediction model based on the semantic structure graph, enabling a more comprehensive acquisition of textual structural information. Experimental results demonstrate that our method can reveal potential correlations and obtain more complete semantic structure information, while the WPAA evaluation index validates the effectiveness of our model.
Keywords: Multi-view, embedding, semantic, feature extraction, link prediction
DOI: 10.3233/IDT-240022
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2421-2437, 2024
Authors: Liu, Rui
Article Type: Research Article
Abstract: International trade, as an important component of economic exchange between countries, is of great significance for each country's economic development and for international cooperation. In international trade, the selection and evaluation of suppliers has always been a key issue. To ensure the smooth progress of trade and the controllability of quality, it is necessary to establish a target supplier evaluation system. This article used the CART (Classification and Regression Tree) algorithm to identify and analyse the impact of key factors on supplier evaluation and to classify and evaluate suppliers. An international trade target supplier evaluation system based on the CART algorithm was constructed, and its performance was tested experimentally against a system based on traditional algorithms. According to the experimental results, both the traditional algorithm and the CART algorithm performed well in application effectiveness and system user satisfaction. In application effectiveness, the traditional algorithms averaged a score of 4.3 (range 3.8 to 4.9), while the CART algorithm averaged 4.6 (range 4.2 to 5.0). User satisfaction with the CART algorithm was slightly higher than with the traditional algorithms, indicating that the CART algorithm offers better application effectiveness and user satisfaction in the design of international trade target supplier evaluation systems. Such a system can also help enterprises reduce trade risks and improve the stability and reliability of the supply chain, and it has important practical significance and application value for further promoting the development of international trade.
Keywords: Supplier evaluation system, CART algorithm, international trade, procurement efficiency, supply chain
DOI: 10.3233/IDT-230246
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2439-2454, 2024
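The core of CART is choosing, at each node, the (feature, threshold) split that minimises the weighted Gini impurity of the two children. The toy supplier features (on-time delivery rate, defect rate) and labels below are invented; a real system would grow a full tree rather than the single split shown here.

```python
def gini(labels):
    """Gini impurity of a label list: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels)) if n else 0.0

def best_split(rows, labels):
    """Exhaustively search (feature, threshold) for the lowest weighted Gini."""
    best = (None, None, float("inf"))
    for j in range(len(rows[0])):
        for t in sorted({r[j] for r in rows}):
            left = [l for r, l in zip(rows, labels) if r[j] <= t]
            right = [l for r, l in zip(rows, labels) if r[j] > t]
            if not left or not right:
                continue
            w = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
            if w < best[2]:
                best = (j, t, w)
    return best

# [on-time delivery rate, defect rate]; label 1 = "qualified supplier".
rows = [[0.95, 0.01], [0.90, 0.02], [0.60, 0.08], [0.55, 0.10], [0.92, 0.03]]
labels = [1, 1, 0, 0, 1]
feature, threshold, impurity = best_split(rows, labels)
```

On this toy data the search finds a pure split on the on-time rate at 0.60: suppliers above it are all qualified, those at or below are not, giving zero impurity.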
Authors: Li, Cong
Article Type: Research Article
Abstract: Establishing a management information system that adapts to the continuous development of agricultural production management, adopting advanced 5G technology, and comprehensively managing the entire agricultural production process is an inevitable trend in agricultural production management. To improve agricultural production informatization management, this paper combines 5G (fifth-generation) network technology to construct agricultural production and informatization management models. For the massive dynamic agricultural production management data, this paper studies an efficient multi-copy management strategy that considers load balancing and energy efficiency at the same time, so as to achieve efficient data access and energy savings. Moreover, this paper introduces a time-series-based file access heat calculation model. To determine the number of copies of each file, the paper proposes a copy-factor allocation algorithm centred on the ranking of file access popularity and gives its characteristic function. Cluster analysis shows that the agricultural production informatization management model based on 5G technology proposed in this paper can effectively improve the efficiency of agricultural information management.
Keywords: 5G network, agricultural production, informatization, management
DOI: 10.3233/IDT-240021
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2455-2470, 2024
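The copy-factor idea described in the abstract — rank files by a time-decayed access "heat" and allocate more replicas to hotter files — can be sketched as follows. The decay constant, replica bounds, file names, and access log are all invented for illustration; the paper's characteristic function is not reproduced.

```python
import math

def access_heat(access_times, now, half_life=24.0):
    """Time-series heat: recent accesses count more, with an exponential
    decay parameterised by a half-life (here in hours)."""
    lam = math.log(2) / half_life
    return sum(math.exp(-lam * (now - t)) for t in access_times)

def replica_factors(logs, now, min_copies=1, max_copies=4):
    """Rank files by heat; hotter ranks receive more copies, interpolating
    linearly between max_copies (hottest) and min_copies (coldest)."""
    heats = {f: access_heat(ts, now) for f, ts in logs.items()}
    ranked = sorted(heats, key=heats.get, reverse=True)
    span = max(len(ranked) - 1, 1)
    return {f: max_copies - round((max_copies - min_copies) * i / span)
            for i, f in enumerate(ranked)}

# Hypothetical access log: timestamps in hours, evaluated at now = 100.
logs = {
    "soil_sensors.csv": [95.0, 96.0, 97.5, 98.0],   # hot: many recent hits
    "harvest_2021.csv": [10.0, 20.0],               # cold: old accesses only
    "weather_feed.json": [90.0, 97.0],
}
copies = replica_factors(logs, now=100.0)
```

Giving the hottest file the most replicas spreads its read load across nodes (load balancing), while keeping cold files at a single copy avoids wasting storage and energy.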
Authors: A, Raja | P M, Prathibhavani | K R, Venugopal
Article Type: Research Article
Abstract: The Internet of Things (IoT) consists of smart devices with limited resources that can identify and analyse data. In IoT-enabled healthcare systems, securing IoT devices and the data they contain is complex. For these devices in the healthcare industry, edge computing can provide low-latency information services at a reasonable cost. This work proposes a security infrastructure for Software Defined Network (SDN)-based edge computing in IoT-enabled healthcare systems consisting of three steps: lightweight authentication, collaborative edge computing, and job migration. The lightweight authentication step involves both Improved Lightweight Key Management (ILKM) and Improved Elliptic Curve Cryptography (IECC) schemes to ensure authentication among the devices and edge servers. Moreover, patients' data in IoT devices are scheduled to the appropriate edge server by examining the load balancing in the collaborative edge computing phase. This is done optimally using the adopted hybrid optimization model, the Osprey Assisted Coati Optimization Algorithm (OACOA). Further, job migration takes place, in which data is allocated to an edge server by comparing the capacities of the edge servers, and data is migrated to other servers, taking migration cost into account, when an edge server's capacity is overloaded. Finally, the efficiency of the suggested OACOA scheme is evaluated against traditional models with regard to several metrics. With 30 edge servers, the OACOA scheme achieves a makespan of 385, while conventional methods obtained lower makespan ratings. Also, the OACOA approach obtained the highest security rating (0.7143) with 20 edge servers compared to existing schemes.
Keywords: IoT healthcare, SDN-based edge computing, IECC, ILKM, OACOA
DOI: 10.3233/IDT-230650
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2471-2493, 2024
Authors: Zhang, Xueqi
Article Type: Research Article
Abstract: Malicious downloading of library resources can lead to serious security risks and data leakage. To address this issue, this study proposes an intelligent algorithm based on sliding event windows for detecting malicious download behaviour. The method collects and analyses library resource download data and establishes a sliding event window model for behavioural analysis; for each event window, the algorithm performs feature extraction and behaviour classification. Experimental results showed that window size and class radius had a large impact, while clustering radius had a small impact, and the maximum topic-cluster ratio affected the false alarm rate. In the ROC curve comparison, the AUC values of the proposed method, the RBM method, a detection method based on IP request time intervals, and a detection method based on IP request frequency were 0.904, 0.879, 0.841, and 0.797, respectively. The research confirmed that sliding event windows can effectively improve the accuracy of malicious download detection for library resources, enhance the security of library resources, and protect user privacy. This study can further optimize resource utilization efficiency and promote scientific research and innovation.
Keywords: Sliding Event Window, theme relevance, library, electronic resources, malicious download
DOI: 10.3233/IDT-240382
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2495-2509, 2024
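The sliding-event-window mechanism at the heart of the abstract can be sketched as a per-user queue of recent download timestamps: events older than the window are dropped as it slides, and a user whose in-window count exceeds a limit is flagged. The window length, limit, and event log below are invented; the paper's clustering-based classification inside each window is not shown.

```python
from collections import deque

def flag_malicious(events, window=60.0, limit=3):
    """events: time-ordered (timestamp, user) pairs. Returns the set of
    users who exceed `limit` downloads within any `window`-second span."""
    recent = {}      # user -> deque of timestamps inside the current window
    flagged = set()
    for t, user in events:
        q = recent.setdefault(user, deque())
        q.append(t)
        while q and t - q[0] > window:   # slide: drop events outside window
            q.popleft()
        if len(q) > limit:
            flagged.add(user)
    return flagged

# Hypothetical download log: u1 bursts 4 downloads in 30 s; u2's downloads
# are spread out and never exceed the limit inside one window.
events = [(0, "u1"), (5, "u2"), (10, "u1"), (20, "u1"), (30, "u1"),
          (200, "u2"), (210, "u2"), (400, "u2")]
flagged = flag_malicious(events)
```

Because each event is appended and expired at most once, the whole pass is linear in the number of events, which is what makes window-based screening practical on large download logs.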
Authors: Mohanty, Niharika | Pradhan, Manaswini | Mane, Pranoti Prashant | Mallick, Pradeep Kumar | Ozturk, Bilal A. | Shamaileh, Anas Atef
Article Type: Research Article
Abstract: This manuscript presents a comprehensive approach to enhancing the accuracy of skin lesion image classification on the HAM10000 and BCN20000 datasets. Building on prior feature fusion models, this research introduces an optimized cluster-based fusion approach to address limitations observed in our previous methods. The study proposes two novel feature fusion strategies for skin lesion classification, KFS-MPA (using K-means) and DFS-MPA (using DBSCAN), which leverage optimized clustering-based deep feature fusion and the marine predator algorithm (MPA). Ten fused feature sets are evaluated using three classifiers on both datasets, and their performance is compared in terms of dimensionality reduction and accuracy improvement. The results consistently demonstrate that the DFS-MPA approach outperforms KFS-MPA and the other compared fusion methods, achieving notable dimensionality reduction and the highest accuracy levels. ROC-AUC curves further support the superiority of DFS-MPA, highlighting its exceptional discriminative capability. Five-fold cross-validation tests and a comparison with the previously proposed feature fusion method (FOWFS-AJS) confirm the effectiveness of DFS-MPA in enhancing classification performance. Statistical validation based on the Friedman and Bonferroni-Dunn tests also supports DFS-MPA as a promising approach for skin lesion classification among the evaluated feature fusion methods. These findings emphasize the significance of optimized cluster-based deep feature fusion in skin lesion classification and establish DFS-MPA as the preferred choice for feature fusion in this study.
Keywords: Skin lesion image classification, feature fusion, pre-trained CNNs (VGG16, EfficientNet B0, ResNet50), marine predator algorithm (MPA)
DOI: 10.3233/IDT-240336
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2511-2536, 2024
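The cluster-based fusion step described in the abstract above can be illustrated with a minimal sketch. This is not the authors' KFS-MPA/DFS-MPA implementation: the deep feature extraction and MPA optimization are omitted, the function names are hypothetical, and a tiny pure-Python k-means stands in for the clustering stage. The idea shown is only the core of clustering-based dimensionality reduction: group feature columns into clusters and keep one representative column per cluster.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(vectors):
    """Component-wise mean of a non-empty list of vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means; returns centroids and each point's cluster index."""
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: dist2(cents[c], p)) for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                cents[c] = mean(members)
    return cents, assign

def cluster_fuse(features, k):
    """Hypothetical fusion step: cluster the feature *columns* and keep,
    per cluster, the column nearest its centroid, yielding a reduced
    samples-by-(at most k) matrix."""
    cols = [list(col) for col in zip(*features)]   # one vector per dimension
    cents, assign = kmeans(cols, k)
    keep = sorted(
        min((i for i, a in enumerate(assign) if a == c),
            key=lambda i: dist2(cols[i], cents[c]))
        for c in range(k) if c in assign
    )
    return [[row[i] for i in keep] for row in features]
```

Swapping the k-means call for a density-based clusterer would give the DBSCAN-flavoured variant; the representative-selection idea stays the same.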
Authors: RS, Rajasree | Pede, Shailaja V. | Kharat, Reena | S, Pooja Sharma | GS, Gopika | Bansode, Suyoga
Article Type: Research Article
Abstract: Alzheimer's disease (AD) is a neurologic brain condition that damages brain cells and eventually renders a patient incapable of performing routine daily tasks. Owing to its outstanding spatial clarity, wide availability, and strong contrast, MRI has been widely used in analyses pertaining to AD. This work develops an AD classification model using MRI images. Preprocessing is performed with a Gabor filter, after which the Improved U-Net segmentation model is employed for image segmentation. The extracted features comprise modified LGXP, LTP, and LBP features. Finally, a Deep Ensemble Classifier (DEC) model is proposed for AD classification, combining RNN, DBN, and Deep Maxout Network (DMN) classifiers. To enhance classification performance, the optimal weight of the DMN is adjusted using the Self-Customized BWO (SC-BWO) model. The outputs from the DEC are averaged to obtain the final result. Finally, Dice and Jaccard scores are analyzed to demonstrate the improvement offered by the SC-BWO scheme.
Keywords: Alzheimer’s disease, MRI image, modified LGXP, deep ensemble, SC-BWO algorithm
DOI: 10.3233/IDT-230524
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2537-2557, 2024
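Of the texture descriptors listed in the abstract above, the Local Binary Pattern (LBP) is the simplest to sketch. The following is a minimal illustrative implementation of the standard 8-neighbour LBP, not the paper's modified LGXP or LTP variants; the function names are ours.

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for the pixel at (r, c):
    each neighbour whose intensity is >= the centre sets one bit."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels,
    usable as a fixed-length texture feature vector."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

A flat region produces the all-ones code (255) everywhere, which is why LBP histograms respond to texture rather than absolute brightness.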
Authors: Lohani, Dhruv Chandra | Rana, Bharti
Article Type: Research Article
Abstract: Diagnosed in millions of children, ADHD is the leading mental health concern in childhood. Accurate analysis of ADHD and its subtypes (Hyperactive (ADHD-H), Combined (ADHD-C), and Inattentive (ADHD-I)) requires several steps and many personal characteristics (PC) drawn from various sources. Moreover, there is no standard automatic diagnostic tool for differentiating ADHD, its subtypes, and typically developing (TD) children using PC data. The present work focuses on developing a machine learning-based automatic diagnostic tool for the classification of TD, ADHD, and its subtypes using PC data, which can be helpful for clinicians. In this work, eight datasets (D1 to D8, four balanced and four unbalanced) were constructed from a publicly available dataset, and three sets of features were built. Five popular classifiers, namely K-Nearest Neighbor (KNN), Logistic Regression Classifier (LRC), Random Forest (RF), Support Vector Machine (SVM), and Radial Basis Function Support Vector Machine (RBSVM), were trained on the datasets. To evaluate performance comprehensively, ten iterations of a 10-fold cross-validation approach were used to calculate average classification accuracy, recall, specificity, and F1 score. Gender, IQMeasure, Full4IQ, and Handedness were observed to be relevant for the classification. Overall, RBSVM outperformed the other classifiers in most cases.
Keywords: Attention-deficit/hyperactivity disorder, personal characteristics, classification, machine learning, feature selection
DOI: 10.3233/IDT-230223
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2559-2575, 2024
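The evaluation protocol in the abstract above (k-fold cross-validation with averaged accuracy, recall, specificity, and F1) is generic enough to sketch. This is an illustrative, classifier-agnostic version with hypothetical function names, not the authors' pipeline: any `fit_predict(X_train, y_train, X_test)` callable can be plugged in, and a simple round-robin split stands in for whatever fold assignment the paper used.

```python
def fold_metrics(y_true, y_pred):
    """Accuracy, recall, specificity and F1 from binary labels (1 = positive)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    rec = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return [acc, rec, spec, f1]

def cross_validate(X, y, fit_predict, k=10):
    """Round-robin k-fold split; returns the four metrics averaged over folds."""
    folds = [[i for i in range(len(X)) if i % k == f] for f in range(k)]
    sums = [0.0] * 4
    for test in folds:
        test_set = set(test)
        train = [i for i in range(len(X)) if i not in test_set]
        preds = fit_predict([X[i] for i in train], [y[i] for i in train],
                            [X[i] for i in test])
        m = fold_metrics([y[i] for i in test], preds)
        sums = [s + v for s, v in zip(sums, m)]
    return [s / k for s in sums]
```

Repeating `cross_validate` over ten shuffles and averaging again would reproduce the "ten iterations of 10-fold cross-validation" scheme the abstract describes.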
Authors: Li, Qiang
Article Type: Research Article
Abstract: Today, Internet and wireless communication technologies are increasingly mature, bringing massive network information resources to all sectors of society. At the same time, data loss caused by illegal network intrusion is increasingly common, making it necessary to identify and handle intrusions with an IDS (intrusion detection system). Data processing is equally important: enterprises can build a data management system according to their own needs and use it to process data. With advances in science and technology, AI (Artificial Intelligence) has matured and been applied in many industries. This paper therefore proposed building an AI IDS and used a deep RL (reinforcement learning) algorithm to analyze the system's performance. The system was tested and analyzed in terms of precision and recall. The experimental results showed that the average precision over the five data sets was 94.76% and the average recall was 91.4%, indicating that, combined with the proposed algorithm, the precision and recall of the system improved significantly. This paper also conducted benchmark energy consumption comparison experiments for different cloud data management systems. The results showed that HBase had the lowest benchmark energy consumption for loading, at 86 kJ, while GridSQL had the lowest for querying, at 56 kJ. Thus, different systems have their own advantages in the benchmark energy consumption of loading and querying.
Keywords: Artificial intelligence, intrusion detection technology, artificial intelligence intrusion detection system, data management system
DOI: 10.3233/IDT-240388
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2577-2588, 2024
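The abstract above pairs an IDS with a deep reinforcement-learning algorithm. As a minimal illustration of the RL core only (the paper does not describe its model, states, or rewards, so everything here, including the state names and action meanings, is hypothetical), a single tabular Q-learning update looks like:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q[state][action]

# Hypothetical IDS framing: states are coarse traffic profiles,
# actions are {0: "allow", 1: "raise alert"}.
Q = {
    "normal":     {0: 0.0, 1: 0.0},
    "suspicious": {0: 0.0, 1: 1.0},   # alerting here already looks valuable
}
# Raising an alert in the "normal" state, then observing "suspicious"
# traffic, pulls value back through the bootstrapped max over next actions.
q_update(Q, "normal", 1, reward=0.0, next_state="suspicious")
```

A deep RL system replaces the table `Q` with a neural network over traffic features, but the update rule it approximates is this one.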
Authors: Chen, Dong | Wu, Yang
Article Type: Research Article
Abstract: This research aims to lay a solid foundation for behavior portrait construction in the fight against telecom fraud. The study explores the integration of communication AI and big data technologies from the perspective of artificial intelligence. Using insights obtained from a telecom fraud detection model that relies on users' behavior variations expressed through time-varying signatures, it seeks to enhance fraud prevention strategies in the telecom industry. Through the examination of call detail records and customer profile information, the TeleGuard AI Fraud Prevention Framework (TGAI-FPF) aims to recognize suspicious trends and variations that are potentially indicative of fraudulent actions. Through the utilization of advanced analytics and machine learning algorithms, the model generates behavior portraits that capture the distinctive aspects of fraudulent conduct in telecom networks. The study highlights the significance of leveraging big data analytics and artificial intelligence technologies to efficiently detect and thwart fraudulent activity in the telecom industry. The results of this study are expected to fortify the defenses of telecom networks against growing fraudulent schemes and to support the development of preventative measures to combat fraud.
Keywords: Telecom fraud prevention, communication big data, artificial intelligence (AI) technology, behavior portraits, telecommunication networks, fraud detection, machine learning, behavioral indicators, fraud threats mitigation
DOI: 10.3233/IDT-240386
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2589-2605, 2024
Authors: Feng, Xiaofang | Li, Liping | Chen, Qing
Article Type: Research Article
Abstract: Nowadays, as an indispensable part of urban infrastructure, urban rail transit (URT) vehicles have developed rapidly. The construction of URT requires large investments of manpower, material resources, and finance. For URT vehicles, research on more accurate fault prediction methods can save substantial maintenance costs and improve the reliability of URT construction. As the important electrical equipment through which URT vehicles obtain electric energy from the catenary, the pantograph faces ever higher performance requirements as rail transit operation advances. To address the low prediction accuracy, over-reliance on practical experience, and high cost of traditional URT vehicle pantograph fault prediction models, this paper combined sensor networks with artificial intelligence algorithms, analyzed the traditional rail transit vehicle pantograph fault prediction model, and verified the proposed approach through comparative experiments. Comparative analysis of the experimental results shows that, compared with the traditional model, the proposed pantograph fault prediction model achieves higher fault prediction accuracy (an improvement of about 6.6%), shorter model response time, lower risk of pantograph failure, and higher application satisfaction. The proposed model can effectively improve the accuracy of vehicle pantograph fault prediction, greatly promoting the safety of URT and its intelligent development.
Keywords: Pantograph failure, urban rail transit, machine learning, edge feature extraction
DOI: 10.3233/IDT-230756
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2607-2619, 2024
Authors: J, Revathi | J, Anitha
Article Type: Research Article
Abstract: This research investigates various deep learning techniques for automatically classifying Left Ventricular Hypertrophy (LVH) from electrocardiogram (ECG) signals. LVH frequently results from persistently high blood pressure, which causes the heart to pump harder and the ventricular walls to thicken. It is associated with an increased risk of heart attacks, heart failure, stroke, and sudden cardiac death. The significance of this research lies in the early and precise detection of LVH, facilitating timely interventions and ultimately improving patient health. The non-invasive nature of ECG monitoring, combined with the efficiency of deep learning models, enables faster and more accessible diagnosis, enhancing accuracy and efficiency in identifying LVH. The objective of this research is to assess and compare the performance of GRU3Net, Double-Bilayer LSTM, Conv2LSTM, and Dual-LSTM models in classifying LVH from ECG signals, using a dataset sourced from the PTB Diagnostic ECG Database. The implemented deep learning models yielded noteworthy results: the GRU3Net model achieved the highest accuracy, 96.1%, followed by the Double-Bilayer LSTM at 91.7%, while accuracy declined for the Dual-LSTM at 90.8% and the Conv2LSTM at 87.3%.
Keywords: Left ventricular hypertrophy, GRU3Net, Dual-LSTM, Double-Bilayer LSTM, Conv2LSTM model
DOI: 10.3233/IDT-240649
Citation: Intelligent Decision Technologies, vol. 18, no. 3, pp. 2621-2641, 2024
IOS Press, Inc.
6751 Tepper Drive
Clifton, VA 20124
USA
Tel: +1 703 830 6300
Fax: +1 703 830 2300
sales@iospress.com
For editorial issues, like the status of your submitted paper or proposals, write to editorial@iospress.nl
IOS Press
Nieuwe Hemweg 6B
1013 BG Amsterdam
The Netherlands
Tel: +31 20 688 3355
Fax: +31 20 687 0091
info@iospress.nl
For editorial issues, permissions, book requests, submissions and proceedings, contact the Amsterdam office info@iospress.nl
Inspirees International (China Office)
Ciyunsi Beili 207(CapitaLand), Bld 1, 7-901
100025, Beijing
China
Free service line: 400 661 8717
Fax: +86 10 8446 7947
china@iospress.cn
For editorial issues, like the status of your submitted paper or proposals, write to editorial@iospress.nl
If you need assistance with publishing or have any suggestions, write to: editorial@iospress.nl