Impact Factor 2024: 0.6
The field of intelligent decision technologies is interdisciplinary in nature, bridging computer science with its development of artificial intelligence, information systems with its development of decision support systems, and engineering with its development of systems. IDT seeks to attract research that is focused on applied technology while exploring its development at a fundamental level. IDT seeks an interchange of research on intelligent systems and intelligent technologies which enhance or improve decision-making in industry, government and academia. IDT publishes research on all aspects of intelligent decision technologies, from fundamental development to the applied system. The journal is concerned with theory, design, development, implementation, testing and evaluation of intelligent decision systems that have the potential to support decision making in the areas of management, international business, finance, accounting, marketing, healthcare, military applications, production, networks, traffic management, crisis response, human interfaces and other applied fields.
The target audience is researchers in computer science, business, commerce, health science, management, engineering and information systems who develop and apply intelligent decision technologies. Authors are invited to submit original unpublished work that is not under consideration for publication elsewhere.
Authors: Chen, Huanying | Wei, Bo | Huang, Zhaoji
Article Type: Research Article
Abstract: In the age of big data, electronic data has developed rapidly and is gradually replacing traditional paper documents: in daily life, all kinds of records are saved as electronic documents, which has driven the development of electronic depository systems. Electronic deposition stores actual events as electronic data through information technology in order to prove the time and content of those events, and its application scenarios are extensive, including electronic contracts, online transactions and intellectual property rights. However, because electronic data is vulnerable, existing electronic data depository systems carry security risks: their contents are easily tampered with or destroyed, resulting in the loss of deposited information. Moreover, because existing systems are complex to operate, non-standard operation by some users can reduce the authenticity of the deposited information. To solve these problems, this paper designs an electronic data depository system based on cloud computing and blockchain technology. Because blockchain storage is decentralized and its content cannot be tampered with, it can effectively ensure the integrity and security of electronic information and is well suited to depository scenarios. The paper first introduces the development of electronic data depository systems and cloud computing, then optimizes the depository system through a cloud computing task scheduling model, and finally verifies the feasibility of the system through experiments.
The data showed that the functional efficiency indexes of the system for the electronic data sampling-point storage function, upload of documents to be stored, download of stored documents, viewing of deposited information, and file-versus-certificate comparison verification reached 0.843, 0.821, 0.798, 0.862 and 0.812 respectively, while the corresponding indexes of the traditional electronic data depository system were 0.619, 0.594, 0.618, 0.597 and 0.622. These results show that an electronic data depository system based on cloud computing and blockchain can effectively manage electronic data and make it easy for relevant personnel to verify it.
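The tamper-evidence property the abstract relies on can be illustrated with a minimal hash chain, a much-simplified sketch of the blockchain idea (not the paper's actual system): each deposit record stores the hash of its predecessor, so altering any stored document invalidates every later link.

```python
import hashlib

def record_hash(prev_hash: str, payload: str) -> str:
    """Hash a deposit record together with the previous record's hash."""
    return hashlib.sha256((prev_hash + payload).encode("utf-8")).hexdigest()

def build_chain(payloads):
    """Build a hash chain: each entry is (payload, hash over prev_hash + payload)."""
    chain, prev = [], "0" * 64  # genesis hash
    for p in payloads:
        h = record_hash(prev, p)
        chain.append((p, h))
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Recompute every link; any tampered payload breaks all later hashes."""
    prev = "0" * 64
    for payload, h in chain:
        if record_hash(prev, payload) != h:
            return False
        prev = h
    return True

chain = build_chain(["contract A", "invoice B", "patent filing C"])
assert verify_chain(chain)            # intact chain verifies
chain[1] = ("invoice B (forged)", chain[1][1])
assert not verify_chain(chain)        # tampering is detected
```

A real depository would distribute this chain across nodes; the point here is only that integrity checking reduces to recomputing hashes.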
Keywords: Electronic data, deposit system, cloud computing, blockchain technology, task scheduling model
DOI: 10.3233/IDT-230152
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2643-2656, 2024
Authors: Xiao, Jian
Article Type: Research Article
Abstract: Machine learning algorithms have been widely used in risk prediction and management systems for financial data. Early warning and control of financial risks is an important area of corporate investment decision-making, which can effectively reduce investment risk and ensure a company's stable development. With the development of the Internet of Things, an enterprise's financial information is obtained through the various intelligent devices in its financial system, and big data provides high-quality services for the economy and society in the information era. However, financial data is large in volume, complex and variable, which makes its analysis very difficult, and as machine learning algorithms are applied in depth, their shortcomings are gradually exposed. To this end, this paper collects the financial data of a listed group from 2005 to 2020 and performs data preprocessing and feature selection, including the removal of missing values, outliers and unrelated items. The data is then divided into a training set, used to train the models, and a testing set, used to evaluate their performance. Three data control models are built and compared: one based on a machine learning algorithm, one based on a deep learning network, and the model based on artificial intelligence and big data technology proposed in this paper. For the comparison of risk event prediction, two indicators are used to measure model performance: accuracy and mean squared error (MSE). Accuracy reflects the predictive ability of the model and is the proportion of correctly predicted samples out of the total sample size; mean squared error evaluates the error of the model and is the average of the squared deviations between the predicted and true values.
In this paper, the prediction results of the three methods are compared with the actual values to obtain and compare their accuracy and mean squared error. The experimental results show that the model based on artificial intelligence and big data technology proposed in this paper has higher accuracy and smaller mean squared error than the other two models and achieves 90% accuracy in risk event prediction, which demonstrates its stronger ability to control financial data risk.
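The two metrics can be stated concretely. This is a generic sketch of the standard definitions with made-up toy values, not the paper's code or data:

```python
def accuracy(y_true, y_pred):
    """Proportion of samples whose predicted label matches the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error: the average of the squared deviations."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# toy example with hypothetical labels and predictions
assert accuracy([1, 0, 1, 1], [1, 0, 0, 1]) == 0.75         # 3 of 4 correct
assert mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]) == (0.0 + 0.25 + 1.0) / 3
```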
Keywords: Machine learning algorithm, financial risk control, big data control, cyborg sensation, Internet of Things
DOI: 10.3233/IDT-230156
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2657-2670, 2024
Authors: Ding, Haijuan | Zhao, Chengtao | Zhao, Debiao | Dong, Hairong
Article Type: Research Article
Abstract: An Electrical Discharge Machine (EDM) is a spark-erosion machine that shapes metal by passing controlled sparks between two electrodes. In the EDM process, material is removed between the two electrodes by applying a high voltage across the workpiece in a dielectric liquid; the voltage between the electrodes is gradually increased to break down the dielectric and remove material from the surface. EDM uses several parameters to regulate the material removal rate during this process, and the parameters are changed for every experiment to minimize the Average Tool Wear Rate (ATWR). Existing techniques use a neural model to optimize the EDM parameters; however, traditional approaches fail to perform the fine-tuning process, which affects the Material Removal Rate (MRR) and ATWR. Therefore, meta-heuristic optimization techniques are incorporated with the neural model to enhance the EDM parameter optimization process. In this work, EDM experimental data are collected and processed by the learning process to create the training pattern. The test data are then investigated using the Backpropagation Neural Model (BPM) to propagate the neural parameters, and the BPM is integrated with the Butterfly Optimization Algorithm (BOA) to select globally optimal parameters in the search space. The analysis shows an 8.93% maximum prediction rate, a 0.023 minimum prediction rate and a 2.83% mean prediction rate when investigating the different testing patterns, compared with other methods.
Keywords: Electrical Discharging Machine (EDM), Material Removal Rate (MRR), Unsupervised Pre-trained Neural Model (UPNM), Backpropagation Neural Model (BPM), Butterfly Optimization Algorithm (BOA)
DOI: 10.3233/IDT-230157
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2671-2685, 2024
Authors: Wang, Jun
Article Type: Research Article
Abstract: From the perspective of practical development, even when macroeconomic growth is stable, regional economies are influenced by spatiotemporal factors and inevitably exhibit differences and changes that affect many aspects of social production and life. In order to understand the spatiotemporal evolution characteristics of a regional economy and promote common regional development and coordinated economic development strategies, this article takes the Beijing-Tianjin-Hebei (BTH) region as an example. Combining spatial econometric models (SEM), it collects and processes economic development data for the BTH region from 2013 to 2022 and introduces a spatial weight matrix to perform high-performance computation and analysis of the region's economic spatial correlation. On this basis, the article studies the spatiotemporal evolution characteristics of the BTH regional economy through the description and quantitative analysis of its influencing factors. The empirical results show that the global Moran's index of the BTH region was positive from 2013 to 2022 and the Z-values were all greater than 1.96, indicating a significant spatial correlation in the BTH regional economy. Economic development in the BTH region is unbalanced, but as the region continues to develop, its economic balance has improved.
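The statistic reported above, global Moran's I, is defined as I = (n/S0) · Σᵢⱼ wᵢⱼ(xᵢ−x̄)(xⱼ−x̄) / Σᵢ(xᵢ−x̄)², where W is the spatial weight matrix and S0 the sum of its entries. A minimal sketch with hypothetical values and binary contiguity weights (not the paper's BTH data):

```python
def morans_i(x, w):
    """Global Moran's I for values x and spatial weight matrix w (list of lists)."""
    n = len(x)
    mean = sum(x) / n
    z = [v - mean for v in x]                     # deviations from the mean
    s0 = sum(sum(row) for row in w)               # sum of all weights
    num = sum(w[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in z)
    return (n / s0) * (num / den)

# four regions on a line, hypothetical economic values; rook contiguity weights
x = [1.0, 2.0, 3.0, 4.0]
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i(x, w), 4))  # → 0.3333, positive: neighbours are similar
```

A positive I with |Z| > 1.96 is what the abstract interprets as significant positive spatial correlation.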
Keywords: Regional economies, spatial econometrics, evolution of spatiotemporal data, BTH region
DOI: 10.3233/IDT-230169
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2687-2700, 2024
Authors: Wu, Zongfu | Hou, Fazhong
Article Type: Research Article
Abstract: Due to the large scale and spatiotemporal dispersion of 3D (three-dimensional) point cloud data, current object recognition and semantic annotation methods still suffer from high computational complexity and slow processing, so that processing the data takes much longer than collecting it. This article studies the FPFH (Fast Point Feature Histograms) description method for local spatial features of point cloud data, achieving efficient extraction of these features, and investigates its robustness under different sample densities and noise environments. The time delay between laser emission and reception is used for distance measurement; on this basis, the measured object is scanned continuously to obtain the distance between the object and the measurement point. Following existing three-dimensional coordinate conversion methods, a two-dimensional lattice is obtained after the three-dimensional position conversion. Based on the basic requirements of point cloud data processing, the article adopts a modular approach, with core functional modules for point cloud input and output, visualization, filtering, key-point extraction, feature extraction, registration, and data acquisition, which enables efficient and convenient human-computer interaction with point clouds. The laser image recognition system used in this article screened potential objects with a success rate of 85% and an accuracy rate of 82%, showing that this laser image recognition system based on spatiotemporal data has high accuracy.
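The time-of-flight distance measurement described above reduces to d = c·t/2, since the laser pulse travels to the target and back. A minimal sketch (generic physics, not the paper's instrument code):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(delay_s: float) -> float:
    """Distance to target from the round-trip laser delay: d = c * t / 2."""
    return C * delay_s / 2.0

# a 1-microsecond round trip corresponds to roughly 149.9 m
print(tof_distance(1e-6))
```

Repeating this measurement while sweeping the beam yields the per-point ranges from which the point cloud's 3D coordinates are reconstructed.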
Keywords: Laser recognition, image system, spatiotemporal data, point cloud data
DOI: 10.3233/IDT-230161
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2701-2714, 2024
Authors: Zhang, Zhenyuan
Article Type: Research Article
Abstract: With the rapid development of the economy, the demand for electric power is increasing, and the operating quality of the power system directly affects the quality of people's production and life. The electric energy the power system provides is the foundation of social operation: by continuously optimizing the functions of the power system, the efficiency of social operation can be improved and economic benefits continuously created, thereby promoting social progress and people's quality of life. Within the power system, the power distribution network (PDN) is responsible for transmitting electricity to all parts of the country, and its transmission efficiency directly affects the operational efficiency of the whole system. PDN scheduling plays an important role in improving power supply reliability, optimizing resource allocation, reducing energy waste, and reducing environmental pollution, and is of great significance for promoting social and economic development and environmental protection. However, inflexibility in power system scheduling leads to the loss and waste of electric energy in PDN scheduling. It is therefore necessary to automate the operation of the PDN and use automation technology to improve the operational efficiency and energy utilization of the power system. This article optimizes the energy-saving management of PDN dispatching through electrical automation technology, proposing a distribution scheduling algorithm based on electrical automation. Through this algorithm, real-time monitoring, analysis, and scheduling of PDNs can be achieved, improving the efficiency and reliability of distribution systems and reducing energy consumption.
The experimental results showed that before the algorithm was applied, the high-loss distribution-to-transformation ratios of power distribution stations in the first to fourth quarters were 21.93%, 22.95%, 23.61%, and 22.47%, respectively; after it was applied, the ratios for the four quarters were 15.75%, 13.81%, 14.77%, and 13.12%. This shows that the algorithm can reduce the high-loss distribution-to-transformation ratio of power distribution stations and their distribution losses, saving electric energy. The results indicate that electrical automation technology can play an excellent role in PDN scheduling, optimizing its energy-saving management and pointing to an advanced direction for the intelligent management of PDN scheduling.
Keywords: Electrical automation, power distribution network scheduling, energy saving management, power energy, distribution dispatching algorithm
DOI: 10.3233/IDT-230121
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2715-2729, 2024
Authors: Niu, Meng
Article Type: Research Article
Abstract: Electrical device automation in smart industries integrates machines, electronic circuits, and control systems for efficient operation. The automated controls reduce human intervention and manual operations through proportional-integral-derivative (PID) controllers. Considering these devices' operational and control-loop contributions, this article introduces an Override-Controlled Definitive Performance Scheme (OCDPS). The scheme focuses on confining machine operations within their allocated time intervals, preventing loop failures. The control value for multiple electrical machines is estimated from the operational load and time in order to prevent failures. Override cases use predictive learning that incorporates previous operational logs; considering the override prediction, the control value is adjusted independently for different devices to confine variation loops. The automation features are programmed before and after loop failures to stop further operational overrides, and predictive learning independently identifies possible overrides and machine failures to increase efficacy. The proposed method is contrasted with previously established models, including ILC, ASLP, and TD3, using the parameters of uptime, errors, override time, productivity, and prediction accuracy; loops in operations and typical running times are two examples of the variables considered. The learning results are used to estimate efficiency by adjusting the operating time and loop consistency via the control values, and the discovered loop failures modify the control parameters of individual machine processes to avoid unscheduled downtime.
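The PID control step that underlies the scheme can be sketched in its textbook discrete form (a generic illustration, not the OCDPS implementation):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error (I term)
        self.prev_error = 0.0    # last error (for the D term)

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# proportional-only controller: the output is simply Kp * error
ctrl = PID(kp=2.0, ki=0.0, kd=0.0, dt=0.1)
assert ctrl.step(setpoint=10.0, measurement=7.0) == 6.0  # 2.0 * (10 - 7)
```

In an override scheme, a supervisory layer would clamp or replace this output when the predicted load or time budget is about to be exceeded.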
Keywords: Control loop, electrical automation, override control, PID, predictive learning
DOI: 10.3233/IDT-230125
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2731-2746, 2024
Authors: Lin, Chunhua
Article Type: Research Article
Abstract: Deep learning (DL) underlies many applications of artificial intelligence (AI), and cloud services are the main way modern computing capability is delivered, so the DL functions provided by cloud services have attracted great attention. AI is gradually playing an important role in many fields of life, and the demand and enthusiasm of governments at all levels for building AI computing capacity are also growing. The AI logic evaluation process is often based on complex algorithms that use or generate large amounts of data, which places high requirements on a device's data processing and storage capacity; because current data processing and information storage technology is relatively limited, this has become an obstacle to the further development of AI cloud services. Therefore, this paper studies the requirements and objectives of a cloud service system under AI by analyzing the operating characteristics, service modes and current situation of DL, constructs design principles according to those requirements, and finally designs and implements a cloud service system, thereby improving the system's algorithm scheduling quality. The data processing, resource allocation and security management capacities of the AI cloud service system were superior to those of the original cloud service system, being 7.3%, 6.7% and 8.9% higher respectively. In conclusion, DL plays an important role in the construction of AI cloud service systems.
Keywords: Cloud service mode, deep learning, artificial intelligence, cloud service system construction
DOI: 10.3233/IDT-230150
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2747-2758, 2024
Authors: Wei, Bo | Chen, Huanying | Huang, Zhaoji
Article Type: Research Article
Abstract: In order to solve the low accuracy of evaluation results caused by the impact of throughput and transmission delay on traditional systems in 6G networks, this paper proposes a design method for a network security processing system in 5G/6gNG-DSS on an intelligent model computer. Supported by the principle of active defense, the paper designs a server-side structure that uses a ScanHome SH-800/400 embedded barcode/QR-code scanning module as the scanning engine and places an evaluation device on a PA-RISC microprocessor; once the system fails, it sends an early-warning signal. By setting up control, data, and cooperation interfaces, it supports information exchange between subsystems, and pin 4 of a TL494 pulse-width modulator is used in the design of the power supply. A top-down data management method is used to design the system software flow, build a mathematical model, and introduce network entropy to weigh the benefits, realizing the system security evaluation. The experimental results show that the evaluation accuracy of the system reaches up to 98%, which can ensure the security of user information. In conclusion, the problem of active-defense network security is transformed into a dynamic analysis problem, providing an effective decision-making scheme for managers, and the system evaluation based on Packet Tracer software has high accuracy, supporting important decisions in network security analysis.
Keywords: Active defense, network security assessment, packet tracer, scanning engine, software flow design, mathematical model
DOI: 10.3233/IDT-230143
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2759-2774, 2024
Authors: Zhao, Lihui
Article Type: Research Article
Abstract: Due to the lack of data security protection, a large amount of information is maliciously leaked, which has drawn more and more attention to building information security (InfoSec). Building information involves a large number of participants, and the number of construction project files is huge, leading to a very large amount of information. However, traditional network security information protection software is mostly passive, and its autonomy is difficult to enhance. Therefore, this paper introduces a data sharing algorithm into building InfoSec management and proposes an Attribute-Based Encryption (ABE) algorithm based on data sharing, which is computationally simple and encrypts attributes strongly. The algorithm is added to the building InfoSec management system (ISMS) designed in this paper, which not only reduces the burden on relevant personnel but also offers flexible control and high security. The experimental results showed that when 10 users logged in, the stability and security of the designed system were 87% and 91% respectively; with 20 users, 89% and 92%; and with 80 users, 94% and 95%. The stability and security of the system thus reach a high level and can ensure the secure and effective management of building information.
Keywords: Data sharing, attribute based encryption, construction information, security management system
DOI: 10.3233/IDT-230144
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2775-2788, 2024
Authors: Lei, Gang | Wu, Junyi | Gu, Keyang | Jiang, Fan | Li, Shibin | Jiang, Changgen
Article Type: Research Article
Abstract: In the era of rapid development of modern internet technology, network transmission techniques are continuously iterating and updating, and the Quick UDP Internet Connections (QUIC) protocol has emerged in response to these advancements. Owing to QUIC's strong compatibility and high transmission speed, its extended version, Multipath QUIC (MPQUIC), has gained popularity. MPQUIC can integrate various transmission scenarios, achieving parallel transmission with higher bandwidth. However, due to some security flaws in the protocol, MPQUIC is susceptible to attacks from anomalous network traffic. To address this issue, we propose an MPQUIC traffic anomaly detection model based on Empirical Mode Decomposition (EMD) and Long Short-Term Memory (LSTM) networks, which can decompose and denoise the data and learn its long-term dependencies. Simulation experiments obtain MPQUIC traffic data under normal and anomalous conditions for prediction, analysis, and evaluation. The results demonstrate that the proposed model exhibits satisfactory prediction performance when trained on both normal and anomalous traffic data, enabling anomaly detection, and the evaluation metrics indicate that the EMD-LSTM-based model achieves higher accuracy than various traditional single models.
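The detect-by-prediction idea can be illustrated with a much-simplified stand-in: moving-average trend removal plus a 3-sigma residual threshold, in place of the paper's EMD decomposition and LSTM predictor. All data here are synthetic and every name is illustrative:

```python
import math

def detect_anomalies(series, window=5, train_n=100):
    """Flag points whose residual (value minus trailing moving average)
    exceeds mean + 3*std of the residuals seen on the training prefix."""
    resid = {}
    for i in range(window, len(series)):
        avg = sum(series[i - window:i]) / window
        resid[i] = series[i] - avg
    train = [resid[i] for i in range(window, train_n)]
    mu = sum(train) / len(train)
    sigma = math.sqrt(sum((r - mu) ** 2 for r in train) / len(train))
    threshold = mu + 3 * sigma
    return [i for i, r in resid.items() if r > threshold]

# smooth synthetic "traffic" with one injected burst at index 150
traffic = [math.sin(0.1 * i) for i in range(200)]
traffic[150] += 5.0
flags = detect_anomalies(traffic)
assert 150 in flags  # the burst stands far outside the learned residual band
```

An EMD-LSTM model replaces the moving average with a learned predictor, so the residual band is much tighter and subtler anomalies become detectable.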
Keywords: Multipath QUIC, network traffic, anomaly detection model, empirical mode decomposition, long short-term memory
DOI: 10.3233/IDT-230261
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2789-2810, 2024
Authors: Zhang, Lan | Luo, Yanling | Zhao, Yue
Article Type: Research Article
Abstract: The rapid development of mobile communication technology not only brings great convenience to users but also brings the risk of user data privacy leakage. Because mobile communication transmission is broadcast in nature, open wireless interfaces have become a security vulnerability of mobile devices such as mobile phones, which can easily be eavesdropped. This article studies a data privacy protection algorithm suitable for mobile communication in cellular networks based on the anonymity mechanisms of blockchain. It first analyzes the overall framework and anonymity technology of blockchain data from the perspectives of privacy and queryability; then, based on blockchain technology, it designs consistency mechanisms, privacy control, and access rights management; finally, mobile communication data privacy protection algorithms are designed and implemented through data encryption and verification. The latency of transactions under different task volumes and the throughput at different nodes were analyzed to verify the reliability of several anonymity mechanisms for cellular mobile communication data privacy protection on blockchain. The experimental results showed that the reliability of CryptoNote and of the Zerocoin and Zerocash protocol mechanisms at the specified nodes ranged between 92% and 99%. The article uses blockchain technology to distribute mobile communication data across nodes and achieve decentralized data transmission, thus protecting the privacy of wireless network communication data. The analysis of the reliability of blockchain-based anonymity mechanisms in mobile communication nodes shows that this method has research value.
Keywords: Cellular networks, mobile communications, data privacy, blockchain technology, privacy protection, byzantine algorithm
DOI: 10.3233/IDT-230233
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2811-2825, 2024
Authors: Zhang, Xuxia | Chen, Weijie | Wang, Jian | Fang, Rang
Article Type: Research Article
Abstract: With the rapid development of information technology and the rapid popularization of the Internet, people enjoy the convenience and efficiency brought by new technologies but also suffer the harm caused by cyber attacks. Besides the need to thwart network attacks efficiently, the high volume of complicated security event data can inadvertently increase the burden on policy makers. At present, network security (NS) threats mainly include network viruses, trojans, DoS (Denial-of-Service) attacks, etc. For increasingly complex NS problems, traditional rule-based network monitoring technology finds it difficult to predict unknown attack behavior, whereas environment-based, dynamic and integrated data fusion can integrate data from a macro perspective. In recent years, machine learning (ML) technology has developed rapidly, making it easy to train, test and predict with existing third-party models; it uses ML algorithms to find associations between data rather than manually setting rules. The support vector machine is a common ML method that, after training and testing, can predict the security of the network well. In order to monitor the overall security status of the entire network, NS situation awareness refers to the real-time and accurate reproduction of network attacks using a reconstruction approach. Situation awareness is a powerful network monitoring and security technology, but existing NS technology has many problems: for example, the state of the network cannot be accurately detected and the rules governing its changes cannot be understood. In order to predict network attacks effectively, this paper adopts a technology based on ML and data analysis and constructs an NS situational awareness model. The results showed that the detection efficiency of the model based on ML and data analysis was 7.18% higher than that of the traditional NS state awareness model.
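The support vector machine mentioned above can be sketched as a linear SVM trained by subgradient descent on the regularized hinge loss. This is a toy, self-contained version on synthetic 2-D data, not the paper's model or traffic features:

```python
def train_linear_svm(points, labels, lam=0.01, lr=0.01, epochs=200):
    """Minimise lam*||w||^2 + mean(max(0, 1 - y*(w.x + b))) by subgradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:                      # point inside the margin: push it out
                w[0] += lr * (y * x1 - 2 * lam * w[0])
                w[1] += lr * (y * x2 - 2 * lam * w[1])
                b += lr * y
            else:                               # only shrink w (regularisation)
                w[0] -= lr * 2 * lam * w[0]
                w[1] -= lr * 2 * lam * w[1]
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# hypothetical "normal" (+1) vs "attack" (-1) feature vectors
pts = [(2, 2), (3, 3), (2, 3), (3, 2), (-2, -2), (-3, -3), (-2, -3), (-3, -2)]
ys = [1, 1, 1, 1, -1, -1, -1, -1]
w, b = train_linear_svm(pts, ys)
assert all(predict(w, b, p) == y for p, y in zip(pts, ys))
```

Real NS features are high-dimensional and rarely linearly separable, which is why kernels and, as in this paper, broader data-analysis pipelines are layered on top.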
Keywords: Network security, machine learning algorithm, situation awareness, data analysis
DOI: 10.3233/IDT-230238
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2827-2839, 2024
Authors: Liu, Bing | Li, Xianzhong | Li, Zheng | He, Peidong
Article Type: Research Article
Abstract: With increasing power load (PL), the operation of the power system faces increasingly severe challenges. PL control is an important means of ensuring the stability of power system operation and the quality of the power supply. However, traditional PL control methods have limitations and cannot meet the load control requirements of the new era of power systems, because with the development of modern industry and commerce the demand for electricity is gradually increasing. This article constructs a PL control and management terminal operating system based on machine learning technology to achieve intelligent management of PL and thereby improve the operational efficiency and power supply quality of the power system. Reviewing the current research status of PL control technology, the article identifies the design concept of such a system; based on the operational and data characteristics of the power system, it selects suitable machine learning algorithms to process and analyze load data, establishes a prototype of the system to realize intelligent processing and analysis of load data, and conducts experimental verification. The experimental results show that, in a comparative study of six groups of data at the tertiary level, the differences between the system's predictions and the real tertiary-level values were 0.079 kW, 0.005 kW and 0.189 kW, so the average difference between the predicted and measured values of the PL system is about 0.091 kW. This indicates that the system has high accuracy and real-time performance in predicting PL and can effectively improve the load control efficiency and power supply quality of the power system.
The PL control and management terminal operating system based on machine learning technology constructed in this article provides new ideas and methods for the development of PL control technology. In the future, the system's algorithms can be further optimized and a more intelligent PL control and management terminal operating system can be constructed to cope with growing PL and an increasingly complex power system operating environment.
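The reported average follows directly from the three listed differences; a quick check using only the figures quoted above:

```python
diffs_kw = [0.079, 0.005, 0.189]  # per-group |predicted - measured| load differences
mean_diff = sum(diffs_kw) / len(diffs_kw)
print(round(mean_diff, 3))  # → 0.091, matching the abstract's "about 0.091 kW"
```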
Keywords: Machine learning technology, power load, management terminal, operation system, power load forecasting model
DOI: 10.3233/IDT-230239
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2841-2854, 2024
Authors: Zou, Wenjing | Xu, Huan | Yang, Qiuyong | Dong, Can | Su, Wenwei
Article Type: Research Article
Abstract: Currently, the data management of power enterprises requires analyzing data sources from multiple locations. However, traditional multi-source data fabric systems suffer from low analysis efficiency and high error rates, which greatly inconveniences the data analysis of power enterprises. In order to improve the accuracy and efficiency of data analysis, an intelligent system architecture is applied to the construction of a multi-source data fabric system whose main modules are data collection, data matching, data integration, and data analysis. This article uses a simulated annealing genetic algorithm to perform high-performance computation on system timing data, thus achieving data matching, and carries out data-level, feature-level, and decision-level data integration. An access survey method was used to analyze the data management problems currently faced by power companies, and a panel evaluation method was used to compare a general multi-source data fabric system with one based on the intelligent system architecture. The analysis showed that the operational convenience of the multi-source data fabric system based on the intelligent system architecture could reach 60%–80%, a great improvement over general multi-source data fabric systems; its information sharing was also greatly improved, and the data processing efficiency of general multi-source data fabric systems was much lower. However, the symmetry of data collection and matching in the architecture-based system was slightly insufficient and still needs improvement.
In order to benefit more power companies through the multi-source data fabric system based on the intelligent system architecture, it is necessary to strengthen the management of data collection and matching symmetry.
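As a toy illustration of the matching step, the sketch below uses plain simulated annealing (a simplified stand-in for the paper's simulated annealing genetic algorithm) to recover the time offset between two hypothetical timing sequences:

```python
import math
import random

random.seed(0)

# Hypothetical timing data from two sources; source_b lags source_a by 3 steps.
source_a = [math.sin(0.3 * i) for i in range(40)]
true_shift = 3
source_b = [source_a[i + true_shift] for i in range(30)]

def mismatch(shift):
    """Sum of squared differences when source_b is aligned at `shift`."""
    return sum((source_a[i + shift] - source_b[i]) ** 2 for i in range(30))

# Anneal over the integer shift: propose +/-1 moves, always accept
# improvements, occasionally accept worse moves while the temperature is high.
shift, best = 0, 0
temp = 0.5
while temp > 1e-3:
    candidate = max(0, min(10, shift + random.choice([-1, 1])))
    delta = mismatch(candidate) - mismatch(shift)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        shift = candidate
        if mismatch(shift) < mismatch(best):
            best = shift
    temp *= 0.95
```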
Keywords: Data fabric, intelligent system architecture, multi-source data, electricity companies, simulated annealing genetic algorithm
DOI: 10.3233/IDT-230240
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2855-2872, 2024
Authors: Chen, Haifeng
Article Type: Research Article
Abstract: With the rapid development of the economy, the power supply has shown a year-by-year increasing trend, and many loopholes and hidden dangers have emerged during the operation of the power grid. The grid may be subject to malicious attacks, such as hacking and power theft, which can lead to security risks such as grid system paralysis and information leakage. In order to ensure the quality of the power supply, it is necessary to optimize the distribution of electricity and improve power supply efficiency. This article pointed out the security performance issues of power Internet of Things (IoT) terminals and analyzed the design and implementation of a vulnerability mining system for power IoT terminals based on a fuzzy mathematical model simulation platform. A fuzzy mathematical model was used to quantitatively evaluate the security performance of power IoT terminals, providing an effective theoretical basis for vulnerability mining. Based on an analysis of vulnerability mining technology classification and the vulnerability attack process, vulnerability parameters are characterized through fuzzy mapping. Using the collected vulnerability data and the online and device status of power IoT terminals, fuzzy logic inference is applied to identify and mine potential vulnerabilities in power IoT terminals. The aim is to improve the security performance of power IoT terminals and ensure the safe and stable operation of the power system. Tests of the number of system vulnerabilities, vulnerability risk level, and vulnerability mining time showed that the power IoT simulation platform based on fuzzy mathematical models has fewer terminal vulnerabilities.
The fuzzy mathematical model can reduce the vulnerability risk level of the power IoT simulation platform system, and the time required for vulnerability mining was reduced by 0.48 seconds, improving the speed of vulnerability mining. Fuzzy mathematical models can promote the development of the power industry and provide strong support for the security protection of power IoT terminals.
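The fuzzy-inference idea can be sketched with a toy Mamdani-style evaluator. The membership functions, rule outputs, and the single "anomaly score" input below are hypothetical and are not the paper's actual fuzzy mapping:

```python
# Toy fuzzy evaluation: map an anomaly score in [0, 1] to a risk level.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_level(anomaly):
    low = tri(anomaly, -0.5, 0.0, 0.5)
    medium = tri(anomaly, 0.2, 0.5, 0.8)
    high = tri(anomaly, 0.5, 1.0, 1.5)
    # Rule outputs: low -> risk 0.1, medium -> 0.5, high -> 0.9;
    # defuzzify with a weighted average (centroid of singletons).
    num = 0.1 * low + 0.5 * medium + 0.9 * high
    den = low + medium + high
    return num / den
```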
Keywords: Power simulation platform, Internet of Things terminals, vulnerability mining systems, fuzzy mathematical model, fuzz test, fuzzy logic
DOI: 10.3233/IDT-230241
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2873-2883, 2024
Authors: Li, Guozhang | Xing, Kongduo | Alfred, Rayner | Wang, Yetong
Article Type: Research Article
Abstract: With the passage of time, the importance of spatio-temporal data (STD) is increasing day by day, but the spatiotemporal characteristics of STD pose huge challenges to data processing. Aiming at the problems of image information loss, limited compression ratio, slow compression speed, and low compression efficiency, this article proposed a method based on image compression, focusing on aircraft trajectory data, meteorological data, and remote sensing image data as the main research objects. The research results would provide more accurate and effective data support for related fields. The deep-learning-based image compression algorithm in this article consisted of two parts, an encoder and a decoder, and was compared with the JPEG (Joint Photographic Experts Group) method. When compressing meteorological data, the proposed algorithm achieved a maximum compression rate of 0.400, while the maximum compression rate of the JPEG algorithm was only 0.322. If a set of aircraft trajectory data containing 100 data points is compressed at 2:1, the storage space required by the proposed algorithm is 4.2 MB, while a lossless compression algorithm requires 5.6 MB, 33.33% more storage space. The image compression algorithm based on deep learning and data preprocessing can significantly improve the speed and quality of image compression while maintaining the same compression rate, and can effectively compress data in the spatial and temporal dimensions.
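The storage comparison quoted above can be checked directly: going from the proposed method's 4.2 MB to the lossless baseline's 5.6 MB is a one-third increase, which is where the 33.33% figure comes from.

```python
# Worked check of the storage figures in the abstract.
proposed_mb = 4.2   # storage needed by the proposed algorithm
lossless_mb = 5.6   # storage needed by the lossless baseline

# Extra storage required by the lossless baseline, relative to the method.
extra_percent = (lossless_mb - proposed_mb) / proposed_mb * 100
```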
Keywords: STD processing, picture data compression, high-performance image compression algorithms, compression rate, JPEG compression algorithm
DOI: 10.3233/IDT-230234
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2885-2899, 2024
Authors: Li, Guozhang | Alfred, Rayner | Wang, Yetong | Xing, Kongduo
Article Type: Research Article
Abstract: With the continuous development of science and technology, it has become possible to acquire and process massive high-resolution image data. The amount of such data is huge, and traditional single-machine computing and processing methods may be too inefficient to meet the needs of real-time or large-scale data processing. This article selected a high-resolution satellite remote sensing image from the Landsat dataset for processing. Gaussian filtering was used to denoise the image, followed by the K-means algorithm for image segmentation. The image data was then transmitted and stored, and the processing results were merged. Data processing efficiency and storage space utilization were analyzed for different data segmentation and storage methods. The experimental results showed that segmentation by image resolution not only processed quickly but also produced higher data quality. The AWS S3 (Amazon Simple Storage Service) storage solution had the highest storage space utilization, reaching a maximum of 0.98, the shortest response time, around 100 ms, and the highest read speed, 147–154 MB per second. Thus, when processing massive high-resolution image data, appropriate segmentation methods and storage schemes should be selected. The research and application of distributed computing and storage strategies for massive high-resolution image data pose certain theoretical and technical challenges, and can promote the development of distributed computing and storage technology as well as technological progress and innovation in related fields.
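The two preprocessing steps named above can be sketched in miniature. The pixel values are hypothetical, the Gaussian kernel is 1-D for brevity, and the K-means runs on raw intensities rather than on a full image:

```python
import math

def gaussian_kernel(radius, sigma):
    """1-D Gaussian smoothing kernel, normalized to sum to 1."""
    vals = [math.exp(-(i * i) / (2 * sigma * sigma))
            for i in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def kmeans_1d(values, iters=20):
    """k=2 K-means on scalar intensities with deterministic init."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1

pixels = [12, 15, 14, 200, 210, 205, 13, 198]  # dark vs. bright regions
dark, bright = kmeans_1d(pixels)               # segmentation into two classes
```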
Keywords: Image data processing, distributed image data processing, data storage, high resolution, Gaussian filtering, K-means algorithm
DOI: 10.3233/IDT-230231
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2901-2913, 2024
Authors: Zhang, Guangtao
Article Type: Research Article
Abstract: In order to solve the problem of low computing efficiency in big data analysis and model construction, this paper explored big data analysis programming models, the DAG (Directed Acyclic Graph), and related topics, and on this basis adopted the distributed matrix computing system Octopus for big data analysis. Octopus is a universal matrix programming framework that provides a programming model based on matrix operations, which can conveniently analyze and process large-scale data. Using Octopus, users can draw on functions and data from multiple platforms and operate through a unified matrix operation interface. The distributed matrix representation and storage layer can design data storage formats for distributed file systems. Each computing platform in OctMatrix provides its own matrix library, including one written in the R language for the users above. SymbolMatrix provides a matrix interface consistent with OctMatrix, but additionally retains the dataflow graph of matrix operations and supports logical and physical optimization of that graph as a DAG. For the DAG computational flow graph generated by SymbolMatrix, this paper divided optimization into two parts: logical optimization and physical optimization. This paper adopted a distributed file system based on row matrices, obtaining the corresponding platform matrix by reading row-matrix-based documents. In the evaluation of system performance, the distributed matrix computing system was found to have high computing efficiency, with average CPU (central processing unit) usage reaching 70%. The system can make full use of computing resources and realize efficient parallel computing.
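The row-partitioned execution model described above can be sketched as follows (a toy stand-in for Octopus's actual distributed runtime): split A into row blocks, let each "worker" multiply its block by B independently, then concatenate the partial results.

```python
def matmul(A, B):
    """Plain dense matrix multiply on nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def distributed_matmul(A, B, n_blocks):
    """Row-partitioned multiply: each block is an independent subtask."""
    step = (len(A) + n_blocks - 1) // n_blocks
    blocks = [A[i:i + step] for i in range(0, len(A), step)]
    partials = [matmul(block, B) for block in blocks]  # parallelizable
    return [row for part in partials for row in part]

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
B = [[1, 0], [0, 1]]
```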
Keywords: Big data analysis, distributed matrix computing system, data management, matrix segmentation, historical data
DOI: 10.3233/IDT-230309
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2915-2931, 2024
Authors: Li, Yujie
Article Type: Research Article
Abstract: In response to the problems of insufficient data management and low construction quality in architectural interior design, high-performance computing technology was proposed to optimize the overall effect of architectural interior design. This article proposed solutions to problems in the interior design of buildings and applied high-performance computing technology to building design materials, building operation data processing, and related tasks. Furthermore, combined with a multi-objective genetic algorithm, experimental tests were conducted on the optimization of interior space environment design. The experimental results showed that under the algorithm proposed in this paper, the average health standard compliance of each scheme reached 92.17%, the average emotional requirement satisfaction reached 88.42%, the average interface detail attention reached 84.65%, and the average environmental comfort reached 92.87%. Under the genetic algorithm, the corresponding averages were 88.14%, 84.17%, 81.18%, and 90.30%. These data show that the proposed algorithm achieves good optimization results in the design of indoor space environments in buildings.
Keywords: Big data, high performance computing, architectural interior design, space environment design
DOI: 10.3233/IDT-230171
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2933-2944, 2024
Authors: Jain, Aryamaan | Mahawar, Priyanka | Pantola, Deepika | Gupta, Madhuri | Singh, Prabhishek | Diwakar, Manoj
Article Type: Research Article
Abstract: Recent research suggests that by 2023, the production of data will exceed 300 exabytes per month, a figure surpassing human verbal communication by over 60 times. This exponential growth underscores the need for platforms to adapt in areas such as data analysis and storage. Efficient data organization is crucial, considering the growing scarcity of time and space resources. While manual sorting may suffice for small datasets in smaller organizations, large corporations dealing with millions or billions of documents require advanced tools to streamline storage, sorting, and analysis processes. In response to this need, this research introduces a novel architecture called Slick, designed to enhance sorting, filtering, organization, and analysis capabilities for any storage service. The proposed architecture incorporates two innovative techniques, Degree of Importance (DOI) and amortized clustering, along with established natural language processing methods such as topic modelling, summarization, and tonal analysis. Additionally, a new methodology for keyword extraction and document grouping is presented, resulting in significantly improved response times. Slick offers a searchable platform where users can supply succinct keywords, lengthy text passages, or complete documents to access the information they seek. Experimental findings demonstrate a nearly 46 percent reduction in average response time compared to existing methods in the literature.
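At its core, a keyword-searchable platform rests on an inverted index; the sketch below shows that core on a toy corpus. Slick's actual pipeline layers DOI ranking, amortized clustering, topic modelling, summarization, and tonal analysis on top, none of which is reproduced here:

```python
from collections import defaultdict

# Hypothetical document collection.
docs = {
    0: "storage systems for large document collections",
    1: "keyword extraction and topic modelling",
    2: "document clustering improves retrieval response time",
}

# Build the inverted index: token -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def search(query):
    """Return ids of documents containing every query token."""
    results = [index.get(tok, set()) for tok in query.lower().split()]
    return sorted(set.intersection(*results)) if results else []
```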
Keywords: Keyword extraction, clustering, document retrieval engine, tonal analysis, summarization
DOI: 10.3233/IDT-230682
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2945-2960, 2024
Authors: Wang, Zhongxing
Article Type: Research Article
Abstract: Current machine learning models in artificial intelligence can improve prediction accuracy, but their underlying logic remains incomprehensible. Therefore, to provide high prediction accuracy while enhancing model interpretability, the study selects the Extreme Gradient Boosting (XGBoost) model by comparing multiple single-learner and ensemble-learning models. A statistical model for predicting cancer probability is then constructed through parameter optimization, and its performance and interpretability are analyzed. The experimental results showed that the Receiver Operating Characteristic (ROC) Area Under Curve (AUC) values of the single learners were generally lower than 80%, while the AUC value of the selected model was 84.4%, surpassing the comparison models. An Alpha-Fetoprotein value greater than 13.5 had a stronger predictive effect when combined with other factors. Serum Alanine Aminotransferase and Alpha-Fetoprotein assay values near 0 may produce negative or positive effects, whereas higher values are more likely to produce a positive effect, which is in line with their clinical significance. Overall, XGBoost effectively improves out-of-sample prediction accuracy and interpretability, which is significant for actual liver cancer diagnosis prediction.
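The AUC values reported above can be computed with the rank-based (Mann-Whitney) formulation of ROC AUC; the labels and scores below are hypothetical, not the paper's clinical data:

```python
def roc_auc(labels, scores):
    """P(score of a random positive > score of a random negative),
    counting ties as 1/2."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                   # hypothetical diagnoses
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]       # hypothetical model outputs
```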
Keywords: Machine learning interpretability, liver cancer, statistical prediction model, integrated learning, XGBoost model, feature selection, single learner
DOI: 10.3233/IDT-230504
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2961-2975, 2024
Authors: Wang, Yunjun | Ren, Zhiyuan
Article Type: Research Article
Abstract: Traditional standing long jump measurement relies only on visual reading and manual recording, which makes the recorded data subjective and arbitrary and makes it difficult to ensure the accuracy and efficiency of long jump scoring. To address the shortcomings of traditional measurement methods and avoid the interference of subjective bias, the research aims to provide a more accurate, automated, and objective measurement method and to offer new technological means for measurement in related sports. In contrast to human motion recognition technology, the study introduces image recognition technology into the domain of standing long jump testing. This technology calculates distance through image processing and perspective transformation algorithms, thereby realizing a distance measurement function. Specifically, this includes using wavelet decomposition coefficients and morphological denoising to improve the performance of wavelet threshold denoising and achieve feature extraction of image edge information, adding vibration sensors and CNN algorithms to correct the angle of offset images, and designing a multi-step long jump distance measurement system. The combination of wavelet decomposition coefficients and morphological denoising used in the study achieved a mean square error of 50.8369 and a signal-to-noise ratio of 24.1126, with a maximum accuracy of 96.23%, significantly higher than the two comparison methods. For different feature information recognition tasks, the ROC curve area of the proposed algorithm model reached over 85%, with deviations across the dataset all below 0.5. The minimum absolute and relative errors between this method's measurements and the actual test results were 0.01 cm and 2%, respectively.
The overall deviation of the system was 0.35, indicating high stability. The proposed long jump measurement system can improve the efficiency of standing long jump testing while complementing traditional distance measurement systems, jointly serving the intelligent instrument market and providing technical means for the development of sports teaching programs.
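The wavelet-threshold step can be illustrated with a one-level Haar transform and soft thresholding on a synthetic 1-D signal. The paper works on 2-D images and combines this with morphological denoising, which is omitted here; the signal, noise level, and threshold are illustrative:

```python
import math
import random

random.seed(1)
clean = [math.sin(0.05 * i) for i in range(64)]        # smooth test signal
noisy = [c + random.gauss(0, 0.1) for c in clean]      # additive noise

def haar_denoise(x, threshold):
    # One-level orthonormal Haar transform.
    approx = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    # Soft-threshold the detail coefficients (noise lives mostly here).
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    # Inverse transform.
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return out

def mse(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

denoised = haar_denoise(noisy, threshold=0.25)
```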
Keywords: Image denoising, standing long jump, automatic distance measurement, CNN, binarization
DOI: 10.3233/IDT-230733
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2977-2992, 2024
Authors: Huang, Guomin
Article Type: Research Article
Abstract: With the maturity of digital technology, the effectiveness of information transmission has received extensive attention. Variable font design is regarded as an extension of graphic design in digital media art, and is studied here from the perspective of information communication design. From the perspective of digital technology, based on the theory of variable information communication design and combined with specific font design methods, the perception, cognitive modes, and aesthetic laws of visual and psychological aesthetics in variable font design are discussed, providing theoretical support for information communication design activities with variable fonts. Based on artificial intelligence technology, a new method of interactive art expression is proposed. Analyzing the main expression scenarios of interactive art, namely user information communication, interactive art push, interactive art promotion, and personal service scenarios, provides a scenario basis for interactive expression. The time complexity, space complexity, and resource complexity are calculated, and according to these results, interaction design is carried out from three aspects: visual elements, multimedia elements, and human-computer interaction elements, with the topology integrated into the network structure after the interaction is realized. The proposed model consists mainly of visual design, basic design elements, basic visual design, complete design, and solution generation, and realizes the interactive artistic expression of multimedia elements in display and control. The experimental results show that the interactive element extraction speed of the AI-based visual communication expression method is increased by 30%, with stronger flexibility.
Keywords: Visual communication, artificial intelligence, digital technology, design content
DOI: 10.3233/IDT-240105
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 2993-3009, 2024
Authors: Xue, Jing
Article Type: Research Article
Abstract: Art is a symbol of people's thoughts, and among the many forms of artistic expression, literature is the most direct, presenting art straight to its readers. Correctly understanding the language materials in literature is crucial for understanding literary works and realizing their artistic value. Therefore, in order to strengthen the understanding of Korean literature and analyze its core ideas, this article uses modern computer technology and an improved Term Frequency-Inverse Document Frequency (TF-IDF) algorithm to process a corpus of Korean literature, quickly extracting valuable textual information to facilitate reading and understanding. At the same time, a Korean literature corpus processing model was constructed based on deep learning algorithms. The model builds on Natural Language Processing (NLP), selecting TF-IDF features to calculate keyword weights and using a weighted naive Bayes algorithm to classify and process the text data of the Korean literature corpus. The results of multiple experiments show that the classification accuracy of the model exceeds 97.7% and the classification recall rate reaches 94.2%, indicating that the model can effectively process a Korean literature corpus.
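The TF-IDF weighting named above works as sketched below on a toy English corpus. The paper applies an improved variant to Korean text and feeds the weights into a weighted naive Bayes classifier, neither of which is reproduced here:

```python
import math
from collections import Counter

# Hypothetical three-document corpus.
corpus = [
    "the novel explores memory and loss",
    "the poem celebrates spring",
    "memory shapes the narrator of the novel",
]

def tf_idf(term, doc_index):
    """TF-IDF weight of `term` in document `doc_index`."""
    doc_tokens = corpus[doc_index].split()
    tf = Counter(doc_tokens)[term] / len(doc_tokens)   # term frequency
    df = sum(1 for doc in corpus if term in doc.split())
    idf = math.log(len(corpus) / df)                   # inverse doc frequency
    return tf * idf
```

A word appearing in every document (such as "the") gets weight 0, while a more distinctive word gets a positive weight.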
Keywords: Korean literature, art, expect, TF-IDF algorithm, NLP, computer system
DOI: 10.3233/IDT-230772
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3011-3024, 2024
Authors: Huang, Haifeng
Article Type: Research Article
Abstract: With the rapid development of computer technology, parameter adaptive control methods are increasingly widely used in nonlinear systems. However, synchronization controllers still face many problems involving multiple inputs with a single output, uncertainty, and dynamic characteristics. This paper analyzed a synchronization control strategy for uncoupled nonlinear systems based on parameter dynamic factors to adjust the performance of the synchronization controller, briefly introduced the manifestations of chaotic motion, and pointed out the characteristics of and differences between continuous feedback control methods and transmission-transfer control methods. Simple, effective, stable, and feasible synchronous control was analyzed using parameter-adaptive control theory. By analyzing the nonlinear relationships between models at different orders, the fuzzy distribution of the second-order mean and its independent, uncorrelated matrices were obtained, and the corresponding law formulas were established to solve the functional expression between the state variables and the dynamic characteristics of the system. Error risk, computational complexity, synchronization performance, and chaos-system control effect tests were carried out on traditional chaos synchronization methods and on methods based on parameter adaptive control. Parameter adaptive methods were found to effectively reduce the error risk of high-performance control algorithms for synchronization of the unified chaotic system, simplify the calculation process, reducing its complexity score by 0.6, and improve the synchronization performance and control effectiveness ratings of the control algorithms.
The experimental results proved the effectiveness of the control algorithms, greatly enriching modern control applications and driving the vigorous development of nonlinear dynamics research, thus making significant progress in chaos application research.
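The synchronization idea can be illustrated with a drive-response pair of chaotic logistic maps (a much simpler system than the paper's unified chaotic system). The constant coupling strength below is a fixed stand-in for the gain that a parameter-adaptive scheme would adjust online:

```python
def f(x):
    return 4.0 * x * (1.0 - x)   # fully chaotic logistic map

drive, response = 0.3, 0.9        # different initial states
eps = 0.8                         # coupling strength; eps > 0.5 synchronizes

initial_error = abs(drive - response)
for _ in range(60):
    drive_next = f(drive)
    # Response is pulled toward the drive's dynamics by the coupling term.
    response_next = (1 - eps) * f(response) + eps * f(drive)
    drive, response = drive_next, response_next
final_error = abs(drive - response)
```

Per step the error shrinks by at least the factor (1 - eps) * max|f'| = 0.8, so after 60 steps the two chaotic trajectories have collapsed onto each other.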
Keywords: High-performance control algorithms, unified chaos system, synchronous control, parametric adaptive methods
DOI: 10.3233/IDT-240178
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3025-3039, 2024
Authors: Misra, Mohit | Sharma, Rohit | Tiwari, Shailesh
Article Type: Research Article
Abstract: The poor quality of asphalt roads has a significant impact on driver safety, damages the mechanical structure of vehicles, increases fuel consumption, annoys passengers, and is sometimes responsible for accidents. Poor road quality manifests as a rough surface and the presence of potholes. Potholes are among the main causes of accidents, increased fuel consumption, and passenger discomfort, and can vary in size, radiance effect, shadow, and scale. Hence, detecting potholes in asphalt roads is a complex task and one of the serious issues in asphalt road maintenance. This work focuses on pothole detection and proposes a detection model for accurately identifying potholes in asphalt roads. The effectiveness of the proposed model is tested on a real-world image dataset: asphalt roads of the Delhi-NCR region were chosen and real-world images collected with a smart camera. The final road image dataset consists of 1150 images, of which 860 contain potholes and the remainder do not. A deep belief network is integrated into the proposed model, treating pothole detection as a classification task that labels images as containing a pothole or not. The experimental results are evaluated using accuracy, precision, recall, F1-Score, and AUC, and compared with ANN, SVM, VGG16, VGG19, and InceptionV3 techniques. The simulation results showed that the proposed detection model achieves a 93.04% accuracy rate, 94.30% recall rate, 96.31% precision rate, and 96.92% F1-Score, outperforming the other techniques.
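The reported metrics follow the standard confusion-matrix definitions, sketched below with hypothetical counts (note that F1, as the harmonic mean of precision and recall, always lies between them):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 80 true positives, 10 false positives, 20 false negatives.
p, r, f1 = precision_recall_f1(tp=80, fp=10, fn=20)
```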
Keywords: Asphalt road, potholes, deep belief network, detection model
DOI: 10.3233/IDT-240127
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3041-3055, 2024
Authors: Li, Zhiguang | Li, Baitao | Jahng, Surng Gahb | Jung, Changyong
Article Type: Research Article
Abstract: In order to improve the tracking effect of prevention and control, this paper combines three-dimensional face recognition technology with a posture recognition method to build a prevention and control tracking system, and designs a single-stage human target detection and posture estimation network based on thermal infrared images. Infrared recognition allows the face and human body shape to be recognized directly, overcoming the occlusion problems of traditional visual recognition methods. The network completes the two tasks of human target detection and pose estimation simultaneously in a multi-task manner. Moreover, this paper uses a knowledge distillation strategy to train a lightweight model, further reducing the number of model parameters to improve inference speed. The single-stage human target detection and posture estimation network is used to judge the prevention and control tracking effect. Tracking tests show that the prevention and control tracking system based on 3D face recognition proposed in this paper can effectively improve the tracking effect.
Keywords: Three-dimensional, face recognition, prevention and control, tracking
DOI: 10.3233/IDT-240104
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3057-3074, 2024
Authors: Zhou, Lingling | Li, Wanchun | Qiao, Yu | Zhang, Miaomiao | Qiu, Xiaoli
Article Type: Research Article
Abstract: The digital culture business benefits from rapid technical updates, digital manufacturing, communication networks, and customized consumption, which promote new supply and consumption. The innovation and development of the digital cultural industry cannot be separated from the physical industry, whose efficient production underpins that innovative development. Therefore, to improve the production efficiency of the physical industry, this research proposes a temperature control model combining the whale optimization algorithm with a fuzzy proportional integral differential (PID) algorithm, building on traditional temperature control in ceramic firing and wine making. Firstly, fuzzy control theory is used to improve the proportional integral differential algorithm, establishing a fuzzy proportional integral differential model with an adaptive adjustment function. On this basis, to quickly determine the optimal parameters of the model, the whale optimization algorithm is introduced to further optimize its weight factors. It was verified that the adjustment time of the temperature control model combining the whale optimization algorithm and the fuzzy proportional integral differential model was 2.77 s, which was 1.11 s, 2.37 s, and 3.13 s shorter than those of the other three models, respectively. The constructed model can therefore realize high-precision intelligent temperature control in industrial production, laying a cornerstone for the innovative development of the digital culture industry. The proposed model can improve the production efficiency of physical industries, promote digital transformation, and optimize the innovative development of the digital cultural industry.
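The control layer described above can be reduced to a plain discrete PID loop on a first-order thermal plant. The gains and plant constants below are illustrative only; the paper's contribution is precisely that fuzzy rules and whale optimization tune such gains, which is not reproduced here:

```python
def simulate_pid(kp=2.0, ki=1.0, kd=0.1, setpoint=100.0, steps=600, dt=0.05):
    """Discrete PID driving a first-order plant toward `setpoint`."""
    temp, integral, prev_err = 20.0, 0.0, setpoint - 20.0
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        derivative = (err - prev_err) / dt
        power = kp * err + ki * integral + kd * derivative
        # First-order plant: heating input minus loss toward 20-degree ambient.
        temp += dt * (power - (temp - 20.0))
        prev_err = err
    return temp

final_temp = simulate_pid()
```

The integral term supplies the steady-state heating power, so the temperature settles at the setpoint rather than below it.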
Keywords: Cultural innovation, PID, fuzzy control, whale optimization, digital culture industry
DOI: 10.3233/IDT-240028
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3075-3089, 2024
Authors: Sun, Yan | Wu, Yin | Wu, Yan | Liang, Kai | Dong, Cuihong | Liu, Shiwei
Article Type: Research Article
Abstract: This article aimed to use the proximal policy optimization (PPO) algorithm to address the limitations of power system startup strategies, optimizing the constraints in the startup strategy to enhance the system's adaptability, coping ability, and overall robustness to variable grid demand and integrated renewable energy. Firstly, a dynamic model of the power system was constructed, including key components such as generators, transformers, and transmission lines; secondly, the PPO algorithm was integrated and interfaces were designed that allow it to interact with the power system model; afterward, the state variables of the power system were determined and a reward function was designed to evaluate the startup efficiency and stability of the system. The reward function was then adjusted and the algorithm trained over multiple iterations in the simulation environment to learn the optimal startup strategy, allowing an effective evaluation of the strategy. The research results showed that after optimization by the PPO algorithm, a stable-frequency startup of the power system took only about 23 seconds, and the system recovery time under a sudden load increase was reduced by 33.3%. The algorithm can significantly optimize the intelligent startup strategy of the power system.
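The policy update at the heart of PPO uses the clipped surrogate objective; the sketch below evaluates it pointwise (the ratio and advantage values are illustrative):

```python
def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO-Clip objective: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)
```

Clipping caps the incentive to push the new policy's probability ratio far from 1, which is what keeps each PPO update conservative.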
Keywords: Power system startup strategy, proximal policy optimization, system adaptability and robustness, dynamic modeling, performance evaluation
DOI: 10.3233/IDT-240122
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3091-3104, 2024
Authors: Yu, Lin | Bai, Yujie
Article Type: Research Article
Abstract: A Network Security Monitoring System (NSMS) can use Big Data (BD) and the K-means DT (K-means with distance threshold) algorithm to automatically learn and identify abnormal patterns in the network, improving the accuracy of threat detection. In this article, the KDD Cup 1999 and NSL-KDD datasets were selected for analysis. The data are first preprocessed, and statistical information, time-series information, and traffic distribution characteristics are extracted. K-means DT then further classifies regular attacks, remote-to-local (R2L) attacks, and user-to-root (U2R) privilege-escalation attacks. The experimental results show that the hybrid intrusion detection algorithm based on K-means DT achieves a network attack detection accuracy of 99.2%, and an accuracy of 98.9% on the NSL-KDD dataset. The hybrid algorithm can effectively improve the accuracy of network intrusion detection (NID), performs well across datasets, detects various types of network intrusion attacks, and outperforms the compared algorithms. The NSMS designed in this article can cope with constantly changing network threats.
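The distance-threshold idea behind K-means DT can be sketched as follows: a traffic record is assigned to its nearest learned centroid, but flagged as anomalous when it is farther than a threshold from every centroid. The centroids and threshold here are toy values, not ones learned from the KDD data.

```python
import math

def nearest(point, centroids):
    """Return (index, distance) of the closest centroid."""
    d, idx = min((math.dist(point, c), i) for i, c in enumerate(centroids))
    return idx, d

def classify(point, centroids, threshold):
    """Cluster index for normal-looking points, or -1 (potential attack)
    when the point exceeds `threshold` distance from every centroid."""
    idx, d = nearest(point, centroids)
    return idx if d <= threshold else -1

# Toy centroids standing in for "normal traffic" cluster centers.
centroids = [(0.0, 0.0), (10.0, 10.0)]
```

In the full pipeline the centroids would come from K-means on preprocessed feature vectors, and flagged records would be passed to the finer attack-type classifier.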
Keywords: Network security, monitoring systems, big data, K-means with distance threshold, network attacks
DOI: 10.3233/IDT-240185
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3105-3118, 2024
Authors: Fu, Ning | Zhang, Dongxia | Chen, Yu
Article Type: Research Article
Abstract: Computer networks (CNs) have been widely popularized and applied. They not only affect people's daily lives but also drive social and technological development. However, in the era of big data (BD), the rapid growth of data poses significant challenges to computer network security management (CNSM). The volume, velocity, and variety of data generated by modern networks make it increasingly difficult for security professionals to detect and respond to threats in real time. Artificial intelligence (AI) has the potential to play an important role in CNSM in the BD era: it can quickly analyze large amounts of data, automate routine tasks, and predict potential network vulnerabilities. This article studies the development of CNSM technology based on AI. The experimental results showed that AI-based network security (NS) detection achieved an average accuracy score of 92.35 points, an average overall incident response time of 9.45 hours, and an average network security management (NSM) cost of 147,300 US dollars, a substantial advantage over traditional NSM technology.
Keywords: Big data, artificial intelligence, computer networks, network security management, security detection
DOI: 10.3233/IDT-240215
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3119-3128, 2024
Authors: Zhao, Mingguan | Li, Meng | Dong, Xinsheng | Yang, Yang | Wang, Hongxia | Ni, Yunlong
Article Type: Research Article
Abstract: With the explosive growth of electricity consumption, user demand for electricity keeps increasing. As the core of power supply, the safe and stable operation of transmission lines is essential to the normal operation of the entire power system. However, traditional methods for monitoring transmission line operating status suffer from limited accuracy, a lack of real-time feedback, and high operational costs. In this paper, the firefly algorithm is used to monitor the operating status of transmission lines. In synchronous tests against the traditional particle swarm optimization algorithm, the firefly algorithm improved average accuracy in voltage and current measurement to 93.13% and 93.66% respectively, outperforming the traditional algorithm. The firefly algorithm also showed high precision in monitoring various power equipment, with average monitoring accuracies of 95.62% and 93.06% respectively, demonstrating stronger performance in transmission line monitoring and the ability to meet more stringent monitoring requirements. Comparative experiments proved that the firefly algorithm performs strongly in transmission line operating-status monitoring and can identify transmission line faults more accurately, providing a new approach to monitoring the safe operation of transmission lines.
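For reference, a minimal firefly algorithm sketch minimizing a toy 2-D objective (a stand-in for the parameter-estimation role it plays in the monitoring task; all coefficients are illustrative defaults, not the paper's settings):

```python
import math
import random

def firefly_minimize(f, n=20, iters=100, beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Fireflies move toward brighter (lower-objective) neighbors with
    distance-attenuated attraction plus a decaying random walk."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(n)]
    for _ in range(iters):
        intensity = [f(x) for x in pop]          # lower objective = brighter
        for i in range(n):
            for j in range(n):
                if intensity[j] < intensity[i]:  # move i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for k in range(2):
                        pop[i][k] += beta * (pop[j][k] - pop[i][k]) \
                                     + alpha * (rng.random() - 0.5)
                    intensity[i] = f(pop[i])
        alpha *= 0.97                            # damp the random walk over time
    return min(pop, key=f)

best = firefly_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2)
```

The attraction term pulls the swarm together while the noise term keeps exploring; in the monitoring setting the objective would score candidate line-state parameters against measurements.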
Keywords: Transmission line, firefly algorithm, state detection, particle swarm algorithm, power system
DOI: 10.3233/IDT-240211
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3129-3142, 2024
Authors: Miao, Yaofeng | Zhou, Yuan
Article Type: Research Article
Abstract: Computers play a very important role in modern life. A large amount of personal information is stored on them, so protecting information security on computer networks is critical. The development of information technology has had a huge impact on human society, and computer network information security has attracted increasing attention. Addressing the needs of computer network information management in the age of big data, this paper develops a dedicated computer network information protection system to deal with various threats and continuously improve protection. On this basis, the concept and advantages of the 5G network are first expounded, and 5G network management technology is briefly analyzed. The information security problems of computer networks in the 5G era are then considered, and effective solutions are proposed. Through research and calculation, the new computer network security system improved network information security by 13.4%.
Keywords: Computer network, network information security, big data algorithm, computer network information security, orthogonal frequency
DOI: 10.3233/IDT-240179
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3143-3154, 2024
Authors: Sun, Wei | Zhang, Yu | Ma, Jin
Article Type: Research Article
Abstract: With the development of technology and the improvement of living standards, consumer demand for toy products has changed, and more people show strong interest in digital toy products. However, in the current digital toy design process, extracting the hairstyle features of a doll's head remains a challenge. This study therefore extracts the two-dimensional contour of the target hairstyle and matches it with template hairstyles in a database. Combining hair texture information and hairstyle structural features, fine geometric texture details are then generated on the hairstyle mesh surface. Finally, a doll head modeling method based on model matching and texture generation is proposed. The results showed that the hairstyle of the generative model was almost identical to the real hairstyle. Compared with modeling methods based on interactive genetic algorithms and digital images, the proposed method achieved an average F1 value of 0.95 with the smallest mean absolute error. The modeling accuracy of the target model was 95.4%, and the area enclosed by the receiver operating characteristic curve and the coordinate axis was 0.965. In summary, the proposed method can generate high-precision, realistic hairstyle models whose overall shape and local geometric details meet the needs of 3D printing, providing a useful reference for hairstyle reconstruction.
Keywords: Model matching, geometric texture, hairstyle reconstruction, 3D printing, head modeling
DOI: 10.3233/IDT-240458
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3155-3169, 2024
Authors: Ding, Dawei
Article Type: Research Article
Abstract: In view of the low detection accuracy, long detection time, and lack of real-time fault monitoring in traditional machinery and equipment fault detection, this paper studies the identification and fault detection of industrial machinery based on Internet of Things (IoT) technology. A mechanical equipment fault detection system is built with IoT technology, whose diagnostic and early-warning modules improve the accuracy of equipment fault detection, shorten detection time, and enable remote equipment monitoring. The fault detection system studied in this paper achieves an accuracy of more than 93.4% in detecting different types of faults. IoT technology thus helps improve the accuracy of mechanical equipment fault detection and enables real-time monitoring of equipment data.
Keywords: Equipment fault detection, industrial machinery, Internet of Things technology, neural network algorithm, energy consumption
DOI: 10.3233/IDT-240177
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3171-3184, 2024
Authors: Zhang, Min | Yang, Jining
Article Type: Research Article
Abstract: In response to the low efficiency of intelligence in the traditional logistics industry and transaction services, the lack of effective consensus mechanisms, and the difficulty of fully accounting for the interactions and conflicts between multiple objectives, this article adopts a multi-objective consensus algorithm (MOCA) to optimize objectives in the logistics industry and transaction services. First, the needs and objective functions of the shipper, logistics service provider, and recipient are modeled, and the Analytic Hierarchy Process (AHP) and exponential smoothing are used for objective weighting and dynamic weight adjustment. The Multi-Objective Particle Swarm Optimization (MOPSO) algorithm is then introduced to balance the benefits of all parties and find the optimal solution set. Finally, a consensus mechanism is established by combining Nash equilibrium and bargaining-solution game theory models to achieve fairness in cooperation among the shipper, logistics service provider, and recipient. The experiment used daily operation records and management-system data from a logistics enterprise in Wuhan from June to December 2023. The results showed that MOPSO with game theory reached an overall efficiency of 96.5%, a 6.3% improvement over PSO (Particle Swarm Optimization) with game theory, and a Benefit Fairness Index (BFI) of 90.5%. Using MOCA to optimize logistics and transaction service objectives greatly improves intelligence efficiency while maximizing stakeholder benefits and satisfaction.
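The dynamic weight adjustment step can be sketched with simple exponential smoothing: each stakeholder's objective weight is an exponentially smoothed estimate of its past values, renormalized across stakeholders. The stakeholder names, histories, and smoothing factor below are illustrative, not the paper's data.

```python
def smooth_weights(history, alpha=0.6):
    """Exponentially smoothed estimate from past weight values
    (most recent last): s_t = alpha * w_t + (1 - alpha) * s_{t-1}."""
    s = history[0]
    for w in history[1:]:
        s = alpha * w + (1 - alpha) * s
    return s

def dynamic_weights(histories, alpha=0.6):
    """Smooth each stakeholder's weight history, then normalize to sum to 1."""
    raw = {k: smooth_weights(v, alpha) for k, v in histories.items()}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

weights = dynamic_weights({
    "shipper":   [0.5, 0.6],
    "provider":  [0.3, 0.2],
    "recipient": [0.2, 0.2],
})
```

The normalized weights would then scale each party's objective inside the MOPSO fitness evaluation.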
Keywords: Logistics industry, transaction services, multi-objective consensus algorithm, game theory model, consensus mechanism
DOI: 10.3233/IDT-240201
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3185-3201, 2024
Authors: Gao, Liping | Lin, Zhouyong | Lian, Dade | Lin, Yilong | Yang, Kaixuan | Li, Zhi | Li, Letian
Article Type: Research Article
Abstract: High-performance computing is the theory, methods, and technology of parallel computing on supercomputers, and it is widely applied across fields. The problems of thermal deviation and overheating tube bursts on the high-temperature heating surfaces of coal-fired boilers in large power plants are becoming increasingly prominent, seriously threatening the safe operation of power plant units. To address these problems, an intelligent monitoring method for the overheating status of coal-fired boilers in large power plants is proposed. The method analyzes the damage mechanism of the boiler's high-temperature heating surface and monitors over-limit heating-surface temperatures in real time; constructs gas-phase turbulent combustion models and metal wall temperature models; and analyzes energy consumption losses during boiler operation, calculates the boiler's heat transfer function, and thereby completes intelligent monitoring of the overheating status of the boiler heating surface. The experimental results show that the method performs best at a relative height of 100 m, improves the equivalent heat flux density of the boiler with better control of consumption losses, and can improve the wind power consumption capacity of the heating system.
Keywords: Large power station, coal-fired boiler, heating surface, overtemperature state, intelligent monitoring method
DOI: 10.3233/IDT-240502
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3203-3217, 2024
Authors: Zhang, Zijian
Article Type: Research Article
Abstract: The intelligent city is a product of the deep integration of information technology, industrialization, and urbanization, and contains a large number of intelligent mechanical products. Users widely evaluate these products' characteristics, and selecting mechanical products based on user evaluations has become a trend. Personalized mechanical product recommendation based on user evaluations is increasingly widely used; however, because evaluation data are sparse, recommendation accuracy needs improvement. This paper analyzes the principle of matrix decomposition in depth to address this problem. A bias-weight hybrid recommendation model of user preferences and rated-object characteristics is proposed, and a corresponding hybrid recommendation algorithm is designed. First, estimates obtained via matrix decomposition are used to fill in the sparse data matrix. Second, according to user and rating characteristics, initial positions are set based on the statistical distribution of high-performance computing data, and bias weights are assigned to each feature. Finally, the nonlinear learning ability of deep neural networks is used to enhance classification effectiveness. Practice shows that the constructed model is sound, the designed algorithm converges quickly, recommendation accuracy improves by about 10%, and the model alleviates the sparse-rating problem. It is simple and convenient in practical application and has good application value.
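The matrix-decomposition step used to fill in the sparse rating matrix can be sketched with plain SGD factorization: each observed rating r(u, i) is approximated by a dot product of learned user and item factors, and the trained factors then predict the missing entries. Dimensions, rates, and ratings below are illustrative, and the paper's bias weights and neural network stage are not reproduced.

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.02, reg=0.02,
              epochs=500, seed=0):
    """SGD matrix factorization: minimize (r - p_u . q_i)^2 + reg * norms."""
    rng = random.Random(seed)
    P = [[rng.uniform(0, 0.5) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(0, 0.5) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            e = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (e * qi - reg * pu)   # gradient step on user factor
                Q[i][f] += lr * (e * pu - reg * qi)   # gradient step on item factor
    return P, Q

def predict(P, Q, u, i):
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))

# Toy (user, item, rating) triples; unobserved cells can be predicted after training.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 1, 2.0), (2, 1, 1.0)]
P, Q = factorize(ratings, n_users=3, n_items=2)
```

After training, `predict(P, Q, u, i)` supplies estimates for the empty cells of the sparse matrix, which is the supplementation step the abstract describes.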
Keywords: Deep neural networks, high performance computing, sparsity, bias weight, intelligent city, fast setting, hybrid recommendation
DOI: 10.3233/IDT-240529
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3219-3228, 2024
Authors: Chen, Zhenna
Article Type: Research Article
Abstract: In high-performance computing (HPC) systems, energy and power considerations play an increasingly important role. This work aims to implement China's green and ecological concepts, respond to China's strategy of building an environment-friendly and resource-saving society, and actively promote the construction of sustainable campuses. First, the theoretical basis of sustainable-energy campus architecture is described. Next, a teaching feedback model under HPC is constructed. Finally, using evaluations of students' task completion processes and task works as measurement indexes, the corresponding data are collected and comprehensively analyzed to verify the effectiveness of the model. The analysis indicates that most students can complete tasks within two hours; however, their proficiency with multimedia technology is inadequate, their learning process is relatively closed, and they lack enthusiasm and initiative. There is a correlation between the overall evaluation of task performance and students' understanding of the tasks. After implementing the teaching feedback model, students' learning enthusiasm and the quality of their work improved, providing effective educational support for sustainable development on campus. The model can acquire, integrate, and statistically analyze teachers' feedback information. This work applies this learning and feedback mode in practical teaching to address specific problems in computer multimedia courses.
Keywords: High-performance computing (HPC), sustainable energy, multimedia technology, teaching feedback model, information gathering
DOI: 10.3233/IDT-240592
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3229-3241, 2024
Authors: Li, Haiqiang | Ling, Zhimei | Yang, Chaoyi
Article Type: Research Article
Abstract: With the digital transformation of information technology and industry, the application scenarios of edge intelligent terminals are expanding, and AI chips have become one of the mainstream core devices of edge terminals. How to systematically select suitable AI chips for a given application scenario has become an urgent problem. This paper first compares the technical architectures, advantages, and disadvantages of AI chips; it then proposes a low-complexity, multi-code-rate fusion LDPC parallel encoding structure and an encoder chip design based on this structure for high-speed digital transmission applications. Based on the TSMC 130 nm CMOS standard cell library, the encoder chip achieves a throughput of 1.6 Gbps at a 200 MHz clock while consuming only 184.3 mW. Compared with an LDPC encoder chip of the same throughput designed with a conventional architecture, this solution reduces the storage space requirement to 18.52% of the conventional architecture. The proposed encoder chip design maintains high performance while significantly reducing storage requirements, making it a good choice for resource-constrained environments.
Keywords: AI chip, edge intelligence, LDPC encoder, multi-code rate fusion, integrated chip design
DOI: 10.3233/IDT-240112
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3259-3276, 2024
Authors: Li, Xingdong | Hu, Shengwu
Article Type: Research Article
Abstract: To obtain the rigorous accuracy of any point on a straight-line element, this article builds on existing research by considering both the error of the straight-line element itself and the error of any point relative to the element's end points. It discusses the accuracy calculation of any point on the straight-line element in three different situations and, through case analysis, compares the position covariance matrix of any point in the three situations, drawing the corresponding error ellipses using MATLAB. Different accuracy values and different error ellipses are obtained. This research offers guidance for error calculation in precision engineering measurement, such as large bridges or large tunnel breakthrough projects.
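The step from a 2x2 position covariance matrix to an error ellipse can be sketched directly: the eigenvalues of the covariance matrix give the squared semi-axes and the eigenvector direction gives the orientation. The sample covariance values below are illustrative, not from the article's cases.

```python
import math

def error_ellipse(sxx, syy, sxy):
    """Return (a, b, theta): 1-sigma semi-major/minor axes and major-axis
    orientation from covariance components sigma_xx, sigma_yy, sigma_xy."""
    mean = 0.5 * (sxx + syy)
    diff = 0.5 * (sxx - syy)
    root = math.hypot(diff, sxy)
    lam1, lam2 = mean + root, mean - root            # eigenvalues, lam1 >= lam2
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)   # major-axis orientation
    return math.sqrt(lam1), math.sqrt(lam2), theta
```

With a covariance matrix propagated to a point on the line element, these three numbers are exactly what a MATLAB `ellipse` plot of the point's positional uncertainty needs.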
Keywords: Rigorous accuracy, linear element, covariance matrix, error ellipses, MATLAB
DOI: 10.3233/IDT-230751
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3277-3284, 2024
Authors: Yu, Yue | Li, Qiang | Duan, Maojun | Yuan, Minxi | Song, Ziheng
Article Type: Research Article
Abstract: Traditional data storage models are inadequate for the growing big data demands of traffic and transportation management. Their poor horizontal scalability makes it difficult to handle massive data growth, and their complex management makes unified administration and effective resource utilization challenging given the differences between devices from different manufacturers. Storing and managing this huge volume of information effectively requires advanced technical tools that ensure data security, simplify data management, and meet terminals' demand for high real-time performance; these factors jointly drive continued attention to data storage terminal performance. This paper proposes a Hadoop solution based on distributed computing. As a distributed system infrastructure, Hadoop allows users to develop distributed programs without a deep understanding of distributed internals, making full use of the high-speed computing and storage capabilities of Hadoop clusters, which is especially suitable for big data processing on Internet of Things (IoT) platforms. The experimental results show that for a 10 GB data file, the traditional terminal (Terminal 1) can store 7.8 GB, while the Hadoop-based terminal (Terminal 2) can store 9.9 GB; for a 50 GB data file, Terminal 1 and Terminal 2 can store 40.4 GB and 49.8 GB respectively. Hadoop terminals thus have significant advantages in processing large-scale data, especially in storage efficiency, and can use storage resources more effectively to meet the high-performance storage requirements of traffic and transportation management.
Keywords: Internet of Things, distributed computing, transportation management, data storage terminal, data transfer, data management
DOI: 10.3233/IDT-240196
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3285-3299, 2024
Authors: Zhu, Yidong | Zhang, Zhe | Liu, Hengyu | Zhao, Bo | Wang, Tong
Article Type: Research Article
Abstract: Load access point planning based on network security is an open-capacity assessment for distribution networks aimed at reducing power loss. Distribution network operators must meet consumers' qualitative and quantitative power demands and ensure their satisfaction. The objective is to minimize power loss by determining the network's maximum power supply capacity while respecting constraints such as power flow dispersion, node voltage, and line capacity. The emergence of new flexible loads presents both opportunities and challenges. The study proposes open-capacity distribution network solutions incorporating Distributed Power Supply and Network Reconfiguration (OCDN-DPS and NR), a novel technique for optimizing distribution networks' open capacity that addresses the issues raised by the expanding use of distributed energy sources. By combining distributed power supply and network reconfiguration, the proposed approach yields an 18% rise in accessible capacity and a decrease in power losses, demonstrating its efficacy. The study uses second-order cone relaxation theory to estimate the distribution of the system's power supply capability on typical days, taking uncertainty into account to satisfy the optimization goals. Comparative analysis confirms the approach's superiority, underscoring its significance for sustainable energy options in distribution system optimization while maintaining consumer satisfaction and network security.
Keywords: Distributed power supply, network reconfiguration, open capacity of distribution network, second-order cone relaxation theory
DOI: 10.3233/IDT-240219
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3301-3322, 2024
Authors: Lin, Shuang | Huang, Jianye | Lin, Chenxiang | Zheng, Zhou | He, Deming | Ma, Teng | Wu, Xinxin
Article Type: Research Article
Abstract: Multi-scale object detection based on deep learning is a common method for safety monitoring in power operation scenarios and is of great significance for ensuring the safety of power operations. For power applications with high real-time requirements, the computational complexity of deep learning models can become a bottleneck: deploying them requires high-performance hardware such as GPUs, which may not be available in some power field environments. Based on the characteristics of common safety monitoring tasks in power operation scenarios, this paper proposes an automatic structured pruning method for multi-scale object detection algorithms. The method is designed for the safety monitoring of single, fixed-scale targets, effectively reducing inference complexity and improving the frames per second (FPS) of real-time object detection without compromising accuracy. Furthermore, the proposed method can adaptively perform automatic structured pruning for targets of different scales. Experiments with the YOLOv5 model demonstrate that the method improves inference speed by approximately 20% for safety monitoring tasks without reducing detection accuracy.
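Structured pruning removes whole output channels rather than individual weights, which is what actually shrinks inference cost on ordinary hardware. A sketch using the common L1-norm criterion (the paper's automatic, scale-adaptive criterion may differ); the "conv layer" here is a toy nested list of channel weights:

```python
import math

def l1_channel_scores(weights):
    """L1 norm per output channel; weights is [out_ch][flattened kernel]."""
    return [sum(abs(w) for w in ch) for ch in weights]

def prune_channels(weights, keep_ratio):
    """Keep the ceil(keep_ratio * n) output channels with the largest
    L1 norms; return (kept_indices, pruned_weights)."""
    scores = l1_channel_scores(weights)
    n_keep = max(1, math.ceil(keep_ratio * len(weights)))
    ranked = sorted(range(len(weights)), key=lambda i: -scores[i])
    kept = sorted(ranked[:n_keep])
    return kept, [weights[i] for i in kept]

toy_layer = [[0.1, -0.1], [1.0, 1.0], [0.5, -0.5]]   # 3 output channels
kept, pruned = prune_channels(toy_layer, keep_ratio=0.5)
```

In a real network, the kept indices must also be propagated to the next layer's input channels so the pruned model stays consistent.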
Keywords: Power scene, multi-scale object detection, automatic structured pruning, model optimization
DOI: 10.3233/IDT-240255
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3323-3332, 2024
Authors: Huang, Cong | Nong, Liyong | Nong, Yingxiong | Lu, Ying | Chen, Zhibin | Li, Zhe
Article Type: Research Article
Abstract: This paper studies a blockchain-based detection model for network access data tampering attacks, addressing the over-dependence on central servers and the ease of data tampering in traditional network environments. The model uses decentralization and encryption, monitors user behavior in real time through smart contracts, protects data with the SHA-256 hash algorithm, and employs a consensus algorithm to ensure data consistency and security. The experimental results show that the model performs well in detecting multiple attack types, with an accuracy of 99.51% and an F1 score of 0.98, far exceeding traditional methods and other deep learning techniques. The model remains robust under multi-node attacks: even with 200 attack nodes, recognition accuracy stays close to 90% and response time under 3 seconds. Cross-platform testing showed that the model quickly and consistently detected tampering on both Ethereum and Hyperledger, with an average detection time between 0.33 and 0.47 seconds. Hardware acceleration tests further show improved processing speed and hardware utilization, with TPU processing reaching 135 MB/s and GPU 122 MB/s. This study provides a theoretical basis for improving the security, effectiveness, and reliability of current network systems, and lays a theoretical and technical foundation for network applications in future network environments.
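The tamper-detection principle is a SHA-256 hash chain: each block stores the hash of its predecessor, so any edit to a historical access record breaks the chain at the next link. A minimal sketch with illustrative record strings (the paper's smart-contract and consensus layers are not modeled here):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Link each access record to the hash of the previous block."""
    chain, prev = [], "0" * 64
    for rec in records:
        block = {"record": rec, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain):
    """Recompute every link; any mismatch means a record was tampered with."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["login:alice", "read:db1", "logout:alice"])
```

Editing any `record` changes its block's hash, so the following block's stored `prev` no longer matches and `verify` fails.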
Keywords: Blockchain technology, central server, data tampering attack detection, convolutional neural network, hash algorithm
DOI: 10.3233/IDT-240176
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3333-3345, 2024
Authors: Zhang, Zongju
Article Type: Research Article
Abstract: As electronic information technology continuously develops, it has been applied across industries and plays a vital role in the Internet of Things (IoT). An electronic communication positioning system combines electronic science and technology with information science and technology to determine spatial position. Current electronic communication positioning systems have low positioning accuracy in indoor environments and are easily affected by obstacles such as building structures and walls, making accurate positioning difficult. This paper integrates IoT technology into the design and simulation of an electronic communication positioning system, aiming to improve the system's stability and accuracy and the quality of location-based services currently on the market. The paper first introduces the IoT architecture, including analyses of RFID and ZigBee technology, then surveys common positioning algorithms, and finally conducts a simulation test of the IoT-based electronic communication positioning system. Compared with a traditional electronic communication positioning system, the test results show that the IoT-based system achieves a minimum error of 0.102 and a maximum time accuracy of 95.34%, proving its effectiveness; in practical application, its error and time accuracy meet the requirements of location-based services. This article thus illustrates the application of electronic information technology in IoT systems and supports the further development of the IoT.
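One common family of positioning algorithms in such systems is range-based trilateration: given distances to three anchors (e.g. from RSSI or time-of-arrival), subtracting the first anchor's circle equation from the other two yields a linear system for the unknown position. This sketch is one standard scheme, not necessarily the paper's exact algorithm; anchors and distances are illustrative.

```python
import math

def trilaterate(anchors, dists):
    """Solve for (x, y) from three anchor positions and measured distances
    by linearizing the circle equations (subtracting the first)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # A [x, y]^T = b, from (circle 1 - circle 2) and (circle 1 - circle 3)
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # anchors must not be collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

est = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
                  (math.sqrt(2), math.sqrt(10), math.sqrt(10)))
```

With noisy indoor ranges, the same linear system would be solved in a least-squares sense over more than three anchors.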
Keywords: Internet of things technology, electronic communication technology, system design, simulation test
DOI: 10.3233/IDT-240184
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3347-3363, 2024
Authors: Wu, Jian | Xu, Liang | Chen, Qi | Ye, Zhihui
Article Type: Research Article
Abstract: In the development of automation and intelligent systems, multi-sensor data fusion technology is crucial. However, because sensor data are uncertain and incomplete, fusing them effectively has always been a challenge. To solve this problem, the study combines fuzzy theory and neural networks to investigate multi-sensor data transmission and data fusion, designing a sensor network clustering algorithm based on whale-algorithm-optimized fuzzy logic and a neural network data fusion algorithm optimized by the sparrow algorithm. Performance tests showed that the data fusion algorithm delayed the first node death to round 1122, which is 391 and 186 rounds later than the comparison algorithms, respectively. In the same rounds, the remaining energy was consistently greater than that of the comparison algorithms, with the gap widening over time. The results indicate that the proposed multi-sensor data fusion approach combining fuzzy theory and neural networks improves network efficiency and node energy utilization and extends network lifespan.
Keywords: SSA, BP, fuzzy logic, neural networks, data fusion
DOI: 10.3233/IDT-240316
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3365-3378, 2024
Authors: Wang, Yongqiang | Yang, Li
Article Type: Research Article
Abstract: To address dangerous behaviors among elderly people living at home, the study explores motion target detection, real-time target tracking, and behavioral pose recognition and classification, using behavioral poses in videos as samples. To tackle the challenges of detecting moving targets, a detection method based on a Gaussian mixture model (GMM) and the four-frame difference method is proposed; a tracking technique incorporating the Kalman filter (KF) is investigated to follow the behavioral changes of the elderly in real time; and a seven-layer convolutional neural network (CNN) is constructed to address inaccurate behavioral pose recognition. Experimental analysis shows that the improved GMM detection method yields complete profiles with significantly improved accuracy. The KF target tracking technique can follow object trajectories steadily in real time, with a smallest tracking error of 0.19. The classification accuracy of the CNN pose recognition model is 95.87%, with a pose classification time of 27 seconds; its performance is superior to the mean shift, particle filter, and CamShift algorithms in all aspects. In practice, the system can accurately identify abnormal behavior in the elderly and help safeguard their daily health.
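The Kalman tracking step can be sketched with a one-dimensional constant-velocity filter over noisy position measurements; the two-dimensional image-plane version runs the same predict/update cycle per axis. Noise levels and measurements here are illustrative, not the paper's settings.

```python
def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman filter; state = [position, velocity],
    measurement = noisy position. Returns the filtered positions."""
    x, v = measurements[0], 0.0
    p11, p12, p22 = 1.0, 0.0, 1.0          # covariance [[p11,p12],[p12,p22]]
    track = []
    for z in measurements:
        # predict: x += v*dt, P = F P F^T + Q
        x += v * dt
        p11 += dt * (2 * p12 + dt * p22) + q
        p12 += dt * p22
        p22 += q
        # update with position measurement z (H = [1, 0])
        k1 = p11 / (p11 + r)
        k2 = p12 / (p11 + r)
        innov = z - x
        x += k1 * innov
        v += k2 * innov
        p22 -= k2 * p12
        p12 -= k2 * p11
        p11 -= k1 * p11
        track.append(x)
    return track

smoothed = kalman_track([0.0, 1.1, 1.9, 3.2, 3.8, 5.1])
```

The filter's predicted position also gives the search region for the next frame's detection, which is how tracking stays real-time.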
Keywords: Abnormal poses, target tracking, convolutional neural networks, Gaussian mixture models, Kalman filtering
DOI: 10.3233/IDT-230375
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3379-3394, 2024
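The KF tracking step described above follows the standard predict/update cycle. The sketch below is a generic 1-D constant-velocity Kalman filter with assumed noise parameters `q` and `r`, not the paper's actual implementation, which tracks 2-D image targets.

```python
# Generic 1-D constant-velocity Kalman filter for position tracking.
# State is [position, velocity]; only position is measured.
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Return filtered positions for a noisy 1-D position sequence."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # observation: position only
    Q = q * np.eye(2)                          # process noise covariance (assumed)
    R = np.array([[r]])                        # measurement noise covariance (assumed)
    x = np.array([[measurements[0]], [0.0]])   # initial state
    P = np.eye(2)                              # initial state covariance
    out = []
    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = np.array([[z]]) - H @ x            # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```

For a target moving at constant speed, the velocity estimate converges after a short transient and the filter tracks the trajectory with small lag.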
Authors: Tang, Hui | Yan, Jie | Pan, Deng | Chen, Yu
Article Type: Research Article
Abstract: This paper analyzes the healthcare level and the evolution of the spatial and temporal pattern of each district and county in Changsha City using the entropy method, based on statistics on the number of healthcare institutions, beds, and health technicians in Changsha City from 2010 to 2018, and evaluates the fairness and trend of healthcare resource distribution over the years in per capita terms by calculating the Gini coefficient. The results show that the medical level is better in the southern part of Changsha City and weaker in the northern part; there are significant differences in healthcare level between districts and counties, though these differences are decreasing year by year; and compared with the number of healthcare institutions and beds, the distribution of health technicians across districts and counties lacks fairness. Accordingly, suggestions are made to improve the level of support for health technical personnel, to equalize the distribution of the internal components of healthcare, and to improve the construction of the healthcare system and resource sharing.
Keywords: Health equality, medical and health resource allocation, Gini coefficient, Changsha City
DOI: 10.1177/18724981241295933
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3395-3404, 2024
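The Gini coefficient used above to assess fairness of per-capita health resources has a standard mean-absolute-difference form, sketched below; the input would be per-capita resource counts (e.g. health technicians per district), and any sample values are made up.

```python
# Gini coefficient via the mean absolute difference:
#   G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x))
# 0 means perfect equality; values near 1 mean extreme concentration.

def gini(values):
    """Gini coefficient of a list of non-negative values."""
    n = len(values)
    mean = sum(values) / n
    if mean == 0:
        return 0.0
    diff_sum = sum(abs(a - b) for a in values for b in values)
    return diff_sum / (2 * n * n * mean)
```

For example, four districts with equal resources give 0, while all resources concentrated in one district of four give 0.75.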
Authors: Zhang, Pengcheng | Yu, Hao | Wang, Wei | Su, Tao | Zhu, Min | Jin, Xin | Huang, Ying
Article Type: Research Article
Abstract: With the rapid development of global navigation satellite system (GNSS) technology, IGS signals play a crucial role in fields such as positioning, navigation, and time synchronization. Nevertheless, multipath error seriously degrades the performance of IGS signals in practical applications. This paper studies the multipath error of IGS signals in depth and proposes an improvement method based on digital filtering. The article first provides a comprehensive analysis of the impact of multipath error and identifies the shortcomings of existing studies, clarifying the motivation of this work. Subsequently, digital filtering techniques are used to estimate the multipath error accurately, and a corresponding improvement strategy is designed. Experimental verification demonstrates the significant effect of the improved method on signal quality and positioning accuracy. Finally, the article summarizes the research results and discusses their potential value in IGS signal reception and related applications, providing theoretical support and practical guidance for future technology development.
Keywords: Digital filtering techniques, IGS signals, multipath error estimation, error improvement
DOI: 10.3233/IDT-240242
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3405-3421, 2024
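Multipath error typically shows up as a low-frequency component in a residual series (e.g. code-minus-carrier). As a hedged illustration of the digital-filtering idea in the abstract above, a simple FIR moving-average low-pass filter can estimate that component so it can be subtracted; the paper's actual filter design is not specified here, and both functions below are illustrative assumptions.

```python
# Sketch: estimate slow-varying multipath with a centered moving average
# and subtract it from the residual series.

def moving_average(signal, window=5):
    """Centered moving average; edges use a shrunken window."""
    n = len(signal)
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def correct_multipath(residuals, window=5):
    """Subtract the estimated low-frequency multipath from the residuals."""
    est = moving_average(residuals, window)
    return [r - e for r, e in zip(residuals, est)]
```

A constant bias (the slowest possible "multipath") is removed entirely, while high-frequency receiver noise passes through largely unchanged.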
Authors: Liu, Xin
Article Type: Research Article
Abstract: In film and television production, efficient and precise image processing is vital for achieving realistic visual effects, so exploring and applying advanced image processing technologies has become essential for elevating production quality. This work investigates the application of artificial intelligence (AI) technology to the processing and production of animated images in film and television scenarios. Comparing the performance of a standard Generative Adversarial Network (GAN), DenseNet, and CycleGAN under different noise conditions shows that CycleGAN performs best in image denoising and detail restoration. Experimental results demonstrate that CycleGAN achieves a Peak Signal-to-Noise Ratio (PSNR) of 30.1 dB and a Structural Similarity Index Measure (SSIM) of 0.88 under Gaussian noise, and a PSNR of 29.5 dB and an SSIM of 0.85 under salt-and-pepper noise, outperforming the other models in both conditions. Additionally, CycleGAN's mean absolute error is significantly lower than that of the other models. This work demonstrates that CycleGAN can handle complex noise more effectively and generate high-quality images under unsupervised learning conditions. These findings provide new directions for future image processing research, offer references for model selection in practical applications, and establish a theoretical foundation for applying advanced AI techniques in film and television production. This advancement not only drives progress in image processing technology but also provides effective solutions for efficient production and quality enhancement of future film and television works.
Keywords: High-performance calculation, Zhongying⋅Shensi, film and television images, artificial neural network
DOI: 10.3233/IDT-240577
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3423-3436, 2024
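The PSNR figures reported in the abstract above (higher = less distortion) come from a standard formula; a minimal NumPy version for 8-bit images is sketched below. SSIM requires windowed local statistics and is omitted for brevity.

```python
# PSNR = 10 * log10(MAX^2 / MSE), in dB, between a reference image and a
# degraded/restored image of the same shape. MAX is 255 for 8-bit images.
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped arrays."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Identical images give infinite PSNR; the maximally different pair (all-0 vs. all-255) gives 0 dB, so the 29-30 dB range reported for CycleGAN indicates fairly faithful restoration.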
Authors: Wang, Honglv | Shi, Dingke | Zhang, Chengting | Ding, Nanzhe | Cheng, Chao
Article Type: Research Article
Abstract: Industry 4.0 is reshaping conventional factories into "smart factories" via the widespread use of IoT-enabled networks of linked devices, sensors, and software for process optimization and monitoring. Intelligent manufacturing facilities may employ IoT-based predictive maintenance to reduce downtime, increase equipment longevity, and avoid machine failures. Manufacturers can also gain real-time insight into energy consumption patterns, a major concern in the industry. The primary objective is to optimize energy use during part manufacturing. Hence, this paper proposes the Internet of Things Low-Power Wide-Area Network Model (IoT-LPWM) to monitor manufacturing and reduce energy consumption. The proposed method's production status component uses visual knowledge map analysis loaded with data from the edge device. A low-power wide-area network (LPWAN) is the fundamental component of the proposed approach to industrial wireless communication. Using edge computing in the LPWAN reduces computational complexity by shifting high-intensity processing to the periphery, where devices with computing resources are more readily available. This method can reduce both the energy needed to process and store massive data and the likelihood of cyberattacks. The experimental results show that IoT-LPWM provides useful information that helps manufacturers make decisions and reduce energy consumption, achieving a performance ratio of 97%, an attack prevention ratio of 96.3%, an energy management ratio of 93.8%, and a data transmission ratio of 98.1% compared to other methods.
Keywords: Intelligent factory, internet of things (IoT), knowledge map (KM), edge computing (EC), sensors
DOI: 10.3233/IDT-240251
Citation: Intelligent Decision Technologies, vol. 18, no. 4, pp. 3437-3451, 2024
IOS Press, Inc.
6751 Tepper Drive
Clifton, VA 20124
USA
Tel: +1 703 830 6300
Fax: +1 703 830 2300
sales@iospress.com
For editorial issues, like the status of your submitted paper or proposals, write to editorial@iospress.nl
IOS Press
Nieuwe Hemweg 6B
1013 BG Amsterdam
The Netherlands
Tel: +31 20 688 3355
Fax: +31 20 687 0091
info@iospress.nl
For editorial issues, permissions, book requests, submissions and proceedings, contact the Amsterdam office info@iospress.nl
Inspirees International (China Office)
Ciyunsi Beili 207(CapitaLand), Bld 1, 7-901
100025, Beijing
China
Free service line: 400 661 8717
Fax: +86 10 8446 7947
china@iospress.cn
If you need help with publishing or have any suggestions, please email: editorial@iospress.nl