
Editorial

Dear Colleague: Welcome to volume 27(6) of the Intelligent Data Analysis (IDA) Journal.

This sixth issue closes the 27th year of the IDA journal. It includes fifteen articles covering theoretical contributions as well as papers on applied Intelligent Data Analysis.

The first part of this issue includes several theoretical contributions: new methods and techniques for particular families of problems and challenges involving data preparation and preprocessing. In the first paper, G. Chen et al. revisit the concept of Transformers and introduce the Resformer network, which features a fully sparse attention mechanism, specialized modules for time series, and a quadratic linear transformation in place of the Transformer’s self-attention mechanism. The paper concludes with a comparative analysis between the new Resformer and Transformers for long time-series forecasting. In the second paper of this block, Duan et al. investigate alternatives to oversampling methods for imbalanced classification datasets and propose a new hybrid sampling method with two-step noise filtering. The algorithm combines noise discrimination mechanisms with a division of the minority class into boundary and safe samples (which helps remove a portion of the boundary majority samples). This division into two categories is also exploited to sample them independently. In the third theoretical paper, Mu et al. analyze the problem of graph-based recommendation systems built on graph neural networks. The paper proposes a novel heterogeneous-information-fusion-based graph collaborative filtering method that combines multiple heterogeneous graphs. Hou et al. propose a new sub-graph sampling method for large-scale graphs analyzed with Graph Convolutional Networks. The proposed methodology uses an index between two adjacent nodes to describe the relationship among neighbor nodes, leveraging it for probabilistic sampling. The algorithm exhibits much smaller variance, and its results outperform those reported in the existing literature. Zhang et al. introduce a new method for clustering stream data. They propose a dynamic-microlocal-clustering method that merges samples from streaming data into dynamic microlocal clusters. This approach allows a density-center-based neighborhood search to periodically merge microlocal clusters into global clusters automatically, while the global clusters are updated by a dynamic-cluster-increasing method as data streams in during each period.

A second section of theoretical papers includes three contributions in supervised and unsupervised classification. This section starts with S. Nheri et al., who extend the Covariance-guided One-Class Support Vector Machine classifier to perform a scattered covariance estimation. The new method uses subclass information to jointly decrease dispersion, solving the problem as a convex optimization problem. The method is also compared against the original one on both artificial and real-world datasets. In the second paper of this section, Bao et al. present an interesting work on RBF (radial basis function) neural networks for Multi-Instance Multi-Label classification. The contribution addresses the issue of reusing prior information regarding the relationships among instances and among labels. The authors formulate the problem as a multiobjective optimization case and solve it with a tailor-made particle swarm optimization that uses a share-learning factor to adjust particle velocity. We conclude the section on theoretical papers with Pang et al., who present their work on Bayesian network classifiers. In particular, the authors introduce Bayesian multinets within the framework of multistage classification, partitioning the training set and a pseudo training set according to high-confidence class labels. This paper addresses the issue of asymmetric independence assertion, in contrast to the independent and identically distributed assumption in other single-topology networks.

The third group of papers corresponds to the application section, particularly to the areas of applied computer vision and image processing. The first of these papers, by Wang et al., presents an interesting contribution on the detection of small objects in unmanned aerial vehicle images. These objects are difficult to detect, and the proposed method uses multi-scale spatial features and multi-scale channel attention. The result is improved sensitivity to smaller objects, addressing problems such as aggregation, occlusion, and insufficient feature extraction in small object detection. A second application paper is by Liang et al., where the authors address the challenge of performing Transformer-based vision tasks on the edge (e.g., mobile devices). The proposed architecture includes two novel modules: the first expands feature capacity inside the module and enhances the long-range modeling capability of the attention mechanism, and the second increases the compactness of the model and introduces inductive bias using cheap convolutional operations. We conclude this section on computer vision with Deepa et al., who have conducted a detailed survey of different deep learning models for the detection and classification of brain tumors using Magnetic Resonance Images. Nineteen different pre-trained deep learning models and their variants have been tested with data from eight well-known datasets, making this an extensive and detailed study of deep learning models for this challenging application field.

The final section of this issue is devoted to other application papers covering a broader range of domains. In the first paper of this last section, Tian et al. propose a new method to classify smart contracts that uses multi-feature fusion as its core mechanism. This method uses structure-based traversal approaches to extract global code features from the abstract syntax tree; the local and global code features are then fed into an attention mechanism to generate code semantic features. The classification layer exploits these smart contract semantic features, processing them with a stacked denoising autoencoder and a softmax classifier. In the second paper of the section, Sun et al. address the concept of important or key nodes in social networks, paying particular attention to how these nodes are defined and what makes them different from the rest of the nodes in the network. The paper is a well-structured survey of the existing literature on this topic, reviewing the different metrics and analyzing the relationships among them on twenty well-known real-world datasets. In the third application paper of this section, Q. Chen et al. tackle the problem of failure evidence extraction from complex hybrid data in QA systems. The contribution introduces two different metrics for the problem and proposes an approach that leverages a candidate extractor to obtain supporting evidence related to the question, an origin selector designed to obtain answer provenance, and a fusion mechanism for the loss of the origin selector. We conclude this section and the issue with an interesting application problem: Park et al. propose a Transformer-based method to produce note-level transcriptions of music melodies. The proposed method uses sequence-to-sequence Transformers combined with pitch augmentation to prevent overfitting and overlapping decoding to handle broken context between segments.

We would also like to thank all the editorial board members and reviewers who have made it possible to complete this issue, as well as the publisher and the team at IOS Press for their excellent work.

With our best wishes,

Dr. A. Famili, Founder
Dr. J.M. Pena, Editor-in-Chief