Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Papers are encouraged that contribute:
- solutions by mathematical methods of problems emerging in computer science
- solutions of mathematical problems inspired by computer science.
Topics of interest include (but are not restricted to): theory of computing, complexity theory, algorithms and data structures, computational aspects of combinatorics and graph theory, programming language theory, theoretical aspects of programming languages, computer-aided verification, computer science logic, database theory, logic programming, automated deduction, formal languages and automata theory, concurrency and distributed computing, cryptography and security, theoretical issues in artificial intelligence, machine learning, pattern recognition, algorithmic game theory, bioinformatics and computational biology, quantum computing, probabilistic methods, & algebraic and categorical methods.
Article Type: Other
DOI: 10.3233/FI-2010-212
Citation: Fundamenta Informaticae, vol. 98, no. 1, pp. i-ii, 2010
Authors: Alcala-Fdez, Jesus | Flugy-Pape, Nicolo | Bonarini, Andrea | Herrera, Francisco
Article Type: Research Article
Abstract: Data mining is most commonly used in attempts to induce association rules from transaction data, which can help decision-makers easily analyze the data and make good decisions regarding the domains concerned. Most conventional studies focus on binary or discrete-valued transaction data; however, the data in real-world applications usually consist of quantitative values. In recent years, many researchers have proposed genetic algorithms for mining interesting association rules from quantitative data. In this paper, we present a study of three genetic association rule extraction methods to show their effectiveness for mining quantitative association rules. Experimental results over two real-world databases are shown.
Keywords: Association Rules, Data Mining, Evolutionary Algorithms, Genetic Algorithms
DOI: 10.3233/FI-2010-213
Citation: Fundamenta Informaticae, vol. 98, no. 1, pp. 1-14, 2010
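To make the abstract above more concrete, here is a minimal sketch (not the authors' code) of how a quantitative, interval-based association rule can be scored by support and confidence, the kind of fitness a genetic rule-mining method typically optimises. The toy attributes, intervals and thresholds are illustrative assumptions.

```python
# Hypothetical example: score a quantitative association rule on toy data.
import numpy as np

rng = np.random.default_rng(0)
# Toy quantitative transaction data: age and monthly spend (assumed attributes).
data = {"age": rng.integers(18, 70, size=1000),
        "spend": rng.normal(300, 80, size=1000)}

def rule_quality(data, antecedent, consequent):
    """antecedent/consequent: dicts mapping attribute -> (low, high) interval."""
    n = len(next(iter(data.values())))
    def covers(conds):
        mask = np.ones(n, dtype=bool)
        for attr, (lo, hi) in conds.items():
            mask &= (data[attr] >= lo) & (data[attr] <= hi)
        return mask
    a = covers(antecedent)
    both = a & covers(consequent)
    support = both.mean()
    confidence = both.sum() / a.sum() if a.sum() else 0.0
    return support, confidence

# Example rule: IF age in [30, 45] THEN spend in [250, 400].
print(rule_quality(data, {"age": (30, 45)}, {"spend": (250, 400)}))
```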
Authors: Lu, Kun-Che | Yang, Don-Lin
Article Type: Research Article
Abstract: Clustering is useful for mining the underlying structure of a dataset in order to support decision making, since target or high-risk groups can be identified. However, for high-dimensional datasets, the result of traditional clustering methods can be meaningless, as clusters may only be depicted with respect to a small subset of features. Taking customer datasets as an example, certain customers may correlate with their salary and education, and others may correlate with their job and house location. If one uses all the features of a customer for clustering, these local-correlated clusters may not be revealed. In addition, processing high-dimensional and large datasets is a challenging problem in decision making. Searching all combinations of every feature with every record to extract local-correlated clusters is infeasible, as it is exponential in data dimensionality and cardinality. In this paper, we propose a scalable 2-Leveled Approximated Hyper-Image-based Clustering framework, referred to as 2L-HIC-A, for mining local-correlated clusters, where each level's clustering process requires only one scan of the original dataset. Moreover, the data-processing time of 2L-HIC-A can be independent of the input data size. In 2L-HIC-A, various well-developed image-processing techniques can be exploited for mining clusters. Instead of proposing a new clustering algorithm, our framework can accommodate other clustering methods for mining local-correlated clusters and shed new light on existing clustering techniques.
Keywords: local-correlated cluster, approximated clustering, high dimension, large dataset, image processing, morphology
DOI: 10.3233/FI-2010-214
Citation: Fundamenta Informaticae, vol. 98, no. 1, pp. 15-32, 2010
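As a rough illustration of the image-based view described above (a sketch under my own assumptions, not the 2L-HIC-A implementation): a density histogram over one feature pair can be treated as an image, and clusters in that projected subspace can then be extracted with standard image-processing operations such as morphology and connected-component labeling.

```python
# Hypothetical example: cluster a 2-D projection via a density "image".
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
# Toy data: two local-correlated groups in an assumed (salary, education) subspace.
x = np.concatenate([rng.normal(3, 0.3, 500), rng.normal(7, 0.4, 500)])
y = np.concatenate([rng.normal(2, 0.3, 500), rng.normal(6, 0.4, 500)])

# One scan of the data builds the density histogram ("image") for this pair.
img, xe, ye = np.histogram2d(x, y, bins=64)

# Morphological closing smooths the binary density image, then connected
# components give the clusters in this projected subspace.
binary = img > 2                      # illustrative density threshold
binary = ndimage.binary_closing(binary, structure=np.ones((3, 3)))
labels, n_clusters = ndimage.label(binary)
print("clusters found in this 2-D projection:", n_clusters)
```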
Authors: Lee, Wei-Hsun | Tseng, Shian-Shyong | Wang, Ching-Hung | Tung, Shen-Lung
Article Type: Research Article
Abstract: Locating radio signal coverage holes (SCH) and signal weak coverage areas (SWCA) in the mobile network and taking appropriate actions to improve network quality are major tasks for mobile network operators (MNO). A novel approach, the Signal Coverage Hole and weak coverage Area DIscovering Model (SCHADIM), is proposed to achieve this goal, based on spatiotemporal data mining of the raw data collected from location-based services (LBS). It reuses the communication raw data in LBS-based applications and transforms it into mobile network communication quality monitoring information by integrating the GIS (geographical information system), the road network database and the mobile network base station database. In this way, the vehicles in the LBS-based applications can be regarded as mobile network signal probing vehicles, which provide plentiful information for discovering the SCH/SWCA. Compared to the traditional mobile network signal probing vehicle method, known as the radio site verification (RSV) method, the proposed SCHADIM has spatiotemporal coverage as well as cost advantages. A mobile network monitoring system, the cellular network quality safeguard (CQS), has been implemented based on the proposed model, which combines with domain expert heuristics to provide decision support information for optimizing the mobile network.
Keywords: Spatiotemporal data mining, Radio signal coverage hole, Mobile network planning, Location-based service (LBS)
DOI: 10.3233/FI-2010-215
Citation: Fundamenta Informaticae, vol. 98, no. 1, pp. 33-47, 2010
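A simplified sketch of the underlying idea (column names, grid size and the -105 dBm threshold are my assumptions, not the paper's): signal reports that LBS vehicles already produce are binned onto a spatial grid, and grid cells whose signal statistics indicate a weak-coverage area are flagged.

```python
# Hypothetical example: flag weak-coverage grid cells from LBS probe reports.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 5000
lat = rng.uniform(24.99, 25.01, n)
lon = rng.uniform(121.49, 121.51, n)
rsrp = rng.normal(-90, 6, n)                       # received signal strength (dBm)
# Simulate a weak-coverage pocket in the north-east corner of the area.
rsrp[(lat > 25.005) & (lon > 121.505)] -= 25
reports = pd.DataFrame({"lat": lat, "lon": lon, "rsrp_dbm": rsrp})

# Bin the reports onto a ~100 m grid (0.001 degrees, roughly).
reports["cell"] = list(zip((reports.lat // 0.001).astype(int),
                           (reports.lon // 0.001).astype(int)))

stats = reports.groupby("cell")["rsrp_dbm"].agg(["median", "count"])
weak = stats[(stats["median"] < -105) & (stats["count"] >= 10)]
print(f"{len(weak)} grid cells flagged as weak-coverage areas")
```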
Authors: De Bock, KoenW. | Van den Poel, Dirk
Article Type: Research Article
Abstract: Several recent studies have explored the virtues of behavioral targeting and personalization for online advertising. In this paper, we add to this literature by proposing a cost-effective methodology for the prediction of demographic website visitor profiles that can be used for web advertising targeting purposes. The methodology involves the transformation of website visitors' clickstream patterns to a set of features and the training of Random Forest classifiers that generate predictions for gender, age, level of education and occupation category. These demographic predictions can support online advertisement targeting (i) as an additional input in personalized advertising or behavioral targeting, or (ii) as an input for aggregated demographic website visitor profiles that support marketing managers in selecting websites and achieving an optimal correspondence between target groups and website audience composition. The proposed methodology is validated using data from a Belgian web metrics company. The results reveal that Random Forests demonstrate superior classification performance over a set of benchmark algorithms. Further, the ability of the model set to generate representative demographic website audience profiles is assessed. The stability of the models over time is demonstrated using out-of-period data.
Keywords: demographic prediction, demographic targeting, web advertising, web user profiling, clickstream analysis, ensemble classification, Random Forests, out-of-period validation
DOI: 10.3233/FI-2010-216
Citation: Fundamenta Informaticae, vol. 98, no. 1, pp. 49-70, 2010
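A hedged sketch of the modelling step described above, using synthetic features rather than the paper's Belgian web-metrics data: clickstream patterns are summarised as per-visitor features and fed to a Random Forest that predicts a demographic label such as gender.

```python
# Hypothetical example: Random Forest on synthetic clickstream features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
# Illustrative clickstream features: visit counts per site category.
X = rng.poisson(lam=[2.0, 1.0, 3.0, 0.5, 1.5], size=(n, 5))
# Synthetic gender label loosely driven by two of the categories.
y = (0.6 * X[:, 1] - 0.4 * X[:, 3] + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```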
Authors: Leng, Jinsong | Hong, Tzung-Pei
Article Type: Research Article
Abstract: Outlier detection in high-dimensional data sets is a challenging data mining task. Mining outliers in subspaces seems to be a promising solution, because outliers may be embedded in some interesting subspaces. Searching all possible subspaces can lead to the problem called "the curse of dimensionality". Due to the existence of many irrelevant dimensions in high-dimensional data sets, it is of paramount importance to eliminate the irrelevant or unimportant dimensions and identify interesting subspaces with strong correlation. Normally, the correlation among dimensions can be determined by traditional feature selection techniques or subspace-based clustering methods. Dimension-growth subspace clustering techniques can find interesting subspaces in relatively lower-dimensional spaces, while dimension-reduction approaches try to group interesting subspaces with larger dimensions. This paper investigates the possibility of detecting outliers in correlated subspaces. We present a novel approach that identifies outliers in the correlated subspaces, where the degree of correlation among dimensions is measured in terms of the mean squared residue. In doing so, we employ a dimension-reduction method to find the correlated subspaces. Based on the correlated subspaces obtained, we introduce another criterion, called the "shape factor", to rank the most important subspaces among the projected subspaces. Finally, outliers are identified in the most important subspaces using classical outlier detection techniques. Empirical studies show that the proposed approach can identify outliers effectively in high-dimensional data sets.
Keywords: Outlier Detection, Subspace Outlier Detection, Subspace Clustering, Shape Factor, Dimension Reduction
DOI: 10.3233/FI-2010-217
Citation: Fundamenta Informaticae, vol. 98, no. 1, pp. 71-86, 2010
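An illustrative computation, not the paper's full pipeline: the mean squared residue mentioned in the abstract measures how strongly a set of dimensions is correlated, and the same residue taken per record can serve as a simple outlier score inside that subspace. The data and the injected outlier below are assumptions for demonstration.

```python
# Hypothetical example: mean squared residue of a correlated subspace.
import numpy as np

rng = np.random.default_rng(4)
# A correlated subspace: rows follow a shared additive pattern, plus one outlier.
base = rng.normal(0, 1, (50, 1)) + rng.normal(0, 1, (1, 4))
sub = base + rng.normal(0, 0.05, base.shape)
sub[7] += rng.normal(0, 3, 4)          # inject an outlying record

row_mean = sub.mean(axis=1, keepdims=True)
col_mean = sub.mean(axis=0, keepdims=True)
residue = sub - row_mean - col_mean + sub.mean()

msr = (residue ** 2).mean()                 # correlation strength of the subspace
row_score = (residue ** 2).mean(axis=1)     # per-record outlier score
print("subspace MSR:", round(float(msr), 4),
      "top outlier row:", int(row_score.argmax()))
```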
Authors: Kramer, Oliver
Article Type: Research Article
Abstract: Kernel-based techniques have shown outstanding success in data mining and machine learning in the recent past. Many optimization problems of kernel-based methods suffer from multiple local optima. Evolution strategies have grown into successful methods for non-convex optimization. This work shows how both areas can profit from each other. We investigate the application of evolution strategies to Nadaraya-Watson-based kernel regression and vice versa. The Nadaraya-Watson estimator is used as a meta-model during optimization with the covariance matrix self-adaptation evolution strategy. An experimental analysis evaluates the meta-model-assisted optimization process on a set of test functions and investigates model sizes and the balance between objective function evaluations on the real function and on the surrogate. In turn, evolution strategies can be used to optimize the embedded optimization problem of unsupervised kernel regression. The latter is fairly parameter-dependent, and minimization of the data space reconstruction error is an optimization problem with numerous local optima. We propose an evolution strategy based unsupervised kernel regression method to solve the embedded learning problem. Furthermore, we tune the novel method by means of the parameter tuning technique sequential parameter optimization.
DOI: 10.3233/FI-2010-218
Citation: Fundamenta Informaticae, vol. 98, no. 1, pp. 87-106, 2010
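A minimal sketch of the surrogate idea in the abstract's first direction: a Nadaraya-Watson estimator with a Gaussian kernel, fitted on a handful of evaluated points of a test function, can stand in for the expensive objective during optimization. The bandwidth, sample size and test function are assumptions, not the paper's settings.

```python
# Hypothetical example: 1-D Nadaraya-Watson regression as a cheap surrogate.
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.3):
    # Gaussian kernel weights between each query point and each training point.
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(5)
f = lambda x: np.sin(3 * x) + 0.1 * x ** 2           # assumed test objective
x_train = rng.uniform(-3, 3, 40)                     # "expensive" evaluations
y_train = f(x_train)

x_query = np.linspace(-3, 3, 7)
print(np.round(nadaraya_watson(x_query, x_train, y_train), 3))
print(np.round(f(x_query), 3))                       # compare to the true values
```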
Authors: Tsumoto, Shusaku | Hirano, Shoji
Article Type: Research Article
Abstract: This paper proposes an application of data mining to medical risk management, where data mining techniques are applied to the detection, analysis and evaluation of risks potentially existing in clinical environments. We applied this technique to the following two medical domains: risk aversion of nurse incidents and infection control. The results show that data mining methods were effective for the detection and aversion of risk factors.
Keywords: Risk Mining, Risk Management, Data Mining, Hospital Management
DOI: 10.3233/FI-2010-219
Citation: Fundamenta Informaticae, vol. 98, no. 1, pp. 107-121, 2010
Authors: Abe, Akinori | Ohsawa, Yukio | Ozaku, Hiromi Itoh | Sagara, Kaoru | Kuwahara, Noriaki | Kogure, Kiyoshi
Article Type: Research Article
Abstract: Many medical accidents and incidents occur due to communication errors. To avoid such incidents, in this paper we propose a system for determining communication errors. In particular, we propose a model that can be applied to multi-layered or chained situations. First, we provide an overview of communication errors in nursing activities. Then we describe the warp and woof model for nursing tasks, which was proposed by Harada and considers multi-layered or chained situations. Next, we describe a system for determining communication errors based on the warp and woof model for nursing tasks. The system is capable of generating nursing activity diagrams semi-automatically and compiles necessary nursing activities. We also propose a prototype tagging of the nursing corpus for effective generation of the diagrams. Then we combine the diagram generation with the Kamishibai KeyGraph to determine possible points of the hidden or potential factors of communication errors.
Keywords: nursing risk management, communication error, multi-layered or chained situations, the warp and woof model, nursing activity diagrams, Kamishibai KeyGraph
DOI: 10.3233/FI-2010-220
Citation: Fundamenta Informaticae, vol. 98, no. 1, pp. 123-142, 2010