Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Papers are encouraged that contribute:
- solutions, by mathematical methods, of problems emerging in computer science
- solutions of mathematical problems inspired by computer science.
Topics of interest include (but are not restricted to): theory of computing, complexity theory, algorithms and data structures, computational aspects of combinatorics and graph theory, programming language theory, theoretical aspects of programming languages, computer-aided verification, computer science logic, database theory, logic programming, automated deduction, formal languages and automata theory, concurrency and distributed computing, cryptography and security, theoretical issues in artificial intelligence, machine learning, pattern recognition, algorithmic game theory, bioinformatics and computational biology, quantum computing, probabilistic methods, and algebraic and categorical methods.
Authors: Amin, Talha | Chikalov, Igor | Moshkov, Mikhail | Zielosko, Beata
Article Type: Research Article
Abstract: Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage). (See the sketch of rule length and coverage after this entry.)
Keywords: Dynamic programming, decision rules, classifiers
DOI: 10.3233/FI-2013-901
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 151-160, 2013
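The length and coverage criteria optimized here have standard definitions in this line of work: length is the number of conditions on a rule's left-hand side, and coverage is the number of objects of the rule's decision class that satisfy those conditions. A minimal sketch, with the table layout and rule encoding as illustrative assumptions rather than the authors' implementation:

```python
# Hedged sketch: how "length" and "coverage" of a decision rule might be
# computed. The data layout (list of attribute-value dicts with a "class"
# key) and the rule encoding are illustrative assumptions.

def rule_length(conditions):
    """Length = number of attribute-value conditions on the left-hand side."""
    return len(conditions)

def rule_coverage(conditions, decision, table):
    """Coverage = number of objects of the decision class satisfying
    every condition of the rule."""
    return sum(
        1
        for row in table
        if row["class"] == decision
        and all(row.get(attr) == val for attr, val in conditions.items())
    )

table = [
    {"a": 1, "b": 0, "class": "yes"},
    {"a": 1, "b": 1, "class": "yes"},
    {"a": 0, "b": 1, "class": "no"},
]
rule = ({"a": 1}, "yes")             # a = 1 -> class = yes
print(rule_length(rule[0]))          # 1
print(rule_coverage(*rule, table))   # 2
```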
Authors: Greco, Salvatore | Słowiński, Roman | Szczęch, Izabela
Article Type: Research Article
Abstract: The paper focuses on Bayesian confirmation measures used for the evaluation of rules induced from data. To distinguish between the many confirmation measures, their properties are analyzed. The article considers a group of symmetry properties. We demonstrate that the symmetry properties proposed in the literature focus on extreme cases corresponding to entailment or refutation of the rule's conclusion by its premise, neglecting intermediate cases. We conduct a thorough analysis of the symmetries, regarding that confirmation should express how much more probable the rule's hypothesis is when the premise is present rather than when the negation of the premise is present. As a result, we point out which symmetries are desired for Bayesian confirmation measures. Next, we analyze a set of popular confirmation measures with respect to the symmetry properties and other valuable properties, namely monotonicity M, Ex1 and weak Ex1, and logicality L and weak L. Our work points out the two measures that are most meaningful with regard to the considered properties. (Two standard confirmation measures are sketched after this entry.)
Keywords: Bayesian confirmation measures, symmetry properties, rule evaluation
DOI: 10.3233/FI-2013-902
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 161-176, 2013
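Two well-known Bayesian confirmation measures from the literature this paper engages with are D(H, E) = P(H|E) − P(H) and S(H, E) = P(H|E) − P(H|¬E); the latter matches the abstract's reading of confirmation as how much more probable the hypothesis is when the premise holds than when its negation holds. A sketch computing both from a 2×2 contingency table (the counts are illustrative):

```python
# Hedged sketch of two standard Bayesian confirmation measures.
# E is the rule's premise, H its conclusion; counts are illustrative.

def confirmation_d(n_eh, n_e_not_h, n_not_e_h, n_not_e_not_h):
    """D(H, E) = P(H|E) - P(H)."""
    n = n_eh + n_e_not_h + n_not_e_h + n_not_e_not_h
    p_h = (n_eh + n_not_e_h) / n
    p_h_given_e = n_eh / (n_eh + n_e_not_h)
    return p_h_given_e - p_h

def confirmation_s(n_eh, n_e_not_h, n_not_e_h, n_not_e_not_h):
    """S(H, E) = P(H|E) - P(H|not E): how much more probable H is when
    the premise is present than when its negation is present."""
    p_h_given_e = n_eh / (n_eh + n_e_not_h)
    p_h_given_not_e = n_not_e_h / (n_not_e_h + n_not_e_not_h)
    return p_h_given_e - p_h_given_not_e

print(confirmation_d(40, 10, 20, 30))  # 0.2
print(confirmation_s(40, 10, 20, 30))  # 0.4
```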
Authors: Clark, Patrick G. | Grzymala-Busse, Jerzy W.
Article Type: Research Article
Abstract: In this paper we present results of experiments on 166 incomplete data sets using three probabilistic approximations: lower, middle, and upper. Two interpretations of missing attribute values were used: lost values and "do not care" conditions. Our main objective was to select the best combination of an approximation and a missing-attribute interpretation. We conclude that the best approach depends on the data set. An additional objective of our research was to study the average number of distinct probabilities associated with characteristic sets for all concepts of a data set. This number is much larger for data sets with "do not care" conditions than for data sets with lost values. Therefore, for data sets with "do not care" conditions the number of probabilistic approximations is also larger. (A sketch of threshold-based probabilistic approximations follows this entry.)
Keywords: characteristic sets, singleton, subset and concept approximations, lower, middle and upper approximations, incomplete data
DOI: 10.3233/FI-2013-903
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 177-191, 2013
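In this line of work, probabilistic approximations of a concept X are commonly parameterized by a threshold α on the conditional probability Pr(X | K(x)) of characteristic sets: α = 1 gives the lower approximation, α = 0.5 the middle one, and any α slightly above 0 the upper one. The sketch below follows that standard formulation; the exact thresholds and the data layout are assumptions, not quoted from the paper:

```python
# Hedged sketch: a threshold-parameterized probabilistic approximation,
# i.e. the union of characteristic sets K(x) with Pr(X | K(x)) >= alpha.
# Thresholds for "lower", "middle", and "upper" follow the usual
# rough-set formulation, an assumption rather than the paper's text.

def probabilistic_approximation(concept, characteristic_sets, alpha):
    """concept: set of objects; characteristic_sets: dict mapping each
    object to its characteristic set (a frozenset of objects)."""
    result = set()
    for k in set(characteristic_sets.values()):
        if len(k & concept) / len(k) >= alpha:
            result |= k
    return result

K = {1: frozenset({1, 2}), 2: frozenset({1, 2}),
     3: frozenset({3}), 4: frozenset({3, 4})}
X = {1, 3}
print(probabilistic_approximation(X, K, 1.0))    # lower:  {3}
print(probabilistic_approximation(X, K, 0.5))    # middle: {1, 2, 3, 4}
print(probabilistic_approximation(X, K, 1e-9))   # upper:  {1, 2, 3, 4}
```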
Authors: Ziarko, Wojciech | Chen, Xuguang
Article Type: Research Article
Abstract: The article reviews the basics of the variable precision rough set and Bayesian approaches to the detection and analysis of data dependencies. The variable precision rough set and Bayesian rough set theories are extensions of rough set theory. They focus on the recognition and modelling of set-overlap-based, also referred to as probabilistic, relationships between sets. The set-overlap relationships are used to construct approximations of undefinable sets. The primary application of the approach is the analysis of weak, data-co-occurrence-based dependencies in probabilistic decision tables learned from data. The probabilistic decision tables are derived from data to represent inter-data-item connections, typically for the purposes of their analysis or data value prediction. The theory is illustrated with a comprehensive application example in which probabilistic decision tables are used for face image classification. (The standard VPRS region definitions are sketched after this entry.)
Keywords: rough sets, approximation space, probabilistic dependencies, variable precision rough sets, Bayesian rough sets, probabilistic decision tables, machine learning
DOI: 10.3233/FI-2013-904
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 193-207, 2013
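For reference, the region definitions the variable precision rough set (VPRS) model is usually presented with, for a precision parameter β ∈ (0.5, 1]; this is the standard formulation from the literature and may differ from the paper's exact notation:

```latex
% Standard VPRS regions for a set X and equivalence classes E of the
% indiscernibility relation R, with precision \beta \in (0.5, 1];
% notation may differ from the paper's.
\begin{align*}
  \mathrm{POS}_{\beta}(X) &= \bigcup \{\, E : P(X \mid E) \ge \beta \,\} \\
  \mathrm{NEG}_{\beta}(X) &= \bigcup \{\, E : P(X \mid E) \le 1 - \beta \,\} \\
  \mathrm{BND}_{\beta}(X) &= \bigcup \{\, E : 1 - \beta < P(X \mid E) < \beta \,\}
\end{align*}
```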
Authors: Tsumoto, Shusaku | Hirano, Shoji
Article Type: Research Article
Abstract: This paper proposes a new framework for incremental induction of medical diagnostic rules based on an incremental sampling scheme and rule layers. When an example is appended, four possibilities can be considered. Updates of accuracy and coverage are thus classified into four cases, which yield two important inequalities of accuracy and coverage for the induction of probabilistic rules. By using these two inequalities, the proposed method classifies a set of formulae into four layers: the rule layer, the subrule layers (in and out), and the non-rule layer. The obtained rule and subrule layers then play a central role in updating probabilistic rules. If a new example contributes to an increase in the accuracy and coverage of a formula in the subrule layer, the formula is moved into the rule layer. If it contributes to a decrease for a formula in the rule layer, the formula is moved into the subrule layer. The proposed method was evaluated on a dataset on headaches, and the results show that it outperforms conventional methods. (The standard accuracy and coverage definitions, with their incremental updates, are sketched after this entry.)
Keywords: incremental rule induction, rough sets, RHINOS, incremental sampling scheme, subrule layer
DOI: 10.3233/FI-2013-905
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 209-223, 2013
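In this framework, accuracy and coverage of a formula R for a class D are conventionally defined as α_R(D) = |[R] ∩ D| / |[R]| and κ_R(D) = |[R] ∩ D| / |D|. The four update cases the abstract mentions then fall out of whether an appended example matches R and whether it belongs to D. A hedged bookkeeping sketch, not the paper's implementation:

```python
# Hedged sketch: incremental update of accuracy and coverage counts for
# a formula R and decision class D as examples are appended.

class RuleStats:
    def __init__(self):
        self.n_r = 0      # |[R]|      : examples matching the formula
        self.n_d = 0      # |D|        : examples in the decision class
        self.n_rd = 0     # |[R] ∩ D|  : examples satisfying both

    def append(self, matches_r, in_d):
        # Case 1: matches R, in D      -> accuracy and coverage can rise
        # Case 2: matches R, not in D  -> accuracy falls
        # Case 3: not R, in D          -> coverage falls
        # Case 4: not R, not in D      -> both unchanged
        self.n_r += matches_r
        self.n_d += in_d
        self.n_rd += matches_r and in_d

    def accuracy(self):
        return self.n_rd / self.n_r if self.n_r else 0.0

    def coverage(self):
        return self.n_rd / self.n_d if self.n_d else 0.0

s = RuleStats()
for m, d in [(True, True), (True, False), (False, True)]:
    s.append(m, d)
print(s.accuracy(), s.coverage())  # 0.5 0.5
```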
Authors: Touati, Hakim | Ras, Zbigniew W.
Article Type: Research Article
Abstract: The effects and selection of meta-actions are fundamental to successful action rule execution. All atomic action terms on the left-hand side of an action rule have to be covered by well-chosen meta-actions in order for the rule to be executed. The choice of meta-actions depends on the antecedent side of the action rules; however, it also depends on their lists of atomic actions that fall outside the action rule's scope, seen as side effects. In this paper, we strive to minimize the side effects by decomposing the left-hand side of an action rule into executable action rules covered by a minimal number of meta-actions, resulting in a cascading effect. This process was tested and compared to the original action rules. Experimental results show that side effects are diminished in comparison with applying the original meta-actions, while a good execution confidence is kept. (A greedy cover-style selection sketch follows this entry.)
Keywords: Meta-actions, Action rules decomposition, Side effects
DOI: 10.3233/FI-2013-906
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 225-240, 2013
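Covering the atomic actions of a rule's left-hand side with few meta-actions resembles a set-cover problem; the greedy sketch below is one plausible illustration, with the meta-action encoding and the side-effect tie-breaking as assumptions rather than the authors' algorithm:

```python
# Hedged sketch: greedy selection of meta-actions covering the atomic
# actions of a rule's left-hand side, preferring meta-actions with fewer
# side effects (atomic actions outside the rule's scope). Illustrative
# only; not the paper's algorithm.

def select_meta_actions(lhs_atoms, meta_actions):
    """lhs_atoms: set of atomic action terms to cover.
    meta_actions: dict name -> set of atomic actions it triggers."""
    uncovered, chosen = set(lhs_atoms), []
    while uncovered:
        # Most new coverage first; among ties, fewest side effects.
        name, atoms = max(
            meta_actions.items(),
            key=lambda kv: (len(kv[1] & uncovered), -len(kv[1] - lhs_atoms)),
        )
        if not atoms & uncovered:
            raise ValueError("left-hand side cannot be covered")
        chosen.append(name)
        uncovered -= atoms
    return chosen

metas = {
    "m1": {"a1", "a2", "x9"},   # x9 is a side effect
    "m2": {"a2"},
    "m3": {"a3"},
}
print(select_meta_actions({"a1", "a2", "a3"}, metas))  # ['m1', 'm3']
```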
Authors: Wang, Hui | Düntsch, Ivo | Trindade, Luis
Article Type: Research Article
Abstract: In this paper we review the Lattice Machine, a learning paradigm that "learns" by generalising data in a consistent, conservative, and parsimonious way and has the advantage of being able to provide additional reliability information for any classification. More specifically, we review related concepts such as hyper tuples and hyper relations, the three generalising criteria (equilabelledness, maximality, and supportedness), and the modelling and classifying algorithms. In an attempt to find a better classification method for the Lattice Machine, we consider the contextual probability, which was originally proposed as a measure for approximate reasoning when there is insufficient data. It was later found to be a probability function with the same classification ability as the data-generating probability, called the primary probability, and an alternative way of estimating the primary probability without much model assumption. Consequently, a contextual-probability-based Bayes classifier can be designed. In this paper we present a new classifier that utilises the Lattice Machine model and generalises the contextual-probability-based Bayes classifier. We interpret the model as a dense set of data points in the data space and then apply the contextual-probability-based Bayes classifier. A theorem is presented that allows efficient estimation of the contextual probability based on this interpretation. The proposed classifier is illustrated by examples.
Keywords: Lattice machine, contextual probability, generalisation, classification
DOI: 10.3233/FI-2013-907
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 241-256, 2013
Authors: Pancerz, Krzysztof | Suraj, Zbigniew
Article Type: Research Article
Abstract: The aim of this paper is to present methods and algorithms for the decomposition of information systems. In the paper, decomposition with respect to reducts and the so-called global decomposition are considered. Moreover, coverings of information systems by components are discussed. An essential difference between the two kinds of decomposition can be observed: in general, global decomposition can deliver more components of a given information system. This fact can be treated as a kind of additional knowledge about the system. The proposed approach is based on rough set theory. To demonstrate the usefulness of this approach, we present an illustrative example coming from the economy domain. The discussed decomposition methods can be applied, e.g., to the design and analysis of concurrent systems specified by information systems, to automatic feature extraction, and to control design for systems represented by experimental data tables.
Keywords: decomposition, information analysis, information system, knowledge representation, machine learning, reduct, rough sets
DOI: 10.3233/FI-2013-908
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 257-272, 2013
Authors: Kruczyk, Marcin | Baltzer, Nicholas | Mieczkowski, Jakub | Dramiński, Michał | Koronacki, Jacek | Komorowski, Jan
Article Type: Research Article
Abstract: An important step prior to constructing a classifier for a very large data set is feature selection. For many problems it is possible to find a subset of attributes that has the same discriminative power as the full data set. There are many feature selection methods, but in none of them are Rough Set models tied to statistical argumentation. Moreover, known feature selection methods usually discard shadowed features, i.e., those carrying the same, or partially the same, information as the selected features. In this study we present Random Reducts (RR), a feature selection method which precedes classification per se. The method is based on the Monte Carlo Feature Selection (MCFS) layout and uses Rough Set Theory in the feature selection process. On synthetic data, we demonstrate that the method is able to select otherwise shadowed features of which the user should be made aware, and to find interactions in the data set. (A Monte Carlo subsampling sketch follows this entry.)
Keywords: Feature selection, random reducts, rough sets, Monte Carlo
DOI: 10.3233/FI-2013-909
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 273-288, 2013
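A hedged sketch of the Monte Carlo layout the abstract describes: repeatedly sample small attribute subsets, reduce each consistent subset to a reduct-like minimal set, and tally how often each attribute survives. The consistency test, subset size, and scoring are illustrative assumptions, not the RR algorithm itself; note how a duplicated attribute (a "shadowed" feature) still accumulates counts:

```python
# Hedged sketch of Monte Carlo, reduct-based feature ranking in the
# spirit of the MCFS layout. Illustrative assumptions throughout.
import random

def consistent(table, labels, attrs):
    """True if no two rows agree on attrs but disagree on the label."""
    seen = {}
    for row, y in zip(table, labels):
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, y) != y:
            return False
    return True

def reduct_like(table, labels, attrs):
    """Greedily drop attributes while consistency is preserved."""
    attrs = list(attrs)
    for a in list(attrs):
        rest = [b for b in attrs if b != a]
        if rest and consistent(table, labels, rest):
            attrs = rest
    return attrs

def random_reduct_counts(table, labels, n_attrs, trials=200, subset=3, seed=0):
    rng = random.Random(seed)
    counts = [0] * n_attrs
    for _ in range(trials):
        subset_attrs = rng.sample(range(n_attrs), subset)
        if consistent(table, labels, subset_attrs):
            for a in reduct_like(table, labels, subset_attrs):
                counts[a] += 1
    return counts

table = [(0, 0, 1, 0), (0, 1, 1, 1), (1, 0, 0, 0), (1, 1, 0, 1)]
labels = [0, 1, 0, 1]   # label equals attribute 1; attribute 3 shadows it
print(random_reduct_counts(table, labels, 4))
# Attributes 1 and 3 dominate the counts: 3 duplicates 1, so the
# shadowed feature is still surfaced rather than discarded.
```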
Authors: Pal, Jayanta Kumar | Ray, Shubhra Sankar | Pal, Sankar K.
Article Type: Research Article
Abstract: MicroRNAs (miRNAs) are a kind of non-coding RNA that plays many important roles in eukaryotic cells. Investigations of miRNAs show that they are involved in cancer development in the animal body. In this article, a threshold-based method to check the condition (normal or cancer) of the miRNAs of a given sample/patient, using a weighted average distance between the normal and cancer miRNA expressions, is proposed. For each miRNA, the city block distance between two representatives, corresponding to scaled normal and cancer expressions, is obtained. The average of all such distances over the different miRNAs is weighted by a factor to generate the threshold. The weight factor, which is cancer dependent, is determined through an exhaustive search maximizing the F score during training. In a part of the investigation, a ranking algorithm for cancer-specific miRNAs is also discussed. The performance of the proposed method is evaluated in terms of the Matthews Correlation Coefficient (MCC) and by plotting points (1 − Specificity vs. Sensitivity) in the Receiver Operating Characteristic (ROC) space, besides the F score. Its efficiency is demonstrated on breast, colorectal, melanoma, lung, prostate, and renal cancer data sets, and it is observed to be superior to some existing classifiers in terms of the said indices. (A threshold-construction sketch follows this entry.)
Keywords: miRNA expression analysis, cancer detection, pattern recognition, bioinformatics
DOI: 10.3233/FI-2013-910
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 289-305, 2013
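A hedged reading of the threshold construction described above: average the per-miRNA city block distances between the scaled normal and cancer representatives, scale by a cancer-dependent weight w learned by maximizing the F score, and compare a sample's distance from the normal representative against the result. The decision rule used here is an illustrative assumption, not the paper's exact procedure:

```python
# Hedged sketch: T = w * mean_i |normal_i - cancer_i| over miRNAs i,
# with w learned on training data. The rule "distance from the normal
# representative exceeding T implies cancer" is an assumption.

def threshold(normal_rep, cancer_rep, w):
    dists = [abs(n - c) for n, c in zip(normal_rep, cancer_rep)]
    return w * sum(dists) / len(dists)

def classify(sample, normal_rep, cancer_rep, w):
    t = threshold(normal_rep, cancer_rep, w)
    d = sum(abs(s - n) for s, n in zip(sample, normal_rep)) / len(sample)
    return "cancer" if d > t else "normal"

normal = [0.2, 0.5, 0.1]     # scaled normal expression representative
cancer = [0.8, 0.4, 0.9]     # scaled cancer expression representative
print(classify([0.7, 0.45, 0.85], normal, cancer, w=0.8))  # cancer
print(classify([0.25, 0.5, 0.15], normal, cancer, w=0.8))  # normal
```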