Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Papers are encouraged that contribute:
- solutions by mathematical methods of problems emerging in computer science
- solutions of mathematical problems inspired by computer science.
Topics of interest include (but are not restricted to): theory of computing, complexity theory, algorithms and data structures, computational aspects of combinatorics and graph theory, programming language theory, theoretical aspects of programming languages, computer-aided verification, computer science logic, database theory, logic programming, automated deduction, formal languages and automata theory, concurrency and distributed computing, cryptography and security, theoretical issues in artificial intelligence, machine learning, pattern recognition, algorithmic game theory, bioinformatics and computational biology, quantum computing, probabilistic methods, and algebraic and categorical methods.
Authors: Wang, Guoyin | Liu, Qing | Lin, Tsau Young | Yao, Yi Yu | Polkowski, Lech
Article Type: Other
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. i-iii, 2004
Authors: Bazan, Jan | Nguyen, Hung Son | Szczuka, Marcin
Article Type: Research Article
Abstract: The concept of approximation is one of the most fundamental in rough set theory. In this work we examine this basic notion as well as its extensions and modifications. The goal is to construct a parameterized approximation mechanism making it possible to develop multi-stage, multi-level concept hierarchies that are capable of maintaining an acceptable level of imprecision from input to output.
Keywords:
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 107-118, 2004
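For context, the basic notion the abstract above builds on is the classical Pawlak approximation pair: for a universe U, an indiscernibility (equivalence) relation R, and a concept X ⊆ U, the lower and upper approximations and the accuracy of approximation are usually written as

\underline{R}X = \{x \in U : [x]_R \subseteq X\}, \qquad \overline{R}X = \{x \in U : [x]_R \cap X \neq \emptyset\}, \qquad \alpha_R(X) = \frac{|\underline{R}X|}{|\overline{R}X|},

where [x]_R is the equivalence class of x. The parameterized, multi-level mechanism developed in the paper extends this classical pair; the formulas above are only the standard starting point, not the paper's construction.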
Authors: Guan, J.W. | Liu, D.Y. | Bell, D.A.
Article Type: Research Article
Abstract: Large collections of genomic information have been accumulated in recent years, and embedded latently in them is potentially significant knowledge for exploitation in medicine and in the pharmaceutical industry. The approach taken here to the distillation of such knowledge is to detect strings in DNA sequences which appear frequently, either within a given sequence (e.g., for a particular patient) or across sequences (e.g., from different patients sharing a particular medical diagnosis). Motifs are strings that occur very frequently. We present basic theory and algorithms for finding very frequent and common strings. Strings which are maximally frequent are of particular interest and, having discovered such motifs, we show briefly how to mine association rules by an existing rough sets based technique. Further work and applications are in progress.
Keywords: rough sets, data mining, knowledge discovery in databases, DNA, bioinformatics
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 119-134, 2004
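The abstract above describes finding very frequent and common strings in DNA sequences. The following Python sketch is not the authors' algorithm; it is only a naive illustration of the two counting tasks the abstract mentions (frequency within one sequence and commonality across sequences), with invented thresholds and toy data.

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count every length-k substring (k-mer) within one sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def frequent_strings(sequences, k, min_within=3, min_across=2):
    """k-mers occurring at least min_within times in some sequence,
    and k-mers present in at least min_across different sequences."""
    within = set()
    presence = Counter()              # number of sequences containing each k-mer
    for seq in sequences:
        counts = kmer_counts(seq, k)
        within.update(s for s, c in counts.items() if c >= min_within)
        presence.update(counts.keys())
    across = {s for s, c in presence.items() if c >= min_across}
    return within, across

# Toy usage with made-up sequences
seqs = ["ACGTACGTACGT", "TTACGTGGACGT", "ACGTTTTTACGT"]
print(frequent_strings(seqs, k=4))
```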
Authors: Hu, Xiaohua | Lin, T.Y. | Han, Jianchao
Article Type: Research Article
Abstract: Rough set theory was proposed by Pawlak in the early 1980s and has been applied successfully in many domains. One of the major limitations of the traditional rough set model in real applications is the inefficiency of computing the core and reducts, because all the computationally intensive operations are performed on flat files. In order to improve the efficiency of computing core attributes and reducts, many novel approaches have been developed, some of which attempt to integrate database technologies. In this paper, we propose a new rough set model and redefine core attributes and reducts based on relational algebra, to take advantage of very efficient set-oriented database operations. With this new model and our new definitions, we present two new algorithms to calculate core attributes and reducts for feature selection. Since relational algebra operations have been efficiently implemented in most widely used database systems, the algorithms presented in this paper can be applied extensively to these database systems and adapted to a wide range of real-life applications with very large data sets. Compared with the traditional rough set models, our model is very efficient and scalable.
Keywords: rough set, database systems, relational algebra, reduct, feature selection
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 135-152, 2004
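The paper above redefines core attributes and reducts in relational algebra so that the work is done by set-oriented database operations. As a rough illustration only (not the paper's definitions), the classical test for a core attribute checks whether dropping it makes the decision table inconsistent; a plain-Python sketch of that test follows, whereas the paper would replace such row-by-row scans with projection and counting queries inside the DBMS.

```python
def is_consistent(rows, cond_attrs, dec_attr):
    """A table is consistent w.r.t. cond_attrs if equal condition values
    never map to different decision values (no conflicting rows)."""
    seen = {}
    for r in rows:
        key = tuple(r[a] for a in cond_attrs)
        if seen.setdefault(key, r[dec_attr]) != r[dec_attr]:
            return False
    return True

def core_attributes(rows, cond_attrs, dec_attr):
    """An attribute is in the core if removing it breaks consistency,
    i.e. it is indispensable for discerning the decision."""
    return [a for a in cond_attrs
            if not is_consistent(rows, [b for b in cond_attrs if b != a], dec_attr)]

# Toy decision table: rows are dicts, 'd' is the decision attribute
table = [
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 1, "d": "no"},
    {"a": 0, "b": 1, "d": "no"},
]
print(core_attributes(table, ["a", "b"], "d"))   # ['b']: only 'b' is indispensable here
```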
Authors: Li, Dan | Deogun, Jitender S.
Article Type: Research Article
Abstract: In this paper, we investigate interpolation methods that are suitable for discovering spatiotemporal association rules for unsampled sites, with a focus on the drought risk management problem. For drought risk management, raw weather data is collected, converted to various indices, and then mined for association rules. To generate association rules for the unsampled sites, interpolation methods can be applied at any stage of this data mining process. We develop and integrate three interpolation models into our association rule mining algorithm, which we call the pre-order, in-order and post-order interpolation models. The performance of these three models is evaluated experimentally by comparing the interpolated association rules with the rules discovered from actual raw data, using two metrics: precision and recall. Our experiments show that the post-order interpolation model provides the highest precision among the three models, and the Kriging method in the pre-order interpolation model yields the highest recall.
Keywords: data mining, association rules, interpolation, cross-validation, geo-spatial decision support system
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 153-172, 2004
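The precision and recall mentioned in the abstract are presumably the standard set-overlap measures. Reading R_a as the set of rules discovered from actual raw data and R_i as the set of interpolated rules, a plausible formulation is

\mathrm{precision} = \frac{|R_i \cap R_a|}{|R_i|}, \qquad \mathrm{recall} = \frac{|R_i \cap R_a|}{|R_a|}.

The exact criterion for matching two rules is defined in the paper itself.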
Authors: Li, Hongsong | Huang, Houkuan | Lin, Youfang
Article Type: Research Article
Abstract: A well-known challenge in data warehousing is the efficient incremental maintenance of a data cube in the presence of source data updates. In this paper, we present a new incremental maintenance algorithm, DSD, developed from Mumick's algorithm. Instead of using one auxiliary delta table, we use two tables to improve the efficiency of data updates. Moreover, when a materialized view has to be recomputed, we use the data of its smallest ancestral view, while Mumick uses the fact table, which is usually much larger than the smallest ancestor. We have implemented the DSD algorithm and found that its performance shows a significant improvement.
Keywords: data warehousing, data cube, incremental maintenance
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 173-190, 2004
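The DSD algorithm itself is not reproduced here, but the general idea of delta-based incremental maintenance, refreshing a materialized aggregate from a small table of changes rather than rescanning the fact table, can be sketched as follows. This is a toy Python illustration with hypothetical SUM/COUNT groups, not the paper's two-delta-table scheme.

```python
def refresh_view(view, delta):
    """Apply a delta table of {group_key: (count_change, sum_change)}
    to a materialized COUNT/SUM view instead of recomputing from the fact table."""
    for key, (dcount, dsum) in delta.items():
        count, total = view.get(key, (0, 0))
        count, total = count + dcount, total + dsum
        if count == 0:
            view.pop(key, None)          # group disappeared after deletions
        else:
            view[key] = (count, total)
    return view

# Toy usage: view aggregates sales by region
view = {"east": (2, 300), "west": (1, 120)}
delta = {"east": (1, 50), "west": (-1, -120)}   # one insert in east, one delete in west
print(refresh_view(view, delta))                # {'east': (3, 350)}
```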
Authors: Mazlack, Lawrence J.
Article Type: Research Article
Abstract: Causal reasoning is important to human reasoning; it plays an essential role in day-to-day human decision-making. Human understanding of causality is necessarily imprecise, imperfect, and uncertain. Soft computing methods may be able to provide the approximation tools needed. In order to consider causes algorithmically, imprecise causal models are needed. A difficulty is striking a good balance between precise formalism and imprecise reality. Determining causes from available data has been a goal throughout human history. Today, data mining holds the promise of extracting unsuspected information from very large databases. The most common methods build rules. In many ways, the interest in rules is that they offer the promise (or illusion) of causal, or at least predictive, relationships. However, the most common rule form (association rules) only calculates a joint occurrence frequency; such rules do not express a causal relationship. Discovering causal relationships would therefore be very useful.
Keywords: causality, causal models, data mining
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 191-201, 2004
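The joint occurrence frequency the abstract refers to is, in standard association-rule terminology, the support of a rule, with confidence as its conditional counterpart:

\mathrm{supp}(A \Rightarrow B) = \frac{|\{t \in T : A \cup B \subseteq t\}|}{|T|}, \qquad \mathrm{conf}(A \Rightarrow B) = \frac{\mathrm{supp}(A \Rightarrow B)}{\mathrm{supp}(A)},

where T is the set of transactions. Both are purely frequency measures; neither says anything about the direction of cause and effect, which is the point the abstract makes.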
Authors: Miao, Duoqian | Hou, Lishan
Article Type: Research Article
Abstract: Rough set theory is a relatively new tool for dealing with knowledge, particularly when knowledge is imprecise, inconsistent or incomplete. In this paper, the main techniques of inductive machine learning are united, from a theoretical point of view, with the knowledge reduction theory based on rough sets. The Monk's problems, introduced in the early nineties, are solved again using rough sets, and the results are analyzed and compared with those obtained at that time. As far as accuracy and conciseness are concerned, the learning algorithms based on rough sets show remarkable superiority.
Keywords: rough sets, AQ algorithms, ID algorithms, attribute reduction, dynamic reduct
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 203-219, 2004
Authors: Pagliani, Piero
Article Type: Research Article
Abstract: Approximation Spaces were introduced in order to analyse data on the basis of Indiscernibility Spaces, that is, spaces of the form <U,E>, where U is a universe of items and E is an equivalence relation on U. Various authors have suggested considering spaces of the form <U,R>, where R is any binary relation. This paper introduces a further step, taking into account spaces of the form <U,{R_i}_{i∈I}>, where {R_i}_{i∈I} is a family of binary relations. We call these "Dynamic Spaces" because this generalisation can account for different forms of dynamics. While Indiscernibility Spaces induce 0-dimensional topological spaces (Approximation Spaces), Dynamic Spaces induce various types of pre-topological spaces.
Keywords: pretopologies, relations, approximation spaces, dynamics
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 221-239, 2004
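One common way to read the step from <U,E> to <U,R> mentioned above is that the equivalence class of x is replaced by the neighbourhood R(x) = {y ∈ U : x R y}, so that the approximations become

\underline{R}X = \{x \in U : R(x) \subseteq X\}, \qquad \overline{R}X = \{x \in U : R(x) \cap X \neq \emptyset\}.

This is only the standard generalized-approximation reading; the pre-topological operators the paper derives for Dynamic Spaces <U,{R_i}_{i∈I}> are developed in the article itself.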
Authors: Skowron, Andrzej | Synak, Piotr
Article Type: Research Article
Abstract: We investigate patterns over information maps. Such patterns can represent information changes (e.g., in time or space) across information maps. Any map is defined by a transition relation on states, where each state is a pair consisting of a label and the information related to that label. The introduced concepts are illustrated by examples. We also discuss searching problems for relevant patterns extracted from data stored in information maps. Some patterns can be expressed by temporal formulas; searching is then reduced to searching for relevant temporal formulas. We generalise association rules over information systems to association rules over information maps. Approximate reasoning methods based on information changes are important for many applications (e.g., related to spatio-temporal reasoning). We introduce basic concepts for approximate reasoning about information changes across information maps. We measure the degree of change using information granules. Any rule for reasoning about information changes specifies how changes of information granules from the rule premise influence changes of information granules from the rule conclusion. Changes in information granules can be measured, e.g., using expressions analogous to derivatives. Illustrative examples are also presented.
Keywords: information maps, reasoning about information changes, rough sets, granular computing, information granules
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 241-259, 2004
Authors: Nguyen, Tuan Trung
Article Type: Research Article
Abstract: Classification systems working on large feature spaces, despite extensive learning, often perform poorly on a group of atypical samples. The problem can be dealt with by incorporating domain knowledge about the samples being recognized into the learning process. We present a method that allows this task to be performed using a rough approximation framework. We show how a human expert's domain knowledge, expressed in natural language, can be approximately translated by a machine learning recognition system. We present in detail how the method performs in a system recognizing handwritten digits from a large digit database. Our approach is an extension of ideas developed in rough mereology theory.
Keywords: rough mereology, concept approximation, domain knowledge, machine learning, handwritten digit recognition
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 261-270, 2004
Authors: Tsumoto, Shusaku
Article Type: Research Article
Abstract: One of the most important problems with rule induction methods is that they cannot extract rules that plausibly represent expert decision processes. In this paper, the characteristics of experts' rules are closely examined and a new approach to extracting plausible rules is introduced, which consists of the following three procedures. First, the characterization of decision attributes (given classes) is extracted from databases and the concept hierarchy for the given classes is calculated. Second, based on this hierarchy, rules for each hierarchical level are induced from the data. Then, for each given class, the rules for all the hierarchical levels are integrated into one rule. The proposed method was evaluated on a medical database; the experimental results show that the induced rules correctly represent experts' decision processes.
Keywords: rule induction, grouping, coverage, rough sets, granular computing
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 271-285, 2004
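The coverage in the keyword list presumably refers to the usual pair of rule measures used in rough-set rule induction: for a rule whose premise is satisfied by the objects in [x]_R and a target class D,

\alpha_R(D) = \frac{|[x]_R \cap D|}{|[x]_R|} \ \text{(accuracy)}, \qquad \kappa_R(D) = \frac{|[x]_R \cap D|}{|D|} \ \text{(coverage)}.

How these measures drive the grouping and the integration of rules across hierarchy levels is specified in the paper.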
Authors: Zhang, Ling | Zhang, Bo
Article Type: Research Article
Abstract: The paper introduces a framework of quotient space theory for problem solving. In the theory, a problem (or problem space) is represented as a triplet comprising the universe, its structure and its attributes. Problem spaces with different grain sizes can then be represented by a set of quotient spaces. Given a problem, the construction of its quotient spaces is discussed. Based on the model, the computational complexity of hierarchical problem solving and the combination of information are also dealt with. The model can also be extended to the fuzzy granular world.
Keywords: quotient space, problem solving, granular computing, hierarchy
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 287-298, 2004
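A quotient space in the sense sketched above is the universe viewed through an equivalence relation: the blocks of the induced partition become the elements of a coarser-grained space. The Python fragment below is a hypothetical illustration of that construction only; the paper's triplet representation also carries structure and attributes over to the quotient, which is not shown here.

```python
from collections import defaultdict

def quotient(universe, key):
    """Group the universe by an equivalence relation given as a key function;
    the resulting blocks are the elements of the coarser quotient space."""
    blocks = defaultdict(list)
    for x in universe:
        blocks[key(x)].append(x)
    return list(blocks.values())

# Toy universe of cities, viewed at two grain sizes
cities = [("Paris", "FR", "EU"), ("Lyon", "FR", "EU"),
          ("Rome", "IT", "EU"), ("Tokyo", "JP", "AS")]
by_country   = quotient(cities, key=lambda c: c[1])   # 3 blocks
by_continent = quotient(cities, key=lambda c: c[2])   # 2 blocks: coarser grain
print(len(by_country), len(by_continent))
```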
Authors: Zheng, Zheng | Wang, Guoyin
Article Type: Research Article
Abstract: As a special way in which the human brain learns new knowledge, incremental learning is an important topic in AI. It is a goal of many AI researchers to find an algorithm that can learn new knowledge quickly, based on the knowledge learned before, and in such a way that the knowledge it acquires is efficient in real use. In this paper, we develop an incremental knowledge acquisition algorithm based on rough sets and rule trees. It can learn from a domain data set incrementally. Our simulation results show that our algorithm can learn more quickly than classical rough set based knowledge acquisition algorithms, and that the performance of the knowledge it learns can be the same as, or even better than, that of classical rough set based knowledge acquisition algorithms. The simulation results also show that our algorithm outperforms ID4 in many respects.
Keywords: rough set, rule tree, incremental learning, knowledge acquisition, data mining
Citation: Fundamenta Informaticae, vol. 59, no. 2-3, pp. 299-313, 2004
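The rule tree in the paper is its own data structure; the toy Python class below only illustrates the general flavour of incremental rule-based learning, where a new example updates counts along one path instead of triggering a full relearning pass. The names and structure here are invented for illustration and are not taken from the paper.

```python
class RuleTree:
    """A toy rule tree: each path of attribute values leads to decision counts.
    New examples update counts along one path, so no full relearning is needed."""
    def __init__(self, attrs):
        self.attrs = attrs
        self.root = {}

    def learn(self, example, decision):
        node = self.root
        for a in self.attrs:
            node = node.setdefault(example[a], {})
        node.setdefault("_counts", {}).setdefault(decision, 0)
        node["_counts"][decision] += 1

    def predict(self, example):
        node = self.root
        for a in self.attrs:
            if example[a] not in node:
                return None              # no matching rule learned yet
            node = node[example[a]]
        counts = node.get("_counts", {})
        return max(counts, key=counts.get) if counts else None

tree = RuleTree(["outlook", "windy"])
tree.learn({"outlook": "sunny", "windy": False}, "play")
tree.learn({"outlook": "rain", "windy": True}, "stay")
print(tree.predict({"outlook": "sunny", "windy": False}))   # 'play'
```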