Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Papers are encouraged that contribute:
- solutions by mathematical methods of problems emerging in computer science
- solutions of mathematical problems inspired by computer science.
Topics of interest include (but are not restricted to): theory of computing, complexity theory, algorithms and data structures, computational aspects of combinatorics and graph theory, programming language theory, theoretical aspects of programming languages, computer-aided verification, computer science logic, database theory, logic programming, automated deduction, formal languages and automata theory, concurrency and distributed computing, cryptography and security, theoretical issues in artificial intelligence, machine learning, pattern recognition, algorithmic game theory, bioinformatics and computational biology, quantum computing, probabilistic methods, and algebraic and categorical methods.
Authors: Amin, Talha | Chikalov, Igor | Moshkov, Mikhail | Zielosko, Beata
Article Type: Research Article
Abstract: Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification – exact or approximate; and (ii) which order of optimization gives better results of classifier work: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than the ordinary optimization (length or coverage).
Keywords: Dynamic programming, decision rules, classifiers
DOI: 10.3233/FI-2013-901
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 151-160, 2013
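For orientation, the two optimization criteria named in the abstract above are standard in this line of work; the following recap states them in our own notation (a sketch, not quoted from the paper):

```latex
% For a decision rule r over a decision table with universe U:
%   r: (a_{i_1} = v_1) \wedge \dots \wedge (a_{i_k} = v_k) \rightarrow (d = c)
l(r) = k \quad (\text{length: number of descriptors in the premise}),
\qquad
c(r) = \bigl|\{\, x \in U : x \text{ matches the premise of } r,\ d(x) = c \,\}\bigr| \quad (\text{coverage}).
```

Sequential optimization in the sense of length+coverage then means: first retain only the rules of minimum length, then maximize coverage among those; coverage+length reverses the two steps.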
Authors: Greco, Salvatore | Słowiński, Roman | Szczęch, Izabela
Article Type: Research Article
Abstract: The paper focuses on Bayesian confirmation measures used for the evaluation of rules induced from data. To distinguish between the many confirmation measures, their properties are analyzed. The article considers a group of symmetry properties. We demonstrate that the symmetry properties proposed in the literature focus on extreme cases corresponding to entailment or refutation of the rule's conclusion by its premise, neglecting intermediate cases. We conduct a thorough analysis of the symmetries with respect to the requirement that confirmation should express how much more probable the rule's hypothesis is when the premise is present rather than when the negation of the premise is present. As a result, we point out which symmetries are desired for Bayesian confirmation measures. Next, we analyze a set of popular confirmation measures with respect to the symmetry properties and other valuable properties, namely monotonicity M, Ex1 and weak Ex1, logicality L and weak L. Our work points out the two measures that are most meaningful with regard to the considered properties.
Keywords: Bayesian confirmation measures, symmetry properties, rule evaluation
DOI: 10.3233/FI-2013-902
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 161-176, 2013
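As a concrete anchor for the symmetry discussion, two classical Bayesian confirmation measures from this literature (our recap; the paper analyzes a broader set):

```latex
d(H, E) = P(H \mid E) - P(H),
\qquad
s(H, E) = P(H \mid E) - P(H \mid \neg E).
```

Any confirmation measure c satisfies c(H, E) > 0 exactly when E raises the probability of H. The measure s directly expresses the requirement stated in the abstract: how much more probable the hypothesis is when the premise is present than when its negation is. A typical symmetry property is evidence symmetry, c(H, E) = −c(H, ¬E), which an analysis of this kind weighs against such requirements.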
Authors: Clark, Patrick G. | Grzymala-Busse, Jerzy W.
Article Type: Research Article
Abstract: In this paper we present results of experiments on 166 incomplete data sets using three probabilistic approximations: lower, middle, and upper. Two interpretations of missing attribute values were used: lost values and “do not care” conditions. Our main objective was to select the best combination of an approximation and a missing attribute interpretation. We conclude that the best approach depends on the data set. An additional objective of our research was to study the average number of distinct probabilities associated with characteristic sets for all concepts of the data set. This number is much larger for data sets with “do not care” conditions than for data sets with lost values. Therefore, for data sets with “do not care” conditions the number of probabilistic approximations is also larger.
Keywords: characteristic sets, singleton, subset and concept approximations, lower, middle and upper approximations, incomplete data
DOI: 10.3233/FI-2013-903
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 177-191, 2013
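The three probabilistic approximations can be stated compactly. The following follows the definitions used in this research programme, with K_B(x) denoting the characteristic set of object x (our summary; check the paper for the exact formulation):

```latex
\operatorname{appr}_{\alpha}(X) \;=\; \bigcup \bigl\{\, K_B(x) : x \in U,\ \Pr(X \mid K_B(x)) \ge \alpha \,\bigr\},
\qquad
\Pr(X \mid K) = \frac{|X \cap K|}{|K|}.
```

Setting α = 1 yields the lower approximation, α = 0.5 the middle one, and the smallest positive conditional probability the upper one; the number of distinct approximations of a concept is bounded by the number of distinct values Pr(X | K_B(x)) takes, which is why that count matters in the experiments.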
Authors: Ziarko, Wojciech | Chen, Xuguang
Article Type: Research Article
Abstract: The article reviews the basics of the variable precision rough set and the Bayesian approaches to data dependency detection and analysis. The variable precision rough set and the Bayesian rough set theories are extensions of rough set theory. They are focused on the recognition and modelling of set overlap-based, also referred to as probabilistic, relationships between sets. The set-overlap relationships are used to construct approximations of undefinable sets. The primary application of the approach is the analysis of weak data co-occurrence-based dependencies in probabilistic decision tables learned from data. The probabilistic decision tables are derived from data to represent the inter-data item connections, typically for the purposes of their analysis or data value prediction. The theory is illustrated with a comprehensive application example showing the utilization of probabilistic decision tables for face image classification.
Keywords: rough sets, approximation space, probabilistic dependencies, variable precision rough sets, Bayesian rough sets, probabilistic decision tables, machine learning
DOI: 10.3233/FI-2013-904
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 193-207, 2013
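The contrast between the two extensions mentioned above fits in one line; this is our compressed reading of the standard formulations, not a quotation from the paper:

```latex
\text{VPRS:}\quad \operatorname{POS}_{\beta}(X) = \bigcup \{\, E \in U/R : P(X \mid E) \ge \beta \,\}, \quad \beta \in (0.5, 1];
\qquad
\text{BRS:}\quad P(X \mid E) > P(X).
```

In the variable precision model the precision threshold β is a user-set parameter, whereas the Bayesian rough set model replaces it with the prior probability P(X), so an elementary set E counts as positive evidence whenever it improves on the prior.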
Authors: Tsumoto, Shusaku | Hirano, Shoji
Article Type: Research Article
Abstract: This paper proposes a new framework for incremental induction of medical diagnostic rules based on an incremental sampling scheme and rule layers. When an example is appended, four possibilities can be considered. Thus, updates of accuracy and coverage are classified into four cases, which give two important inequalities of accuracy and coverage for the induction of probabilistic rules. By using these two inequalities, the proposed method classifies a set of formulae into four layers: the rule layer, the subrule layers (in and out) and the non-rule layer. The obtained rule and subrule layers then play a central role in updating probabilistic rules. If a new example contributes to an increase in the accuracy and coverage of a formula in the subrule layer, the formula is moved into the rule layer. If it contributes to a decrease for a formula in the rule layer, the formula is moved into the subrule layer. The proposed method was evaluated on a dataset concerning headaches; the results show that it outperforms conventional methods.
Keywords: incremental rule induction, rough sets, RHINOS, incremental sampling scheme, subrule layer
DOI: 10.3233/FI-2013-905
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 209-223, 2013
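The accuracy and coverage whose update inequalities drive the layering are the usual rule-quality measures in this line of work. For a probabilistic rule R: s → d over a universe U (our recap; the threshold names δ_α, δ_κ are illustrative):

```latex
\alpha_R(D) = \frac{|s_A \cap D|}{|s_A|} \quad (\text{accuracy}),
\qquad
\kappa_R(D) = \frac{|s_A \cap D|}{|D|} \quad (\text{coverage}),
```

where s_A is the set of examples satisfying the premise s and D the set of examples with decision d. On a plausible reading of the abstract, the rule layer holds formulae with α_R(D) ≥ δ_α and κ_R(D) ≥ δ_κ, the subrule layers hold formulae close enough to the thresholds to cross them when a new example arrives, and the non-rule layer holds the rest.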
Authors: Touati, Hakim | Ras, Zbigniew W.
Article Type: Research Article
Abstract: The effects and selection of meta-actions are the fundamental core of successful action rule execution. All atomic action terms on the left-hand side of an action rule have to be covered by well-chosen meta-actions in order for it to be executed. The choice of meta-actions depends on the antecedent side of action rules; however, it also depends on their list of atomic actions that are outside of the action rule scope, seen as side effects. In this paper, we strive to minimize the side effects by decomposing the left-hand side of an action rule into executable action rules covered by a minimal number of meta-actions and resulting in a cascading effect. This process was tested and compared to original action rules. Experimental results show that side effects are diminished in comparison with the original meta-actions applied, while keeping a good execution confidence.
Keywords: Meta-actions, Action rules decomposition, Side effects
DOI: 10.3233/FI-2013-906
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 225-240, 2013
Authors: Wang, Hui | Düntsch, Ivo | Trindade, Luis
Article Type: Research Article
Abstract: In this paper we review the Lattice Machine, a learning paradigm that “learns” by generalising data in a consistent, conservative and parsimonious way, and has the advantage of being able to provide additional reliability information for any classification. More specifically, we review the related concepts such as hyper tuples and hyper relations, the three generalising criteria (equilabelledness, maximality, and supportedness) as well as the modelling and classifying algorithms. In an attempt to find a better method for classification in the Lattice Machine, we consider the contextual probability, which was originally proposed as a measure for approximate reasoning when there is insufficient data. It was later found to be a probability function that has the same classification ability as the data-generating probability, called the primary probability. It was also found to be an alternative way of estimating the primary probability without much model assumption. Consequently, a contextual probability based Bayes classifier can be designed. In this paper we present a new classifier that utilises the Lattice Machine model and generalises the contextual probability based Bayes classifier. We interpret the model as a dense set of data points in the data space and then apply the contextual probability based Bayes classifier. A theorem is presented that allows efficient estimation of the contextual probability based on this interpretation. The proposed classifier is illustrated by examples.
Keywords: Lattice machine, contextual probability, generalisation, classification
DOI: 10.3233/FI-2013-907
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 241-256, 2013
Authors: Pancerz, Krzysztof | Suraj, Zbigniew
Article Type: Research Article
Abstract: The aim of this paper is to present methods and algorithms for the decomposition of information systems. In the paper, decomposition with respect to reducts and the so-called global decomposition are considered. Moreover, coverings of information systems by components are discussed. An essential difference between the two kinds of decomposition can be observed: in general, global decomposition can deliver more components of a given information system. This fact can be treated as a kind of additional knowledge about the system. The proposed approach is based on rough set theory. To demonstrate the usefulness of this approach, we present an illustrative example from the economic domain. The discussed decomposition methods can be applied, e.g., to the design and analysis of concurrent systems specified by information systems, to automatic feature extraction, as well as to control design for systems represented by experimental data tables.
Keywords: decomposition, information analysis, information system, knowledge representation, machine learning, reduct, rough sets
DOI: 10.3233/FI-2013-908
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 257-272, 2013
Authors: Kruczyk, Marcin | Baltzer, Nicholas | Mieczkowski, Jakub | Dramiński, Michał | Koronacki, Jacek | Komorowski, Jan
Article Type: Research Article
Abstract: An important step prior to constructing a classifier for a very large data set is feature selection. For many problems it is possible to find a subset of attributes that has the same discriminative power as the full data set. There are many feature selection methods, but in none of them are Rough Set models tied to statistical argumentation. Moreover, known methods of feature selection usually discard shadowed features, i.e. those carrying the same or partially the same information as the selected features. In this study we present Random Reducts (RR) – a feature selection method which precedes classification per se. The method is based on the Monte Carlo Feature Selection (MCFS) layout and uses Rough Set Theory in the feature selection process. On synthetic data, we demonstrate that the method is able to select otherwise shadowed features of which the user should be made aware, and to find interactions in the data set.
Keywords: Feature selection, random reducts, rough sets, Monte Carlo
DOI: 10.3233/FI-2013-909
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 273-288, 2013
Authors: Pal, Jayanta Kumar | Ray, Shubhra Sankar | Pal, Sankar K.
Article Type: Research Article
Abstract: MicroRNAs (miRNAs) are a kind of non-coding RNA which play many important roles in the eukaryotic cell. Investigations of miRNAs show that they are involved in cancer development in the animal body. In this article, a threshold based method to check the condition (normal or cancer) of the miRNAs of a given sample/patient, using the weighted average distance between the normal and cancer miRNA expressions, is proposed. For each miRNA, the city block distance between two representatives, corresponding to scaled normal and cancer expressions, is obtained. The average of all such distances for the different miRNAs is weighted by a factor to generate the threshold. The weight factor, which is cancer dependent, is determined through an exhaustive search by maximizing the F score during training. In a part of the investigation, a ranking algorithm for cancer specific miRNAs is also discussed. The performance of the proposed method is evaluated in terms of the Matthews Correlation Coefficient (MCC) and by plotting points (1 − specificity vs. sensitivity) in Receiver Operating Characteristic (ROC) space, besides the F score. Its efficiency is demonstrated on breast, colorectal, melanoma, lung, prostate and renal cancer data sets, and it is observed to be superior to some of the existing classifiers in terms of the said indices.
Keywords: miRNA expression analysis, cancer detection, pattern recognition, bioinformatics
DOI: 10.3233/FI-2013-910
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 289-305, 2013
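A minimal sketch of the thresholding scheme described above, in Python with numpy. All names are ours; the final decision step (comparing a sample's distance from the normal representative against the threshold) is our assumption, since the abstract does not spell it out:

```python
import numpy as np

def build_threshold(normal_rep, cancer_rep, weight):
    """Threshold = weight * average city-block (L1) distance between
    the scaled normal and cancer representative expression profiles."""
    return weight * np.mean(np.abs(normal_rep - cancer_rep))

def classify(sample, normal_rep, threshold):
    # Assumption: a sample is called cancerous when its average per-miRNA
    # distance from the normal representative exceeds the threshold; the
    # abstract does not spell out this final decision step.
    return "cancer" if np.mean(np.abs(sample - normal_rep)) > threshold else "normal"

def fit_weight(samples, labels, normal_rep, cancer_rep,
               grid=np.linspace(0.1, 2.0, 50)):
    """Exhaustive grid search for the weight factor maximizing the
    F score on training data, as the abstract describes."""
    def f_score(w):
        t = build_threshold(normal_rep, cancer_rep, w)
        preds = [classify(s, normal_rep, t) for s in samples]
        tp = sum(p == "cancer" and y == "cancer" for p, y in zip(preds, labels))
        fp = sum(p == "cancer" and y == "normal" for p, y in zip(preds, labels))
        fn = sum(p == "normal" and y == "cancer" for p, y in zip(preds, labels))
        return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return max(grid, key=f_score)
```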
Authors: Kryszkiewicz, Marzena
Article Type: Research Article
Abstract: The cosine and Tanimoto similarity measures are typically applied in chemical informatics, bioinformatics, information retrieval, text and web mining, as well as in very large databases for searching sufficiently similar vectors. In the case of large sparse high dimensional data sets such as text or Web data sets, one typically applies inverted indices for the identification of candidates for vectors sufficiently similar to a given vector. In this article, we offer new theoretical results on how the knowledge about non-zero dimensions of real valued vectors can be used to reduce the number of candidates for vectors sufficiently cosine and Tanimoto similar to a given one. We illustrate and discuss the usefulness of our findings on a sample collection of documents represented by a set of a few thousand real valued vectors with more than ten thousand dimensions.
Keywords: sparse data sets, high dimensional data sets, the cosine similarity, the Tanimoto similarity, text mining, data mining, information retrieval, similarity joins, inverted indices
DOI: 10.3233/FI-2013-911
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 307-323, 2013
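A baseline version of the inverted-index candidate generation discussed above, sketched in Python over sparse vectors stored as dicts (dimension → value). Function names are ours; the paper's actual contribution is tighter bounds, based on the non-zero dimensions, that prune this candidate set further:

```python
from collections import defaultdict
import math

def build_inverted_index(vectors):
    """Map each dimension to the ids of vectors that are non-zero there."""
    index = defaultdict(set)
    for vid, vec in enumerate(vectors):
        for dim in vec:
            index[dim].add(vid)
    return index

def cosine(u, v):
    dot = sum(x * v.get(d, 0.0) for d, x in u.items())
    return dot / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))

def tanimoto(u, v):
    dot = sum(x * v.get(d, 0.0) for d, x in u.items())
    nu = sum(x * x for x in u.values())
    nv = sum(x * x for x in v.values())
    return dot / (nu + nv - dot)

def similar_vectors(query, vectors, index, eps, measure=cosine):
    # Baseline candidate generation: only vectors sharing at least one
    # non-zero dimension with the query can have positive similarity,
    # so everything outside this set can be skipped outright.
    candidates = set()
    for dim in query:
        candidates |= index.get(dim, set())
    return [vid for vid in candidates if measure(query, vectors[vid]) >= eps]
```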
Authors: Kostek, Bozena | Kaczmarek, Andrzej
Article Type: Research Article
Abstract: This study aims to create an algorithm for assessing the degree to which songs belong to genres defined a priori. The algorithm is not aimed at providing an unambiguous classification-labelling of songs, but at producing a multidimensional description encompassing all of the defined genres. It utilizes data derived from the most relevant examples belonging to a particular genre of music; for this condition to be met, the data must be appropriately selected. The algorithm is based on fuzzy logic principles, which are addressed further in the text. The paper describes all steps of the experiments along with examples of analyses and results obtained.
Keywords: Music Information Retrieval (MIR), Music genre classification, Music parametrization, Query systems, Intelligent decision systems
DOI: 10.3233/FI-2013-912
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 325-340, 2013
Authors: Wakulicz-Deja, Alicja | Nowak-Brzezińska, Agnieszka | Przybyła-Kasperek, Małgorzata
Article Type: Research Article
Abstract: This paper discusses issues related to the conflict analysis method and rough set theory in the process of global decision-making on the basis of knowledge stored in several local knowledge bases. The value of rough set theory and conflict analysis applied in practical decision support systems with complex domain knowledge is shown, and examples of such decision support systems are presented. The paper proposes a new approach to the organizational structure of a multi-agent decision-making system which operates on the basis of dispersed knowledge. In the presented system, the local knowledge bases are combined into groups in a dynamic way. We seek to designate groups of local bases on which the test object is classified to the decision classes in a similar manner. Then, a process of eliminating knowledge inconsistencies is applied to the created groups. Global decisions are made using one of the methods for the analysis of conflicts.
Keywords: knowledge bases, rough set theory, conflict analysis, decision support systems, cluster analysis, relation of friendship, relation of conflict
DOI: 10.3233/FI-2013-913
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 341-356, 2013
Authors: Peters, James F. | Ramanna, Sheela
Article Type: Research Article
Abstract: This paper introduces descriptive set patterns that originated from our visits with Zdzisław Pawlak and Andrzej Skowron at Banacha and environs in Warsaw. This paper also celebrates the generosity and caring manner of Andrzej Skowron, who made our visits to Warsaw memorable events. The inspiration for the recent discovery of descriptive set patterns can be traced back to our meetings at Banacha. Descriptive set patterns are collections of near sets that arise rather naturally in the context of an extension of Solomon Leader's uniform topology, which serves as a base topology for compact Hausdorff spaces that are proximity spaces. The particular form of proximity space (called EF-proximity) reported here is an extension of the proximity space introduced by V. Efremovič during the first half of the 1930s. Proximally continuous functions, introduced by Yu.V. Smirnov in 1952, lead to pattern generation of comparable set patterns. Set patterns themselves were first considered by T. Pavlidis in 1968 and led to U. Grenander's introduction of pattern generators during the 1990s. This article considers descriptive set patterns in EF-proximity spaces and their application in digital image classification. Images belong to the same class, provided each image in the class contains set patterns that resemble each other. Image classification then reduces to determining if a set pattern in a test image is near a set pattern in a query image.
Keywords: Descriptive set pattern, EF-proximity, Grenander pattern generator, near sets, proximally continuous function, proximity space, uniform topology
DOI: 10.3233/FI-2013-914
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 357-367, 2013
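For reference, the Efremovič (EF) proximity mentioned in the abstract is usually axiomatized roughly as follows; this is our recollection of the standard axioms for a relation δ on subsets of X and should be checked against the paper:

```latex
\begin{aligned}
& A\,\delta\,B \implies B\,\delta\,A; \qquad
  A\,\delta\,B \implies A \neq \emptyset \text{ and } B \neq \emptyset;\\
& A \cap B \neq \emptyset \implies A\,\delta\,B; \qquad
  A\,\delta\,(B \cup C) \iff A\,\delta\,B \text{ or } A\,\delta\,C;\\
& A \not\delta B \implies \exists\, E \subseteq X:\ A \not\delta E
  \text{ and } (X \setminus E) \not\delta B \quad (\text{EF axiom}).
\end{aligned}
```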
Authors: Nguyen, Sinh Hoa | Nguyen, Tuan Trung | Szczuka, Marcin | Nguyen, Hung Son
Article Type: Research Article
Abstract: This paper summarizes some of the recent developments in the application of rough sets and granular computing to hierarchical learning. We present the general framework of rough set based hierarchical learning. In particular, we investigate several strategies for choosing the appropriate learning algorithms for first-level concepts, as well as the learning methods for the intermediate concepts. We also propose some techniques for embedding domain knowledge into the granular, layered learning process in order to improve the quality of hierarchical classifiers. This idea, which has been envisioned and developed by Professor Andrzej Skowron over the last 10 years, has proven very efficient in many practical applications. Throughout the article, we illustrate the proposed methodology with three case studies in the area of pattern recognition. The studies demonstrate the viability of this approach for such problems as sunspot classification, hand-written digit recognition, and car identification.
Keywords: Concept approximation, granular computing, layered learning, hand-written digits, object identification, sunspot recognition, pattern recognition, classification
DOI: 10.3233/FI-2013-915
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 369-384, 2013
Authors: Liu, Yuchao | Li, Deyi | He, Wen | Wang, Guoyin
Article Type: Research Article
Abstract: Granular computing is one of the important methods for extracting knowledge from data and has achieved a great deal. However, it is still a puzzle for granular computing researchers how to imitate the human cognitive process of automatically choosing reasonable granularities for dealing with difficult problems. In this paper, a Gaussian cloud transformation method is proposed to solve this problem, based on the Gaussian Mixture Model and the Gaussian Cloud Model. The Gaussian Mixture Model (GMM) is used to transform an original data set into a sum of Gaussian distributions, and the Gaussian Cloud Model (GCM) is used to represent the extension of a concept and measure its confusion degree. Extensive experiments on data clustering and image segmentation have been carried out to evaluate this method, and the results show its performance and validity.
Keywords: Granular computing, Gaussian Mixture Model, Gaussian Cloud Model, Data clustering, Image segmentation
DOI: 10.3233/FI-2013-916
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 385-398, 2013
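In symbols, the first of the two models combined by the transformation is the standard mixture (our notation):

```latex
p(x) = \sum_{k=1}^{K} w_k\, \mathcal{N}\!\bigl(x \mid \mu_k, \sigma_k^2\bigr),
\qquad
\sum_{k=1}^{K} w_k = 1 \quad (\text{GMM}),
```

while a Gaussian cloud represents a concept by three numerical characteristics C(Ex, En, He): the expectation Ex, the entropy En describing the concept's extension, and the hyper-entropy He, which quantifies the confusion degree referred to in the abstract.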
Authors: Pedrycz, Witold
Article Type: Research Article
Abstract: Fuzzy sets (membership functions) are numeric constructs. In spite of the underlying semantics of fuzzy sets (which is inherently linked with a higher level of abstraction), the membership grades and the processing of fuzzy sets themselves emphasize the numeric facets of all pursuits, stressing the numeric nature of membership grades and in this way reducing the interpretability and transparency of results. In this study, we advocate the idea of a granular description of membership functions where, instead of numeric membership grades, more interpretable granular descriptors are introduced (say, low membership, high membership, etc.). Granular descriptors are formalized with the aid of various formal schemes available in Granular Computing, especially sets (intervals), fuzzy sets, and shadowed sets. We formulate the problem of designing granular descriptors as a certain optimization task, elaborate on the solutions and highlight some areas of application.
Keywords: Granular Computing, granular description of membership, rough sets, shadowed sets, optimization, granular fuzzy modeling
DOI: 10.3233/FI-2013-917
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 399-412, 2013
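A small Python sketch of how a shadowed set, one of the granular descriptors named above, can be induced from numeric membership grades. The balance criterion for choosing the threshold follows Pedrycz's well-known formulation as we understand it; encoding the shadow as 0.5 is an arbitrary choice of ours:

```python
import numpy as np

def shadowed_set(mu, alpha):
    """Map numeric membership grades to three granules: grades <= alpha
    are reduced to 0, grades >= 1 - alpha are elevated to 1, and the
    rest form the 'shadow' (marked 0.5 here)."""
    out = np.full(mu.shape, 0.5)
    out[mu <= alpha] = 0.0
    out[mu >= 1 - alpha] = 1.0
    return out

def best_alpha(mu, grid=np.linspace(0.0, 0.5, 101)):
    """Balance criterion (sketch): pick alpha so that the membership
    lost by reduction plus the membership gained by elevation matches
    the size of the shadow region."""
    def imbalance(a):
        reduced = mu[mu <= a].sum()
        elevated = (1 - mu[mu >= 1 - a]).sum()
        shadow = np.count_nonzero((mu > a) & (mu < 1 - a))
        return abs(reduced + elevated - shadow)
    return min(grid, key=imbalance)
```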
Authors: Lin, Tsau Young | Liu, Yong | Huang, Wenliang
Article Type: Research Article
Abstract: This paper explains the mathematics of large-scale granular computing (GrC), augmented with a new knowledge theory, by unifying rough set theories (RS) into one single concept, namely neighborhood systems (NS). NS was first introduced in 1989 by T. Y. Lin to capture the concepts of “near” (topology) and “conflict” (security). Since 1996, when the term Granular Computing (GrC) was coined by T. Y. Lin to label Zadeh's vision, NS has been pushed into the “heart” of GrC. In 2011, LNS, the largest NS, was axiomatized; this implied that this set of axioms defines a new mathematics that realizes Zadeh's vision. The main messages are: this new mathematics is powerful and practical.
Keywords: granular computing, neighborhood system, central knowledge, rough set, topological space, variable precision rough set
DOI: 10.3233/FI-2013-918
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 413-428, 2013
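The unifying notion is simple to state (our paraphrase of Lin's definition):

```latex
NS : U \to 2^{\,2^{U}}, \qquad p \mapsto NS(p) \subseteq 2^{U},
```

i.e. each point p of the universe is assigned a family of subsets, its neighborhoods. Rough set theory is recovered when NS(p) is the singleton containing p's equivalence class [p]_R, and topology when the families satisfy the usual neighborhood axioms, which is how one concept can subsume both “near” and the rough set models.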
Authors: Stepaniuk, Jaroslaw | Kopczynski, Maciej | Grzes, Tomasz
Article Type: Research Article
Abstract: In this paper we propose a combination of the capabilities of an FPGA based device and a PC for data processing using rough set methods. The presented architecture has been tested on exemplary data sets. The obtained results confirm a significant acceleration of computation time when hardware support for rough set operations is used, in comparison to a software implementation.
DOI: 10.3233/FI-2013-919
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 429-443, 2013
Authors: Ślęzak, Dominik | Synak, Piotr | Wojna, Arkadiusz | Wróblewski, Jakub
Article Type: Research Article
Abstract: We present analytic data processing technology derived from the principles of rough sets and granular computing. We show how the idea of approximate computations on granulated data has evolved toward a complete product supporting standard analytic database operations and their extensions. We refer to our previous works, where our query execution algorithms were described in terms of iteratively computed rough approximations. We explain how to interpret our data organization methods in terms of classical rough set notions such as reducts and generalized decisions.
Keywords: Analytic Data Processing Systems, Rough-Granular Computational Models
DOI: 10.3233/FI-2013-920
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 445-459, 2013
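To make the phrase “iteratively computed rough approximations” concrete, here is a toy Python sketch of granule-level filtering for a range predicate, assuming per-pack min/max synopses. The data layout and names are hypothetical illustrations, not the product's actual API:

```python
def classify_packs(packs, lo, hi):
    """Rough approximation of a range predicate lo <= v <= hi over
    granulated storage. Each pack is a dict with 'min' and 'max'
    synopses for one column chunk. Positive packs satisfy the predicate
    entirely, negative packs cannot contain matches, and boundary packs
    must be scanned row by row."""
    positive, negative, boundary = [], [], []
    for pack in packs:
        if lo <= pack["min"] and pack["max"] <= hi:
            positive.append(pack)
        elif pack["max"] < lo or hi < pack["min"]:
            negative.append(pack)
        else:
            boundary.append(pack)
    return positive, negative, boundary

# Example: only the third pack needs exact evaluation.
packs = [{"min": 1, "max": 4}, {"min": 10, "max": 20}, {"min": 3, "max": 12}]
pos, neg, bnd = classify_packs(packs, 0, 5)
```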
Authors: Lewandowski, Jacek | Rybiński, Henryk
Article Type: Research Article
Abstract: Polyhierarchical structures play an important role in artificial intelligence, especially in knowledge representation. The main problem with using them efficiently is the lack of efficient methods of accessing related nodes, which limits practical applications. The proposed hybrid indexing approach generalizes various methods and makes it possible to combine them in a uniform manner within one index, which adapts to the particular topology of the data structure. This gives rise to a balance between the compactness of the index and fast responses to search requests. The correctness of the proposed method is formally shown, and its performance is evaluated. The results prove its high efficiency.
DOI: 10.3233/FI-2013-921
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 461-477, 2013
Authors: Zhang, Hao Lan | Liu, Jiming | Zhang, Yanchun
Article Type: Research Article
Abstract: Online social networks (OSN) are facing challenges since they have been extensively applied to different domains including online social media, e-commerce, biological complex networks, financial analysis, and so on. One of the crucial challenges for OSN lies in information overload and network congestion. The demand for efficient knowledge discovery and data mining methods in OSN has been rising in recent years, particularly for online social applications such as Flickr, YouTube, Facebook, and LinkedIn. In this paper, a Belief-Desire-Intention (BDI) agent-based method has been developed to enhance the capability of mining online social networks. Current data mining techniques encounter difficulties in dealing with knowledge interpretation based on complex data sources. The proposed agent-based mining method overcomes network analysis difficulties, while enhancing the knowledge discovery capability through its autonomy and collective intelligence.
Keywords: Online social networks, agent networks, AOC, BDI agents
DOI: 10.3233/FI-2013-922
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 479-494, 2013
Authors: Bazan, Jan G. | Buregwa-Czuma, Sylwia | Jankowski, Andrzej W.
Article Type: Research Article
Abstract: This paper investigates approaches to improving classifier quality through the application of domain knowledge. The expertise may be utilized at several levels of decision algorithms, such as: feature extraction, feature selection, the definition of temporal patterns used in the approximation of concepts (especially of complex spatio-temporal ones), the assignment of an object to a concept, and the measurement of object similarity. The incorporation of domain knowledge then results in a reduction of the size of the searched spaces. The work constitutes an overview of classifier construction methods that efficiently utilize expertise, worked out recently by Professor Andrzej Skowron's research group. The methods using domain knowledge intended to enhance the quality of classic classifiers, to identify behavioral patterns and to support automatic planning are discussed. Finally, it answers the question of whether the methods satisfy the hopes vested in them, and indicates directions for future development.
Keywords: rough set, concept approximation, ontology of concepts, discretization, behavioral pattern identification, automated planning, wisdom technology
DOI: 10.3233/FI-2013-923
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 495-511, 2013
Authors: Froelich, Wojciech | Deja, Rafał | Deja, Grażyna
Article Type: Research Article
Abstract: The disease of diabetes mellitus has spread in recent years across the world, and has thus become an even more important medical problem. Despite numerous solutions already proposed, the problem of managing glucose concentration in the blood of a diabetic patient still remains a challenge and raises interest among researchers. Data-driven models of glucose-insulin interaction are one of the recent directions of research. In particular, a data-driven model can be constructed using the idea of sequential patterns as the knowledge representation method. In this paper a new hierarchical, template-based approach for mining sequential patterns is proposed. The paper also proposes the use of functional abstractions for the representation and mining of clinical data. Owing to the expert knowledge involved in the construction of the functional abstractions and sequential templates, the discovered underlying template-based patterns can be easily interpreted by physicians and are able to provide recommendations for medical therapy. The proposed methodology was validated by experiments using real clinical data on juvenile diabetes.
Keywords: data mining, sequential patterns, diabetes mellitus
DOI: 10.3233/FI-2013-924
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 513-528, 2013
Authors: Krasuski, Adam | Wasilewski, Piotr
Article Type: Research Article
Abstract: We present a method for improving the detection of outlying Fire Service reports based on domain knowledge and dialogue with Fire & Rescue domain experts. An outlying report is considered to be an element which is significantly different from the remaining data. We follow the position of Professor Andrzej Skowron that effective algorithms in data mining and knowledge discovery in big data should incorporate interaction with domain experts and/or be domain oriented. Outliers are defined and searched for on the basis of domain knowledge and dialogue with experts. We face the problem of reducing high data dimensionality without losing the specificity and real complexity of the reported incidents. We solve this problem by introducing a knowledge-based generalization level intermediating between the analyzed data and the experts' domain knowledge. In our approach we use Formal Concept Analysis methods both for the generation of appropriate categories from the data and as tools supporting communication with domain experts. We conducted two experiments in finding two types of outliers, in which outlier detection was supported by domain experts.
Keywords: outlier detection, formal concept analysis, fire service, granular computing
DOI: 10.3233/FI-2013-925
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 529-544, 2013
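The Formal Concept Analysis machinery used above rests on the two derivation operators of a formal context (G, M, I) of objects, attributes, and an incidence relation (standard definitions, restated here for convenience):

```latex
A' = \{\, m \in M : \forall g \in A,\ (g, m) \in I \,\},
\qquad
B' = \{\, g \in G : \forall m \in B,\ (g, m) \in I \,\}.
```

A formal concept is a pair (A, B) with A′ = B and B′ = A; in this application such concepts supply the generalization-level categories generated from the report data.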
Authors: Qin, Linchan | Zhong, Ning | Lu, Shengfu | Li, Mi
Article Type: Research Article
Abstract: A lack of understanding of users' underlying decision making processes is a bottleneck for EB-HCI (eye movement-based human-computer interaction) systems. Meanwhile, considerable findings on the visual features of decision making have been derived from cognitive research over the past few years. A promising method of decision prediction in EB-HCI systems is presented in this article, inspired by looking behavior when a user makes a decision. Gaze bias and pupil dilation, two features of visual decision making, are used for judging intentions. This method combines the history of eye movements on a given interface with the visual traits of users. Hence, it improves prediction performance in a more natural and objective way. We apply the method to an either-or choice making task on commercial Web pages to test its effectiveness. Although the results show good predictive performance only for gaze bias and not for pupil dilation, they prove that employing the visual traits of users is an effective approach to improving the performance of automatic triggering in EB-HCI systems.
DOI: 10.3233/FI-2013-926
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 545-560, 2013
Authors: Czyżewski, Andrzej | Lisowski, Karol
Article Type: Research Article
Abstract: Pawlak's flowgraph has been applied as a suitable data structure for the description and analysis of human behaviour in an area supervised by a multicamera video surveillance system. Information contained in the flowgraph can easily be used to predict the consecutive movements of a particular object. Moreover, the flowgraph can support reconstructing an object's route from past video images. However, such a flowgraph, with its accumulative nature, needs a certain period of time to adapt to changes in the behaviour of objects, which can be caused, e.g., by closing a door or placing another obstacle that forces people to pass around it. In this paper a method for reducing the time needed for flowgraph adaptation is presented. Additionally, a distance measure between flowgraphs is introduced in order to determine whether carrying out the adaptation process is needed.
DOI: 10.3233/FI-2013-927
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 561-576, 2013
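The quantities attached to a Pawlak flowgraph, which the prediction and the distance measure above build on, are standard (our recap): for a branch (x, y) with throughflow φ(x, y) and total flow φ(G),

```latex
\sigma(x, y) = \frac{\varphi(x, y)}{\varphi(G)},
\qquad
\operatorname{cer}(x, y) = \frac{\sigma(x, y)}{\sigma(x)},
\qquad
\operatorname{cov}(x, y) = \frac{\sigma(x, y)}{\sigma(y)}.
```

Predicting an object's next movement then amounts to following the outgoing branch of highest certainty cer(x, y); because these flows are accumulated over time, they adapt slowly to scene changes, which is the lag the paper's method reduces.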
Article Type: Other
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 577-579, 2013