Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Papers contributing the following are encouraged:
- solutions, by mathematical methods, of problems emerging in computer science
- solutions of mathematical problems inspired by computer science.
Topics of interest include (but are not restricted to): theory of computing, complexity theory, algorithms and data structures, computational aspects of combinatorics and graph theory, programming language theory, theoretical aspects of programming languages, computer-aided verification, computer science logic, database theory, logic programming, automated deduction, formal languages and automata theory, concurrency and distributed computing, cryptography and security, theoretical issues in artificial intelligence, machine learning, pattern recognition, algorithmic game theory, bioinformatics and computational biology, quantum computing, probabilistic methods, and algebraic and categorical methods.
Authors: Cui, Zhihua | Ramanna, Sheela | Peters, James F. | Pal, Sankar K.
Article Type: Other
DOI: 10.3233/FI-2013-821
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. v-viii, 2013
Authors: Hidalgo-Herrero, Mercedes | Rabanal, Pablo | Rodríguez, Ismael | Rubio, Fernando
Article Type: Research Article
Abstract: NP-complete problems are particularly hard to solve. Unless P=NP, any algorithm solving an NP-complete problem takes exponential time in the worst case. The intrinsic difficulty of NP-complete problems when we try to solve them optimally with computers seems to apply to humans too. Intuitively, solving NP-complete problems requires taking a series of choices where each choice we take disables many subsequent choices, but the scope of dependencies between these mutually exclusive choices cannot be bounded. Thus, the problem cannot be split into smaller subproblems in such a way that their solutions can be computed independently and easily combined to construct the global solution (as happens in divide-and-conquer algorithms). Moreover, for each choice, the space of subsequent subproblems to be considered across all possible choices does not collapse into a polynomial-size set (as happens in dynamic programming algorithms). Thus, intuitively, in NP-complete problems any choice may unboundedly affect any other, and this difficulty seems to puzzle humans as much as computers. In this paper we conduct an experiment to systematically analyze the performance of humans when solving NP-complete problems. For each problem, in order to measure partial fulfillment of the decision problem goal, we consider its NP-hard optimization version. We analyze the human capability to compute good suboptimal solutions to these problems, we try to identify the kinds of problem instances which make humans compute the best and worst solutions (including the dependence of their performance on the size of problem instances), and we compare their performance with computational heuristics typically used to approximately solve these problems. We also interview experiment participants in order to infer the most typical strategies they used in the experiments, as well as how these strategies depend on the form and size of problem instances.
Keywords: NP-hard problems, Heuristic Methods, Metaheuristics, Human-Computer Comparison, Problem Solving, Learning Strategies, Human Reasoning, Testing
DOI: 10.3233/FI-2013-822
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 1-25, 2013
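As an illustration of the kind of computational heuristic the abstract above compares human solvers against, here is a minimal Python sketch of the classic greedy 2-approximation for the NP-hard minimum vertex cover problem. The choice of problem is ours for illustration only; the abstract does not name the benchmark problems used in the experiment.

def greedy_vertex_cover(edges):
    """Greedy 2-approximation: repeatedly take both endpoints of an
    uncovered edge. The result is at most twice the optimal cover size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # edge still uncovered: take both endpoints
    return cover

# A 5-edge example graph; the returned cover touches every edge.
print(greedy_vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]))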
Authors: Virginia, Gloria | Nguyen, Hung Son
Article Type: Research Article
Abstract: It is a big challenge for an information retrieval system (IRS) to interpret the queries made by users, particularly because the common form of query consists of very few terms. The tolerance rough sets model (TRSM), as an extension of rough set theory, has demonstrated its ability to enrich document representation in terms of semantic relatedness. However, system efficiency is at stake because the weight vector created by TRSM (TRSM-representation) is much less sparse. We mapped the terms occurring in the TRSM-representation to terms in the lexicon; hence the final representation of a document was a weight vector consisting only of terms that occurred in the lexicon (LEX-representation). The LEX-representation can be viewed as a compact form of the TRSM-representation in a lower-dimensional space, and it eliminates all informal terms previously occurring in the TRSM-vector. With these facts, we may expect a more efficient system. We employed recall and precision, commonly used in information retrieval, to evaluate the effectiveness of the LEX-representation. Based on our examination, we found that the effectiveness of the LEX-representation is comparable to that of the TRSM-representation, while the efficiency of the LEX-representation should be better. We concluded that lexicon-based document representation is another alternative for representing a document while taking semantics into account. We intend to implement the LEX-representation together with linguistic computation, such as tagging and feature selection, in order to retrieve more relevant terms with high weight. With regard to the TRSM method, enhancing the quality of the tolerance classes is crucial, given that the method relies fully on them. We plan to incorporate other resources, such as Wikipedia Indonesia, to generate better tolerance classes.
Keywords: Tolerance rough sets model, information retrieval
DOI: 10.3233/FI-2013-823
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 27-46, 2013
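A minimal Python sketch of the projection step described in the abstract above: terms of the TRSM-enriched weight vector that do not occur in the lexicon are dropped, yielding the sparser LEX-representation. The toy terms, weights and lexicon below are invented, and the actual TRSM weighting scheme is richer than shown.

def lex_representation(trsm_vector, lexicon):
    """Keep only the weights of terms present in the lexicon."""
    return {term: w for term, w in trsm_vector.items() if term in lexicon}

# TRSM enrichment may introduce informal terms ("retriv", "tolerans");
# projecting onto the lexicon removes them and shrinks the vector.
trsm_vector = {"retrieval": 0.8, "retriv": 0.3, "rough": 0.6, "tolerans": 0.2}
lexicon = {"retrieval", "rough", "set", "information"}
print(lex_representation(trsm_vector, lexicon))  # {'retrieval': 0.8, 'rough': 0.6}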
Authors: Hruschka Jr., E. R. | Duarte, M. C. | Nicoletti, M. C.
Article Type: Research Article
Abstract: The design and implementation of autonomous computational systems that incrementally learn, and use what has been learnt to continually refine their learning abilities over time, is still a goal far from being achieved. Such dynamic systems would conform to the main ideas of the automatic learning model conventionally characterized as never-ending learning (NEL). The never-ending approach to learning exhibits similarities to the semi-supervised (SS) model, which has been successfully implemented by bootstrap learning methods. Bootstrap learning has been one of the most successful of the SS methods proposed to date and, as such, is the natural candidate for implementing NEL systems. Bootstrap methods learn from an available labeled set of data, use the induced knowledge to label some unlabeled new data and, recurrently, learn again from both sets of data in a cyclic manner. However, the use of SS methods, particularly bootstrapping methods, to implement NEL systems can give rise to a problem known as concept-drift: errors that occur when the system automatically labels new unlabeled data can, over time, cause the system to run off track. The development of new strategies to lessen the impact of concept-drift is an important issue that should be addressed if the goal is to increase the plausibility of developing such systems with bootstrap methods. Coupling techniques can play an important role in reducing concept-drift effects in machine learning systems, particularly those designed to perform tasks related to machine reading. This paper proposes and formalizes relevant coupling strategies for dealing with the concept-drift problem in a NEL environment implemented as the system RTWP (Read The Web in Portuguese); initial results have shown them to be promising strategies for minimizing the problem under a few system settings.
Keywords: machine learning, never-ending learning, bootstrap method, coupling, concept-drift, machine reading, semi-supervised learning, autonomous intelligent system
DOI: 10.3233/FI-2013-824
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 47-61, 2013
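A schematic Python sketch of one bootstrap round with a simple coupling-style filter: an automatically produced label is accepted only when two independently trained classifiers agree with high confidence. This illustrates the general idea only; the coupling strategies formalized in the paper for RTWP are not reproduced here, and the classifiers, threshold, and synthetic data below are arbitrary choices.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

def bootstrap_round(X_lab, y_lab, X_unlab, threshold=0.9):
    """Self-label unlabeled data; keep labels passing the coupling filter."""
    clf_a, clf_b = GaussianNB(), LogisticRegression(max_iter=1000)
    clf_a.fit(X_lab, y_lab)
    clf_b.fit(X_lab, y_lab)
    pa, pb = clf_a.predict_proba(X_unlab), clf_b.predict_proba(X_unlab)
    ya, yb = pa.argmax(axis=1), pb.argmax(axis=1)
    # Coupling-style filter: both learners must agree and be confident,
    # which lessens the erroneous self-labels that drive concept-drift.
    keep = (ya == yb) & (pa.max(axis=1) > threshold) & (pb.max(axis=1) > threshold)
    return X_unlab[keep], clf_a.classes_[ya[keep]]

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 2)) + np.repeat([[0.0, 0.0], [3.0, 3.0]], 10, axis=0)
y_lab = np.repeat([0, 1], 10)
X_unlab = rng.normal(size=(50, 2)) + rng.choice([[0.0, 0.0], [3.0, 3.0]], size=50)
X_new, y_new = bootstrap_round(X_lab, y_lab, X_unlab)
print(f"accepted {len(y_new)} of {len(X_unlab)} self-labeled examples")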
Authors: Pal, Sankar K. | Chakraborty, Debarati
Article Type: Research Article
Abstract: This paper presents a novel methodology for tracking a single moving object in a video sequence applying the concept of rough set theory. The novelty of this technique is that, unlike many existing techniques, it does not assume any prior information about the video sequence. The initial target model is constructed using median-filtering-based foreground detection, and thereafter the target is reconstructed in every frame according to the rough-set-based feature reduction concept, incorporating a measure of indiscernibility instead of the indiscernibility matrix. The area of interest in every frame is first defined roughly, based on the object's shift in the previous frames, and the object is tracked after reduction of redundant features. The measure of indiscernibility of a feature is defined based on its degree of belonging (DoB) to the target. Three quantitative indices, based on rough sets, feature similarity and Bhattacharya distance, are proposed to evaluate the tracking performance and to detect mis-tracked frames during tracking so that they can be corrected. Unlike many existing measures, the proposed ones do not require knowledge of the ground truth or trajectory of the video sequence. Extensive experimental results are given to demonstrate the effectiveness of the method. Comparative performance is demonstrated both visually and quantitatively.
Keywords: Rough Set, Unsupervised Tracking, Feature Reduction, Bhattacharya distance, moving object segmentation
DOI: 10.3233/FI-2013-825
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 63-90, 2013
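A toy Python stand-in for the degree-of-belonging idea sketched above, assuming each feature is summarized as a normalized histogram: a feature's DoB to the target is scored by histogram intersection, and low-scoring features are dropped as redundant. The paper defines its own indiscernibility-based measure, which this does not reproduce; the feature names and histograms are invented.

import numpy as np

def degree_of_belonging(target_hist, candidate_hist):
    """Histogram intersection of two normalized histograms, in [0, 1]."""
    return float(np.minimum(target_hist, candidate_hist).sum())

def reduce_features(features, target_hists, frame_hists, threshold=0.5):
    """Keep only features whose DoB to the target exceeds the threshold."""
    return [f for f in features
            if degree_of_belonging(target_hists[f], frame_hists[f]) > threshold]

hists_t = {"hue": np.array([0.5, 0.3, 0.2]), "texture": np.array([0.9, 0.1, 0.0])}
hists_f = {"hue": np.array([0.4, 0.4, 0.2]), "texture": np.array([0.1, 0.2, 0.7])}
print(reduce_features(["hue", "texture"], hists_t, hists_f))  # ['hue']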
Authors: Grabska, Ewa | Ślusarczyk, Grażyna | Gajek, Szymon
Article Type: Research Article
Abstract: This paper deals with the conceptual stage of the design process related to civil engineering. Different types of design knowledge representation essential to the visual aspects of a human-computer dialogue are considered. They comprise the designer's drawings expressing forms, layouts and functionality of designed artifacts, internal graph-based data structures obtained automatically on the basis of these drawings, and logic formulas extracted from the graph data structures. The reasoning mechanism, based on first-order logic, enables the system to assess the compatibility of designs with specified requirements and constraints. The feedback given by the system, along with visualizations of initial design ideas, reinforces the designer's constructive visual perception and creativity. The presented approach is illustrated with the example of designing an indoor swimming pool.
Keywords: design actions, CAD system, conceptual design, visual thinking, visual language, design knowledge, graph-based structures, logic reasoning
DOI: 10.3233/FI-2013-826
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 91-110, 2013
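A toy Python sketch of the kind of first-order reasoning described above: the design is held as a graph (rooms and adjacencies) and a requirement is checked as a quantified formula over it. The rooms and the constraint are invented for illustration; the paper's system extracts its formulas automatically from the designer's drawings.

# Toy graph-based design model: rooms and their adjacencies.
adjacency = {
    "pool_hall": {"changing_room", "technical_room"},
    "changing_room": {"pool_hall", "entrance"},
    "entrance": {"changing_room"},
    "technical_room": {"pool_hall"},
}
room_type = {"pool_hall": "pool", "changing_room": "changing",
             "entrance": "entrance", "technical_room": "technical"}

def requirement_holds():
    """forall x: pool(x) -> exists y: changing(y) and adjacent(x, y)."""
    return all(any(room_type[y] == "changing" for y in adjacency[x])
               for x in adjacency if room_type[x] == "pool")

print(requirement_holds())  # True: every pool hall adjoins a changing room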
Authors: Saito, J. H. | Mari, J.-F. | Pedrino, E. | Destro-Filho, J. B. | Nicoletti, M. C.
Article Type: Research Article
Abstract: Recently we have witnessed research efforts into developing real-time hybrid systems that implement interactions between computational models and live tissues, in an attempt to learn more about the functioning of biological neural networks. A fundamental role in the development of such systems is played by the Multi-Electrode Array (MEA). In vitro cultures of neurons on MEAs, however, have some drawbacks, such as the need for rigorous adherence to sterile technique, careful choice and replenishment of media, and maintenance of pH, temperature, and osmolarity. An alternative way to study and investigate live tissues which partially circumvents some of the problems with in vitro cultures is to simulate them. This paper proposes Sim-MEA, a system for modeling and simulating communication between neurons in an MEA-based in vitro culture. Sim-MEA implements a modified Izhikevich model that takes into account both the distances between neurons and the distances between microelectrodes and neurons. The system also provides ways of simulating microelectrodes and their recorded signals, as well as of recovering experimental MEA culture data from their images. The soundness of the Sim-MEA simulation procedure was empirically evaluated using data from in vitro cultured hippocampal neurons of Wistar rat embryos. Results from simulations, compared to those of the in vitro experiment, are presented and discussed. The paper also describes a few experiments (varying several parameter values) to illustrate and detail the approach.
Keywords: modified Izhikevich model, simulation of MEA's culture, Multi-Electrode Array, cortical neurons, Micro-Electrode Array, field potentials, spikes, bursts
DOI: 10.3233/FI-2013-827
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 111-132, 2013
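The Izhikevich model that Sim-MEA modifies has a standard, compact form: v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with the reset v <- c, u <- u + d whenever v reaches 30 mV. Below is a minimal Euler-integrated Python sketch of the unmodified model with the standard regular-spiking parameters; the paper's distance-dependent modifications between neurons and microelectrodes are not reproduced.

def izhikevich_spikes(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, steps=2000):
    """Simulate one Izhikevich neuron under constant input I (model units);
    return the spike times in ms. a, b, c, d are the standard
    regular-spiking parameter values."""
    v, u, spikes = c, b * c, []
    for step in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: reset membrane and recovery
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich_spikes()), "spikes in 1000 ms of simulated time")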
Authors: Nicoletti, M. C. | Lisboa, F. O. S. S. | Hruschka Jr., E. R.
Article Type: Research Article
Abstract: Time plays an important role in the vast majority of problems and, as such, is a vital issue to be considered when developing computer systems for solving problems. In the literature, one of the most influential formalisms for representing time is known as Allen's temporal algebra, based on a set of 13 relations (basic and reversed) that may hold between two time intervals. In spite of having a few drawbacks and limitations, Allen's formalism is still a convenient representation due to its simplicity and implementability, and also because it has been the basis of several extensions. This paper explores the automatic learning of Allen's temporal relations by the inductive logic programming system FOIL, taking into account two possible representations for a time interval: (i) as a primitive concept and (ii) as a concept defined by the primitive concept of time point. The goals of the experiments described in the paper are (1) to explore the viability of both representations for use in automatic learning; (2) to compare the ease of learning and the interpretability of the results; (3) to evaluate the impact of the given examples on inducing a proper representation of the relations; and (4) to experiment with both representations under the closed world assumption (CWA), which would ease continuous learning using FOIL. Experimental results are presented and discussed as evidence that the CWA can be a convenient strategy when learning Allen's temporal relations.
Keywords: time representation, Allen's temporal algebra, automatic learning of temporal relations, inductive logic programming, FOIL, time points, time intervals, learning under closed world assumption
DOI: 10.3233/FI-2013-828
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 133-151, 2013
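Allen's relations have standard definitions in terms of interval endpoints, which corresponds to the time-point representation (ii) discussed above. A minimal Python sketch of the seven basic relations follows; the remaining six of the thirteen are the inverses of the first six (equals is its own inverse).

# Allen's basic relations between intervals x = (x1, x2) and y = (y1, y2),
# defined from endpoint comparisons; the relations are mutually exclusive.
ALLEN = {
    "before":   lambda x, y: x[1] < y[0],
    "meets":    lambda x, y: x[1] == y[0],
    "overlaps": lambda x, y: x[0] < y[0] < x[1] < y[1],
    "starts":   lambda x, y: x[0] == y[0] and x[1] < y[1],
    "during":   lambda x, y: y[0] < x[0] and x[1] < y[1],
    "finishes": lambda x, y: y[0] < x[0] and x[1] == y[1],
    "equals":   lambda x, y: x == y,
}

def relation(x, y):
    """Name of the relation holding from x to y (possibly an inverse)."""
    for name, holds in ALLEN.items():
        if holds(x, y):
            return name
        if holds(y, x):
            return name + "_inverse"
    raise ValueError("intervals must satisfy start < end")

print(relation((1, 3), (3, 5)))  # meets
print(relation((2, 4), (1, 6)))  # during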
Authors: Maji, Pradipta | Paul, Sushmita
Article Type: Research Article
Abstract: Cluster analysis is a technique that divides a given data set into a set of clusters in such a way that two objects from the same cluster are as similar as possible and objects from different clusters are as dissimilar as possible. Against this background, different rough-fuzzy clustering algorithms have been shown to be successful at finding overlapping and vaguely defined clusters. However, the crisp lower approximation of a cluster in existing rough-fuzzy clustering algorithms is usually assumed to be spherical in shape, which limits their ability to find clusters of arbitrary shape. In this regard, this paper presents a new rough-fuzzy clustering algorithm, termed robust rough-fuzzy c-means. Each cluster in the proposed algorithm is represented by a set of three parameters, namely, a cluster prototype, a possibilistic fuzzy lower approximation, and a probabilistic fuzzy boundary. The possibilistic lower approximation helps in discovering clusters of various shapes. The cluster prototype depends on the weighted average of the possibilistic lower approximation and the probabilistic boundary. The proposed algorithm is robust in the sense that it can find overlapping and vaguely defined clusters with arbitrary shapes in noisy environments. An efficient method, based on Pearson's correlation coefficient, is presented to select initial prototypes of different clusters. A method based on a cluster validity index is also introduced to identify optimum values of the parameters of the initialization method and of the proposed clustering algorithm. The effectiveness of the proposed algorithm, along with a comparison with other clustering algorithms, is demonstrated on synthetic as well as coding and non-coding RNA expression data sets using several cluster validity indices.
Keywords: Pattern recognition, Clustering, Rough sets, Fuzzy sets, Rough-fuzzy clustering
DOI: 10.3233/FI-2013-829
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 153-174, 2013
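A minimal Python sketch of a prototype update of the general rough-fuzzy form described above: the new centre is a weighted combination of the fuzzy-weighted mean of the lower approximation and that of the boundary region. The memberships mu, the fuzzifier m and the weight w are assumed given; computing the possibilistic and probabilistic memberships is the core of the paper's algorithm and is not reproduced here.

import numpy as np

def update_prototype(X, lower_idx, boundary_idx, mu, w=0.7, m=2.0):
    """X: (n, d) data; mu: (n,) memberships to this cluster; w weights the
    lower approximation against the boundary region."""
    def fuzzy_mean(idx):
        wts = mu[idx] ** m
        return (wts[:, None] * X[idx]).sum(axis=0) / wts.sum()
    if len(boundary_idx) == 0:        # no boundary: crisp-like cluster
        return fuzzy_mean(lower_idx)
    return w * fuzzy_mean(lower_idx) + (1 - w) * fuzzy_mean(boundary_idx)

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.5, 1.4], [0.4, 0.3]])
mu = np.array([0.9, 0.8, 0.4, 0.7])
print(update_prototype(X, [0, 1], [2, 3], mu))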
Authors: Peters, James F. | Chitcharoen, Doungrat
Article Type: Research Article
Abstract: This paper introduces sufficiently near visual neighbourhoods of points and neighbourhoods of sets in digital image flow graphs (NDIFGs). An NDIFG is an extension of a Pawlak flow graph. The study of sufficiently near neighbourhoods in NDIFGs stems from recent work on near sets and on topological spaces via near and far, especially in terms of visual neighbourhoods of points that are sufficiently near each other. From a topological perspective, non-spatially near sets represent an extension of proximity space theory and of the original insight concerning spatially near sets by F. Riesz at the International Congress of Mathematicians (ICM) in 1908. In the context of Herrlich nearness, sufficiently near neighbourhoods of sets in NDIFGs provide a new perspective on topological structures in NDIFGs. The practical implications of this work are significant. With the study of the nearness of open as well as closed neighbourhoods of points and of sets in NDIFGs, it is now possible to mine information at a more global level and to achieve new insights into the visual information embodied in the images that provide input to an NDIFG.
Keywords: Digital image, near sets, visual neighbourhood, Pawlak flow graphs, sufficiently near, topological structure
DOI: 10.3233/FI-2013-830
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 175-196, 2013
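A toy Python check of the "sufficiently near" idea above: two sets of points are descriptively near when some pair of their points has descriptions within a tolerance eps, with eps = 0 recovering exact descriptive matching. The single greylevel feature and the point names are invented stand-ins for the visual descriptions used in NDIFGs.

def sufficiently_near(A, B, describe, eps=0.1):
    """True if some a in A and b in B have descriptions within eps."""
    return any(abs(describe(a) - describe(b)) <= eps for a in A for b in B)

# Invented single-feature descriptions (normalized greylevels) of points.
greylevel = {"p1": 0.20, "p2": 0.85, "q1": 0.25, "q2": 0.90}
print(sufficiently_near({"p1", "p2"}, {"q1", "q2"}, greylevel.get))  # True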
Authors: Pal, Sankar K. | Banerjee, Romi | Dutta, Soumitra | Sarma, Samar Sen
Article Type: Research Article
Abstract: The Z-number is a new fuzzy-theoretic concept, proposed by Zadeh in 2011. It extends the basic philosophy of Computing With Words (CWW) to include the perception of uncertainty of the information conveyed by a natural language statement. The Z-number thus serves as a model of linguistic summarization of natural language statements and a technique to merge human-affective perspectives with CWW, and consequently can be envisaged to play a radical role in the domain of CWW-based system design and Natural Language Processing (NLP). This article presents a comprehensive investigation of the Z-number approach to CWW. We present here: a) an outline of our understanding of the generic architecture, algorithm and challenges underlying CWW in general; b) a detailed study of the Z-number methodology, in which we propose an algorithm for CWW using Z-numbers, define a Z-number based operator for evaluating the level of requirement satisfaction, and describe simulation experiments of CWW utilizing Z-numbers; and c) an analysis of the strengths and challenges of Z-numbers, with suggestions of possible solution strategies. We believe that this article will inspire research on the need to include human-behavioural aspects in CWW, as well as on the integration of CWW and NLP.
Keywords: Cognition, fuzzy sets, linguistics, machine learning, text-summarization, dialogue-based systems, affective computing, Natural Language Processing (NLP), perceptions, soft computing, natural computing
DOI: 10.3233/FI-2013-831
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 197-229, 2013
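As the abstract notes, a Z-number is an ordered pair Z = (A, B) of fuzzy numbers: A restricts the values a variable may take, and B grades the reliability of that restriction. A minimal Python sketch follows, assuming (for illustration only) triangular membership functions and invented numeric parameters.

from dataclasses import dataclass
from typing import Callable

def triangular(lo, peak, hi):
    """Membership function of a triangular fuzzy number on (lo, peak, hi)."""
    def mu(x):
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)
    return mu

@dataclass
class ZNumber:
    A: Callable[[float], float]  # restriction, e.g. "about 30 minutes"
    B: Callable[[float], float]  # reliability, e.g. "very likely"

# Z = ("about 30 minutes", "very likely"), encoded with invented numbers.
travel_time = ZNumber(A=triangular(20, 30, 40), B=triangular(0.7, 0.9, 1.0))
print(round(travel_time.A(28), 2), round(travel_time.B(0.85), 2))  # 0.8 0.75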
Authors: Öztürk, Mehmet Ali | İnan, Ebubekir
Article Type: Research Article
Abstract: In 1999, Molodtsov introduced the theory of soft sets, which can be seen as a new mathematical approach to vagueness. In 2002, near set theory was initiated by J. F. Peters as a generalization of Pawlak's rough set theory. In the near set approach, every perceptual granule is a set of objects that have their origin in the physical world. Objects that have affinities to some degree, i.e., objects with similar descriptions, are considered perceptually near each other. The concept of near groups has also been investigated by İnan and Öztürk [30]. The present paper aims to combine the soft set approach with near set theory, giving rise to the new concepts of soft nearness approximation spaces (SNAS) and soft lower and upper approximations. Moreover, we give some examples and properties of these soft nearness approximations.
Keywords: Near set, soft set, rough set, soft rough set, soft near set, approximation space, nearness approximation space, soft nearness approximation space
DOI: 10.3233/FI-2013-832
Citation: Fundamenta Informaticae, vol. 124, no. 1-2, pp. 231-250, 2013
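A soft set over a universe U is a pair (F, A): a set A of parameters and a map F assigning each parameter a subset of U. The toy Python sketch below adds rough-style lower and upper approximations of a target set computed from the parameter blocks; the soft nearness approximation spaces of the paper additionally involve perceptual descriptions, which are not reproduced, and the universe and parameters are invented.

# Toy soft set (F, A) over U: each parameter names a subset of U.
U = {1, 2, 3, 4, 5, 6}
F = {"cheap": {1, 2, 3}, "modern": {3, 4}, "wooden": {5, 6}}

def soft_lower(X):
    """Union of parameter blocks entirely contained in X."""
    return {u for block in F.values() if block <= X for u in block}

def soft_upper(X):
    """Union of parameter blocks intersecting X."""
    return {u for block in F.values() if block & X for u in block}

X = {2, 3, 4}
print(soft_lower(X), soft_upper(X))  # {3, 4} {1, 2, 3, 4}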