ISSN 1386-6338 (Print)
ISSN 1434-3207 (Electronic)
In Silico Biology is a scientific research journal for the advancement of computational models and simulations applied to complex biological phenomena. We publish peer-reviewed, leading-edge biological, biomedical and biotechnological research in which computer-based (i.e., in silico) modeling and analysis tools are developed and utilized to predict and elucidate the dynamics of biological systems, their design and control, and their evolution. Experimental work may also be included to support the computational analyses.
In Silico Biology aims to advance knowledge of the principles of organization of living systems. We strive to provide computational frameworks for understanding how observable biological properties arise from complex systems. In particular, we seek integrative formalisms to decipher the cross-talk underlying systems-level properties, the ultimate aim of multi-scale models.
Studies published in In Silico Biology generally use theoretical models and computational analysis to gain quantitative insights into regulatory processes and networks, cell physiology and morphology, tissue dynamics and organ systems. Special areas of interest include signal transduction and information processing, gene expression and gene regulatory networks, metabolism, proliferation, differentiation and morphogenesis, among others, and the use of multi-scale modeling to connect molecular and cellular systems to the level of organisms and populations.
In Silico Biology also publishes foundational research in which novel algorithms are developed to facilitate modeling and simulations. Such research must demonstrate application to a concrete biological problem.
In Silico Biology frequently publishes special issues on seminal topics and trends. Special issues are handled by Special Issue Editors appointed by the Editor-in-Chief. Proposals for special issues should be sent to the Editor-in-Chief.
About In Silico Biology
The term in silico is a counterpart to in vivo (in the living system) and in vitro (in the test tube) biological experiments, and implies the gain of insights by computer-based simulations and model analyses.
In Silico Biology (ISB) was founded in 1998 as a purely online journal. IOS Press became the publisher of the printed journal shortly after. Today, ISB is dedicated exclusively to biological systems modeling and multi-scale simulations and is published solely by IOS Press. The previous online publisher, Bioinformation Systems, maintains a website containing studies published between 1998 and 2010 for archival purposes.
We strongly support open communication and encourage researchers to share results and preliminary data with the community. Therefore, results and preliminary data made public through conference presentations, conference proceedings, or posting of unrefereed manuscripts on preprint servers do not preclude publication in ISB. However, upon publication, authors are required to update the preprint to include the journal reference (including DOI) and a link to the published article on the ISB website.
Abstract: The identification of common tumor signatures can reveal the shared molecular mechanisms underlying tumorigenesis, whereby tumors might be prevented and treated by systems-level intervention. We identified tumor-associated signatures including pathways, transcription factors, microRNAs and gene ontology categories by analyzing gene sets for differential expression between normal vs. tumor phenotype classes in various tumor gene expression datasets. We obtained the common tumor signatures based on their identified frequencies for different tumor types.…Some shared signatures important for various tumor types were uncovered and discussed. We propose that interventions aimed at both the shared tumor signatures and the tissue-specific tumor signatures might be a potential approach to overcoming cancer.
Keywords: Tumor, gene expression profiling, gene set enrichment analysis, bioinformatics
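The frequency-based selection of common signatures described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the signature names and the threshold are hypothetical.

```python
from collections import Counter

def common_signatures(enriched_per_dataset, min_fraction=0.5):
    """Return signatures found enriched in at least min_fraction of datasets.

    enriched_per_dataset: one set per tumor dataset, holding the signatures
    (pathways, transcription factors, microRNAs, GO categories) identified
    as enriched in that dataset.
    """
    counts = Counter(sig for sigs in enriched_per_dataset for sig in sigs)
    n = len(enriched_per_dataset)
    return {sig for sig, c in counts.items() if c / n >= min_fraction}

# Hypothetical toy input: enriched signatures from three tumor datasets
datasets = [
    {"p53_pathway", "cell_cycle", "mir-21"},
    {"p53_pathway", "apoptosis", "cell_cycle"},
    {"p53_pathway", "angiogenesis"},
]
common = common_signatures(datasets, min_fraction=2 / 3)
# "p53_pathway" (3/3 datasets) and "cell_cycle" (2/3) pass the threshold
```

In practice the per-dataset sets would come from a gene set enrichment analysis of each expression dataset, and the threshold would be chosen to balance sensitivity against tissue specificity.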
Abstract: The low reproducibility of differential expression of individual genes in microarray experiments has led to the suggestion that experiments be analyzed in terms of gene characteristics, such as GO categories or pathways, in order to enhance the robustness of the results. An implicit assumption of this approach is that the different experiments in effect randomly sample the genes participating in an active process. We argue that by the same rationale it is possible to perform this…higher-level analysis on the aggregation of genes that are differentially expressed in different expression-based studies, even if the experiments used different platforms. The aggregation increases the reliability of the results, it has the potential for uncovering signals that are liable to escape detection in the individual experiments, and it enables a more thorough mining of the ever more plentiful microarray data. We present here a proof-of-concept study of these ideas, using ten studies describing the changes in expression profiles of human host genes in response to infection by Retroviridae or Herpesviridae viral families. We supply a tool (accessible at www.cs.bgu.ac.il/~waytogo) which enables the user to learn about genes and processes of interest in this study.
Keywords: Data integration, gene expression, microarray data, Gene-Ontology
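The aggregation idea can be sketched minimally: pool the differentially expressed genes from several studies, then test each GO category for over-representation in the pooled list with an upper-tail hypergeometric test. This is a generic sketch of the approach, not the authors' tool; the function names and toy annotation are illustrative.

```python
from math import comb

def hypergeom_enrichment_p(K, n, k, N):
    """P(X >= k): probability of drawing at least k annotated genes when
    sampling n genes from a background of N, of which K are annotated to
    the category (upper-tail hypergeometric test)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

def aggregate_and_enrich(de_gene_lists, annotation, N):
    """Pool differentially expressed (DE) genes from several studies,
    possibly run on different platforms, and test each GO category.

    de_gene_lists: one set of DE genes per study.
    annotation: GO category -> set of genes annotated to it.
    N: size of the common gene background.
    """
    pooled = set().union(*de_gene_lists)
    return {cat: hypergeom_enrichment_p(len(genes), len(pooled),
                                        len(pooled & genes), N)
            for cat, genes in annotation.items()}
```

Mapping all studies onto a shared background (e.g., common gene identifiers) is the step that makes cross-platform aggregation valid.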
Abstract: Microarray technology facilitates the monitoring of the expression levels of thousands of genes over different experimental conditions simultaneously. Clustering is a popular data mining tool which can be applied to microarray gene expression data to identify co-expressed genes. Most of the traditional clustering methods optimize a single clustering goodness criterion and thus may not be capable of performing well on all kinds of datasets. Motivated by this, in this article, a multiobjective clustering technique that…optimizes cluster compactness and separation simultaneously has been improved through a novel support vector machine classification-based cluster ensemble method. The superiority of MOCSVMEN (MultiObjective Clustering with Support Vector Machine based ENsemble) has been established by comparing its performance with that of several well known existing microarray data clustering algorithms. Two real-life benchmark gene expression datasets have been used for testing the comparative performances of different algorithms. A recently developed metric, called Biological Homogeneity Index (BHI), which computes the clustering goodness with respect to functional annotation, has been used for the comparison purpose.
Keywords: Multiobjective clustering, support vector machine, cluster ensemble, microarray gene expression data
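The two objectives named in the abstract, and the Pareto comparison that multiobjective clustering rests on, can be sketched as below. This is a generic illustration of the technique, not the MOCSVMEN implementation; all names and the toy data are hypothetical.

```python
def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def compactness(points, labels, centers):
    """Mean squared distance of each point to its cluster centre (minimize)."""
    return sum(sq_dist(p, centers[l]) for p, l in zip(points, labels)) / len(points)

def separation(centers):
    """Smallest pairwise distance between cluster centres (maximize)."""
    return min(sq_dist(a, b) ** 0.5
               for i, a in enumerate(centers) for b in centers[i + 1:])

def pareto_front(solutions):
    """Keep candidate clusterings not dominated on (compactness, separation).

    solutions: tuples (compactness, separation, clustering_id).
    One solution dominates another if it is no worse in both objectives
    and strictly better in at least one.
    """
    def dominates(a, b):
        return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

A multiobjective optimizer evolves a population of clusterings and returns the Pareto front; an ensemble step (in the paper, SVM-classification based) then combines the non-dominated solutions into a single final clustering.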
Abstract: Some of the latest trends in cheminformatics, computation, and the world wide web are reviewed, with predictions of how these are likely to impact the field of cheminformatics in the next five years. The vision and some of the work of the Chemical Informatics and Cyberinfrastructure Collaboratory at Indiana University are described, which we base around the core concepts of e-Science and cyberinfrastructure that have proven successful in other fields. Our chemical informatics cyberinfrastructure is realized…by building a flexible, generic infrastructure for cheminformatics tools and databases, exporting "best of breed" methods as easily-accessible web APIs for cheminformaticians, scientists, and researchers in other disciplines, and hosting a unique chemical informatics education program aimed at scientists and cheminformatics practitioners in academia and industry.
Abstract: ChemModLab, written by the ECCR @ NCSU consortium under NIH support, is a toolbox for fitting and assessing quantitative structure-activity relationships (QSARs). Its elements are: a cheminformatic front end used to supply molecular descriptors for use in modeling; a set of methods for fitting models; and methods for validating the resulting model. Compounds may be input as structures from which standard descriptors will be calculated using the freely available cheminformatic front end PowerMV; PowerMV also supports…compound visualization. In addition, the user can directly input their own choices of descriptors, so the capability for comparing descriptors is effectively unlimited. The statistical methodologies comprise a comprehensive collection of approaches whose validity and utility have been accepted by experts in the fields. As far as possible, these tools are implemented in open-source software linked into the flexible R platform, giving the user the capability of applying many different QSAR modeling methods in a seamless way. As promising new QSAR methodologies emerge from the statistical and data-mining communities, they will be incorporated in the laboratory. The web site also incorporates links to public-domain data sets that can be used as test cases for proposed new modeling methods. The capabilities of ChemModLab are illustrated using a variety of biological responses, with different modeling methodologies being applied to each. These show clear differences in quality of the fitted QSAR model, and in computational requirements. The laboratory is web-based, and use is free. Researchers with new assay data, a new descriptor set, or a new modeling method may readily build QSAR models and benchmark their results against other findings. Users may also examine the diversity of the molecules identified by a QSAR model. Moreover, users have the choice of placing their data sets in a public area to facilitate communication with other researchers; or can keep them hidden to preserve confidentiality.
Keywords: Cheminformatics, data-mining, ensemble methods, model assessment, model validation, nearest neighbors, neural networks, QSAR, recursive partitioning, regression, support vector machine, virtual screening
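The fit-then-validate QSAR workflow the abstract describes can be illustrated in miniature: fit a model of activity on a molecular descriptor, then score it with leave-one-out cross-validated Q². This is a deliberately minimal single-descriptor sketch, not ChemModLab's code; the descriptor and activity values are made up.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b for a single descriptor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def loo_q2(xs, ys):
    """Leave-one-out cross-validated Q^2, a standard QSAR validation metric:
    1 - PRESS / total sum of squares (1.0 = perfect prediction)."""
    preds = []
    for i in range(len(xs)):
        a, b = fit_linear(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(a * xs[i] + b)
    my = sum(ys) / len(ys)
    press = sum((y - p) ** 2 for y, p in zip(ys, preds))
    return 1 - press / sum((y - my) ** 2 for y in ys)

# Hypothetical data: a lipophilicity-like descriptor vs. measured activity
descriptor = [1.2, 2.0, 2.9, 3.5]
activity = [5.1, 6.0, 7.2, 7.9]
slope, intercept = fit_linear(descriptor, activity)
q2 = loo_q2(descriptor, activity)
```

A toolbox such as ChemModLab swaps in many fitting methods (neural networks, recursive partitioning, SVMs, etc.) behind the same fit/validate interface, which is what makes cross-method benchmarking straightforward.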
Abstract: We present Illuminator, a user-friendly web front end to computational models such as docking and 3D shape similarity calculations. Illuminator was specifically created to allow non-experts to design and submit molecules to computational chemistry programs. As such it provides a simple user interface allowing users to submit jobs starting from a 2D structure. The models provided are pre-optimized by computational chemists for each specific target. We provide an example of how Illuminator was used to prioritize…the design of molecular substituents in the Anadys HCV Polymerase (NS5B) project. With 7500 submitted jobs in 1.5 years, Illuminator has allowed project teams at Anadys to accelerate the optimization of novel leads. It has also improved communication between project members and increased demand for computational drug discovery tools.