Bio-Medical Materials and Engineering - Volume 26, issue s1
Impact Factor 2024: 1.0
The aim of Bio-Medical Materials and Engineering is to promote the welfare of humans and to help them keep healthy. This interdisciplinary international journal publishes original research papers, review articles and brief notes on materials and engineering for biological and medical systems.
Articles in this peer-reviewed journal cover a wide range of topics, including, but not limited to: engineering applied to improving the diagnosis, therapy, and prevention of disease and injury, and to better substitutes for damaged or disabled human organs; studies of biomaterial interactions with the human body, biocompatibility, and interfacial and interaction problems; biomechanical behavior under biological and/or medical conditions; mechanical and biological properties of membrane biomaterials; cellular and tissue engineering, and physiological, biophysical, and biochemical bioengineering aspects; implant failure and degradation of implants; biomimetic engineering and materials, including systems analysis, to support aged people and for rehabilitation; bioengineering and materials technology applied to decontamination of environmental problems; biosensors, bioreactors, and bioprocess instrumentation and control systems; applications to food engineering; standardization problems of biomaterials and related products; assessment of the reliability and safety of biomedical materials and man-machine systems; and product liability of biomaterials and related products.
Abstract: The support vector machine (SVM) is one of the most effective classification methods for cancer detection. The efficiency and quality of an SVM classifier depend strongly on several important features and a set of proper parameters. Here, a series of classification analyses, with one set of photoacoustic data from ovarian tissues ex vivo and a widely used breast cancer dataset, the Wisconsin Diagnostic Breast Cancer (WDBC) dataset, revealed how the accuracy of an SVM classification varies with the number of features used and the parameters selected. A pattern recognition system is proposed by means of SVM-Recursive Feature Elimination (RFE) with the Radial Basis Function (RBF) kernel. To improve the effectiveness and robustness of the system, an optimized tuning ensemble algorithm called SVM-RFE(C), with a correlation filter, was implemented to quantify feature and parameter information based on cross validation. The proposed algorithm is first shown to outperform SVM-RFE on the WDBC dataset. Then, the best accuracy of 94.643% and sensitivity of 94.595% were achieved when using SVM-RFE(C) to test 57 new PAT data from 19 patients. The experimental results show that the classifier constructed with the SVM-RFE(C) algorithm is able to learn additional information from new data and has significant potential in ovarian cancer diagnosis.
Keywords: Support vector machines, ovarian cancer detection, feature selection, correlation filter, SVM-RFE
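The core loop of SVM-RFE — repeatedly discarding the feature whose linear weight has the smallest square — can be sketched as follows. This is an illustrative reconstruction, not the paper's code: a ridge least-squares fit stands in for the linear-SVM weight vector, and the function name `svm_rfe` and the regularization constant `lam` are our own choices.

```python
import numpy as np

def svm_rfe(X, y, n_keep, lam=1e-2):
    """Recursively eliminate the feature whose (surrogate) linear
    weight has the smallest square. Returns the kept feature indices
    and the eliminated indices, most recently eliminated first."""
    remaining = list(range(X.shape[1]))
    eliminated = []
    while len(remaining) > n_keep:
        Xs = X[:, remaining]
        # ridge least-squares as a cheap surrogate for linear-SVM weights
        w = np.linalg.solve(Xs.T @ Xs + lam * np.eye(len(remaining)),
                            Xs.T @ y)
        worst = int(np.argmin(w ** 2))
        eliminated.append(remaining.pop(worst))
    return remaining, eliminated[::-1]
```

In the full method the ranking would be re-estimated with an actual SVM and the retained feature count chosen by cross validation.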
Abstract: A grid-driven gridding (GDG) method is proposed to uniformly re-sample non-Cartesian raw data acquired in PROPELLER, in which a trajectory window for each Cartesian grid point is first computed. The intensity of the reconstructed image at this grid point is the weighted average of the raw data in this window. Considering the single instruction multiple data (SIMD) property of the proposed GDG, a CUDA-accelerated method is then proposed to improve its performance. Two groups of raw data sampled by PROPELLER at two resolutions are reconstructed by the proposed method. To balance the computation resources of the GPU and obtain the best performance improvement, four thread-block strategies are adopted. Experimental results demonstrate that although the proposed GDG is more time-consuming than traditional data-driven gridding (DDG), the CUDA-accelerated GDG is almost 10 times faster than traditional DDG.
Keywords: Gridding, magnetic resonance reconstruction, uniform re-sampling, CUDA acceleration
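The per-grid-point averaging that GDG describes can be sketched in a few lines. This is a minimal serial sketch under stated assumptions: the abstract does not specify the weighting kernel, so a triangular (distance-linear) window is assumed, and the function name `grid_driven_gridding` is hypothetical.

```python
import numpy as np

def grid_driven_gridding(kx, ky, data, n, width=1.0):
    """For each Cartesian grid point, average all raw samples whose
    trajectory coordinates fall inside a window around that point,
    weighted here by a triangular kernel (an assumption)."""
    grid = np.zeros((n, n), dtype=complex)
    gx = np.linspace(kx.min(), kx.max(), n)
    gy = np.linspace(ky.min(), ky.max(), n)
    for i in range(n):
        for j in range(n):
            d = np.hypot(kx - gx[j], ky - gy[i])
            w = np.clip(1.0 - d / width, 0.0, None)  # triangular window
            if w.sum() > 0:
                grid[i, j] = np.sum(w * data) / w.sum()
    return grid
```

Because every grid point is computed independently from the same inputs, the loop body maps directly onto one CUDA thread per grid point — the SIMD property the abstract exploits.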
Abstract: Diffusion tensor imaging (DTI) allows for the non-invasive in vivo mapping of brain tractography. However, fiber bundles have complex structures, such as fiber crossings, fiber branchings, and fibers with large curvatures, that DTI cannot accurately handle. This study presents a novel brain white matter tractography method using Q-ball imaging (QBI) as the data source instead of DTI, because QBI can provide accurate information about multiple fiber crossings and branchings in a single voxel using an orientation distribution function (ODF). The presented method also uses graph theory to construct a Bayesian model-based graph, so that fiber tracking between two voxels can be represented as the shortest path in the graph. Our experiments showed that the new method can accurately handle brain white matter fiber crossings and branchings, and reconstruct brain tractography in both phantom data and real brain data.
Keywords: Brain white matter, graph theory, QBI, tractography, Bayesian model
Abstract: A laser can precisely deliver quantitative energy to a desired region in a non-contact way. Since it can stimulate regions and minutely control parameters such as the intensity, duration, and frequency of the stimulus, lasers are often used in areas such as low-power laser treatment and clinical physiology. This study proposes stimulation using a pulsed diode laser with reliable output and identifies laser parameters that can produce a variety of somesthetic sensations. It is found that, typically, as frequency and energy increase, the proportion of perceived sensations increases, and the dominant sensation moves from the sense of heat through the tactile sense to pain. This study will serve as baseline data for studies of the sense of heat, the tactile sense, and pain, contribute to the field of neurophysiology, and be applicable to basic clinical research.
Abstract: Segmentation techniques are widely used to reduce noise propagation from transmission scanning in positron emission tomography. The conventional routine is to sequentially perform reconstruction and segmentation. A smoothness penalty is also usually used to reduce noise, which can be imposed on both the ML and WLS estimators. In this paper we replace the smoothness penalty with a segmentation penalty that biases the object toward a piecewise-homogeneous reconstruction. Two updating algorithms are developed to solve the penalized ML and WLS estimates, both of which monotonically decrease the cost functions. Experimental results on a simulated phantom and real clinical data demonstrate the effectiveness and efficiency of the proposed algorithms.
Keywords: Segmentation penalty, fuzzy c-means clustering (FCM), space alternating descent, auxiliary function
Abstract: This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a generalized linear classifier, the support vector machine (SVM), was chosen. To test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
Abstract: Computed tomography (CT) has been widely used to acquire volumetric anatomical information in the diagnosis and treatment of illnesses in many clinics. However, the algebraic reconstruction technique (ART) for reconstruction from under-sampled and noisy projections is still time-consuming. The goal of our work is to develop a block-wise approximate parallel implementation of the ART algorithm on a CUDA-enabled GPU, making the ART algorithm applicable to the clinical environment. The resulting method has several compelling features: (1) the rays are allotted into blocks, making the rays in the same block parallel; (2) the GPU implementation caters to actual industrial and medical application demands. We tested the algorithm on a digital Shepp-Logan phantom, and the results indicate that our method is more efficient than the existing CPU implementation. The high computational efficiency achieved by our algorithm makes it possible for clinicians to obtain real-time 3D images.
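For reference, the serial (Kaczmarz-style) ART iteration that the block-parallel CUDA version accelerates looks like this. A minimal sketch, not the paper's implementation: `A` holds one ray's system-matrix row per ray, and the relaxation factor `relax` is an assumed parameter.

```python
import numpy as np

def art_reconstruct(A, b, n_iter=10, relax=0.5):
    """Kaczmarz-style ART: cycle through rays (rows of A) and project
    the current image estimate onto each ray's hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            ai = A[i]
            denom = ai @ ai
            if denom > 0:
                x += relax * (b[i] - ai @ x) / denom * ai
    return x
```

Grouping rays that touch disjoint sets of voxels into one block makes their updates independent, which is what allows them to run in parallel on the GPU.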
Abstract: A radiotherapy treatment plan may need to be replanned due to changes in tumors and organs at risk (OARs) during treatment. Deformable image registration (DIR) based Computed Tomography (CT) contour propagation in the routine clinical setting is expected to reduce the time needed for manual tumor and OAR delineations and increase the efficiency of replanning. In this study, a DIR method was developed for CT contour propagation. Prior structure delineations were incorporated into Demons DIR by adding an intensity matching term for the delineated tissue pairs to the energy function of Demons. The performance of our DIR was evaluated with five clinical head-and-neck and five lung cancer cases. The experimental results verified the improved accuracy of the proposed registration method compared with conventional registration and Demons DIR.
Abstract: Arrhythmia diagnosis is very significant for ensuring human health. In this paper, a new model is developed for arrhythmia diagnosis. A salient feature of the algorithm is a synergistic combination of statistical and fuzzy set-based techniques. It is distribution-free and is realized in an unsupervised mode. Arrhythmia diagnosis is viewed as a statistical hypothesis test. 'Abnormal' is typically a complex concept, so it is described with the technology of fuzzy sets, which brings a facet of robustness to the overall scheme and plays an important role in the successive step of hypothesis testing. Intensive fuzzification is employed in parameter determination, which is self-adaptive, so no parameter needs to be specified by the user. The algorithm is validated with a number of experiments, which prove its effectiveness for arrhythmia diagnosis.
Abstract: Epilepsy is a neurological disorder characterized by the sudden abnormal discharging of brain neurons that can lead to electroencephalographic (EEG) abnormalities. In this study, data were obtained from epileptic patients with intracranial depth electrodes and analyzed using wavelet entropy algorithms in order to locate the epileptic foci. Significant increases in the wavelet entropy of the epileptic signals were identified during multiple episodes of clinical seizures. The results indicated that the algorithm was capable of identifying entropy changes in the epileptic sources. Furthermore, the correlations among the electrocorticogram (ECoG) signals of different channels, determined using the amplitude-amplitude coupling method, verified that the epileptic foci exhibited significantly higher coupling strengths. Thus, cross-frequency coupling (CFC) may offer insight into the energy and signal transfer modes of seizures and thereby improve diagnostic processes.
Abstract: Based on the idea of telemedicine, 24-hour uninterrupted monitoring of electrocardiograms (ECG) has started to be implemented. To create an intelligent ECG monitoring system, an efficient and quick detection algorithm for the characteristic waveforms is needed. This paper aims to give a quick and effective method for detecting QRS-complexes and R-waves in ECGs. Real ECG signals from the MIT-BIH Arrhythmia Database are used for the performance evaluation. The proposed method combines a wavelet transform and the K-means clustering algorithm. A wavelet transform is adopted for data analysis and preprocessing. Then, based on the slope information of the filtered data, a segmented K-means clustering method is adopted to detect the QRS region. Detection of the R-peak is based on comparing the local amplitudes in each QRS region, which is different from other approaches and reduces the time cost of R-wave detection. On the 8 tested records (18201 beats in total) from the MIT-BIH Arrhythmia Database, an average R-peak detection sensitivity of 99.72% and a positive predictive value of 99.80% are obtained; the average time consumed detecting a 30-min original signal is 5.78 s, which is competitive with other methods.
Keywords: ECG, detection of QRS-complex and R-wave, wavelet transform, k-means clustering
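The clustering step above amounts to splitting one-dimensional slope magnitudes into "steep" (QRS-like) and "flat" groups. A minimal 1-D k-means that would serve that purpose is sketched below; the function name `kmeans_1d` and the linearly spaced initialization are our own choices, not taken from the paper.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Plain 1-D k-means: assign each value to its nearest center,
    then recompute each center as its cluster mean. With k=2 this
    splits slope magnitudes into QRS-like and non-QRS groups."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers
```

Applied to `np.abs(np.diff(ecg))` after wavelet filtering, the high-slope cluster marks candidate QRS samples; the R-peak is then the local amplitude maximum within each marked region.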
Abstract: This paper briefly describes the basic principle of wavelet packet analysis and, on this basis, introduces the general principle of wavelet packet transformation for signal de-noising. Dynamic EEG data under +Gz acceleration were de-noised using wavelet packet transformation, and the de-noising effects with different thresholds were compared. The study verifies the validity and application value of the wavelet packet threshold method for the de-noising of dynamic EEG data under +Gz acceleration.
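The essence of wavelet-threshold de-noising — transform, soft-threshold the detail coefficients, invert — can be illustrated with a single-level Haar transform. This is a deliberately reduced stand-in for the paper's multi-level wavelet packet decomposition; the function name and threshold value are our own.

```python
import numpy as np

def haar_soft_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold de-noising: small detail
    coefficients (mostly noise) are shrunk toward zero, then the
    signal is reconstructed from the modified coefficients."""
    x = np.asarray(x, dtype=float)
    n = len(x) // 2 * 2
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

A wavelet packet version recurses on both the approximation and detail branches and thresholds at every level, which is what lets the threshold choice matter as much as the paper reports.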
Abstract: This paper presents a technical solution that analyses sleep signals captured by biomedical sensors to find possible disorders during rest. Specifically, the method evaluates electrooculogram (EOG) signals, skin conductance (GSR), air flow (AS), and body temperature. Next, a quantitative sleep quality analysis determines significant changes in the biological signals and any similarities between them in a given time period. Filtering techniques such as the Fourier transform method and IIR filters process the signal and identify significant variations. Once these changes have been identified, all significant data are compared, and a quantitative and statistical analysis is carried out to determine the level of a person's rest. To evaluate the correlation and significant differences, a statistical analysis was performed, showing correlations between the EOG and AS signals (p=0.005), the EOG and GSR signals (p=0.037), and the EOG and body temperature (p=0.04). Doctors could use this information to monitor changes within a patient.
Keywords: EOG, GSR, AS, body temperature, sleep quality, processing
Abstract: The very first step in processing an electrocardiogram (ECG) signal is to eliminate baseline wandering interference, which is usually caused by electrode-skin impedance mismatch, motion artifacts due to a patient's body movement, or respiration. A new method is thus suggested to remove baseline wandering in ECG by improving the detrending method that was originally proposed for eliminating slow non-stationary trends from heart rate variability (HRV). In our proposed method, a global trend representing the baseline wandering is estimated by merging local trends, each based on an ECG segment that represents a part of the ECG signal. The experimental results show that the improved detrending method can efficiently remove baseline wandering without distorting any morphological characteristics embedded in the ECG signal and without time delay.
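The HRV detrending method referred to above is commonly formulated with smoothness priors: the trend is obtained by solving a regularized system built from the second-difference operator. A minimal dense-matrix sketch of that formulation (our reconstruction, not the paper's improved segment-merging variant) follows; `lam` controls the cutoff between "trend" and "signal".

```python
import numpy as np

def detrend_smoothness_priors(z, lam=300.0):
    """Smoothness-priors detrending: the trend is
    (I + lam^2 * D2'D2)^(-1) z, with D2 the second-difference matrix;
    the detrended signal is z minus that trend. Dense solve, so only
    suitable for short segments."""
    N = len(z)
    D2 = np.zeros((N - 2, N))
    for i in range(N - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(N) + lam**2 * (D2.T @ D2), z)
    return z - trend, trend
```

Because it is a non-causal least-squares fit over the whole segment, the trend estimate introduces no phase delay, which matches the "no time delay" property the abstract claims.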
Abstract: The heart sound signal is a reflection of heart and vascular system motion. Long-term continuous electrocardiogram (ECG) recordings contain important information which can help prevent heart failure. A single long-term ECG recording usually consists of more than one hundred thousand data points, making it difficult to derive hidden features that may be reflected through dynamic ECG monitoring, and very time-consuming to analyze. In this paper, Dynamic Time Warping based on MapReduce (MRDTW) is proposed to make prognoses of possible lesions in patients. Through comparison of a patient's real-time ECG with reference sets of normal and problematic cardiac waveforms, the experimental results reveal that our approach not only retains high accuracy, but also greatly improves the efficiency of the similarity measure for dynamic ECG series.
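The similarity measure being distributed here is classic dynamic time warping. For reference, the textbook sequential form is below; MRDTW's contribution is distributing many such comparisons (and the reference sets) across MapReduce workers, not changing this recurrence.

```python
import math

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping with
    absolute-difference local cost. D[i][j] is the cheapest alignment
    of the first i points of a with the first j points of b."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Warping lets a beat that is stretched or compressed in time still match its reference template at zero or low cost, which is exactly why DTW suits heart-rate-varying ECG series.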
Abstract: Recently, exploring the cognitive functions of the brain by establishing a network model to understand the working mechanism of the brain has become a popular research topic in the field of neuroscience. In this study, electroencephalography (EEG) was used to collect data from subjects given four different mathematical cognitive tasks, reciting numbers clockwise and counter-clockwise and letters clockwise and counter-clockwise, to build a complex brain function network (BFN). By studying the connectivity features and parameters of these brain functional networks, it was found that the average clustering coefficient is much larger than that of the corresponding random network, while the average shortest path length is similar to that of the corresponding random network, which clearly shows the characteristics of a small-world network. The brain regions stimulated during the experiment are consistent with traditional cognitive science results regarding learning, memory, comprehension, and other rational judgments. The new complex-network method of studying the mathematical cognitive process of reciting provides an effective research foundation for exploring the relationship between brain cognition and human learning skills and memory. This could help detect memory deficits early in young and mentally handicapped children, and help scientists understand the causes of cognitive brain disorders.
Keywords: EEG, brain function network, small-world property, cognitive memory
Abstract: Unrecognized spatial disorientation (SD), which is intimately linked with brain cognitive function, is a potentially fatal issue for the safety of pilots. To explore its effects on human brain cognitive functions, electroencephalography (EEG) functional network analysis methods were adopted to examine topological changes in the connections of cognitive regions during unrecognized SD. Twelve male pilots participated in the study. They were subjected to an SD scene, namely visual rotation, which evoked unrecognized SD. For the main EEG frequency intervals, the phase lag index (PLI) and normalized mutual information (NMI) were calculated to quantify the EEG data. Then weighted connectivity networks were constructed, and their properties were characterized in terms of the average clustering coefficient and global efficiency. A t-test was performed to compare PLI, NMI and network measures under unrecognized SD and non-SD conditions. It indicated a weak functional connectivity level in the theta band under unrecognized SD, based on the significant decrease of the mean values of PLI and NMI (p<0.05). Meanwhile, both the average clustering coefficient and global efficiency in the theta band were reduced under the unrecognized SD condition. The decrease of the average clustering coefficient and global efficiency demonstrates a loss of small-world characteristics and a decline in the processing efficiency of brain cognitive regions. All the experimental results show that unrecognized SD may have a negative effect on brain functional networks in the theta band.
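The PLI used above measures how consistently one channel's instantaneous phase leads or lags another's. A minimal sketch of the standard definition, using the Hilbert analytic signal (our illustration, not the authors' code):

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """PLI = |mean(sign(sin(phase_x - phase_y)))| over time, with
    instantaneous phases taken from the analytic (Hilbert) signals.
    0 means no consistent lead/lag; 1 means a perfectly consistent one."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return abs(np.mean(np.sign(np.sin(dphi))))
```

Because zero-lag phase differences contribute sign(0) = 0, PLI discounts the spurious instantaneous coupling produced by volume conduction, which is why it is favored for scalp EEG connectivity.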
Abstract: Motor imagery EEG-based BCI has advantages in assisting human control of peripheral devices, such as a mobile robot or wheelchair, because the subject is not exposed to any stimulation and suffers no risk of fatigue. However, the intensive training necessary to recognize the numerous classes of data makes it hard to control these nonholonomic mobile systems accurately and effectively. This paper proposes a new approach which combines motor imagery EEG with the Adaptive Neural Fuzzy Inference System (ANFIS). This approach fuses the intelligence of humans, based on motor imagery EEG, with the precise capabilities of a mobile system based on ANFIS. The approach realizes multi-level control, which makes the nonholonomic mobile system highly controllable without stopping or relying on sensor information. Also, because the ANFIS controller can be trained while performing the control task, control accuracy and efficiency are increased for the user. Experimental results on a nonholonomic mobile robot verify the effectiveness of this approach.
Keywords: BCI, motor imagery EEG, ANFIS, nonholonomic mobile system
Abstract: The use of electroencephalograms (EEGs) to diagnose and analyse Alzheimer's disease (AD) has received much attention in recent years. The sample entropy (SE) has been widely applied to the diagnosis of AD. In our study, nine EEGs from 21 scalp electrodes in 3 AD patients and 9 EEGs from 3 age-matched controls were recorded. The calculations show that the kurtoses of the AD patients' EEGs are positive and much higher than those of the controls. This finding encouraged us to introduce a kurtosis-based de-noising method. The 21-electrode EEG is first decomposed using independent component analysis (ICA), and the components are then sorted by kurtosis in ascending order. Finally, the EEG signal subspace is reconstructed by back projection of only the last five components. SE is calculated after this de-noising preprocessing. The classifications show that this method can significantly improve the accuracy of SE-based diagnosis. The kurtosis analysis of EEG may contribute to increasing the understanding of brain dysfunction in AD in a statistical way.
Abstract: In this paper we report a detection method for different sleep stages based on a single-channel electroencephalogram (EEG) system. The system is simple and can be easily set up in homes to perform sleep EEG recording, overnight automatic sleep EEG staging, and sleep quality evaluation. EEG data from 14 sleeping subjects were recorded through the entire night. All subjects were within the age group of 20-30 years and had no significant sleep disorders. To analyze the EEG data, the recordings were segmented into equal time intervals. This was followed by calculation of the Sample Entropy (SampEn) for each segment, and of the SampEn's statistical characteristics, such as the median, upper quartile, lower quartile and inter-quartile range. The sleep data were divided into a training group (7 cases) and a test group (7 cases). Quantitative SampEn ranges for each sleep stage were extracted from the training group with reference to ZEO results, and these quantization ranges were used to stage the EEG data. Both the training group and test group results were close to the ZEO results, suggesting that the statistical characteristics of Sample Entropy can be used as a criterion for sleep staging and evaluation.
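Sample Entropy, the per-segment statistic this staging method relies on, has a compact standard definition: the negative log of the conditional probability that subsequences matching for m points (within tolerance r) also match for m+1 points. A straightforward sketch (our illustration; the paper's exact m and r are not stated in the abstract):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): -ln(A/B), where B counts template pairs of length
    m within Chebyshev distance r, and A counts pairs of length m+1.
    Lower values indicate a more regular signal."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()  # common default tolerance

    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            c += int(np.sum(d <= r)) - 1  # exclude the self-match
        return c

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```

Deep sleep EEG is more regular than wake/REM EEG, so its SampEn is lower; the quartile statistics of SampEn across segments then give the stage-discriminating ranges the abstract describes.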
Abstract: This paper explores the application of multifractal detrended fluctuation analysis (MF-DFA) to the nonlinear characteristics of the correlation between operation force and surface electromyography (sEMG), which is an applied frontier of human neuromuscular system activity. We established cross-correlation functions between the force signal and four typical sEMG time-frequency domain index sequences (force-sEMG cross-correlation sequences), and analyzed the sequences with MF-DFA. In addition, we demonstrated that the force-sEMG cross-correlation sequences have strong statistical self-similarity and that the fractal characteristic of the signal spectrum is similar to 1/f noise or fractional Brownian motion.
Keywords: MF-DFA, operation force signal, sEMG, cross-correlation function
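MF-DFA generalizes ordinary detrended fluctuation analysis (DFA) by raising window fluctuations to different moments q; the q=2 monofractal core, sketched below, is the building block. This is a minimal illustration, not the paper's pipeline, and it returns only the single scaling exponent alpha.

```python
import numpy as np

def dfa(x, scales):
    """Monofractal DFA (the q=2 core of MF-DFA): integrate the
    mean-removed series, remove a linear trend in windows of each
    scale, and fit log F(s) ~ alpha * log s."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        sq = []
        for k in range(n_win):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)          # local linear trend
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return alpha
```

Alpha near 0.5 indicates uncorrelated noise, near 1.0 indicates 1/f-like behavior, and above 1.0 indicates fractional-Brownian-motion-like signals — the regimes the abstract compares the force-sEMG sequences against.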
Abstract: This study investigated the correlation between AQP4 expression and DTI in rat brainstems after diffuse axonal injury (DAI). Forty rats were imaged before injury and reimaged at 3, 6, 12, 24 and 72 h post-injury. A control group of 8 rats was imaged and sacrificed for histology but not injured. After brain injury, AQP4 expression and ADC values in the brainstems increased gradually, reaching peak values at 24 h and 12 h, respectively. FA values decreased within 72 h. There was a negative correlation between ADC values and brainstem AQP4 expression at 12 h, and a positive correlation at 24 h and 72 h (P < 0.01), respectively. Changes in the ADC and FA values in the brainstems indicated brain edema and severe axonal injuries. The correlations between AQP4 expression and time-dependent ADC values aid in understanding brain edema development after DAI.
Abstract: Methods based on the magneto-acoustic effect are of great significance in studying the electrical imaging properties of biological tissues and currents. The commonly used continuous wave method can only detect the current amplitude, not the sound source position. Although the pulse mode adopted in magneto-acoustic imaging can locate the sonic source, its low measuring accuracy and low SNR have limited its application. In this study, a vector method was used to solve and analyze the magneto-acoustic signal based on the continuous sine wave mode. This study includes theoretical modeling of the vector method, simulations of the line model, and experiments with wire samples to analyze magneto-acoustic (MA) signal characteristics. The results showed that the amplitude and phase of the MA signal contain the location information of the sonic source, and that the amplitude and phase obey the vector theory in the complex plane. This study lays a foundation for a new technique to locate sonic sources for biomedical imaging of tissue conductivity. It also aids in studying biological current detection and reconstruction based on the magneto-acoustic effect.
Keywords: Magneto-acoustic effect, vector method, sonic source locating, amplitude, phase
Abstract: Decoding brain states from response patterns with multivariate pattern recognition techniques is a popular method for detecting multivoxel patterns of brain activation. These patterns are informative with respect to a subject's perceptual or cognitive states. Linear discriminant analysis (LDA) cannot be directly applied to fMRI data analysis because of the 'few samples, large features' nature of functional magnetic resonance imaging (fMRI) data. Although several improved LDA methods have been used in fMRI-based decoding, little is known regarding the relative performance of different LDA classifiers on fMRI data. In this study, we compared five LDA classifiers using both simulated data with varied noise levels and real fMRI data. The compared LDA classifiers include LDA combined with PCA (LDA-PCA), LDA with three types of regularization (identity matrix, diagonal matrix and scaled identity matrix), and LDA with an optimal-shrinkage covariance estimator using the Ledoit-Wolf lemma (LDA-LW). The results indicated that LDA-LW was the most robust to noise. Moreover, LDA-LW and LDA with a scaled identity matrix showed better stability and classification accuracy than the other methods. LDA-LW demonstrated the best overall performance.
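The scaled-identity regularization compared above can be sketched directly: shrink the pooled covariance toward a scaled identity before inverting it, which is what makes LDA usable when features outnumber samples. In LDA-LW the shrinkage intensity would be set analytically by the Ledoit-Wolf lemma; here a fixed `alpha` is used, and the function names are our own.

```python
import numpy as np

def shrinkage_lda_fit(X, y, alpha=0.5):
    """Two-class LDA with the covariance shrunk toward a scaled
    identity: S_hat = (1-alpha)*S + alpha*(tr(S)/p)*I. With p >> n,
    S is singular, and the shrinkage makes the solve well-posed."""
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Xc = np.vstack([X[y == 0] - mu0, X[y == 1] - mu1])
    S = Xc.T @ Xc / len(X)
    p = S.shape[0]
    S = (1 - alpha) * S + alpha * (np.trace(S) / p) * np.eye(p)
    w = np.linalg.solve(S, mu1 - mu0)       # discriminant direction
    b = -w @ (mu0 + mu1) / 2                # threshold at the midpoint
    return w, b

def shrinkage_lda_predict(X, w, b):
    return (X @ w + b > 0).astype(int)
```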
Abstract: In this study, to avoid the radiation generated in conventional scoliosis diagnosis, a system that can diagnose scoliosis using an infrared camera and optical markers was developed. In the developed system, optical markers attached along the spinal curvature are recognized by the infrared camera, and the angle between two optical markers is measured. For the angle measurement, we used the Cobb's angle method employed in the diagnosis of spinal scoliosis. We developed software, written in LabVIEW, to diagnose spinal scoliosis using the infrared camera and output the result to the screen; it is composed of a camera output unit, an angle measurement unit, and a Cobb's angle measurement unit. In the future, the system is expected to be applied to the diagnosis of other orthopedic disorders, such as kyphosis and hallux valgus.
Abstract: In this work, abundant anticorrelated networks were successfully detected in resting-state fMRI-BOLD data from 20 subjects. The spatial independent component analysis (sICA) method was applied at both the individual and group levels. At the individual level, for each subject, 30 independent components (ICs) were estimated, and each IC was transformed using Z-score mapping. Voxels with Z-scores >5 and <-5 were denoted as positive signals (PS) and negative signals (NS), respectively. The correlation coefficients between the mean time series of the PS and NS voxels were computed; if the calculated coefficient was <-0.3, the PS and NS voxels were considered to form an anticorrelated PS-NS network. It was found that 36.5% of the ICs contained an anticorrelated PS-NS network. The spatiotemporal patterns of most PS-NS networks varied from subject to subject, but three networks displayed spatial patterns that were comparably consistent among different subjects. In the group-level analysis, no anticorrelated PS-NS networks were detected. Our results suggest that future investigations adopt a broader approach for negative BOLD signal characterization. Combined consideration of PS and NS systems helps to better elucidate hemodynamic and neuronal brain behavior and to further develop understanding of the neural mechanisms of brain information processing.
Abstract: In this study, a computer-aided detection (CAD) system was developed for the detection of lung nodules in computed tomography images. The CAD system consists of four phases, including two-dimensional and three-dimensional preprocessing phases. In the feature extraction phase, four different groups of features are extracted from the volumes of interest: morphological features, statistical and histogram features, statistical and histogram features of the outer surface, and texture features of the outer surface. The support vector machine algorithm, optimized using particle swarm optimization, is used for classification. The CAD system provides 97.37% sensitivity, 86.38% selectivity, 88.97% accuracy and 2.7 false positives per scan using three groups of classification features. After the inclusion of outer surface texture features, the classification results of the CAD system reach 98.03% sensitivity, 87.71% selectivity, 90.12% accuracy and 2.45 false positives per scan. Experimental results demonstrate that outer surface texture features of nodule candidates are useful for increasing sensitivity and decreasing the number of false positives in the detection of lung nodules in computed tomography images.
Abstract: Here we developed a real-time photoacoustic tomography (PAT) imaging acquisition device based on the linear array transducer used in ultrasonic devices. We also produced a phantom containing diverse contrast media and acquired PAT images while changing the light source wavelength to see whether the contrast media reacted. Indocyanine green showed the highest reaction around the 800-nm band, methylene blue in the 750-nm band, and gold nanoparticles in the 700-nm band. However, in the case of superparamagnetic iron oxide, we observed no reaction within the wavelength bands used herein. Moreover, we applied selective filtering to the acquired PAT images to remove surrounding noise and reinforce the object area. Consequently, the object area in the images was effectively detected and the image noise was removed.
Abstract: To improve the speed and accuracy of cerebral vessel extraction, a fast and robust method is proposed in this paper. First, the volume data are divided into sub-volumes using an octree, and at the same time invalid volume data are eliminated. Second, fuzzy connectedness is introduced to achieve fast cerebral vessel segmentation from 3D MRA images. The values of the gradient and Laplacian transformations are then calculated to improve the accuracy of the distance field. Last, the center of gravity is utilized to refine the initial centerline to bring it closer to the actual centerline of the vessel cavity. The experiments demonstrate that the proposed method can effectively improve the speed and precision of centerline extraction.
Abstract: Pelger-Huet anomaly (PHA) and Pseudo Pelger-Huet anomaly (PPHA) are neutrophils with abnormal morphology. They have a bilobed or unilobed nucleus and excessive clumping chromatin. Currently, the detection of this kind of cell mainly depends on manual microscopic examination by a clinician; thus, the quality of detection is limited by the efficiency and a certain subjective consciousness of the clinician. In this paper, a detection method for PHA and PPHA is proposed based on karyomorphism and chromatin distribution features. Firstly, the skeleton of the nucleus is extracted using an augmented Fast Marching Method (AFMM) and the width distribution is obtained through a distance transform. Then, caryoplastin in the nucleus is extracted based on Speeded Up Robust Features (SURF) and a K-nearest-neighbor (KNN) classifier is constructed to analyze the features. Experiments show that the sensitivity and specificity of this method reached 87.5% and 83.33%, which means that the detection accuracy of PHA is acceptable. Meanwhile, the detection method should be helpful for the automatic morphological classification of blood cells.
Keywords: Pelger-Huet anomaly, fast marching method, SURF
Abstract: In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept, the top (k1, k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1, k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods’ high accuracy and high efficiency.
Abstract: Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated by psychophysical experiments. The results indicated that the image processing strategy improved visual performance compared with directly merging pixels to low resolution. These image processing methods assist epiretinal prostheses in vision restoration.
Abstract: Due to the inherent speckle and low contrast of ultrasonic images, the accurate and efficient location of regions of interest (ROIs) is still a challenging task for breast ultrasound (BUS) computer-aided diagnosis (CAD) systems. In this paper, a fully automatic and efficient ROI generation approach is proposed. First, a BUS image is preprocessed to improve image quality. Second, a phase of max-energy orientation (PMO) image of the preprocessed image is calculated. Otsu’s threshold selection method is then used to binarize the preprocessed image, the phase image and the composed image obtained by adding and normalizing the two images. Finally, a region selection algorithm is developed to select the true tumor region from these three binary images before generating a final ROI. The method was validated on a BUS database with 168 cases (81 benign and 87 malignant); the accuracy, average precision rate and average recall rate were calculated and compared with a conventional method. The results indicate that the proposed method is more accurate and efficient in locating ROIs.
Keywords: breast tumor, ultrasound image, region of interest, phase in max-energy orientation, region selection
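Otsu’s threshold selection, used above to binarize the preprocessed, phase and composed images, picks the gray level that maximizes the between-class variance of the intensity histogram. A minimal numpy sketch on a synthetic bimodal "image" (the function name and toy data are assumptions, not the paper's code):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability at each cut
    mu = np.cumsum(p * np.arange(nbins))        # cumulative mean (in bin indices)
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)            # endpoints give 0/0 -> treat as 0
    k = int(np.argmax(sigma_b))
    return float(edges[k + 1])                  # cut lies just above the winning bin

# Bimodal toy "image": dark background around 40, bright region around 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(40, 10, 5000),
                      rng.normal(200, 10, 1000)]).clip(0, 255)
t = otsu_threshold(img)
binary = img > t
```

The returned threshold lands in the valley between the two modes, which is exactly the behavior the ROI pipeline relies on when combining the three binarized images.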
Abstract: Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing of brain magnetic resonance (MR) images, such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics; 2) the gray-level co-occurrence matrix (GLCM); 3) the gray-level run-length matrix (GLRLM); and 4) Tamura texture features. To obtain the ranking of discrimination of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that the new automated system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
Abstract: With the aim of developing an accurate pathological brain detection system, we proposed a novel automatic computer-aided diagnosis (CAD) method to distinguish pathological brains from normal brains in magnetic resonance imaging (MRI) scans. The problem remains a challenge for technicians and clinicians, since MR imaging generates an exceptionally large information dataset. A new two-step approach was proposed in this study. We used wavelet entropy (WE) and Hu moment invariants (HMI) for feature extraction, and the generalized eigenvalue proximal support vector machine (GEPSVM) for classification. To further enhance classification accuracy, the popular radial basis function (RBF) kernel was employed. The results of 10 runs of k-fold stratified cross validation showed that the proposed “WE + HMI + GEPSVM + RBF” method was superior to existing methods with respect to classification accuracy. It obtained average classification accuracies of 100%, 100%, and 99.45% over Dataset-66, Dataset-160, and Dataset-255, respectively. The proposed method is effective and can be applied to realistic use.
Keywords: Wavelet entropy, Hu’s moment invariant, magnetic resonance imaging, support vector machine, computer-aided diagnosis, radial basis function
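The wavelet entropy feature used above is typically the Shannon entropy of the relative energy across wavelet decomposition levels. A minimal 1-D sketch with a hand-rolled Haar transform (the actual wavelet family, number of levels, and 2-D extension used in the paper are not specified here; everything below is an illustrative assumption):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform of an even-length 1-D signal."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def wavelet_entropy(x, levels=3):
    """Shannon entropy of relative wavelet energy across decomposition levels."""
    energies = []
    a = np.asarray(x, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(np.sum(d ** 2))      # detail energy per level
    energies.append(np.sum(a ** 2))          # final approximation energy
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]                             # 0 * log(0) := 0
    return float(-np.sum(p * np.log2(p)))
```

A constant signal concentrates all energy in the approximation band (entropy 0), while a noisy signal spreads energy over the bands and yields a high entropy; that contrast is what makes the feature discriminative.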
Abstract: The level set method has been widely used in medical image analysis, but it has difficulties when applied to the segmentation of left ventricular (LV) boundaries in echocardiography images, because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. It improves the accuracy of boundary detection and makes the evolution more efficient. The experiments conducted on real cardiac ultrasound image sequences show a positive and promising result.
Keywords: Level set method, image segmentation, active shape model
Abstract: 3D anatomical feature curves (AFC) on bone models reconstructed from CT/MRI images are important in several fields, such as preoperative planning, intra-operative navigation and patient-specific prosthesis design. Interactive extraction of feature curves on patient-specific bone models is time-consuming and has low repeatability and accuracy. This paper presents a computer graphics method to automatically extract AFC from 3D hip bone models reconstructed from CT images. A DCSS (direct curvature scale space)-based technique is first used to extract anatomical feature points (AFP) in every contour, using anatomical structure information as prior knowledge so that all and only the AFP are extracted. Then, corresponding AFP are linked across different contours and the AFC are generated. AFC obtained by our method were compared with those interactively extracted by three surgeons, which showed that our method is feasible (Dice coefficient: 0.94; average symmetric surface distance: 3.97 mm). The method was also applied to identify anatomical landmarks, which showed that our method is superior to curvature-based methods, which either fail to identify landmark regions or produce too many redundant regions, resulting in failures to subsequently label landmark regions using pre-defined spatial adjacency matrices.
Keywords: Anatomical, feature, curve, landmark, hip
Abstract: For the quantitative analysis of glioma, multimodal Magnetic Resonance Imaging (MRI) signals are required in combination to perform a complementary analysis of morphological, metabolic, and functional changes. Most morphological analyses are based on T1-weighted and T2-weighted signals, called traditional MRI, but more detailed information about tumorous tissues cannot be revealed. An information combination scheme of Diffusion-Weighted Imaging (DWI) and Blood-Oxygen-Level-Dependent (BOLD) contrast imaging is proposed in this paper. This is a non-model segmentation scheme for brain glioma tissues from the particular perspective of combining multiple parameters of DWI and BOLD contrast functional Magnetic Resonance Imaging (fMRI). Compared with traditional MRI, a promising advantage of our work is that it provides an effective and adequate subdivision of the pathological regions related to glioma, by incorporating knowledge of both image gray level and spatial structure. Furthermore, it is an automatic segmentation method that requires no parameter selection or model fitting for the extracted tissues. In experiments on patients with glioma, the proposed method achieved average overlap ratios of 83.6% in the whole tumor region and 82.5% in the peritumoral edema region, with manual segmentation as the “ground truth”.
Abstract: In this study, we develop a new algorithm based on fractional operators of variable order to enhance image quality. First, three kinds of popular high-order discrete formulas are adopted to obtain the coefficients; subsequently, a mask optimization method for selecting the fractional order adaptively is applied to construct a variable-order fractional differential mask along with the coefficients generated in the first step. We carry out experiments on OCT thoracic aorta images and some natural images with low contrast and noise, demonstrating that the high-order discrete method leads to significantly better performance in enhancing edge information nonlinearly compared to the standard first-order discrete method. Moreover, the optimized mask with variable order of the fractional derivative not only preserves the edge information of the processed images adequately, but also effectively suppresses the noise in smooth areas.
Abstract: An automatic method for estimating the density of cell nuclei in whole slide images (WSIs) of hepatic histological sections is presented. The method can be applied to hematoxylin-eosin (H&E) stained sections with slight histological atypism, such as early well-differentiated hepatocellular carcinoma (ewHCC). It is shown that the measured nuclear density is affected by the nuclear size due to fragments of nuclei. This size-dependent problem has been solved by estimating the standard nuclear area for each image patch. The method extracts typical nuclei, which are used to automatically adjust the parameters, including the standard nuclear area. The method is robust to variations in contrast, color, and nuclear size. 40 image patches sampled from 20 WSIs of surgical sections were used for accuracy evaluation. The mean absolute percentage error of the estimated nuclear densities was 8.2%. It was also confirmed that the distributions of nuclear density were successfully estimated and visualized for all 20 WSIs. The computation time for a WSI of a typical surgical section (754 mm2, about 1,280,000 nuclei) was about 57 minutes on a PC.
Abstract: Multi-threshold image segmentation is a powerful image processing technique that is used in the preprocessing stages of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they exhaustively search for the optimal thresholds that optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values maximizing Otsu’s objective function for eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself robust and effective through numerical experimental results, including Otsu’s objective values and standard deviations.
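The exhaustive search that motivates the metaheuristic above can be sketched directly: for k thresholds over an nbins-bin histogram, it enumerates all C(nbins−1, k) cut combinations and keeps the one maximizing Otsu’s between-class variance. This O(nbins^k) cost is what the flower pollination algorithm avoids. The function below is an illustrative baseline on a synthetic trimodal image, not the paper’s implementation:

```python
import numpy as np
from itertools import combinations

def otsu_multithresh_exhaustive(img, k=2, nbins=64):
    """Exhaustively search k thresholds maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    mu_t = np.sum(p * centers)                      # global mean
    best_var, best_t = -1.0, None
    for cuts in combinations(range(1, nbins), k):   # all C(nbins-1, k) splits
        var, prev = 0.0, 0
        for c in list(cuts) + [nbins]:
            w = p[prev:c].sum()                     # class weight
            if w > 0:
                mu = np.sum(p[prev:c] * centers[prev:c]) / w
                var += w * (mu - mu_t) ** 2         # between-class contribution
            prev = c
        if var > best_var:
            best_var, best_t = var, [float(edges[c]) for c in cuts]
    return best_t

# Trimodal toy image: modes near 30, 120 and 220.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(30, 8, 3000),
                      rng.normal(120, 8, 3000),
                      rng.normal(220, 8, 3000)])
t1, t2 = otsu_multithresh_exhaustive(img, k=2)
```

With three well-separated modes the two recovered thresholds fall in the valleys between them; a metaheuristic aims for the same optimum while evaluating only a small fraction of the candidate cut sets.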
Abstract: Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is widely used for breast lesion differentiation. Manual segmentation in DCE-MRI is difficult and open to viewer interpretation. In this paper, an automatic segmentation method based on image manifold revealing was introduced to overcome the problems of currently used methods. First, high-dimensional datasets were constructed from a dynamic image series. Next, an embedded image manifold was revealed in the feature image by a nonlinear dimensionality reduction technique. In the last stage, k-means clustering was performed to obtain the final segmentation results. The proposed method was applied to actual clinical cases and compared with the gold standard. Statistical analysis showed that the proposed method achieved acceptable accuracy, sensitivity, and specificity rates.
Keywords: Breast lesion, manifold learning, DCE-MRI, tumor segmentation, dimensionality reduction
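The final clustering stage above is plain k-means on the low-dimensional per-voxel features produced by the manifold embedding. A minimal Lloyd's-algorithm sketch on toy 2-D feature vectors (the embedding itself, the feature dimensionality, and the data here are all illustrative assumptions):

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Toy stand-in for per-voxel features after dimensionality reduction:
# two well-separated 2-D clusters (e.g. lesion vs. background) of 100 points.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
               rng.normal(10.0, 0.5, (100, 2))])
centroids, labels = kmeans(X, k=2)
```

In the segmentation setting, each row of X would be one voxel's embedded feature vector, and the cluster labels are mapped back to the image grid to form the segmentation mask.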
Abstract: In this paper, a fully automatic scheme for measuring liver volume in 3D MR images was developed. The proposed MRI liver volumetry scheme consisted of four main stages. First, a preprocessing stage was applied to T1-weighted MR images of the liver in the portal-venous phase to reduce noise. The histogram of the 3D image was determined, and the second-to-last peak of the histogram was calculated using a neural network. Thresholds determined from the second-to-last peak were used to generate a thresholded image, which was refined using a gradient magnitude image. Morphological and connected-component operations were applied to the refined image to generate the rough shape of the liver. A 3D geodesic-active-contour segmentation algorithm refined the rough shape in order to determine the liver boundaries more precisely. The liver volumes determined by the proposed automatic volumetry were compared to those manually traced by radiologists; these manual volumes were used as a “gold standard.” The two volumetric methods reached an excellent agreement. The Dice overlap coefficient and the average accuracy were 91.0 ±2.8% and 99.0 ±0.4%, respectively. The mean processing time for the proposed automatic scheme was 1.02 ±0.08 min (CPU: Intel Core i7, 2.8 GHz), whereas that of the manual volumetry was 24.3 ±3.7 min (p < 0.001).
Keywords: Liver volumetry, MR volumetry, resection, transplantation, magnetic resonance imaging
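The "second-to-last histogram peak" step above can be illustrated with a simple peak scan. Here a moving-average smoothing plus a windowed local-maximum test stands in for the neural-network peak detector the paper describes, and the data, bin counts and height cutoff are illustrative assumptions:

```python
import numpy as np

def second_to_last_histogram_peak(img, nbins=256, smooth=9, min_height=0.1):
    """Locate the second-to-last significant peak of an intensity histogram.
    Smoothing + windowed local-maximum scan is a stand-in for the
    neural-network peak detector described in the abstract."""
    hist, edges = np.histogram(img, bins=nbins)
    h = np.convolve(hist.astype(float), np.ones(smooth) / smooth, mode="same")
    peaks = [i for i in range(1, nbins - 1)
             if h[i] == h[max(0, i - 15):i + 16].max()   # max of its window
             and h[i] > min_height * h.max()]            # ignore tiny bumps
    assert len(peaks) >= 2, "expected at least two histogram peaks"
    i = peaks[-2]                                        # second-to-last peak
    return float((edges[i] + edges[i + 1]) / 2)

# Toy intensity volume: a dominant background mode near 50 and a
# liver-like tissue mode near 180; the second-to-last peak is the lower one.
rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(50, 10, 5000),
                      rng.normal(180, 10, 1500)])
peak = second_to_last_histogram_peak(img)
```

The liver-volumetry scheme would then derive its thresholds from this peak location before the gradient-based and morphological refinements.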
Abstract: Pinhole SPECT for small animals has become a routine procedure in many applications of molecular biology and pharmaceutical development. There is an increasing demand for whole-body imaging of lab animals. A simple and direct solution is to scan the object along a helical trajectory, similar to a helical CT scan. The corresponding acquisition time can be greatly reduced, while overlapping and gaps between consecutive bed positions can be avoided. However, helical pinhole SPECT inevitably leads to a tremendous increase in computational complexity when iterative reconstruction algorithms are applied. We suggest a novel voxel-driven (VD) system model that can be integrated with the geometric symmetries of the helical trajectory for fast iterative image reconstruction. Such a model construction can also achieve faster calculation and a lower storage requirement for the system matrix. Due to the independence among the various symmetries, it permits parallel coding to further boost the computational efficiency of forward/backward projection. A phantom study also indicates that the proposed VD model can adequately model the helical pinhole SPECT scanner with a manageable storage size of the system matrix and a clinically acceptable computational load for reconstruction.
Keywords: Pinhole SPECT, helical trajectory, voxel-driven system model, ray-tracing algorithms
Abstract: Static imaging in electrical impedance tomography can obtain the absolute electrical conductivity distribution on one section of the subject. The test is performed on a cylindrical physical phantom in which a slim rectangle, a hollow cylinder, a small rectangle or three cylinders are selected to simulate complex conductivity perturbation objects. The measurement data are obtained by a data acquisition system with 32 compound electrodes. A group of static images of the conductivity distribution in the cylindrical phantom are reconstructed by the modified Newton-Raphson algorithm with two kinds of regularization methods. The results show the correct position, size, conductivity difference, and similar shape of the perturbation objects in the images.
Abstract: Low-dose computed tomography reconstruction is an important issue in the medical imaging domain, and sparse-view scanning has been widely studied as a potential strategy. The compressed sensing (CS) method has shown great potential to reconstruct high-quality CT images from sparse-view projection data. Nonetheless, low-contrast structures tend to be blurred by total variation (TV, the L1-norm of the gradient image) regularization. Moreover, TV produces blocky effects on smooth and edge regions. To overcome this limitation, this study proposes an iterative image reconstruction algorithm combining L1 regularization and smoothed L0 (SL0) regularization. SL0 is a smooth approximation of the L0 norm and avoids the L0 norm's sensitivity to noise. To evaluate the proposed method, both qualitative and quantitative studies were conducted on a digital Shepp-Logan phantom and a real head phantom. Experimental comparisons indicate that the proposed L1/SL0-POCS algorithm can effectively suppress noise and artifacts, as well as preserve more structural information than other existing methods.
Keywords: Sparse-view reconstruction, smoothed L0 regularization, L1 regularization, total variation
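The smoothed L0 idea above replaces the non-differentiable count of nonzeros with a sum of Gaussians: F_sigma(x) = sum_i (1 − exp(−x_i² / (2σ²))), which tends to the L0 norm as σ → 0 while staying smooth (and hence less noise-sensitive) for larger σ. A direct numpy sketch of that measure (the specific Gaussian form is the standard SL0 choice; whether the paper uses exactly this kernel is an assumption):

```python
import numpy as np

def smoothed_l0(x, sigma):
    """Smoothed L0 measure: sum_i (1 - exp(-x_i**2 / (2 * sigma**2))).
    As sigma -> 0 this approaches the number of nonzero entries of x,
    while remaining differentiable everywhere."""
    x = np.asarray(x, float)
    return float(np.sum(1.0 - np.exp(-x ** 2 / (2.0 * sigma ** 2))))

x = np.array([0.0, 0.0, 3.0, -2.0, 0.0])   # a 2-sparse vector
tight = smoothed_l0(x, sigma=0.01)          # ~= L0 norm = 2
loose = smoothed_l0(x, sigma=10.0)          # heavily smoothed, far below 2
```

In an SL0-regularized reconstruction, σ is typically annealed from large to small across iterations, so the penalty starts smooth and gradually sharpens toward the true sparsity count.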
Abstract: Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Diagnosis from low-resolution (LR) and noisy images is usually difficult and inaccurate. Resolution enhancement through conventional interpolation methods strongly affects the precision of subsequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse-coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization yields a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is better than that of other state-of-the-art schemes.
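The baseline that ROMP regularizes, plain orthogonal matching pursuit, can be sketched in a few lines: greedily pick the atom most correlated with the residual, then least-squares refit over the selected support. This is standard OMP, not the paper's ROMP variant (which additionally selects regularized groups of atoms per iteration); the dictionary and signal below are toy assumptions:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit over a dictionary D with unit-norm columns."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit of y over the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Toy dictionary (30-dim signals, 50 unit-norm atoms) and an exactly
# 2-sparse signal built from atoms 3 and 7.
rng = np.random.default_rng(0)
D = rng.normal(size=(30, 50))
D /= np.linalg.norm(D, axis=0)
y = 1.5 * D[:, 3] - 0.8 * D[:, 7]
x = omp(D, y, n_nonzero=2)
```

In the K-SVD training loop described above, this sparse-coding step (with ROMP in place of OMP) runs once per training patch per outer iteration, followed by the atom-update stage.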
Abstract: Electron tomography (ET) is an essential imaging technique for studying the structures of large biological specimens. These structures are reconstructed from a set of projections obtained at different sample orientations by tilting the specimen. However, most existing reconstruction methods are not appropriate when the data are extremely noisy and incomplete. A new iterative method is proposed: adaptive simultaneous algebraic reconstruction with an inter-iteration adaptive non-linear anisotropic diffusion (NAD) filter (FASART). We also adopt an adaptive parameter and discuss the step size for the filter in this reconstruction method. Experimental results show that FASART can restrain the noise generated in the process of iterative reconstruction while preserving more details of the structure edges.
Keywords: electron tomography (ET), 3D reconstruction, iterative method, non-linear anisotropic diffusion (NAD) filter
Abstract: Recently, advanced visualization techniques in computer graphics have considerably enhanced the visual appearance of synthetic models. To realize enhanced visual graphics for synthetic medical effects, the first step, followed by rendering techniques, involves attaching albedo textures to the region where a certain graphic is to be rendered. For instance, in order to render wound textures efficiently, the first step is to recognize the area where the user wants to attach a wound. However, in general, face indices are not stored in sequential order, which makes sub-texturing difficult. In this paper, we present a novel mesh tagging algorithm that performs mesh traversal and level extension in the general case of wound sub-texture mapping and selected-region deformation on a three-dimensional (3D) model. This method works automatically on both regular and irregular mesh surfaces. The approach consists of mesh selection (MS), mesh leveling (ML), and mesh tagging (MT). To validate our approach, we performed experiments synthesizing wounds on a 3D face model and on a simulated mesh.
Abstract: Magnetic Resonance Imaging (MRI) has been one of the most revolutionary medical imaging modalities of the past three decades. It has been recognized as a powerful technique for the clinical diagnosis of diseases as well as tumor differentiation. Although MRI has now become the preferred choice in many clinical examinations, some drawbacks still limit its applications. One of the crucial issues of MRI is the geometric distortion caused by magnetic field inhomogeneity and susceptibility effects. The farther a lesion is from the center of the magnetic field (off-center field), the more severe the distortion becomes, especially in low-field MRI. Hence, field inhomogeneity might hinder the diagnosis and characterization of lesions. In this study, an innovative multi-orientated water phantom was used to evaluate the geometric distortion. The correlations between the level of image distortion and the relative off-center positions, as well as the variation of signal intensities, were investigated. The image distortion ratios of axial, coronal and sagittal images were calculated.
Keywords: Geometric distortion, field inhomogeneity, off-center field, multi-orientated water-phantom