Article type: Research Article
Authors: Lu, Hongchun [a, b] | Tian, Shengwei [a, *] | Yu, Long [c] | Xing, Yan [d] | Cheng, Junlong [b, e] | Liu, Lu [f]
Affiliations: [a] School of Software, Xinjiang University, Urumqi, Xinjiang, China | [b] Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, Xinjiang, China | [c] Network Center, Xinjiang University, Urumqi, Xinjiang, China | [d] The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China | [e] School of Information Science and Engineering, Xinjiang University, Urumqi, Xinjiang, China | [f] School of Educational Science, Xinjiang Normal University, Urumqi, Xinjiang, China
Correspondence: [*] Corresponding author. Shengwei Tian, School of Software, Xinjiang University, Urumqi 830046, Xinjiang, China. E-mail: tianshengwei@163.com.
Abstract: BACKGROUND: The automatic segmentation of medical images is an important task in clinical applications. However, due to the complexity of organ backgrounds, unclear boundaries, and the variable sizes of different organs, some features are lost during network learning and segmentation accuracy is low. OBJECTIVE: These issues prompted us to study whether it is possible to better preserve the deep feature information of the image and to address the low segmentation accuracy caused by unclear image boundaries. METHODS: In this study, we (1) build a reliable deep learning network framework, named BGRANet, to improve the segmentation performance for medical images; (2) propose a packet rotation convolutional fusion encoder network to extract features; (3) build a boundary-enhanced guided packet rotation dual attention decoder network, which is used to enhance the boundary of the segmentation map and effectively fuse more prior information; and (4) propose a multi-resolution fusion module to generate high-resolution feature maps. We demonstrate the effectiveness of the proposed method on two publicly available datasets. RESULTS: BGRANet has been trained and tested on the prepared datasets, and the experimental results show that our proposed model has better segmentation performance. For 4-class classification (CHAOS dataset), the average Dice similarity coefficient reached 91.73%. For 2-class classification (Herlev dataset), the precision, sensitivity, specificity, accuracy, and Dice reached 93.75%, 94.30%, 98.19%, 97.43%, and 98.08%, respectively. The experimental results show that BGRANet can improve the segmentation effect for medical images. CONCLUSION: We propose a boundary-enhanced guided packet rotation dual attention decoder network. It achieves high segmentation accuracy with a reduced parameter count.
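The Dice similarity coefficient reported in the abstract is the standard overlap metric for segmentation masks, defined as 2|A∩B| / (|A| + |B|). A minimal NumPy sketch of this metric (the function name and the smoothing term `eps` are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient for binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 3 overlapping foreground pixels, sizes 4 and 3
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 4))  # 2*3 / (4+3) ≈ 0.8571
```

For the 4-class CHAOS setting, the per-class Dice would be computed for each organ label separately and then averaged.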
Keywords: Medical image segmentation, packet rotation convolution, dual attention mechanism, boundary enhancement, convolutional neural network
DOI: 10.3233/THC-202789
Journal: Technology and Health Care, vol. 30, no. 1, pp. 129-143, 2022