Article type: Research Article
Authors: Wang, Yun[a] | Chang, Wanru[b] | Huang, Chongfei[c] | Kong, Dexing[a, d, *]
Affiliations: [a] School of Mathematical Sciences, Zhejiang University, Hangzhou, China | [b] College of Mathematics and Computer Science, Zhejiang A&F University, Hangzhou, China | [c] China Mobile (Hangzhou) Information Technology Co., Ltd. | [d] Zhejiang Qiushi Institute for Mathematical Medicine, Hangzhou, China
Correspondence: [*] Corresponding author: Dexing Kong. E-mail: dxkong@zju.edu.cn.
Abstract: BACKGROUND: Deformable image registration (DIR) plays an important role in many clinical tasks, and deep learning has made significant progress in DIR over the past few years. OBJECTIVE: To propose a fast multiscale unsupervised deformable image registration method (referred to as FMIRNet) for monomodal image registration. METHODS: We designed a multiscale fusion module that estimates large displacement fields by combining and refining the deformation fields at three scales. A spatial attention mechanism in the fusion module weights the displacement field pixel by pixel. In addition to mean square error (MSE), we added a structural similarity (SSIM) measure during the training phase to enhance the structural consistency between the deformed images and the fixed images. RESULTS: Our registration method was evaluated on EchoNet, CHAOS and SLIVER, and achieved performance improvements in terms of SSIM, NCC and NMI scores. Furthermore, we integrated FMIRNet into segmentation networks (FCN, UNet) to boost segmentation on a dataset with few manual annotations in our joint learning frameworks. The experimental results indicated that the joint segmentation methods improved Dice, HD and ASSD scores. CONCLUSIONS: Our proposed FMIRNet is effective for large deformation estimation, and its registration capability is generalizable and robust in joint registration and segmentation frameworks, generating reliable labels for training segmentation tasks.
Keywords: Deformable image registration, multiscale fusion, spatial attention, unsupervised learning, image segmentation
DOI: 10.3233/XST-240159
Journal: Journal of X-Ray Science and Technology, vol. Pre-press, no. Pre-press, pp. 1-14, 2024
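The abstract describes a training objective that combines mean square error with a structural similarity (SSIM) term. The paper does not give the exact formulation, so the following is a minimal NumPy sketch under assumptions: a simplified global SSIM (no sliding window, default stability constants) and a hypothetical weighting parameter `alpha` blending the two terms.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    # Simplified global SSIM computed over the whole image
    # (the standard formulation uses local sliding windows).
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def registration_loss(warped, fixed, alpha=0.5):
    # Hedged sketch of an MSE + SSIM objective: MSE penalizes
    # intensity differences, (1 - SSIM) penalizes structural
    # dissimilarity between the warped moving image and the fixed image.
    mse = ((warped - fixed) ** 2).mean()
    return alpha * mse + (1 - alpha) * (1.0 - ssim_global(warped, fixed))
```

For two identical images the loss is zero (MSE is 0 and SSIM is 1), and it grows as the warped image drifts from the fixed image in intensity or structure; the actual FMIRNet loss may weight or window these terms differently.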