Please use this identifier to cite or link to this item:
https://doi.org/10.21256/zhaw-25181
Publication type: | Conference poster |
Type of review: | Peer review (abstract) |
Title: | Deep learning-based simultaneous multi-phase deformable image registration of sparse 4D-CBCT |
Authors: | Herzig, Ivo; Paysan, Pascal; Scheib, Stefan; Züst, Alexander; Schilling, Frank-Peter; Montoya, Javier; Amirian, Mohammadreza; Stadelmann, Thilo; Eggenberger Hotz, Peter; Füchslin, Rudolf Marcel; Lichtensteiger, Lukas |
et al.: | No |
DOI: | 10.1002/mp.15769; 10.21256/zhaw-25181 |
Published in: | Medical Physics |
Volume(Issue): | 49 |
Issue: | 6 |
Page(s): | e325 |
Pages to: | e326 |
Conference details: | AAPM Annual Meeting, Washington, DC, USA, 10-14 July 2022 |
Issue Date: | 9-Jun-2022 |
Publisher / Ed. Institution: | American Association of Physicists in Medicine |
Language: | English |
Subjects: | Deep learning; Deformable image registration; CBCT; Medical imaging; Artificial intelligence |
Subject (DDC): | 006: Special computer methods; 616: Internal medicine and diseases |
Abstract: | Purpose: Respiratory-gated 4D-CBCT suffers from sparseness artefacts caused by the limited number of projections available for each respiratory phase/amplitude. These artefacts severely impact the deformable image registration methods used to extract motion information. We use deep learning-based methods to predict displacement vector fields (DVFs) from sparse 4D-CBCT images to alleviate the impact of sparseness artefacts. Methods: We trained U-Net-type convolutional neural network models to predict multiple (10) DVFs in a single forward pass, given multiple sparse, gated CBCT images and an optional artefact-free reference image as inputs. The predicted DVFs are used to warp the reference image to the different motion states, resulting in an artefact-free image for each state. The supervised training uses data generated by a motion simulation framework. The training dataset consists of 560 simulated 4D-CBCT images of 56 different patients; the generated data include fully sampled ground-truth images that are used to train the network. We compare the results of our method to pairwise image registration (reference image to single sparse image) using a) the deeds algorithm and b) VoxelMorph with image-pair inputs. Results: Our method clearly outperforms pairwise registration with the deeds algorithm alone: PSNR improved from 25.8 to 46.4 and SSIM from 0.9296 to 0.9999. In addition, the runtime of our learning-based method is orders of magnitude shorter (2 seconds instead of 10 minutes). Our results also indicate slightly improved performance compared to pairwise registration with VoxelMorph (delta-PSNR=1.2). We also trained a model that does not require the artefact-free reference image (which is usually not available) during inference; its results are only marginally compromised (delta-PSNR=-0.8). Conclusion: To the best of our knowledge, this is the first time CNNs have been used to predict multi-phase DVFs in a single forward pass. This enables novel applications such as 4D auto-segmentation, motion-compensated image reconstruction, motion analysis, and patient motion modeling. |
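The core idea of the abstract — one forward pass that yields a DVF per respiratory phase, with each DVF used to warp the reference image to that motion state — can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the real model is a full U-Net trained on simulated 4D-CBCT data, and the names (`MultiPhaseDVFNet`, `warp`) and the toy single-convolution head are assumptions made for brevity.

```python
import torch
import torch.nn.functional as F


def warp(reference, dvf):
    """Warp a volume with a dense displacement field.

    reference: (N, 1, D, H, W) image; dvf: (N, 3, D, H, W) displacements
    expressed in the normalized [-1, 1] coordinates used by grid_sample.
    """
    n = reference.shape[0]
    # identity sampling grid in normalized coordinates, shape (N, D, H, W, 3)
    identity = F.affine_grid(
        torch.eye(3, 4).unsqueeze(0).repeat(n, 1, 1),
        list(reference.shape), align_corners=False)
    grid = identity + dvf.permute(0, 2, 3, 4, 1)  # add predicted displacements
    return F.grid_sample(reference, grid, align_corners=False)


class MultiPhaseDVFNet(torch.nn.Module):
    """Toy stand-in for the U-Net: maps the stacked inputs (reference plus
    n_phases sparse gated images) to one 3-component DVF per phase."""

    def __init__(self, n_phases=10):
        super().__init__()
        self.n_phases = n_phases
        self.head = torch.nn.Conv3d(1 + n_phases, 3 * n_phases,
                                    kernel_size=3, padding=1)

    def forward(self, reference, sparse_phases):
        x = torch.cat([reference, sparse_phases], dim=1)
        dvfs = self.head(x)  # (N, 3 * n_phases, D, H, W)
        return dvfs.view(x.shape[0], self.n_phases, 3, *x.shape[2:])


# All 10 DVFs come from a single forward pass; the reference is then
# warped once per phase to obtain an artefact-free image per motion state.
net = MultiPhaseDVFNet(n_phases=10)
ref = torch.rand(1, 1, 8, 8, 8)        # artefact-free reference volume
sparse = torch.rand(1, 10, 8, 8, 8)    # 10 sparse, gated CBCT volumes
dvfs = net(ref, sparse)                # (1, 10, 3, 8, 8, 8)
warped = torch.stack([warp(ref, dvfs[:, p]) for p in range(10)], dim=1)
```

The reference-free variant mentioned in the abstract would simply drop the `reference` input channel and warp a reconstruction derived from the sparse images instead.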
URI: | https://digitalcollection.zhaw.ch/handle/11475/25181 |
Fulltext version: | Published version |
License (according to publishing contract): | Licence according to publishing contract |
Departement: | School of Engineering |
Organisational Unit: | Centre for Artificial Intelligence (CAI) Institute of Applied Mathematics and Physics (IAMP) |
Published as part of the ZHAW project: | DIR3CT: Deep Image Reconstruction through X-Ray Projection-based 3D Learning of Computed Tomography Volumes |
Appears in collections: | Publikationen School of Engineering |
Files in This Item:
File | Description | Size | Format
---|---|---|---
2022_Herzig-etal_Sparse4DCBCT-multiphase-DIR_AAPM-eposter.pdf | | 713.76 kB | Adobe PDF
Herzig, I., Paysan, P., Scheib, S., Züst, A., Schilling, F.-P., Montoya, J., Amirian, M., Stadelmann, T., Eggenberger Hotz, P., Füchslin, R. M., & Lichtensteiger, L. (2022). Deep learning-based simultaneous multi-phase deformable image registration of sparse 4D-CBCT [Conference poster]. Medical Physics, 49(6), e325–e326. https://doi.org/10.1002/mp.15769