Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-27603
Full metadata record
DC Field | Value | Language
dc.contributor.author | Amirian, Mohammadreza | -
dc.contributor.author | Montoya-Zegarra, Javier A. | -
dc.contributor.author | Herzig, Ivo | -
dc.contributor.author | Eggenberger Hotz, Peter | -
dc.contributor.author | Lichtensteiger, Lukas | -
dc.contributor.author | Morf, Marco | -
dc.contributor.author | Züst, Alexander | -
dc.contributor.author | Paysan, Pascal | -
dc.contributor.author | Peterlik, Igor | -
dc.contributor.author | Scheib, Stefan | -
dc.contributor.author | Füchslin, Rudolf Marcel | -
dc.contributor.author | Stadelmann, Thilo | -
dc.contributor.author | Schilling, Frank-Peter | -
dc.date.accessioned | 2023-04-15T12:41:49Z | -
dc.date.available | 2023-04-15T12:41:49Z | -
dc.date.issued | 2023-03-30 | -
dc.identifier.issn | 0094-2405 | de_CH
dc.identifier.issn | 2473-4209 | de_CH
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/27603 | -
dc.description.abstract | Background: Cone beam computed tomography (CBCT) is often employed on radiation therapy treatment devices (linear accelerators) used in image-guided radiation therapy (IGRT). For each treatment session, it is necessary to obtain the image of the day in order to accurately position the patient and to enable adaptive treatment capabilities, including auto-segmentation and dose calculation. Reconstructed CBCT images often suffer from artifacts, in particular those induced by patient motion. Deep-learning-based approaches promise ways to mitigate such artifacts. Purpose: We propose a novel deep-learning-based approach with the goal of reducing motion-induced artifacts in CBCT images and improving image quality. It is based on supervised learning and includes neural network architectures employed as pre- and/or post-processing steps during CBCT reconstruction. Methods: Our approach is based on deep convolutional neural networks that complement the standard CBCT reconstruction, performed either with the analytical Feldkamp-Davis-Kress (FDK) method or with an iterative algebraic reconstruction technique (SART-TV). The neural networks, which are based on refined U-net architectures, are trained end-to-end in a supervised learning setup. Labeled training data are obtained by means of a motion simulation, which uses the two extreme phases of 4D CT scans, their deformation vector fields, and time-dependent amplitude signals as input. The trained networks are validated against ground truth using quantitative metrics, as well as by using real patient CBCT scans for a qualitative evaluation by clinical experts. Results: The presented novel approach is able to generalize to unseen data and yields significant reductions in motion-induced artifacts as well as improvements in image quality compared with existing state-of-the-art CBCT reconstruction algorithms (up to +6.3 dB and +0.19 improvements in peak signal-to-noise ratio, PSNR, and structural similarity index measure, SSIM, respectively), as evidenced by validation with an unseen test dataset and confirmed by a clinical evaluation on real patient scans (up to 74% preference for motion artifact reduction over standard reconstruction). Conclusions: For the first time, it is demonstrated, also by means of clinical evaluation, that inserting deep neural networks as pre- and post-processing plugins into the existing 3D CBCT reconstruction pipeline, trained end-to-end, yields significant improvements in image quality and reductions in motion artifacts. | de_CH
dc.language.iso | en | de_CH
dc.publisher | Wiley | de_CH
dc.relation.ispartof | Medical Physics | de_CH
dc.rights | http://creativecommons.org/licenses/by-nc-nd/4.0/ | de_CH
dc.subject | CBCT | de_CH
dc.subject | Deep learning | de_CH
dc.subject | Motion artifact | de_CH
dc.subject.ddc | 006: Spezielle Computerverfahren | de_CH
dc.subject.ddc | 616: Innere Medizin und Krankheiten | de_CH
dc.title | Mitigation of motion-induced artifacts in cone beam computed tomography using deep convolutional neural networks | de_CH
dc.type | Beitrag in wissenschaftlicher Zeitschrift | de_CH
dcterms.type | Text | de_CH
zhaw.departement | School of Engineering | de_CH
zhaw.organisationalunit | Centre for Artificial Intelligence (CAI) | de_CH
zhaw.organisationalunit | Institut für Angewandte Mathematik und Physik (IAMP) | de_CH
dc.identifier.doi | 10.1002/mp.16405 | de_CH
dc.identifier.doi | 10.21256/zhaw-27603 | -
dc.identifier.pmid | 36995003 | de_CH
zhaw.funding.eu | No | de_CH
zhaw.issue | 10 | de_CH
zhaw.originated.zhaw | Yes | de_CH
zhaw.pages.end | 6242 | de_CH
zhaw.pages.start | 6228 | de_CH
zhaw.publication.status | publishedVersion | de_CH
zhaw.volume | 50 | de_CH
zhaw.publication.review | Peer review (Publikation) | de_CH
zhaw.webfeed | Machine Perception and Cognition | de_CH
zhaw.webfeed | Intelligent Vision Systems | de_CH
zhaw.funding.zhaw | DIR3CT: Deep Image Reconstruction through X-Ray Projection-based 3D Learning of Computed Tomography Volumes | de_CH
zhaw.author.additional | No | de_CH
zhaw.display.portrait | Yes | de_CH
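Note: the abstract above reports image-quality gains as improvements in peak signal-to-noise ratio (PSNR, up to +6.3 dB) and structural similarity index measure (SSIM, up to +0.19) over standard reconstruction. The following minimal sketch illustrates how such per-volume improvements could be computed against a ground-truth volume. It assumes NumPy and scikit-image are available and uses synthetic volumes in place of real CBCT data; the names ground_truth, standard_recon, and network_recon are purely illustrative and are not taken from the paper.

```python
# Hedged sketch: PSNR/SSIM improvement of a network-corrected reconstruction
# over a standard reconstruction, both compared against the same ground truth.
# Synthetic volumes stand in for CBCT data; all variable names are illustrative.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)

# Toy ground-truth volume in [0, 1]; a real case would use the motion-free reference.
ground_truth = rng.random((64, 64, 64)).astype(np.float32)

# Stand-ins for a standard (FDK/SART-TV) reconstruction with motion artifacts
# and a deep-learning post-processed reconstruction with smaller residual error.
standard_recon = np.clip(ground_truth + 0.10 * rng.standard_normal(ground_truth.shape), 0, 1)
network_recon = np.clip(ground_truth + 0.03 * rng.standard_normal(ground_truth.shape), 0, 1)

def report(name, recon):
    # data_range=1.0 because the synthetic volumes are scaled to [0, 1].
    psnr = peak_signal_noise_ratio(ground_truth, recon, data_range=1.0)
    ssim = structural_similarity(ground_truth, recon, data_range=1.0)
    print(f"{name}: PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
    return psnr, ssim

p0, s0 = report("standard reconstruction", standard_recon)
p1, s1 = report("network-corrected", network_recon)
print(f"improvement: {p1 - p0:+.2f} dB PSNR, {s1 - s0:+.3f} SSIM")
```

In the setting described by the abstract, the same comparison would be run on held-out test volumes reconstructed with and without the pre-/post-processing networks.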
Appears in collections: Publikationen School of Engineering

Amirian, M., Montoya-Zegarra, J. A., Herzig, I., Eggenberger Hotz, P., Lichtensteiger, L., Morf, M., Züst, A., Paysan, P., Peterlik, I., Scheib, S., Füchslin, R. M., Stadelmann, T., & Schilling, F.-P. (2023). Mitigation of motion-induced artifacts in cone beam computed tomography using deep convolutional neural networks. Medical Physics, 50(10), 6228–6242. https://doi.org/10.1002/mp.16405
Amirian, M. et al. (2023) ‘Mitigation of motion-induced artifacts in cone beam computed tomography using deep convolutional neural networks’, Medical Physics, 50(10), pp. 6228–6242. Available at: https://doi.org/10.1002/mp.16405.
M. Amirian et al., “Mitigation of motion-induced artifacts in cone beam computed tomography using deep convolutional neural networks,” Medical Physics, vol. 50, no. 10, pp. 6228–6242, Mar. 2023, doi: 10.1002/mp.16405.
AMIRIAN, Mohammadreza, Javier A. MONTOYA-ZEGARRA, Ivo HERZIG, Peter EGGENBERGER HOTZ, Lukas LICHTENSTEIGER, Marco MORF, Alexander ZÜST, Pascal PAYSAN, Igor PETERLIK, Stefan SCHEIB, Rudolf Marcel FÜCHSLIN, Thilo STADELMANN and Frank-Peter SCHILLING, 2023. Mitigation of motion-induced artifacts in cone beam computed tomography using deep convolutional neural networks. Medical Physics. 30 March 2023. Vol. 50, No. 10, pp. 6228–6242. DOI 10.1002/mp.16405
Amirian, Mohammadreza, Javier A. Montoya-Zegarra, Ivo Herzig, Peter Eggenberger Hotz, Lukas Lichtensteiger, Marco Morf, Alexander Züst, et al. 2023. “Mitigation of Motion-Induced Artifacts in Cone Beam Computed Tomography Using Deep Convolutional Neural Networks.” Medical Physics 50 (10): 6228–42. https://doi.org/10.1002/mp.16405.
Amirian, Mohammadreza, et al. “Mitigation of Motion-Induced Artifacts in Cone Beam Computed Tomography Using Deep Convolutional Neural Networks.” Medical Physics, vol. 50, no. 10, Mar. 2023, pp. 6228–42, https://doi.org/10.1002/mp.16405.

