A Knowledge-Based Deep Learning Model of Synthetic CT Generation Using a Single MR Volume for Stereotactic Radiosurgery (SRS)



Abstract

Objectives:

Current stereotactic radiosurgery (SRS) workflows require acquiring both magnetic resonance (MR) and computed tomography (CT) images for anatomy contouring and treatment planning. This dual acquisition complicates the simulation process and introduces delays between simulation and treatment, which is particularly problematic for urgent cases. This study seeks to mitigate these challenges by introducing and evaluating a novel knowledge-based deep learning model that generates synthetic CT volumes from a single MR volume, enabling immediate CT-based dose calculation once MR-based anatomy contouring is complete. This approach could streamline the simulation process, offering a same-day “sim and treat” workflow for SRS.

Methods:

Our proposed model incorporates a novel knowledge-assimilation feature. It utilizes an auxiliary volume of healthy-brain CT images sourced from the Visible Human Project, serving as a foundational knowledge base that supplies CT-specific brain anatomical information. The backbone of the proposed model is a modified Residual U-Net (ResUNet), optimized specifically for cross-modality image transformation between MR and CT scans. This deep learning network takes the knowledge-base CT volume and a patient-specific, high-resolution T1+c MR volume as inputs and generates a patient-specific synthetic CT volume as the output.
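To make the dual-input design concrete, the following is a minimal PyTorch sketch of a residual U-Net that concatenates a patient MR slice with the corresponding knowledge-base CT slice as a two-channel input. The layer counts, channel widths, and 2D slice-wise formulation are illustrative assumptions; the abstract does not specify the authors' exact architecture.

```python
# Minimal sketch of the dual-input ResUNet idea described above.
# NOT the authors' implementation: layer counts, channel widths,
# and the 2D slice-wise formulation are illustrative assumptions.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convolutions with a residual (identity or 1x1) shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

class ResUNet(nn.Module):
    """U-Net with residual blocks; 2 input channels
    (patient MR + knowledge-base CT), 1 output channel (synthetic CT)."""
    def __init__(self, in_ch=2, out_ch=1, widths=(32, 64, 128, 256)):
        super().__init__()
        self.downs = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.downs.append(ResBlock(ch, w))
            ch = w
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = ResBlock(widths[-1], widths[-1] * 2)
        self.ups = nn.ModuleList()
        self.up_blocks = nn.ModuleList()
        ch = widths[-1] * 2
        for w in reversed(widths):
            self.ups.append(nn.ConvTranspose2d(ch, w, 2, stride=2))
            self.up_blocks.append(ResBlock(w * 2, w))  # concat with skip
            ch = w
        self.head = nn.Conv2d(widths[0], out_ch, 1)

    def forward(self, x):
        skips = []
        for block in self.downs:
            x = block(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, block, skip in zip(self.ups, self.up_blocks, reversed(skips)):
            x = up(x)
            x = block(torch.cat([x, skip], dim=1))
        return self.head(x)

# Example: one axial MR/knowledge-base-CT slice pair -> one synthetic-CT slice.
model = ResUNet()
mr_slice = torch.randn(1, 1, 256, 256)     # patient T1+c MR
kb_ct_slice = torch.randn(1, 1, 256, 256)  # knowledge-base CT
synthetic_ct = model(torch.cat([mr_slice, kb_ct_slice], dim=1))
print(synthetic_ct.shape)  # torch.Size([1, 1, 256, 256])
```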

The model was trained, validated, and tested using data from 114 patients treated at our institution, each with a co-registered (fused) pair of T1+c MR and CT volumes. The 114 cases were split into 70 for training, 30 for validation, and 14 for testing. Model performance was gauged by comparing the synthetic CT volumes produced by our method to the ground-truth CT volumes using peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and mean absolute error (MAE).
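As an illustration of this evaluation protocol, the sketch below computes the three reported metrics for one synthetic-CT/ground-truth pair using scikit-image. The HU data range and the absence of a body mask are assumptions; the abstract does not state how these were defined.

```python
# Illustrative computation of the three reported metrics on one volume pair.
# The HU window (data_range) and the lack of masking are assumptions.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_sct(sct_hu: np.ndarray, ct_hu: np.ndarray, data_range=2000.0):
    """Compare a synthetic CT volume to the ground-truth CT (both in HU)."""
    mae = float(np.mean(np.abs(sct_hu - ct_hu)))                          # HU
    psnr = peak_signal_noise_ratio(ct_hu, sct_hu, data_range=data_range)  # dB
    ssim = structural_similarity(ct_hu, sct_hu, data_range=data_range)
    return {"PSNR_dB": psnr, "SSIM": ssim, "MAE_HU": mae}

# Example with dummy data (real use would load registered CT/sCT volumes):
ct = np.random.uniform(-1000, 1000, size=(64, 128, 128))
sct = ct + np.random.normal(0, 20, size=ct.shape)
print(evaluate_sct(sct, ct))
```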

Results:

Quantitative assessment on the 14 test cases yielded an average PSNR of 35.45 dB, SSIM of 0.97, and MAE of 19.04 HU, with small standard deviations (1.40 dB, 0.004, and 2.21 HU, respectively). These metrics demonstrate the model's ability to reproduce CT contrast with quantitatively accurate HU values. Moreover, leveraging the knowledge-base CT volume produced synthetic CT volumes with high structural accuracy (evidenced by SSIM scores close to 1) and negligible geometric distortion, supporting their suitability for precise dose calculation.

Conclusion(s):

This innovative knowledge-based deep learning model can adeptly produce synthetic CT volumes from a single T1+c MR volume, benefiting from the anatomical information in a knowledge-base CT volume. Given these promising results, the model warrants further pre-clinical investigation, including dosimetric benchmarking, before clinical study of a same-day MR-only SRS workflow.


Author Information

Ke Lu (Corresponding Author)

Radiation Oncology, Duke University Medical Center, Durham, USA

Fang-Fang Yin

Department of Radiation Oncology, Duke University Health System, Durham, USA

John P. Kirkpatrick

Department of Radiation Oncology, Duke University Health System, Durham, USA

Chunhao Wang

Department of Radiation Oncology, Duke University Medical Center, Durham, USA

