Institute: Deutsches Zentrum für Herzinsuffizienz (DZHI)
Purpose
Image acquisition and subsequent manual analysis of cardiac cine MRI are time-consuming. The purpose of this study was to train and evaluate a 3D artificial neural network for semantic segmentation of radially undersampled cardiac MRI to accelerate both scan time and postprocessing.
Methods
A database of Cartesian short-axis MR images of the heart (148,500 images, 484 examinations) was assembled from an openly accessible database, and radial undersampling was simulated. A 3D U-Net architecture was pretrained for segmentation of undersampled spatiotemporal cine MRI. Transfer learning was then performed using samples from a second database, comprising 108 non-Cartesian radial cine series of the midventricular myocardium, to optimize the performance for authentic data. The performance was evaluated for different levels of undersampling by the Dice similarity coefficient (DSC) with respect to reference labels, as well as by deriving ventricular volumes and myocardial masses.
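The Dice similarity coefficient used for evaluation measures the overlap between a predicted label mask and a reference mask. A minimal sketch (the function and example masks are illustrative, not taken from the study's code):

```python
def dice_coefficient(pred, ref):
    """Dice similarity coefficient between two binary masks (sequences of 0/1)."""
    intersection = sum(p * r for p, r in zip(pred, ref))
    total = sum(pred) + sum(ref)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Example: two partially overlapping masks
pred = [0, 1, 1, 1, 0]
ref  = [0, 0, 1, 1, 1]
print(dice_coefficient(pred, ref))  # → 2 * 2 / (3 + 3) ≈ 0.667
```

A DSC of 1.0 means perfect overlap; the values reported below (e.g. 0.95 for the left ventricle) approach typical inter-observer agreement.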
Results
Without transfer learning, the pretrained model performed moderately on true radial data [maximum number of projections tested, P = 196; DSC = 0.87 (left ventricle), DSC = 0.76 (myocardium), and DSC = 0.64 (right ventricle)]. After transfer learning with authentic data, the predictions reached human-level quality even for high undersampling rates (P = 33; DSC = 0.95, 0.87, and 0.93), with no significant difference compared with segmentations derived from fully sampled data.
Conclusion
A 3D U-Net architecture can be used for semantic segmentation of radially undersampled cine acquisitions, achieving performance comparable to that of human experts on fully sampled data. This approach can jointly accelerate time-consuming cine image acquisition and cumbersome manual image analysis.
Purpose
Artificial neural networks show promising performance in automatic segmentation of cardiac MRI. However, training requires large amounts of annotated data, and generalization to different vendors, field strengths, sequence parameters, and pathologies is limited. Transfer learning addresses this challenge, but specific recommendations regarding the type and amount of data required are lacking. In this study, we assess data requirements for transfer learning to experimental cardiac MRI at 7T, where the segmentation task can be challenging. In addition, we provide guidelines, tools, and annotated data to enable transfer learning approaches by other researchers and clinicians.
Methods
A publicly available segmentation model was used to annotate a publicly available data set. This labeled data set was subsequently used to train a neural network for segmentation of the left ventricle and myocardium in cardiac cine MRI. The network was then used as the starting point for transfer learning to 7T cine data of healthy volunteers (n = 22; 7873 images) by updating the pre-trained weights. Structured and random data subsets of different sizes were used to systematically assess data requirements for successful transfer learning.
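Transfer learning here means initializing from pre-trained weights and continuing training on the (smaller) target data set rather than starting from scratch. The idea can be sketched with a deliberately tiny one-parameter model (purely illustrative; the study fine-tunes a full neural network):

```python
def fit(w, data, lr=0.1, steps=100):
    """Plain gradient descent on mean squared error for the model y = w * x."""
    for _ in range(steps):
        grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Source" task: plentiful data with true slope 2.0 -> pre-trained weight
source = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0, 4.0)]
w_pretrained = fit(0.0, source)

# "Target" task: only two samples with true slope 2.1.
# Fine-tuning starts from the pre-trained weight instead of from scratch,
# so far fewer update steps suffice to adapt to the new distribution.
target = [(1.0, 2.1), (2.0, 4.2)]
w_finetuned = fit(w_pretrained, target, steps=20)
```

The same principle drives the data-requirement experiments: the closer the pre-trained weights are to the target task, the less target data is needed.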
Results
Inconsistencies in the publicly available data set were corrected, labels were created, and a neural network was trained. On 7T cardiac cine images, the model pre-trained on public imaging data acquired at 1.5T and 3T achieved DICE\(_{LV}\) = 0.835 and DICE\(_{MY}\) = 0.670. Transfer learning using 7T cine data and ImageNet weight initialization improved model performance to DICE\(_{LV}\) = 0.900 and DICE\(_{MY}\) = 0.791. Using only end-systolic and end-diastolic images reduced the training data by 90%, with no negative impact on segmentation performance (DICE\(_{LV}\) = 0.908, DICE\(_{MY}\) = 0.805).
Conclusions
This work demonstrates and quantifies the benefits of transfer learning for cardiac cine image segmentation. We provide practical guidelines for researchers planning transfer learning projects in cardiac MRI and make data, models, and code publicly available.
Purpose
To fully automatically derive quantitative parameters from late gadolinium enhancement (LGE) cardiac MR (CMR) in patients with myocardial infarction, and to investigate whether phase-sensitive reconstructions, magnitude reconstructions, or a combination of both yields the best segmentation accuracy.
Methods
In this retrospective single-center study, a convolutional neural network with a U-Net architecture and a self-configuring framework ("nnU-Net") was trained for segmentation of the left ventricular myocardium and infarct zone in LGE-CMR. A database of 170 examinations from 78 patients with a history of myocardial infarction was assembled. The model was fitted separately using the phase-sensitive inversion recovery (PSIR) reconstruction, the magnitude reconstruction, or both contrasts as input channels.
Manual labelling served as ground truth. In a subset of 10 patients, the performance of the trained models was evaluated and quantitatively compared against the manual ground truth by the Sørensen–Dice similarity coefficient (DSC) and by infarct-zone volumes, using Pearson's r correlation and Bland–Altman analysis.
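The volume-agreement statistics can be sketched as follows (the volume values are hypothetical; only the formulas reflect the described analysis):

```python
import statistics

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical infarct-zone volumes (mL): manual vs. automatic segmentation
manual    = [12.0, 25.0, 18.0, 30.0, 22.0]
automatic = [11.0, 23.5, 17.0, 28.0, 21.0]
print(pearson_r(manual, automatic))    # close to 1: strong correlation
print(bland_altman(manual, automatic)) # positive bias: model underestimates
```

A high r with a nonzero bias is exactly the pattern reported below: strong correlation despite systematic underestimation by the network.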
Results
The model achieved high similarity coefficients for myocardium and scar tissue. No significant difference was observed between using the PSIR reconstruction, the magnitude reconstruction, or both contrasts as input (PSIR and MAG; mean DSC: 0.83 ± 0.03 for myocardium and 0.72 ± 0.08 for scars). Infarct-zone volumes correlated strongly between the manual and model-based approaches (r = 0.96), although the neural network significantly underestimated the volumes.
Conclusion
The self-configuring nnU-Net achieves predictions in strong agreement with manual segmentation, demonstrating its potential as a tool for fully automatic quantitative evaluation of LGE-CMR.
Purpose
The aim of this study was to compare the wave‐CAIPI (controlled aliasing in parallel imaging) trajectory with Cartesian sampling for accelerated free‐breathing 4D lung MRI.
Methods
The wave‐CAIPI k‐space trajectory was implemented in a respiratory self‐gated 3D spoiled gradient echo pulse sequence. Trajectory correction based on the gradient system transfer function was applied, and images were reconstructed using an iterative conjugate gradient SENSE (CG SENSE) algorithm. Five healthy volunteers and one patient with squamous cell carcinoma in the lung were examined on a clinical 3T scanner using both sampling schemes. For quantitative comparison of wave‐CAIPI and standard Cartesian imaging, the normalized mutual information and the RMS error between retrospectively accelerated acquisitions and their respective references were calculated. The SNR ratios were investigated in a phantom study.
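Both comparison metrics can be sketched for discrete (binned) image intensities as follows (a minimal illustration; the binning and implementation details of the study may differ). The normalized mutual information used here is the common variant NMI = (H(A) + H(B)) / H(A, B), which is maximal (2.0) for identical images:

```python
import math
from collections import Counter

def nmi(a, b):
    """Normalized mutual information NMI = (H(A) + H(B)) / H(A, B) for two
    equal-length sequences of discrete (binned) intensity values.
    Assumes the inputs are not constant, so that H(A, B) > 0."""
    n = len(a)
    def entropy(counts):
        return -sum(c / n * math.log(c / n) for c in counts.values())
    return (entropy(Counter(a)) + entropy(Counter(b))) / entropy(Counter(zip(a, b)))

def rmse(a, b):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Identical images: maximal NMI (2.0 with this definition) and zero RMSE
img = [0, 0, 1, 1, 2, 2]
print(nmi(img, img), rmse(img, img))  # → 2.0 0.0
```

Higher NMI between an accelerated acquisition and its reference thus indicates less information loss due to undersampling, while a lower RMSE indicates smaller intensity deviations.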
Results
The obtained normalized mutual information values indicate a lower information loss due to acceleration for the wave‐CAIPI approach. Average normalized mutual information values of the wave‐CAIPI acquisitions were 10% higher, compared with Cartesian sampling. Furthermore, the RMS error of the wave‐CAIPI technique was lower by 19% and the SNR was higher by 14%. Especially for short acquisition times (down to 1 minute), the undersampled Cartesian images showed an increased artifact level, compared with wave‐CAIPI.
Conclusion
The application of the wave‐CAIPI technique to 4D lung MRI reduces undersampling artifacts in comparison to a Cartesian acquisition of the same scan time. The benefit of wave‐CAIPI sampling can therefore be used either to shorten examinations or to enhance the image quality of undersampled 4D lung acquisitions at constant scan time.