Purpose
Artificial neural networks show promising performance in automatic segmentation of cardiac MRI. However, training requires large amounts of annotated data, and generalization to different vendors, field strengths, sequence parameters, and pathologies is limited. Transfer learning addresses this challenge, but specific recommendations regarding the type and amount of data required are lacking. In this study, we assess data requirements for transfer learning to experimental cardiac MRI at 7T, where the segmentation task can be challenging. In addition, we provide guidelines, tools, and annotated data to enable transfer learning approaches by other researchers and clinicians.
Methods
A publicly available segmentation model was used to annotate a publicly available data set. This labeled data set was subsequently used to train a neural network for segmentation of the left ventricle and myocardium in cardiac cine MRI. This network was then used as the starting point for transfer learning to 7T cine data of healthy volunteers (n = 22; 7873 images) by updating the pre-trained weights. Structured and random data subsets of different sizes were used to systematically assess the data requirements for successful transfer learning.
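The fine-tuning strategy described above (re-using pre-trained weights as the initialization and continuing training on the new domain) can be sketched in miniature. The model, data sizes, and learning rate below are illustrative placeholders only, not the segmentation network or data used in the study:

```python
import numpy as np

def train(X, y, w, lr=0.1, epochs=200):
    """Plain gradient descent on mean squared error for a linear model y ~ X @ w."""
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)

# "Source domain": abundant data, used to obtain pre-trained weights.
X_src = rng.normal(size=(500, 3))
y_src = X_src @ np.array([1.0, -2.0, 0.5])
w_pretrained = train(X_src, y_src, w=np.zeros(3), epochs=300)

# "Target domain": a related task (slightly shifted weights) with little data,
# loosely analogous to the smaller 7T data set.
w_target = np.array([1.2, -1.8, 0.4])
X_tgt = rng.normal(size=(20, 3))
y_tgt = X_tgt @ w_target

# Transfer learning: initialize from the pre-trained weights and keep updating
# them on the target data, versus training from scratch with the same budget.
w_transfer = train(X_tgt, y_tgt, w=w_pretrained.copy(), epochs=20)
w_scratch = train(X_tgt, y_tgt, w=np.zeros(3), epochs=20)

err_transfer = np.linalg.norm(w_transfer - w_target)
err_scratch = np.linalg.norm(w_scratch - w_target)
print(err_transfer < err_scratch)
```

With the same small target set and training budget, the fine-tuned weights start closer to the target solution and therefore end up more accurate, which is the effect the study quantifies at full scale.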
Results
Inconsistencies in the publicly available data set were corrected, labels were created, and a neural network was trained. On 7T cardiac cine images, the model pre-trained on public imaging data acquired at 1.5T and 3T achieved DICE\(_{LV}\) = 0.835 and DICE\(_{MY}\) = 0.670. Transfer learning using 7T cine data and ImageNet weight initialization improved model performance to DICE\(_{LV}\) = 0.900 and DICE\(_{MY}\) = 0.791. Using only end-systolic and end-diastolic images reduced the training data by 90%, with no negative impact on segmentation performance (DICE\(_{LV}\) = 0.908, DICE\(_{MY}\) = 0.805).
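The end-systolic/end-diastolic subsetting mentioned above can be sketched as follows; the data layout and phase labels are hypothetical placeholders, not the study's actual data format:

```python
# Sketch of the structured data reduction: keep only the end-systolic (ES)
# and end-diastolic (ED) frames of each cine series.

def select_es_ed(series):
    """Keep only the ES and ED frames of one cine series."""
    return [f for f in series["frames"] if f["phase"] in ("ES", "ED")]

# Toy dataset: 3 cine series with 25 frames each, one ES and one ED per series.
dataset = []
for s in range(3):
    frames = [{"phase": "other", "idx": i} for i in range(25)]
    frames[8]["phase"] = "ES"   # assumed end-systolic frame index
    frames[0]["phase"] = "ED"   # assumed end-diastolic frame index
    dataset.append({"series": s, "frames": frames})

subset = [f for series in dataset for f in select_es_ed(series)]
total = sum(len(series["frames"]) for series in dataset)
reduction = 1 - len(subset) / total
print(len(subset), total, round(reduction, 2))  # 6 of 75 frames kept, 92% reduction
```

With roughly 25 frames per cardiac cycle, keeping two frames per series yields the order-of-90% reduction reported above.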
Conclusions
This work demonstrates and quantifies the benefits of transfer learning for cardiac cine image segmentation. We provide practical guidelines for researchers planning transfer learning projects in cardiac MRI and make data, models, and code publicly available.
Purpose
To fully automatically derive quantitative parameters from late gadolinium enhancement (LGE) cardiac MR (CMR) in patients with myocardial infarction, and to investigate whether phase-sensitive reconstructions, magnitude reconstructions, or a combination of both yields the best segmentation accuracy.
Methods
In this retrospective single-center study, a convolutional neural network with a U-Net architecture embedded in a self-configuring framework ("nnU-Net") was trained for segmentation of the left ventricular myocardium and infarct zone in LGE-CMR. A database of 170 examinations from 78 patients with a history of myocardial infarction was assembled. The model was fitted separately using the phase-sensitive inversion recovery (PSIR) reconstruction, the magnitude reconstruction, or both contrasts as input channels.
Manual labelling served as ground truth. In a subset of 10 patients, the performance of the trained models was evaluated quantitatively: the Sørensen-Dice similarity coefficient (DSC) was determined, and infarct-zone volumes were compared with the manual ground truth using Pearson's r correlation and Bland-Altman analysis.
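The Sørensen-Dice coefficient used as the evaluation metric has a standard definition; a minimal NumPy sketch on toy masks (not the study's data) could look like:

```python
import numpy as np

def dice(pred, gt):
    """Sørensen-Dice similarity coefficient of two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 4x4 masks: the prediction overlaps 2 of the 3 ground-truth pixels.
gt = np.zeros((4, 4), dtype=bool)
gt[1, 1:4] = True            # 3 ground-truth pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1, 1:3] = True          # 2 predicted pixels, both inside the ground truth
print(dice(pred, gt))        # 2*2 / (2+3) = 0.8
```

A DSC of 1.0 indicates perfect overlap and 0.0 no overlap; the convention of returning 1.0 when both masks are empty is one common choice.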
Results
The model achieved high similarity coefficients for myocardium and scar tissue. No significant difference was observed between using PSIR, the magnitude reconstruction, or both contrasts as input (PSIR and MAG; mean DSC: 0.83 ± 0.03 for myocardium and 0.72 ± 0.08 for scars). Infarct-zone volumes correlated strongly between the manual and model-based approaches (r = 0.96), although the neural network significantly underestimated the volumes.
Conclusion
The self-configuring nnU-Net produces predictions in strong agreement with manual segmentation, demonstrating its potential as a tool for fully automatic quantitative evaluation of LGE-CMR.
Background
Functional lung MRI techniques are usually associated with time-consuming post-processing, of which manual lung segmentation represents the most cumbersome part. The aim of this study was to investigate whether deep learning-based segmentation of lung images acquired with a fast UTE sequence exploiting a stack-of-spirals trajectory provides sufficient accuracy for the calculation of functional parameters.
Methods
In this study, lung images were acquired in 20 patients suffering from cystic fibrosis (CF) and 33 healthy volunteers using a fast UTE sequence with a stack-of-spirals trajectory and a minimum echo time of 0.05 ms. A convolutional neural network was then trained for semantic lung segmentation using 17,713 2D coronal slices, each paired with a label obtained from manual segmentation. Subsequently, the network was applied to 4920 independent 2D test images, and the results were compared with manual segmentation using the Sørensen-Dice similarity coefficient (DSC) and the Hausdorff distance (HD). Lung volumes and fractional ventilation values calculated from both segmentations were compared using Pearson's correlation coefficient and Bland-Altman analysis.
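The agreement analysis described above combines a correlation measure with Bland-Altman statistics (bias and 95% limits of agreement). A minimal NumPy sketch, using hypothetical paired lung volumes rather than the study's measurements:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two 1D arrays."""
    return np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1]

def bland_altman(a, b):
    """Bland-Altman statistics: mean difference (bias) and 95% limits of agreement."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired lung volumes in litres (manual vs. automatic segmentation).
manual = np.array([3.1, 4.2, 2.8, 5.0, 3.6])
auto = np.array([3.0, 4.3, 2.7, 4.9, 3.6])

r = pearson_r(manual, auto)
bias, lo, hi = bland_altman(manual, auto)
print(round(r, 3), round(bias, 3))
```

Pearson's r captures how well the two methods track each other, while the Bland-Altman bias and limits of agreement quantify systematic and random differences in the same units as the measurement.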
To investigate generalizability to patients outside the CF cohort, in particular to those exhibiting larger consolidations inside the lung, the network was additionally applied to UTE images from four patients with pneumonia and one patient with lung cancer.
Results
The overall DSC for lung tissue was 0.967 ± 0.076 (mean ± standard deviation) and the HD was 4.1 ± 4.4 mm. Lung volumes derived from manual and deep learning-based segmentations, as well as fractional ventilation values, exhibited a high overall correlation (Pearson's correlation coefficient = 0.99 and 1.00, respectively). For the additional cohort with unseen pathologies/consolidations, the mean DSC was 0.930 ± 0.083, the HD was 12.9 ± 16.2 mm, and the mean difference in lung volume was 0.032 ± 0.048 L.
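The Hausdorff distance reported alongside the DSC measures the worst-case boundary deviation between two segmentations. A minimal NumPy sketch on toy 2D point sets (illustrative only; real implementations operate on segmentation contours or surfaces):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (one point per row)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    # For each point, distance to the nearest point of the other set; take the worst case.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy "contours": two small 2D point sets.
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff(A, B))  # the farthest point of B lies 2.0 away from its nearest point in A
```

Unlike the overlap-based DSC, the HD is sensitive to single outlier points on the segmentation boundary, which is why both metrics are typically reported together.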
Conclusions
Deep learning-based image segmentation in stack-of-spirals based lung MRI allows for accurate estimation of lung volumes and fractional ventilation values and promises to replace the time-consuming step of manual image segmentation in the future.