Purpose
To evaluate an iterative learning approach for enhanced performance of robust artificial-neural-networks for k-space interpolation (RAKI) when only a limited amount of training data (auto-calibration signals [ACS]) is available for accelerated standard 2D imaging.
Methods
In a first step, the RAKI model was tailored to the case of a limited amount of training data. In the iterative learning approach (termed iterative RAKI [iRAKI]), the tailored RAKI model is initially trained using original and augmented ACS obtained from a linear parallel imaging reconstruction. Subsequently, the RAKI convolution filters are refined iteratively using original and augmented ACS extracted from the previous RAKI reconstruction. Evaluation was carried out on 200 retrospectively undersampled in vivo datasets from the fastMRI neuro database with different contrast settings.
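The iterative scheme described above (train on augmented ACS, reconstruct, re-extract ACS from the reconstruction, refine the filters) can be sketched in outline. This is a toy illustration only, not the authors' implementation: the stand-in "training" fits a single complex weight instead of RAKI's convolutional network, the augmentation is a simple noise perturbation, and all function names are ours.

```python
import numpy as np

def augment_acs(acs, n_aug=4, rng=None):
    """Stand-in for ACS data augmentation: noisy complex copies."""
    if rng is None:
        rng = np.random.default_rng(0)
    copies = [acs + 0.01 * (rng.standard_normal(acs.shape)
                            + 1j * rng.standard_normal(acs.shape))
              for _ in range(n_aug)]
    return [acs] + copies

def train_interpolator(acs_list):
    """Toy 'training': least-squares fit of one complex weight w
    mapping each ACS line to its neighbor (stand-in for fitting
    RAKI convolution filters on the calibration region)."""
    num = sum(np.vdot(a[:-1], a[1:]) for a in acs_list)
    den = sum(np.vdot(a[:-1], a[:-1]) for a in acs_list)
    return num / den

def reconstruct(kspace_us, w, R):
    """Fill missing k-space lines from the nearest acquired line."""
    out = kspace_us.copy()
    for i in range(out.shape[0]):
        if i % R != 0:                      # skipped phase-encode line
            out[i] = w * out[(i // R) * R]  # nearest acquired line
    return out

def iraki(kspace_us, acs, R, n_iters=3):
    """Iterative loop: train on (augmented) ACS, reconstruct, then
    re-extract ACS from the reconstruction and refine the weights."""
    w = train_interpolator(augment_acs(acs))
    recon = reconstruct(kspace_us, w, R)
    for _ in range(n_iters - 1):
        acs_new = recon[:acs.shape[0]]      # re-extract calibration region
        w = train_interpolator(augment_acs(acs_new))
        recon = reconstruct(kspace_us, w, R)
    return recon
```

The point of the structure is that after the first pass the calibration region is taken from the previous reconstruction rather than from the (scarce) originally acquired ACS, which is what allows the filters to be refined with effectively more training data.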
Results
For limited training data (18 and 22 ACS lines for R = 4 and R = 5, respectively), iRAKI outperforms standard RAKI by reducing residual artifacts and yields better noise suppression than standard parallel imaging, as underlined by quantitative reconstruction quality metrics. Additionally, iRAKI shows better performance than both GRAPPA and standard RAKI in the case of pre-scan calibration with varying contrast between training and undersampled data.
Conclusion
RAKI benefits from the iterative learning approach, which preserves the noise-suppression feature but requires less original training data for the accurate reconstruction of standard 2D images, thereby improving net acceleration.
Automated analysis of the inner ear anatomy in radiological data, instead of time-consuming manual assessment, is a worthwhile goal that could facilitate preoperative planning and clinical research. We propose a framework encompassing joint semantic segmentation of the inner ear and anatomical landmark detection of the helicotrema and the oval and round windows. A fully automated pipeline with a single, dual-headed volumetric 3D U-Net was implemented, trained and evaluated using manually labeled in-house datasets from cadaveric specimens (N = 43) and clinical practice (N = 9). The model robustness was further evaluated on three independent open-source datasets (N = 23 + 7 + 17 scans) consisting of cadaveric specimen scans. For the in-house datasets, Dice scores of 0.97 and 0.94, intersection-over-union scores of 0.94 and 0.89, and average Hausdorff distances of 0.065 and 0.14 voxel units were achieved. The landmark localization task was performed automatically with average localization errors of 3.3 and 5.2 voxel units. Robust, albeit reduced, performance was attained on the catalogue of three open-source datasets. Results of the ablation studies with 43 mono-parametric variations of the basal architecture and training protocol provided task-optimal parameters for both categories. Ablation studies against single-task variants of the basal architecture showed a clear performance benefit of coupling landmark localization with segmentation and a dataset-dependent performance impact on segmentation ability.
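The evaluation above reports Dice and intersection-over-union scores for the segmentation task. For reference, these two overlap metrics for binary masks can be computed as below; this is a generic NumPy sketch (function names are ours, not from the paper), and it assumes non-empty masks so the denominators are nonzero.

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou_score(pred, gt):
    """Intersection-over-union (Jaccard index): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union
```

The two metrics are monotonically related (IoU = Dice / (2 − Dice)), which is why the reported Dice of 0.97 and IoU of 0.94 are consistent with each other.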