Purpose
To evaluate whether a deep learning model (DLM) could increase the detection sensitivity of radiologists for intracranial aneurysms on CT angiography (CTA) in aneurysmal subarachnoid hemorrhage (aSAH).
Methods
Three different DLMs were trained on CTA datasets of 68 aSAH patients with 79 aneurysms, and their outputs were combined by ensemble learning (DLM-Ens). The DLM-Ens was evaluated on an independent test set of 104 aSAH patients with 126 aneurysms (mean volume 129.2 ± 185.4 mm³, 13.0% in the posterior circulation), which were determined in consensus by two radiologists and one neurosurgeon using CTA and digital subtraction angiography scans. The CTA scans of the test set were then presented to three blinded radiologists (reader 1: 13, reader 2: 4, and reader 3: 3 years of experience in diagnostic neuroradiology), who assessed them individually for aneurysms. The detection sensitivities for aneurysms of the readers with and without the assistance of the DLM were compared.
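The ensemble step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the three models emit aligned per-voxel aneurysm probabilities that are averaged and thresholded; the averaging rule, the 0.5 cutoff, and the toy values are all assumptions.

```python
# Hypothetical sketch of combining three detector outputs (DLM-Ens):
# average the models' per-voxel probabilities and threshold the mean.
# The combination rule and threshold are illustrative assumptions.

def ensemble_probabilities(prob_maps, threshold=0.5):
    """Average aligned probability maps and return a binary detection mask."""
    n_models = len(prob_maps)
    mean_map = [sum(vals) / n_models for vals in zip(*prob_maps)]
    return [p >= threshold for p in mean_map]

# Three toy 1-D "probability maps" over five voxels:
model_a = [0.9, 0.2, 0.7, 0.1, 0.6]
model_b = [0.8, 0.1, 0.4, 0.2, 0.7]
model_c = [0.7, 0.3, 0.4, 0.1, 0.8]

mask = ensemble_probabilities([model_a, model_b, model_c])
```

Averaging tends to suppress spurious detections made by only one of the three models while retaining findings the models agree on.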
Results
In the test set, the detection sensitivity of the DLM-Ens (85.7%) was comparable to that of the radiologists (reader 1: 91.2%, reader 2: 86.5%, and reader 3: 86.5%; Fleiss κ of 0.502). DLM assistance significantly increased the detection sensitivity (reader 1: 97.6%, reader 2: 97.6%, and reader 3: 96.0%; overall P = .024; Fleiss κ of 0.878), especially for secondary aneurysms (88.2% of the additional aneurysms provided by the DLM).
Conclusion
Deep learning significantly improved the detection sensitivity of radiologists for aneurysms in aSAH, especially for secondary aneurysms. It therefore represents a valuable adjunct for physicians to establish an accurate diagnosis in order to optimize patient treatment.
Abstract
Cell lineage decisions occur in three-dimensional spatial patterns that are difficult to identify by eye. There is an ongoing effort to replicate such patterns using mathematical modeling. One approach uses long ranging cell-cell communication to replicate common spatial arrangements like checkerboard and engulfing patterns. In this model, the cell-cell communication has been implemented as a signal that disperses throughout the tissue. On the other hand, machine learning models have been developed for pattern recognition and pattern reconstruction tasks. We combined synthetic data generated by the mathematical model with spatial summary statistics and deep learning algorithms to recognize and reconstruct cell fate patterns in organoids of mouse embryonic stem cells. Application of Moran’s index and pair correlation functions for in vitro and synthetic data from the model showed local clustering and radial segregation. To assess the patterns as a whole, a graph neural network was developed and trained on synthetic data from the model. Application to in vitro data predicted a low signal dispersion value. To test this result, we implemented a multilayer perceptron for the prediction of a given cell fate based on the fates of the neighboring cells. The results show a 70% accuracy of cell fate imputation based on the nine nearest neighbors of a cell. Overall, our approach combines deep learning with mathematical modeling to link cell fate patterns with potential underlying mechanisms.
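As a hedged illustration of the spatial summary statistics mentioned above, Moran's index for cell fates on a neighborhood graph can be computed as below. The ring-graph layout and fate values are toy assumptions, not the organoid data; positive values indicate that like fates cluster together.

```python
# Sketch: global Moran's I for values on a graph given as an adjacency dict.
# Cells are modelled as nodes on a ring; fates and neighbours are invented.

def morans_i(values, neighbors):
    """Moran's I = (N / W) * sum_ij w_ij d_i d_j / sum_i d_i^2."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_total = sum(len(nbrs) for nbrs in neighbors.values())
    num = sum(dev[i] * dev[j] for i, nbrs in neighbors.items() for j in nbrs)
    den = sum(d * d for d in dev)
    return (n / w_total) * (num / den)

# Six cells on a ring, fates clustered in two blocks of three:
fates = [1, 1, 1, 0, 0, 0]
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
i_value = morans_i(fates, ring)  # positive: local clustering of fates
```

For the clustered toy pattern the index is positive (1/3 here); a checkerboard arrangement on the same ring would give a negative value.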
Author summary
Mammalian embryo development relies on organized differentiation of stem cells into different lineages. Particularly at the early stages of embryogenesis, cells of different fates form three-dimensional spatial patterns that are difficult to identify by eye. Pattern quantification and mathematical modeling have produced initial insights into potential mechanisms for the cell fate arrangements. However, these approaches have relied on classifications of the patterns, such as inside-out or random, or used summary statistics such as pair correlation functions or cluster radii. Deep neural networks allow characterizing patterns directly. Since the tissue context can be readily represented by a graph, we implemented a graph neural network to characterize the patterns of embryonic stem cell organoids as a whole. In addition, we implemented a multilayer perceptron model to reconstruct the fate of a given cell based on its neighbors. To train and test the models, we used synthetic data generated by our mathematical model for cell-cell communication. This interplay of deep learning and mathematical modeling, in combination with summary statistics, allowed us to identify a potential mechanism for cell fate determination in mouse embryonic stem cells. Our results agree with a mechanism in which dispersion of the intercellular signal links a cell’s fate to those of its local neighborhood.
Pilot study of a new freely available computer-aided polyp detection system in clinical practice
(2022)
Purpose
Computer-aided polyp detection (CADe) systems for colonoscopy have already been shown to increase the adenoma detection rate (ADR) in randomized clinical trials. These commercially available closed systems often do not allow data collection and algorithm optimization, for example regarding the use of different endoscopy processors. Here, we present the first clinical experience with a CADe system that is publicly available for research purposes.
Methods
We developed an end-to-end data acquisition and polyp detection system named EndoMind. Examiners at four centers, equipped with four different endoscopy processors, used EndoMind during their clinical routine. Detected polyps, the ADR, the time to first detection of a polyp (TFD), and system usability were evaluated (NCT05006092).
Results
During 41 colonoscopies, EndoMind detected all 29 adenomas among all 66 polyps, resulting in an ADR of 41.5%. The median TFD was 130 ms (95% CI, 80–200 ms), while a median false positive rate of 2.2% (95% CI, 1.7–2.8%) was maintained. The four participating centers rated the system on the System Usability Scale with a median of 96.3 (95% CI, 70–100).
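The ADR reported above is simply the share of colonoscopies in which at least one adenoma was found. A minimal sketch, assuming illustrative per-procedure counts (17 of 41 positive procedures reproduces the reported 41.5%, but the actual distribution of adenomas per procedure is an assumption):

```python
# Sketch: adenoma detection rate = fraction of procedures with >= 1 adenoma.
# The per-procedure counts are invented; only 17/41 matches the abstract.

def adenoma_detection_rate(adenomas_per_procedure):
    positives = sum(1 for n in adenomas_per_procedure if n >= 1)
    return positives / len(adenomas_per_procedure)

# 41 colonoscopies, 17 of which yielded at least one adenoma:
counts = [1] * 17 + [0] * 24
adr = adenoma_detection_rate(counts)
```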
Conclusion
EndoMind’s ability to acquire data and detect polyps in real time, together with its high usability score, indicates substantial practical value for research and clinical practice. Still, the clinical benefit, measured by the ADR, has to be determined in a prospective randomized controlled trial.
A circum-Arctic monitoring framework for quantifying annual erosion rates of permafrost coasts
(2023)
This study demonstrates a circum-Arctic monitoring framework for quantifying the annual change of permafrost-affected coasts at a spatial resolution of 10 m. Frequent cloud cover and challenging lighting conditions, including polar night, limit the usability of optical data in Arctic regions. For this reason, Synthetic Aperture Radar (SAR) data in the form of annual median and standard deviation (sd) Sentinel-1 (S1) backscatter images, covering the months June–September for the years 2017–2021, were computed. The annual composites for the year 2020 were utilized as input for the generation of a high-quality coastline product via a Deep Learning (DL) workflow, covering 161,600 km of the Arctic coastline. The annual S1 composites for the years 2017 and 2021 were employed as input data for the coastal change investigation based on Change Vector Analysis (CVA), with the generated DL coastline product serving as a reference. Maximum erosion rates of up to 67 m per year were observed based on 400 m coastline segments. Overall, the highest average annual erosion was found for the United States (Alaska) with 0.75 m per year, followed by Russia with 0.62 m per year. Of all seas covered in this study, the Beaufort Sea featured the strongest average annual coastal erosion at 1.12 m per year. Several quality layers are provided for both the DL coastline product and the CVA-based coastal change analysis to assess the applicability and accuracy of the output products. The predicted coastal change rates show good agreement with findings published in previous literature. The proposed methods and data may act as a valuable tool for future analyses of permafrost loss and carbon emissions in Arctic coastal environments.
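The CVA step can be sketched as follows: each pixel carries a feature vector (here, annual median and sd backscatter), and the change magnitude is the Euclidean distance between the 2017 and 2021 vectors. The backscatter values and the change threshold are made-up assumptions, not values from the study.

```python
# Illustrative Change Vector Analysis (CVA): per-pixel Euclidean distance
# between two dates' feature vectors. All numbers here are toy assumptions.
import math

def cva_magnitude(pix_t0, pix_t1):
    """Euclidean change-vector magnitude between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pix_t0, pix_t1)))

# (median, sd) backscatter per pixel for two toy pixels:
t2017 = [(-12.0, 1.5), (-8.0, 0.9)]
t2021 = [(-12.1, 1.6), (-11.0, 2.1)]

magnitudes = [cva_magnitude(p0, p1) for p0, p1 in zip(t2017, t2021)]
changed = [m > 0.5 for m in magnitudes]  # flag pixels with likely change
```

The second pixel's large magnitude flags it as changed, as would be expected where land has been replaced by water between the two dates.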
Artificial intelligence (AI) has already arrived in many areas of our lives and, because of the increasing availability of computing power, can now be used for complex tasks in medicine and dentistry. This is reflected by an exponential increase in scientific publications aiming to integrate AI into everyday clinical routines. Applications of AI in orthodontics are already manifold and range from the identification of anatomical or pathological structures and reference points in imaging to the support of complex decision-making in orthodontic treatment planning. The aim of this article is to give the reader an overview of the current state of the art regarding applications of AI in orthodontics and to provide a perspective for the use of such AI solutions in clinical routine. For this purpose, we present various use cases for AI in orthodontics for which research is already available. Considering the current scientific progress, it is not unreasonable to assume that AI will become an integral part of orthodontic diagnostics and treatment planning in the near future. Although AI is unlikely to replace the knowledge and experience of human experts in the foreseeable future, it will probably be able to support practitioners, thus serving as a quality-assuring component in orthodontic patient care.
Periodontitis is one of the most prevalent diseases worldwide. The degree of radiographic bone loss can be used to assess the course of therapy or the severity of the disease. Since automated bone loss detection has many benefits, our goal was to develop a multi-object detection algorithm based on artificial intelligence that would be able to detect and quantify radiographic bone loss on standard two-dimensional radiographic images in the maxillary posterior region. This study was conducted by combining three recent online databases and validating the results using an external validation dataset from our organization. The final dataset comprised 1414 images for training and testing and 341 for external validation. We applied a Keypoint RCNN with a ResNet-50-FPN backbone network for both bounding-box and keypoint detection. The intersection over union (IoU) and the object keypoint similarity (OKS) were used for model evaluation. The evaluation of the bounding-box metrics showed moderate overlap with the ground truth, revealing an average precision of up to 0.758. The average precision and recall over all five folds were 0.694 and 0.611, respectively. The mean average precision and recall for the keypoint detection were 0.632 and 0.579, respectively. Despite using only a small and heterogeneous set of images for training, our results indicate that the algorithm is able to learn the objects of interest, although without sufficient accuracy, owing to the limited number of images and the large amount of information contained in panoramic radiographs. Considering the widespread availability of panoramic radiographs as well as the increasing use of online databases, the presented model can be further improved in the future to facilitate its implementation in clinics.
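The IoU metric used for the bounding-box evaluation above can be computed as below; the example boxes are invented for illustration.

```python
# Minimal sketch of intersection over union (IoU) for axis-aligned boxes
# given as (x1, y1, x2, y2). The example boxes are illustrative only.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap):
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 0, 15, 10))  # two half-overlapping boxes
```

Two boxes that share half their area yield an IoU of 1/3, which is why IoU thresholds such as 0.5 are stricter than they may first appear.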
Ever-growing data availability combined with rapid progress in analytics has laid the foundation for the emergence of business process analytics. Organizations strive to leverage predictive process analytics to obtain insights. However, current implementations are designed to deal with homogeneous data. Consequently, there is limited practical use in an organization with heterogeneous data sources. The paper proposes a method for predictive end-to-end enterprise process network monitoring leveraging multi-headed deep neural networks to overcome this limitation. A case study performed with a medium-sized German manufacturing company highlights the method’s utility for organizations.
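The multi-headed idea can be sketched as a shared trunk that encodes an event, with one output head per data source reading the same shared representation. The abstract does not specify the architecture, so the two-head layout, the layer sizes, and all weights below are toy assumptions.

```python
# Hedged sketch of a multi-headed network: one shared trunk, one output
# head per heterogeneous data source. Weights and head names are invented.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, weights, bias):
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def forward(x, trunk_w, trunk_b, heads):
    shared = relu(linear(x, trunk_w, trunk_b))  # shared representation
    # Each head reads the same shared features but has its own weights:
    return {name: linear(shared, w, b) for name, (w, b) in heads.items()}

trunk_w, trunk_b = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
heads = {
    "erp_events": ([[1.0, 0.0]], [0.0]),   # hypothetical source-specific head
    "crm_events": ([[0.0, 2.0]], [0.1]),   # hypothetical source-specific head
}
out = forward([2.0, 1.0], trunk_w, trunk_b, heads)
```

The design choice this illustrates: heterogeneous sources share learned structure through the trunk while each head specializes, instead of training one isolated model per source.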
Purpose
To evaluate an iterative learning approach for enhanced performance of robust artificial-neural-networks for k-space interpolation (RAKI) when only a limited amount of training data (auto-calibration signals [ACS]) is available for accelerated standard 2D imaging.
Methods
In a first step, the RAKI model was tailored to the case of a limited amount of training data. In the iterative learning approach (termed iterative RAKI, iRAKI), the tailored RAKI model is initially trained using original and augmented ACS obtained from a linear parallel imaging reconstruction. Subsequently, the RAKI convolution filters are refined iteratively using original and augmented ACS extracted from the previous RAKI reconstruction. The evaluation was carried out on 200 retrospectively undersampled in vivo datasets from the fastMRI neuro database with different contrast settings.
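The iterative loop can be sketched structurally as follows. This is not the RAKI network: a trivial scalar filter predicting each missing k-space line from its acquired neighbours stands in for it, and the signal, undersampling pattern, and ACS placement are all toy assumptions. What the sketch does show is the iRAKI loop: fit on the ACS, reconstruct, then refit on the previous reconstruction as augmented training data.

```python
# Structural sketch of the iRAKI loop with a scalar least-squares filter
# standing in for the RAKI network. All signals and sizes are invented.

def fit_filter(signal, targets):
    """Least-squares scalar w minimizing |signal[i] - w*(left + right)|."""
    num = sum(signal[i] * (signal[i - 1] + signal[i + 1]) for i in targets)
    den = sum((signal[i - 1] + signal[i + 1]) ** 2 for i in targets)
    return num / den

def reconstruct(acquired, w, missing):
    rec = list(acquired)
    for i in missing:
        rec[i] = w * (rec[i - 1] + rec[i + 1])  # fill from acquired neighbours
    return rec

full = [float(i) for i in range(21)]           # toy fully sampled "k-space"
acs = set(range(8, 13))                        # central auto-calibration lines
missing = [i for i in range(1, 20, 2) if i not in acs]
under = [x if (i % 2 == 0 or i in acs) else 0.0 for i, x in enumerate(full)]

w = fit_filter(under, [9, 11])                 # initial training on ACS only
rec = reconstruct(under, w, missing)
for _ in range(2):                             # iRAKI-style refinement
    w = fit_filter(rec, missing + [9, 11])     # augmented "ACS" from last rec
    rec = reconstruct(under, w, missing)
```

For this linear toy signal the fitted weight is exactly 0.5 and the reconstruction is perfect; the point is the loop structure, in which each pass enlarges the effective training set beyond the original ACS.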
Results
For limited training data (18 and 22 ACS lines for R = 4 and R = 5, respectively), iRAKI outperforms standard RAKI by reducing residual artifacts and yields better noise suppression than standard parallel imaging, as underlined by quantitative reconstruction quality metrics. Additionally, iRAKI shows better performance than both GRAPPA and standard RAKI in the case of pre-scan calibration with varying contrast between training and undersampled data.
Conclusion
RAKI benefits from the iterative learning approach, which preserves the noise-suppression feature but requires less original training data for the accurate reconstruction of standard 2D images, thereby improving the net acceleration.
Oroantral communication (OAC) is a common complication after tooth extraction of upper molars. Profound preoperative panoramic radiography analysis might potentially help predict OAC following tooth extraction. In this exploratory study, we evaluated n = 300 consecutive cases (100 OAC and 200 controls) and trained five machine learning algorithms (VGG16, InceptionV3, MobileNetV2, EfficientNet, and ResNet50) to predict OAC versus non-OAC (binary classification task) from the input images. Further, four oral and maxillofacial experts evaluated the respective panoramic radiographs, and performance metrics (accuracy, area under the curve (AUC), precision, recall, F1-score, and receiver operating characteristic curve) were determined for all diagnostic approaches. Cohen's kappa was used to evaluate the agreement between expert evaluations. The deep learning algorithms reached high specificity (highest specificity 100% for InceptionV3) but low sensitivity (highest sensitivity 42.86% for MobileNetV2). The AUCs for VGG16, InceptionV3, MobileNetV2, EfficientNet, and ResNet50 were 0.53, 0.60, 0.67, 0.51, and 0.56, respectively. Experts 1–4 reached AUCs of 0.550, 0.629, 0.500, and 0.579, respectively. The specificity of the expert evaluations ranged from 51.74% to 95.02%, whereas the sensitivity ranged from 14.14% to 59.60%. Cohen's kappa revealed poor agreement among the oral and maxillofacial experts (Cohen's kappa: 0.1285). Overall, the present data indicate that OAC cannot be sufficiently predicted from preoperative panoramic radiography. The false-negative rate, i.e., the rate of positive cases (OAC) missed by the deep learning algorithms, ranged from 57.14% to 95.24%. Surgeons should not rely solely on panoramic radiography when evaluating the probability of OAC occurrence. Clinical testing for OAC is warranted after each upper-molar tooth extraction.
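Cohen's kappa, used above to quantify inter-rater agreement, corrects observed agreement for the agreement expected by chance. A minimal sketch; the two toy rating vectors are invented, and a kappa near 0 indicates agreement barely above chance, as reported for the expert evaluations.

```python
# Sketch of Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where
# p_o is observed agreement and p_e is chance agreement from the raters'
# marginal label frequencies. The toy ratings below are invented.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

a = ["oac", "oac", "no", "no", "no", "oac", "no", "no"]
b = ["oac", "no", "no", "no", "oac", "oac", "no", "oac"]
kappa = cohens_kappa(a, b)
```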
The holy grail of structural biology is to study a protein in situ, and this goal has been fast approaching since the resolution revolution and the achievement of atomic resolution. A cell's interior is not a dilute environment, and proteins have evolved to fold and function as needed in that environment; as such, an investigation of a cellular component should ideally include the full complexity of the cellular environment. Imaging whole cells in three dimensions using electron cryotomography is the best method to accomplish this goal, but it comes with a limitation on sample thickness and produces noisy data that are not amenable to direct analysis. This thesis establishes a novel workflow to systematically analyse whole-cell electron cryotomography data in three dimensions and to find and identify instances of protein complexes in the data, setting up the determination of their structure and identity for success. Mycoplasma pneumoniae is a very small parasitic bacterium with fewer than 700 protein-coding genes; it is thin enough and small enough to be imaged in large quantities by electron cryotomography and can grow directly on the grids used for imaging, making it ideal for exploratory studies in structural proteomics. As part of the workflow, a methodology for training deep-learning-based particle-picking models is established.
As a proof of principle, a dataset of whole-cell Mycoplasma pneumoniae tomograms is used with this workflow to characterize a novel membrane-associated complex observed in the data. Ultimately, 25,431 such particles are picked from 353 tomograms and refined to a density map with a resolution of 11 Å. Making good use of orthogonal datasets to filter the search space and verify results, structures were predicted for candidate proteins and checked for a suitable fit in the density map. In the end, with this approach, nine proteins were found to be part of the complex, which appears to be associated with chaperone activity and to interact with the translocon machinery.
Visual proteomics refers to the ultimate potential of in situ electron cryotomography: the comprehensive interpretation of tomograms. The workflow presented here is demonstrated to help in reaching that potential.