TY - THES
A1 - Kleineisel, Jonas
T1 - Variational networks in magnetic resonance imaging - Application to spiral cardiac MRI and investigations on image quality
T1 - Variational Networks in der Magnetresonanztomographie - Anwendung auf spirale Herzbildgebung und Untersuchungen zur Bildqualität
N2 - Acceleration is a central aim of clinical and technical research in magnetic resonance imaging (MRI) today, with the potential to increase robustness, accessibility and patient comfort, reduce cost, and enable entirely new kinds of examinations. A key component in this endeavor is image reconstruction, as most modern approaches build on advanced signal and image processing. Here, deep learning (DL)-based methods have recently shown considerable potential, with numerous publications demonstrating benefits for MRI reconstruction. However, these methods often come at the cost of an increased risk of subtle yet critical errors. Therefore, the aim of this thesis is to advance DL-based MRI reconstruction while ensuring high image quality and fidelity to the measured data. A network architecture specifically suited for this purpose is the variational network (VN). To investigate the benefits VNs can bring to non-Cartesian cardiac imaging, the first part presents an application of VNs specifically adapted to the reconstruction of accelerated spiral acquisitions. The proposed method is compared to a segmented exam, a U-Net and a compressed sensing (CS) model using qualitative and quantitative measures. While the U-Net performed poorly, both the VN and the CS reconstruction showed good output quality. In functional cardiac imaging, the proposed real-time method with VN reconstruction substantially accelerates examinations compared to the gold standard, from over 10 minutes to just 1 minute. Clinical parameters agreed on average. In MRI reconstruction generally, the assessment of image quality is complex, in particular for modern non-linear methods. Therefore, advanced techniques for the precise evaluation of image quality were subsequently demonstrated. With two distinct methods, resolution and the amplification or suppression of noise were quantified locally in each pixel of a reconstruction. Using these, local maps of resolution and noise in parallel imaging (GRAPPA), CS, U-Net and VN reconstructions were determined for MR images of the brain. In the tested images, GRAPPA delivered uniform and ideal resolution but amplified noise noticeably. The other methods adapted their behavior to the image structure: different levels of local blurring were observed at edges compared to homogeneous areas, and noise was suppressed everywhere except at edges. Overall, VNs were found to combine a number of advantageous properties, including a good trade-off between resolution and noise, fast reconstruction times, and high overall image quality and fidelity of the produced output. Therefore, this network architecture appears highly promising for MRI reconstruction.
N2 - Eine Beschleunigung des Bildgebungsprozesses ist heute ein wichtiges Ziel von klinischer und technischer Forschung in der Magnetresonanztomographie (MRT). Dadurch könnten Robustheit, Verfügbarkeit und Patientenkomfort erhöht, Kosten gesenkt und ganz neue Arten von Untersuchungen möglich gemacht werden. Da sich die meisten modernen Ansätze hierfür auf eine fortgeschrittene Signal- und Bildverarbeitung stützen, ist die Bildrekonstruktion ein zentraler Baustein. In diesem Bereich haben Deep Learning (DL)-basierte Methoden in der jüngeren Vergangenheit bemerkenswertes Potenzial gezeigt und eine Vielzahl an Publikationen konnte deren Nutzen in der MRT-Rekonstruktion feststellen. Allerdings besteht dabei das Risiko von subtilen und doch kritischen Fehlern. Daher ist das Ziel dieser Arbeit, die DL-basierte MRT-Rekonstruktion weiterzuentwickeln, während gleichzeitig hohe Bildqualität und die Treue der erzeugten Bilder zu den gemessenen Daten gewährleistet werden. Eine Netzwerkarchitektur, die dafür besonders geeignet ist, ist das Variational Network (VN). Um den Nutzen dieser Netzwerke für nicht-kartesische Herzbildgebung zu untersuchen, beschreibt der erste Teil dieser Arbeit eine Anwendung von VNs, welche spezifisch für die Rekonstruktion von beschleunigten Akquisitionen mit spiralen Auslesetrajektorien angepasst wurden. Die vorgeschlagene Methode wird mit einer segmentierten Rekonstruktion, einem U-Net und einem Compressed Sensing (CS)-Modell anhand von qualitativen und quantitativen Metriken verglichen. Während das U-Net schlecht abschneidet, zeigen die VN- und CS-Methoden eine gute Bildqualität. In der funktionellen Herzbildgebung beschleunigt die vorgeschlagene Echtzeit-Methode mit VN-Rekonstruktion die Aufnahme gegenüber dem Goldstandard wesentlich, von etwa zehn auf nur eine Minute. Klinische Parameter stimmen im Mittel überein. Die Bewertung von Bildqualität in der MRT-Rekonstruktion ist im Allgemeinen komplex, vor allem für moderne, nichtlineare Methoden. Daher wurden anschließend fortgeschrittene Techniken zur präzisen Analyse von Bildqualität demonstriert. Mit zwei separaten Methoden wurde einerseits die Auflösung und andererseits die Verstärkung oder Unterdrückung von Rauschen in jedem Pixel eines untersuchten Bildes lokal quantifiziert. Damit wurden lokale Karten von Auflösung und Rauschen in Rekonstruktionen durch Parallele Bildgebung (GRAPPA), CS, U-Net und VN für MR-Aufnahmen des Gehirns berechnet. In den untersuchten Bildern zeigte GRAPPA gleichmäßig eine ideale Auflösung, aber merkliche Rauschverstärkung. Die anderen Methoden verhalten sich lokal unterschiedlich je nach Struktur des untersuchten Bildes. Die gemessene lokale Unschärfe unterschied sich an den Kanten gegenüber homogenen Bildbereichen, und Rauschen wurde überall außer an Kanten unterdrückt. Insgesamt wurde für VNs eine Kombination von verschiedenen günstigen Eigenschaften festgestellt, unter anderem ein guter Kompromiss zwischen Auflösung und Rauschen, schnelle Laufzeit und hohe Qualität und Datentreue der erzeugten Bilder. Daher erscheint diese Netzwerkarchitektur als ein äußerst vielversprechender Ansatz für MRT-Rekonstruktion.
KW - Kernspintomografie
KW - Convolutional Neural Network
KW - Maschinelles Lernen
KW - Bildgebendes Verfahren
KW - magnetic resonance imaging
KW - convolutional neural network
KW - variational network
KW - cardiac imaging
KW - machine learning
KW - local point-spread function
KW - resolution
KW - g-factor
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-347370
ER -
TY - JOUR
A1 - Dhillon, Maninder Singh
A1 - Dahms, Thorsten
A1 - Kuebert-Flock, Carina
A1 - Rummler, Thomas
A1 - Arnault, Joel
A1 - Steffan-Dewenter, Ingolf
A1 - Ullmann, Tobias
T1 - Integrating random forest and crop modeling improves the crop yield prediction of winter wheat and oil seed rape
JF - Frontiers in Remote Sensing
N2 - With the increasing availability and variety of global satellite products and the rapid development of new algorithms, fast and accurate yield estimates remain a goal for precision agriculture and food security. However, the consistency and reliability of suitable methodologies that provide accurate crop yield outcomes still need to be explored. The study investigates the coupling of crop modeling and machine learning (ML) to improve the yield prediction of winter wheat (WW) and oil seed rape (OSR) and provides examples for the Free State of Bavaria (70,550 km²), Germany, in 2019. The main objective is to determine whether a coupling approach [Light Use Efficiency (LUE) + Random Forest (RF)] results in better and more accurate yield predictions than models that do not use the LUE. Four different RF models [RF1 (input: Normalized Difference Vegetation Index (NDVI)), RF2 (input: climate variables), RF3 (input: NDVI + climate variables), RF4 (input: LUE-generated biomass + climate variables)] and one semi-empirical LUE model were designed with different input requirements to find the best predictors for crop monitoring. The results indicate that the individual use of the NDVI (in RF1) and the climate variables (in RF2) did not provide the most accurate, reliable, and precise solution for crop monitoring; their combined use (in RF3), however, resulted in higher accuracies. Notably, the study suggests that coupling the LUE model variables to the RF4 model reduces the relative root mean square error (RRMSE) by 8% (WW) and 1.6% (OSR) and increases the R² by 14.3% (for both WW and OSR) compared to results relying on the LUE alone. Moreover, the research compares the models’ yield outputs for three different spatial inputs: Sentinel-2(S)-MOD13Q1 (10 m), Landsat (L)-MOD13Q1 (30 m), and MOD13Q1 (MODIS) (250 m). The S-MOD13Q1 data improved model performance, with a higher mean R² [0.80 (WW), 0.69 (OSR)] and lower RRMSE (%) (9.18, 10.21) compared to L-MOD13Q1 (30 m) and MOD13Q1 (250 m). Satellite-based crop biomass, solar radiation, and temperature were found to be the most influential variables in the yield prediction of both crops.
KW - crop modeling
KW - random forest
KW - machine learning
KW - NDVI
KW - satellite
KW - landsat
KW - sentinel-2
KW - winter wheat
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-301462
SN - 2673-6187
VL - 3
ER -
TY - JOUR
A1 - Oberdorf, Felix
A1 - Schaschek, Myriam
A1 - Weinzierl, Sven
A1 - Stein, Nikolai
A1 - Matzner, Martin
A1 - Flath, Christoph M.
T1 - Predictive end-to-end enterprise process network monitoring
JF - Business & Information Systems Engineering
N2 - Ever-growing data availability combined with rapid progress in analytics has laid the foundation for the emergence of business process analytics. Organizations strive to leverage predictive process analytics to obtain insights. However, current implementations are designed to deal with homogeneous data. Consequently, their practical use in organizations with heterogeneous data sources is limited. The paper proposes a method for predictive end-to-end enterprise process network monitoring that leverages multi-headed deep neural networks to overcome this limitation. A case study performed with a medium-sized German manufacturing company highlights the method’s utility for organizations.
KW - predictive process analytics
KW - predictive process monitoring
KW - deep learning
KW - machine learning
KW - neural network
KW - business process management
KW - process mining
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-323814
SN - 2363-7005
VL - 65
IS - 1
ER -
TY - JOUR
A1 - Kunz, Felix
A1 - Stellzig-Eisenhauer, Angelika
A1 - Boldt, Julian
T1 - Applications of artificial intelligence in orthodontics — an overview and perspective based on the current state of the art
JF - Applied Sciences
N2 - Artificial intelligence (AI) has already arrived in many areas of our lives and, because of the increasing availability of computing power, can now be used for complex tasks in medicine and dentistry. This is reflected by an exponential increase in scientific publications aiming to integrate AI into everyday clinical routines. Applications of AI in orthodontics are already manifold and range from the identification of anatomical/pathological structures or reference points in imaging to the support of complex decision-making in orthodontic treatment planning. The aim of this article is to give the reader an overview of the current state of the art regarding applications of AI in orthodontics and to provide a perspective for the use of such AI solutions in clinical routine. For this purpose, we present various use cases for AI in orthodontics, for which research is already available. Considering the current scientific progress, it is not unreasonable to assume that AI will become an integral part of orthodontic diagnostics and treatment planning in the near future. Although it is equally unlikely that AI will replace the knowledge and experience of human experts in the not-too-distant future, it will probably be able to support practitioners, thus serving as a quality-assuring component in orthodontic patient care.
KW - orthodontics
KW - artificial intelligence
KW - machine learning
KW - deep learning
KW - cephalometry
KW - age determination by skeleton
KW - tooth extraction
KW - orthognathic surgery
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-310940
SN - 2076-3417
VL - 13
IS - 6
ER -
TY - JOUR
A1 - Henckert, David
A1 - Malorgio, Amos
A1 - Schweiger, Giovanna
A1 - Raimann, Florian J.
A1 - Piekarski, Florian
A1 - Zacharowski, Kai
A1 - Hottenrott, Sebastian
A1 - Meybohm, Patrick
A1 - Tscholl, David W.
A1 - Spahn, Donat R.
A1 - Roche, Tadzio R.
T1 - Attitudes of anesthesiologists toward artificial intelligence in anesthesia: a multicenter, mixed qualitative–quantitative study
JF - Journal of Clinical Medicine
N2 - Artificial intelligence (AI) is predicted to play an increasingly important role in perioperative medicine in the very near future. However, little is known about what anesthesiologists know and think about AI in this context. This is important because the successful introduction of new technologies depends on the understanding and cooperation of end users. We sought to investigate how much anesthesiologists know about AI and what they think about the introduction of AI-based technologies into the clinical setting. To this end, we recruited 21 anesthesiologists from 2 university hospitals for face-to-face structured interviews. The interview transcripts were subdivided sentence by sentence into discrete statements, and the statements were then grouped into key themes. Subsequently, a survey of closed questions based on these themes was sent to 70 anesthesiologists from 3 university hospitals for rating. In the interviews, the base level of knowledge of AI was good (86 of 90 statements, 96%), although awareness of the potential applications of AI in anesthesia was poor (only 7 of 42 statements, 17%). Regarding the implementation of AI in anesthesia, statements were split roughly evenly between pros (46 of 105, 44%) and cons (59 of 105, 56%). Interviewees considered that AI could be usefully applied to diverse tasks such as risk stratification, the prediction of vital sign changes, or treatment guidance. The validity of these themes was probed in a follow-up survey of 70 anesthesiologists with a response rate of 70%, which confirmed an overall positive view of AI in this group. Anesthesiologists hold a range of opinions, both positive and negative, regarding the application of AI in their field of work. Survey-based studies do not always uncover the full breadth of nuance of opinion amongst clinicians. Engagement with specific concerns, both technical and ethical, will prove important as this technology moves from research to the clinic.
KW - artificial intelligence
KW - machine learning
KW - anesthesia
KW - anesthesiology
KW - qualitative research
KW - clinical decision support
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-311189
SN - 2077-0383
VL - 12
IS - 6
ER -
TY - JOUR
A1 - Haufe, Stefan
A1 - Isaias, Ioannis U.
A1 - Pellegrini, Franziska
A1 - Palmisano, Chiara
T1 - Gait event prediction using surface electromyography in parkinsonian patients
JF - Bioengineering
N2 - Gait disturbances are common manifestations of Parkinson’s disease (PD), with unmet therapeutic needs. Inertial measurement units (IMUs) are capable of monitoring gait, but they lack neurophysiological information that may be crucial for studying gait disturbances in these patients. Here, we present a machine learning approach to approximate IMU angular velocity profiles, and subsequently gait events, using electromyographic (EMG) channels during overground walking in patients with PD. We recorded six parkinsonian patients while they walked for at least three minutes. Patient-agnostic regression models were trained on temporally embedded EMG time series of different combinations of up to five leg muscles bilaterally (i.e., tibialis anterior, soleus, gastrocnemius medialis, gastrocnemius lateralis, and vastus lateralis). Gait events could be detected with high temporal precision (median displacement of <50 ms), low numbers of missed events (<2%), and next to no false-positive event detections (<0.1%). Swing and stance phases could thus be determined with high fidelity (median F1-score of ~0.9). Interestingly, the best performance was obtained using as few as two EMG probes placed on the left and right vastus lateralis. Our results demonstrate the practical utility of the proposed EMG-based system for gait event prediction, which allows an electromyographic signal to be acquired simultaneously. This gait analysis approach has the potential to make additional measurement devices such as IMUs and force plates less essential, thereby reducing financial and preparation overheads and discomfort factors in gait studies.
KW - electromyography
KW - inertial measurement units
KW - gait-phase prediction
KW - machine learning
KW - Parkinson’s disease
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-304380
SN - 2306-5354
VL - 10
IS - 2
ER -
TY - JOUR
A1 - Dresia, Kai
A1 - Kurudzija, Eldin
A1 - Deeken, Jan
A1 - Waxenegger-Wilfing, Günther
T1 - Improved wall temperature prediction for the LUMEN rocket combustion chamber with neural networks
JF - Aerospace
N2 - Accurate calculations of the heat transfer and the resulting maximum wall temperature are essential for the optimal design of reliable and efficient regenerative cooling systems. However, predicting the heat transfer of supercritical methane flowing in the cooling channels of a regeneratively cooled rocket combustor presents a significant challenge. High-fidelity CFD calculations provide sufficient accuracy but are computationally too expensive to be used within elaborate design optimization routines. In previous work, it was shown that a surrogate model based on neural networks is able to predict the maximum wall temperature along straight cooling channels with convincing precision when trained with data from CFD simulations of simple cooling channel segments. In this paper, the methodology is extended to cooling channels with curvature. The predictions of the extended model are tested against CFD simulations with different boundary conditions for the representative LUMEN combustor contour with varying geometries and heat flux densities. The high accuracy of the extended model’s predictions suggests that it will be a valuable tool for designing and analyzing regenerative cooling systems with greater efficiency and effectiveness.
KW - neural network
KW - surrogate model
KW - heat transfer
KW - machine learning
KW - LUMEN
KW - rocket engine
KW - regenerative cooling
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-319169
SN - 2226-4310
VL - 10
IS - 5
ER -
TY - JOUR
A1 - Marquardt, André
A1 - Hartrampf, Philipp
A1 - Kollmannsberger, Philip
A1 - Solimando, Antonio G.
A1 - Meierjohann, Svenja
A1 - Kübler, Hubert
A1 - Bargou, Ralf
A1 - Schilling, Bastian
A1 - Serfling, Sebastian E.
A1 - Buck, Andreas
A1 - Werner, Rudolf A.
A1 - Lapa, Constantin
A1 - Krebs, Markus
T1 - Predicting microenvironment in CXCR4- and FAP-positive solid tumors — a pan-cancer machine learning workflow for theranostic target structures
JF - Cancers
N2 - (1) Background: C-X-C Motif Chemokine Receptor 4 (CXCR4) and Fibroblast Activation Protein Alpha (FAP) are promising theranostic targets. However, it is unclear whether CXCR4 and FAP positivity mark distinct microenvironments, especially in solid tumors. (2) Methods: Using Random Forest (RF) analysis, we searched for entity-independent mRNA and microRNA signatures related to CXCR4 and FAP overexpression in our pan-cancer cohort from The Cancer Genome Atlas (TCGA) database, representing n = 9242 specimens from 29 tumor entities. CXCR4- and FAP-positive samples were assessed via StringDB cluster analysis, EnrichR, Metascape, and Gene Set Enrichment Analysis (GSEA). Findings were validated via correlation analyses in n = 1541 tumor samples. TIMER2.0 analyzed the association between CXCR4/FAP expression and the infiltration levels of immune-related cells. (3) Results: We identified entity-independent CXCR4 and FAP gene signatures representative of the majority of solid cancers. While CXCR4 positivity marked an immune-related microenvironment, FAP overexpression highlighted an angiogenesis-associated niche. TIMER2.0 analysis confirmed characteristic infiltration levels of CD8+ cells for CXCR4-positive tumors and of endothelial cells for FAP-positive tumors. (4) Conclusions: CXCR4- and FAP-directed PET imaging could provide a non-invasive decision aid for entity-agnostic treatment of the tumor microenvironment in solid malignancies. Moreover, this machine learning workflow can easily be transferred to other theranostic targets.
KW - machine learning
KW - tumor microenvironment
KW - immune infiltration
KW - angiogenesis
KW - mRNA
KW - miRNA
KW - transcriptome
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-305036
SN - 2072-6694
VL - 15
IS - 2
ER -
TY - JOUR
A1 - Griebel, Matthias
A1 - Segebarth, Dennis
A1 - Stein, Nikolai
A1 - Schukraft, Nina
A1 - Tovote, Philip
A1 - Blum, Robert
A1 - Flath, Christoph M.
T1 - Deep learning-enabled segmentation of ambiguous bioimages with deepflash2
JF - Nature Communications
N2 - Bioimages frequently exhibit low signal-to-noise ratios due to experimental conditions, specimen characteristics, and imaging trade-offs. Reliable segmentation of such ambiguous images is difficult and laborious. Here we introduce deepflash2, a deep learning-enabled segmentation tool for bioimage analysis. The tool addresses typical challenges that may arise during the training, evaluation, and application of deep learning models on ambiguous data. The tool’s training and evaluation pipeline uses multiple expert annotations and deep model ensembles to achieve accurate results. The application pipeline supports various use cases for expert annotations and includes a quality assurance mechanism in the form of uncertainty measures. Benchmarked against other tools, deepflash2 offers both high predictive accuracy and efficient computational resource usage. The tool is built upon established deep learning libraries and enables sharing of trained model ensembles with the research community. deepflash2 aims to simplify the integration of deep learning into bioimage analysis projects while improving accuracy and reliability.
KW - machine learning
KW - microscopy
KW - quality control
KW - software
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-357286
VL - 14
ER -
TY - JOUR
A1 - Krenzer, Adrian
A1 - Banck, Michael
A1 - Makowski, Kevin
A1 - Hekalo, Amar
A1 - Fitting, Daniel
A1 - Troya, Joel
A1 - Sudarevic, Boban
A1 - Zoller, Wolfgang G.
A1 - Hann, Alexander
A1 - Puppe, Frank
T1 - A real-time polyp-detection system with clinical application in colonoscopy using deep convolutional neural networks
JF - Journal of Imaging
N2 - Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy, during which the gastroenterologist searches for polyps. However, there is a risk of polyps being missed by the gastroenterologist. Automated polyp detection can assist the gastroenterologist during a colonoscopy. Publications examining the problem of polyp detection already exist in the literature. Nevertheless, most of these systems are only used in a research context and are not implemented for clinical application. Therefore, we introduce the first fully open-source automated polyp-detection system that scores best on current benchmark data and is implemented ready for clinical application. To create the polyp-detection system (ENDOMIND-Advanced), we combined our own data collected from different hospitals and practices in Germany with open-source datasets to create a dataset with over 500,000 annotated images. ENDOMIND-Advanced leverages a post-processing technique based on video detection to work in real time with a stream of images. It is integrated into a prototype ready for application in clinical interventions. We achieve better performance than the best system in the literature and score an F1-score of 90.24% on the open-source CVC-VideoClinicDB benchmark.
KW - machine learning
KW - deep learning
KW - endoscopy
KW - gastroenterology
KW - automation
KW - object detection
KW - video object detection
KW - real-time
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-304454
SN - 2313-433X
VL - 9
IS - 2
ER -