Artificial intelligence (AI) has already arrived in many areas of our lives and, because of the increasing availability of computing power, can now be used for complex tasks in medicine and dentistry. This is reflected by an exponential increase in scientific publications aiming to integrate AI into everyday clinical routines. Applications of AI in orthodontics are already manifold and range from the identification of anatomical/pathological structures or reference points in imaging to the support of complex decision-making in orthodontic treatment planning. The aim of this article is to give the reader an overview of the current state of the art regarding applications of AI in orthodontics and to provide a perspective for the use of such AI solutions in clinical routine. For this purpose, we present various use cases for AI in orthodontics for which research is already available. Considering the current scientific progress, it is not unreasonable to assume that AI will become an integral part of orthodontic diagnostics and treatment planning in the near future. Although AI is unlikely to replace the knowledge and experience of human experts in the foreseeable future, it will probably be able to support practitioners, thus serving as a quality-assuring component in orthodontic patient care.
Artificial intelligence (AI) is predicted to play an increasingly important role in perioperative medicine in the very near future. However, little is known about what anesthesiologists know and think about AI in this context. This is important because the successful introduction of new technologies depends on the understanding and cooperation of end users. We sought to investigate how much anesthesiologists know about AI and what they think about the introduction of AI-based technologies into the clinical setting. In order to better understand what anesthesiologists think of AI, we recruited 21 anesthesiologists from 2 university hospitals for face-to-face structured interviews. The interview transcripts were subdivided sentence-by-sentence into discrete statements, and statements were then grouped into key themes. Subsequently, a survey of closed questions based on these themes was sent to 70 anesthesiologists from 3 university hospitals for rating. In the interviews, the base level of knowledge of AI was good at 86 of 90 statements (96%), although awareness of the potential applications of AI in anesthesia was poor at only 7 of 42 statements (17%). Regarding the implementation of AI in anesthesia, statements were split roughly evenly between pros (46 of 105, 44%) and cons (59 of 105, 56%). Interviewees considered that AI could usefully be used in diverse tasks such as risk stratification, the prediction of vital sign changes, or as a treatment guide. The validity of these themes was probed in a follow-up survey of 70 anesthesiologists with a response rate of 70%, which confirmed an overall positive view of AI in this group. Anesthesiologists hold a range of opinions, both positive and negative, regarding the application of AI in their field of work. Survey-based studies do not always uncover the full breadth of nuance of opinion amongst clinicians. Engagement with specific concerns, both technical and ethical, will prove important as this technology moves from research to the clinic.
Ever-growing data availability combined with rapid progress in analytics has laid the foundation for the emergence of business process analytics. Organizations strive to leverage predictive process analytics to obtain insights. However, current implementations are designed to deal with homogeneous data. Consequently, there is limited practical use in an organization with heterogeneous data sources. The paper proposes a method for predictive end-to-end enterprise process network monitoring leveraging multi-headed deep neural networks to overcome this limitation. A case study performed with a medium-sized German manufacturing company highlights the method’s utility for organizations.
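The multi-headed architecture sketched above gives each heterogeneous data source its own input head and merges the head outputs into a shared trunk. The following NumPy sketch is purely illustrative: the feature names, layer sizes, and random weights are assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    # one fully connected layer with ReLU activation
    return np.maximum(x @ w + b, 0.0)

# Hypothetical feature batches from two heterogeneous sources
x_events = rng.normal(size=(4, 6))   # e.g. event-log features
x_master = rng.normal(size=(4, 3))   # e.g. ERP master-data features

# One input head per data source
h1 = dense(x_events, rng.normal(size=(6, 8)), np.zeros(8))
h2 = dense(x_master, rng.normal(size=(3, 8)), np.zeros(8))

# Shared trunk on the concatenated head outputs
trunk = dense(np.concatenate([h1, h2], axis=1),
              rng.normal(size=(16, 8)), np.zeros(8))

# Sigmoid output: probability of, say, a delayed process instance
p_delay = 1.0 / (1.0 + np.exp(-(trunk @ rng.normal(size=(8, 1)))))
```

In a trained model the weights would of course be learned end-to-end; the point here is only the head/trunk wiring that lets one network consume differently shaped inputs.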
Acceleration is a central aim of clinical and technical research in magnetic resonance imaging (MRI) today, with the potential to increase robustness, accessibility and patient comfort, reduce cost, and enable entirely new kinds of examinations. A key component in this endeavor is image reconstruction, as most modern approaches build on advanced signal and image processing. Here, deep learning (DL)-based methods have recently shown considerable potential, with numerous publications demonstrating benefits for MRI reconstruction. However, these methods often come at the cost of an increased risk for subtle yet critical errors. Therefore, the aim of this thesis is to advance DL-based MRI reconstruction, while ensuring high quality and fidelity with measured data. A network architecture specifically suited for this purpose is the variational network (VN). To investigate the benefits these can bring to non-Cartesian cardiac imaging, the first part presents an application of VNs, which were specifically adapted to the reconstruction of accelerated spiral acquisitions. The proposed method is compared to a segmented exam, a U-Net and a compressed sensing (CS) model using qualitative and quantitative measures. While the U-Net performed poorly, the VN as well as the CS reconstruction showed good output quality. In functional cardiac imaging, the proposed real-time method with VN reconstruction substantially accelerates examinations over the gold-standard, from over 10 to just 1 minute. Clinical parameters agreed on average.
Generally in MRI reconstruction, the assessment of image quality is complex, in particular for modern non-linear methods. Therefore, advanced techniques for precise evaluation of quality were subsequently demonstrated.
With two distinct methods, resolution and amplification or suppression of noise are quantified locally in each pixel of a reconstruction. Using these, local maps of resolution and noise in parallel imaging (GRAPPA), CS, U-Net and VN reconstructions were determined for MR images of the brain. In the tested images, GRAPPA delivers uniform and ideal resolution, but amplifies noise noticeably. The other methods adapt their behavior to image structure, where different levels of local blurring were observed at edges compared to homogeneous areas, and noise was suppressed except at edges. Overall, VNs were found to combine a number of advantageous properties, including a good trade-off between resolution and noise, fast reconstruction times, and high overall image quality and fidelity of the produced output. Therefore, this network architecture seems highly promising for MRI reconstruction.
(1) Background: C-X-C Motif Chemokine Receptor 4 (CXCR4) and Fibroblast Activation Protein Alpha (FAP) are promising theranostic targets. However, it is unclear whether CXCR4 and FAP positivity mark distinct microenvironments, especially in solid tumors. (2) Methods: Using Random Forest (RF) analysis, we searched for entity-independent mRNA and microRNA signatures related to CXCR4 and FAP overexpression in our pan-cancer cohort from The Cancer Genome Atlas (TCGA) database — representing n = 9242 specimens from 29 tumor entities. CXCR4- and FAP-positive samples were assessed via StringDB cluster analysis, EnrichR, Metascape, and Gene Set Enrichment Analysis (GSEA). Findings were validated via correlation analyses in n = 1541 tumor samples. TIMER2.0 was used to analyze the association between CXCR4/FAP expression and infiltration levels of immune-related cells. (3) Results: We identified entity-independent CXCR4 and FAP gene signatures representative for the majority of solid cancers. While CXCR4 positivity marked an immune-related microenvironment, FAP overexpression highlighted an angiogenesis-associated niche. TIMER2.0 analysis confirmed characteristic infiltration levels of CD8+ cells for CXCR4-positive tumors and endothelial cells for FAP-positive tumors. (4) Conclusions: CXCR4- and FAP-directed PET imaging could provide a non-invasive decision aid for entity-agnostic, microenvironment-directed treatment of solid malignancies. Moreover, this machine learning workflow can easily be transferred towards other theranostic targets.
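To illustrate the Random Forest workflow described above, the sketch below ranks features by impurity-based importance and keeps the top entries as a small "signature". The synthetic expression data and the top-5 cut-off are assumptions for illustration only, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_samples, n_genes = 200, 30
X = rng.normal(size=(n_samples, n_genes))          # simulated expression matrix
# Make gene 0 the informative one: the "CXCR4-high" label follows its expression
y = (X[:, 0] + 0.3 * rng.normal(size=n_samples) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]  # most important first
signature = ranking[:5]                              # top-5 genes as the "signature"
```

In practice the candidate signature would then be probed with enrichment tools (EnrichR, GSEA, etc.) rather than taken at face value.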
An approach to aerodynamically optimizing cycling posture and reducing drag in an Ironman (IM) event was elaborated. To this end, four commonly used cycling positions were investigated and simulated for a flow velocity of 10 m/s and yaw angles of 0–20° using the OpenFoam-based Nabla Flow CFD simulation software. A cyclist was scanned using an iPhone 12, and the special-purpose meshing software Blender was used. Significant differences were observed by changing and optimizing the cyclist's posture. The aerodynamic drag coefficient (CdA) varies by more than a factor of 2, ranging from 0.214 to 0.450. Within a position, the CdA tends to increase slightly at yaw angles of 5–10° and decrease at higher yaw angles compared to a straight head wind, except for the time trial (TT) position. The results were applied to the IM Hawaii bike course (180 km), assuming a constant power output of 300 W. Including the wind distributions, two different bike split models for performance prediction were applied. A significant time saving of roughly 1 h was found. Finally, a machine learning approach to deduce 3D triangulation for specific body shapes from 2D pictures was tested.
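The reported CdA spread implies a large difference in the power needed to overcome drag. A back-of-the-envelope check with the standard drag relation P = ½·ρ·CdA·v³ (air density assumed at sea-level standard):

```python
RHO = 1.225   # air density in kg/m^3 (sea-level standard, an assumption)

def drag_power(cda, v, rho=RHO):
    # drag force F = 0.5 * rho * CdA * v^2; power to overcome it P = F * v
    return 0.5 * rho * cda * v ** 3

v = 10.0                          # flow velocity used in the simulations, m/s
p_best = drag_power(0.214, v)     # lowest reported CdA  -> ~131 W
p_worst = drag_power(0.450, v)    # highest reported CdA -> ~276 W
```

At the assumed 300 W output, a factor-of-two change in drag power is consistent with the roughly one-hour saving estimated for the 180 km course.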
Snow is a vital environmental parameter and dynamically responsive to climate change, particularly in mountainous regions. Snow cover can be monitored at variable spatial scales using Earth Observation (EO) data. Long-lasting remote sensing missions enable the generation of multi-decadal time series and thus the detection of long-term trends. However, there have been few attempts to use these to model future snow cover dynamics. In this study, we therefore explore the potential of such time series to forecast the Snow Line Elevation (SLE) in the European Alps. We generate monthly SLE time series from the entire Landsat archive (1985–2021) in 43 Alpine catchments. Positive long-term SLE change rates are detected, with the highest rates (5–8 m/y) in the Western and Central Alps. We utilize this SLE dataset to implement and evaluate seven uni-variate time series modeling and forecasting approaches. The best results were achieved by Random Forest, with a Nash–Sutcliffe efficiency (NSE) of 0.79 and a Mean Absolute Error (MAE) of 258 m, followed by Telescope (0.76, 268 m) and seasonal ARIMA (0.75, 270 m). Since the model performance varies strongly with the input data, we developed a combined forecast based on the best-performing methods in each catchment. This approach was then used to forecast the SLE for the years 2022–2029. In the majority of the catchments, the shift of the forecast median SLE level retained the sign of the long-term trend. In cases where a deviating SLE dynamic is forecast, a discussion based on the unique properties of the catchment and past SLE dynamics is required. In the future, we expect major improvements in our SLE forecasting efforts by including external predictor variables in a multi-variate modeling approach.
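The two evaluation metrics quoted above, NSE and MAE, can be written down directly; the sample SLE values below are hypothetical, not data from the study.

```python
import numpy as np

def nse(obs, sim):
    # Nash–Sutcliffe efficiency: 1 - SSE / variance of obs around its mean
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mae(obs, sim):
    # Mean Absolute Error, in the same unit as the data (here: metres)
    return float(np.mean(np.abs(np.asarray(obs, float) - np.asarray(sim, float))))

# Hypothetical monthly SLE observations and forecasts (metres a.s.l.)
obs = [2100, 2350, 2600, 2800, 2650, 2300]
sim = [2000, 2400, 2550, 2900, 2600, 2250]
```

NSE equals 1 for a perfect forecast and drops below 0 when the forecast is worse than simply predicting the observed mean, which is why it is a common benchmark alongside the absolute error.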
Predicting hypertension subtypes with machine learning using targeted metabolites and their ratios (2022)
Hypertension is a major global health problem with high prevalence and complex associated health risks. Primary hypertension (PHT) is the most common form, and its causes are largely unknown. Endocrine hypertension (EHT) is another complex form of hypertension, with an estimated prevalence varying from 3 to 20% depending on the population studied. It occurs due to underlying conditions associated with hormonal excess, mainly related to adrenal tumours, and is sub-categorised into primary aldosteronism (PA), Cushing's syndrome (CS), and pheochromocytoma or functional paraganglioma (PPGL). Endocrine hypertension is often misdiagnosed as primary hypertension, causing delays in treatment for the underlying condition, reduced quality of life, and costly antihypertensive treatment that is often ineffective. This study systematically used targeted metabolomics and high-throughput machine learning methods to identify the key biomarkers for classifying and distinguishing the various subtypes of endocrine and primary hypertension. The trained models successfully classified CS from PHT and EHT from PHT with 92% specificity on the test set. The most prominent targeted metabolites and metabolite ratios for hypertension identification across the different disease comparisons were C18:1, C18:2, and Orn/Arg. Sex was identified as an important feature in CS vs. PHT classification.
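To illustrate the kind of features and metric involved, the sketch below builds an Orn/Arg ratio feature and scores a simple threshold rule by specificity. The simulated concentrations, the decision rule, and the cut-off are assumptions for illustration, not the study's trained models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
orn = rng.lognormal(sigma=0.3, size=n)       # ornithine, simulated
arg = rng.lognormal(sigma=0.3, size=n)       # arginine, simulated
ratio = orn / arg                            # the Orn/Arg ratio feature

# Simulated ground truth: "EHT-like" cases have elevated Orn/Arg (assumption)
y_true = (ratio * np.exp(0.05 * rng.normal(size=n)) > 1.0).astype(int)

y_pred = (ratio > 1.0).astype(int)           # hypothetical decision rule
tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
specificity = tn / (tn + fp)                 # true-negative rate
```

Specificity (the metric reported, 92% on the test set) measures how reliably healthy or PHT cases avoid being flagged, which matters when a positive call triggers costly endocrine work-up.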
In the past decades, various Earth observation-based time series products have emerged, which have enabled studies and analysis of global change processes. Besides their contribution to understanding past processes, time series datasets hold enormous potential for predictive modeling and thereby meet the demands of decision makers for future scenarios. In order to further exploit these data, a novel pixel-based approach has been introduced: the spatio-temporal matrix (STM). The approach integrates the historical characteristics of a specific land cover at a high temporal frequency in order to interpret the spatial and temporal information for the neighborhood of a given target pixel. The provided information can be exploited with common predictive models and algorithms. In this study, this approach was utilized and evaluated for the prediction of future urban/built-settlement growth. A random forest and a multi-layer perceptron were employed for the prediction. The tests were carried out with training strategies based on a one-year and a ten-year time span for the urban agglomerations of Surat (India), Ho-Chi-Minh City (Vietnam), and Abidjan (Ivory Coast). The slope, land use, exclusion, urban, transportation, hillshade (SLEUTH) model was selected as a baseline indicator for the performance evaluation. The statistical results from the receiver operating characteristic curve (ROC) demonstrate a good ability of the STM to facilitate the prediction of future settlement growth and its transferability to different cities, with area under the curve (AUC) values greater than 0.85. Compared with SLEUTH, the STM-based model achieved higher AUC in all of the test cases, while being independent of the additional datasets for the restricted and the preferential development areas.
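The STM idea can be sketched for a single target pixel: stack the historical class values of the pixel's neighbourhood over time into a matrix that a random forest or MLP can consume as one feature row. The array shapes and the 3×3 neighbourhood below are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical annual built-up maps (1 = settlement): 10 years, 20 x 20 pixels
stack = (rng.random((10, 20, 20)) > 0.7).astype(np.uint8)

def stm(stack, row, col, radius=1):
    # cut out the (2r+1) x (2r+1) neighbourhood for every year and flatten it,
    # yielding a (time steps x neighbours) matrix for the target pixel
    patch = stack[:, row - radius:row + radius + 1, col - radius:col + radius + 1]
    return patch.reshape(stack.shape[0], -1)

m = stm(stack, row=10, col=10)   # shape (10, 9): 10 years x 3x3 neighbourhood
features = m.flatten()           # one input row for a random forest / MLP
```

Repeating this over all pixels produces a training table in which each row encodes both the spatial context and the temporal history of one location.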
In most countries, freight is predominantly transported by road cargo trucks. We present a new satellite remote sensing method for detecting moving trucks on roads using Sentinel-2 data. The method exploits a temporal sensing offset of the Sentinel-2 multispectral instrument, which causes spatially and spectrally distorted signatures of moving objects. A random forest classifier was trained (overall accuracy: 84%) on visual-near-infrared spectra of 2500 globally labelled targets. Based on the classification, the target objects were extracted using a newly developed recursive neighbourhood search, and the speed and heading of the objects were approximated. Detections were validated against 350 globally labelled target boxes (mean F\(_1\) score: 0.74). The lowest F\(_1\) score was achieved in Kenya (0.36), the highest in Poland (0.88). Furthermore, validated against 26 traffic count stations in Germany on a total of 390 dates, the truck detections correlate spatio-temporally with station figures (Pearson r-value: 0.82, RMSE: 43.7). Absolute counts were underestimated on 81% of the dates. The detection performance may differ by season and road condition. Hence, the method is only suitable for approximating the relative truck traffic abundance rather than providing accurate absolute counts. However, existing road cargo monitoring methods that rely on traffic count stations or very high resolution remote sensing data have limited global availability. The proposed moving truck detection method could fill this gap, particularly where other information on road cargo traffic is sparse, by employing globally and freely available Sentinel-2 data. It is inferior to the accuracy and the temporal detail of station counts, but superior in terms of spatial coverage.
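The two validation measures quoted above are standard; a minimal sketch, with all counts and series hypothetical:

```python
import numpy as np

def f1(tp, fp, fn):
    # F1 = harmonic mean of precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def pearson_r(x, y):
    # Pearson correlation coefficient between two count series
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))

# Hypothetical: 70 matched boxes, 20 spurious detections, 29 missed trucks
score = f1(tp=70, fp=20, fn=29)
# Hypothetical station counts vs. satellite detections on six dates
r = pearson_r([120, 95, 140, 80, 160, 110], [85, 70, 100, 55, 120, 90])
```

Note that a high Pearson r is compatible with systematic undercounting, as in the study: correlation captures the relative pattern, not the absolute level, which is why the method is framed as a relative abundance indicator.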