TY - JOUR A1 - Wanner, Jonas A1 - Herm, Lukas-Valentin A1 - Heinrich, Kai A1 - Janiesch, Christian T1 - The effect of transparency and trust on intelligent system acceptance: evidence from a user-based study JF - Electronic Markets N2 - Contemporary decision support systems increasingly rely on artificial intelligence technology such as machine learning algorithms to form intelligent systems. These systems have human-like decision capacity for selected applications based on a decision rationale which cannot be looked up conveniently and constitutes a black box. As a consequence, acceptance by end-users remains somewhat hesitant. While a lack of transparency has been said to hinder trust and foster aversion towards these systems, studies that connect user trust to transparency and subsequently acceptance are scarce. In response, our research is concerned with the development of a theoretical model that explains end-user acceptance of intelligent systems. We utilize the unified theory of acceptance and use of technology as well as explanation theory and related theories on initial trust and user trust in information systems. The proposed model is tested in an industrial maintenance workplace scenario using maintenance experts as participants to represent the user group. Results show that acceptance is performance-driven at first sight. However, transparency plays an important indirect role in regulating trust and the perception of performance. 
KW - user acceptance KW - intelligent system KW - artificial intelligence KW - trust KW - system transparency Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-323829 SN - 1019-6781 VL - 32 IS - 4 ER - TY - JOUR A1 - Vollmer, Andreas A1 - Vollmer, Michael A1 - Lang, Gernot A1 - Straub, Anton A1 - Shavlokhova, Veronika A1 - Kübler, Alexander A1 - Gubik, Sebastian A1 - Brands, Roman A1 - Hartmann, Stefan A1 - Saravi, Babak T1 - Associations between periodontitis and COPD: An artificial intelligence-based analysis of NHANES III JF - Journal of Clinical Medicine N2 - A number of cross-sectional epidemiological studies suggest that poor oral health is associated with respiratory diseases. However, the number of cases within the studies was limited, and the studies had different measurement conditions. By analyzing data from the National Health and Nutrition Examination Survey III (NHANES III), this study aimed to investigate possible associations between chronic obstructive pulmonary disease (COPD) and periodontitis in the general population. COPD was diagnosed in cases where the FEV1/FVC ratio was below 70% (non-COPD versus COPD; binary classification task). We used unsupervised k-means clustering to identify clusters in the data. COPD classes were predicted with logistic regression, a random forest classifier, a stochastic gradient descent (SGD) classifier, k-nearest neighbors, a decision tree classifier, Gaussian naive Bayes (GaussianNB), support vector machines (SVM), a custom-made convolutional neural network (CNN), a multilayer perceptron artificial neural network (MLP), and a radial basis function neural network (RBNN) in Python. We calculated the accuracy of the prediction and the area under the curve (AUC). The most important predictors were determined using feature importance analysis. Results: Overall, 15,868 participants and 19 feature variables were included. 
Based on k-means clustering, the data were separated into two clusters that identified two patient groups with distinct risk characteristics. The algorithms reached AUCs between 0.608 (DTC) and 0.953 (CNN) for the classification of COPD classes. Feature importance analysis of deep learning algorithms indicated that age and mean attachment loss were the most important features in predicting COPD. Conclusions: Data analysis of a large population showed that machine learning and deep learning algorithms could predict COPD cases based on demographics and oral health feature variables. This study indicates that periodontitis might be an important predictor of COPD. Further prospective studies examining the association between periodontitis and COPD are warranted to validate the present results. KW - COPD KW - periodontitis KW - bone loss KW - machine learning KW - prediction KW - artificial intelligence KW - model KW - gingivitis Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-312713 SN - 2077-0383 VL - 11 IS - 23 ER - TY - JOUR A1 - Vollmer, Andreas A1 - Vollmer, Michael A1 - Lang, Gernot A1 - Straub, Anton A1 - Kübler, Alexander A1 - Gubik, Sebastian A1 - Brands, Roman C. A1 - Hartmann, Stefan A1 - Saravi, Babak T1 - Performance analysis of supervised machine learning algorithms for automatized radiographical classification of maxillary third molar impaction JF - Applied Sciences N2 - Background: Oro-antral communication (OAC) is a common complication following the extraction of upper molar teeth. The Archer and the Root Sinus (RS) systems can be used to classify impacted teeth in panoramic radiographs. The Archer classes B-D and the Root Sinus classes III, IV have been associated with an increased risk of OAC following tooth extraction in the upper molar region. In our previous study, we found that panoramic radiographs are not reliable for predicting OAC. 
This study aimed to (1) determine the feasibility of automating the classification (Archer/RS classes) of impacted teeth from panoramic radiographs, (2) determine the distribution of OAC stratified by classification system classes for the purposes of decision tree construction, and (3) determine the feasibility of automating the prediction of OAC utilizing the mentioned classification systems. Methods: We utilized multiple supervised pre-trained machine learning models (VGG16, ResNet50, Inceptionv3, EfficientNet, MobileNetV2), one custom-made convolutional neural network (CNN) model, and a Bag of Visual Words (BoVW) technique to evaluate their performance in predicting the clinical classification systems RS and Archer from panoramic radiographs (Aim 1). We then used Chi-square Automatic Interaction Detectors (CHAID) to determine the distribution of OAC stratified by the Archer/RS classes to introduce a decision tree for simple use in clinics (Aim 2). Lastly, we tested the ability of a multilayer perceptron artificial neural network (MLP) and a radial basis function neural network (RBNN) to predict OAC based on the high-risk classes RS III, IV, and Archer B-D (Aim 3). Results: We achieved accuracies of up to 0.771 for EfficientNet and MobileNetV2 when examining the Archer classification. For the AUC, we obtained values of up to 0.902 for our custom-made CNN. In comparison, the detection of the RS classification achieved accuracies of up to 0.792 for the BoVW and an AUC of up to 0.716 for our custom-made CNN. Overall, the Archer classification was detected more reliably than the RS classification when considering all algorithms. CHAID achieved a prediction accuracy of 77.4% for the Archer classification and 81.4% for the RS classification. MLP (AUC: 0.590) and RBNN (AUC: 0.590) for the Archer classification as well as MLP (AUC: 0.638) and RBNN (AUC: 0.630) for the RS classification did not show sufficient predictive capability for OAC. 
Conclusions: The results reveal that impacted teeth can be classified using panoramic radiographs (best AUC: 0.902), and the classification systems can be stratified according to their relationship to OAC (81.4% correct for RS classification). However, the Archer and RS classes did not achieve satisfactory AUCs for predicting OAC (best AUC: 0.638). Additional research is needed to validate the results externally and to develop a reliable risk stratification tool based on the present findings. KW - oro-antral communication KW - oro-antral fistula KW - prediction KW - machine learning KW - teeth extraction KW - complications KW - classification KW - artificial intelligence Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-281662 SN - 2076-3417 VL - 12 IS - 13 ER - TY - JOUR A1 - Vollmer, Andreas A1 - Vollmer, Michael A1 - Lang, Gernot A1 - Straub, Anton A1 - Kübler, Alexander A1 - Gubik, Sebastian A1 - Brands, Roman C. A1 - Hartmann, Stefan A1 - Saravi, Babak T1 - Automated assessment of radiographic bone loss in the posterior maxilla utilizing a multi-object detection artificial intelligence algorithm JF - Applied Sciences N2 - Periodontitis is one of the most prevalent diseases worldwide. The degree of radiographic bone loss can be used to assess the course of therapy or the severity of the disease. Since automated bone loss detection has many benefits, our goal was to develop a multi-object detection algorithm based on artificial intelligence that would be able to detect and quantify radiographic bone loss using standard two-dimensional radiographic images in the maxillary posterior region. This study was conducted by combining three recent online databases and validating the results using an external validation dataset from our organization. There were 1414 images for training and testing and 341 for external validation in the final dataset. 
We applied a Keypoint R-CNN with a ResNet-50-FPN backbone network for both bounding box and keypoint detection. The intersection over union (IoU) and the object keypoint similarity (OKS) were used for model evaluation. The evaluation of the bounding box metrics showed a moderate overlap with the ground truth, revealing an average precision of up to 0.758. The average precision and recall over all five folds were 0.694 and 0.611, respectively. Mean average precision and recall for the keypoint detection were 0.632 and 0.579, respectively. Despite only using a small and heterogeneous set of images for training, our results indicate that the algorithm is able to learn the objects of interest, although without sufficient accuracy due to the limited number of images and the large amount of information available in panoramic radiographs. Considering the widespread availability of panoramic radiographs as well as the increasing use of online databases, the presented model can be further improved in the future to facilitate its implementation in clinics. KW - radiographic bone loss KW - alveolar bone loss KW - maxillofacial surgery KW - deep learning KW - classification KW - artificial intelligence KW - object detection Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-305050 SN - 2076-3417 VL - 13 IS - 3 ER - TY - JOUR A1 - Vollmer, Andreas A1 - Saravi, Babak A1 - Vollmer, Michael A1 - Lang, Gernot Michael A1 - Straub, Anton A1 - Brands, Roman C. A1 - Kübler, Alexander A1 - Gubik, Sebastian A1 - Hartmann, Stefan T1 - Artificial intelligence-based prediction of oroantral communication after tooth extraction utilizing preoperative panoramic radiography JF - Diagnostics N2 - Oroantral communication (OAC) is a common complication after tooth extraction of upper molars. Profound preoperative panoramic radiography analysis might potentially help predict OAC following tooth extraction. 
In this exploratory study, we evaluated n = 300 consecutive cases (100 OAC and 200 controls) and trained five machine learning algorithms (VGG16, InceptionV3, MobileNetV2, EfficientNet, and ResNet50) to predict OAC versus non-OAC (binary classification task) from the input images. Further, four oral and maxillofacial experts evaluated the respective panoramic radiographs, and performance metrics (accuracy, area under the curve (AUC), precision, recall, F1-score, and receiver operating characteristics curve) were determined for all diagnostic approaches. Cohen's kappa was used to evaluate the agreement between expert evaluations. The deep learning algorithms reached high specificity (highest specificity 100% for InceptionV3) but low sensitivity (highest sensitivity 42.86% for MobileNetV2). The AUCs from VGG16, InceptionV3, MobileNetV2, EfficientNet, and ResNet50 were 0.53, 0.60, 0.67, 0.51, and 0.56, respectively. Experts 1–4 reached AUCs of 0.550, 0.629, 0.500, and 0.579, respectively. The specificity of the expert evaluations ranged from 51.74% to 95.02%, whereas sensitivity ranged from 14.14% to 59.60%. Cohen's kappa revealed poor agreement between the oral and maxillofacial expert evaluations (Cohen's kappa: 0.1285). Overall, the present data indicate that OAC cannot be sufficiently predicted from preoperative panoramic radiography. The false-negative rate, i.e., the rate of positive cases (OAC) missed by the deep learning algorithms, ranged from 57.14% to 95.24%. Surgeons should not solely rely on panoramic radiography when evaluating the probability of OAC occurrence. Clinical testing for OAC is warranted after each upper-molar tooth extraction. 
KW - artificial intelligence KW - deep learning KW - X-ray KW - tooth extraction KW - oroantral fistula KW - operative planning Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-278814 SN - 2075-4418 VL - 12 IS - 6 ER - TY - JOUR A1 - Vollmer, Andreas A1 - Nagler, Simon A1 - Hörner, Marius A1 - Hartmann, Stefan A1 - Brands, Roman C. A1 - Breitenbücher, Niko A1 - Straub, Anton A1 - Kübler, Alexander A1 - Vollmer, Michael A1 - Gubik, Sebastian A1 - Lang, Gernot A1 - Wollborn, Jakob A1 - Saravi, Babak T1 - Performance of artificial intelligence-based algorithms to predict prolonged length of stay after head and neck cancer surgery JF - Heliyon N2 - Background Medical resource management can be improved by assessing the likelihood of prolonged length of stay (LOS) for head and neck cancer surgery patients. The objective of this study was to develop predictive models that could be used to determine whether a patient's LOS after cancer surgery falls within the normal range of the cohort. Methods We conducted a retrospective analysis of a dataset consisting of 300 consecutive patients who underwent head and neck cancer surgery between 2017 and 2022 at a single university medical center. Prolonged LOS was defined as LOS exceeding the 75th percentile of the cohort. Feature importance analysis was performed to evaluate the most important predictors for prolonged LOS. We then constructed 7 machine learning and deep learning algorithms for the prediction modeling of prolonged LOS. Results The algorithms reached accuracy values of 75.40 (radial basis function neural network) to 97.92 (Random Trees) for the training set and 64.90 (multilayer perceptron neural network) to 84.14 (Random Trees) for the testing set. The leading parameters predicting prolonged LOS were operation time, ischemia time, the graft used, the ASA score, the intensive care stay, and the pathological stages. 
The results revealed that patients who had a higher number of harvested lymph nodes (LNs) had a lower probability of recurrence but also a greater LOS. However, patients with prolonged LOS were also at greater risk of recurrence, particularly when fewer LNs were extracted. Further, LOS was more strongly correlated with the overall number of extracted lymph nodes than with the number of positive lymph nodes or the ratio of positive to overall extracted lymph nodes, indicating that unnecessary lymph node extraction in particular might be associated with prolonged LOS. Conclusions The results emphasize the need for a closer follow-up of patients who experience prolonged LOS. Prospective trials are warranted to validate the present results. KW - prediction KW - head and neck cancer KW - machine learning KW - deep learning KW - artificial intelligence KW - length of stay KW - cancer Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-350416 SN - 2405-8440 VL - 9 IS - 11 ER - TY - CHAP A1 - Sanusi, Khaleel Asyraaf Mat A1 - Klemke, Roland T1 - Immersive Multimodal Environments for Psychomotor Skills Training T2 - Proceedings of the 1st Games Technology Summit N2 - Modern immersive multimodal technologies enable learners to become fully immersed in various learning situations in a way that feels like experiencing an authentic learning environment. These environments also allow the collection of multimodal data, which can be used with artificial intelligence to further improve immersion and learning outcomes. The use of artificial intelligence has been widely explored for the interpretation of multimodal data collected from multiple sensors, thus giving insights to support learners’ performance by providing personalised feedback. In this paper, we present a conceptual approach for creating immersive learning environments, integrated with a multi-sensor setup, to help learners improve their psychomotor skills in a remote setting. 
KW - immersive learning technologies KW - multimodal learning KW - sensor devices KW - artificial intelligence KW - psychomotor training Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-246016 ER - TY - JOUR A1 - Lux, Thomas J. A1 - Banck, Michael A1 - Saßmannshausen, Zita A1 - Troya, Joel A1 - Krenzer, Adrian A1 - Fitting, Daniel A1 - Sudarevic, Boban A1 - Zoller, Wolfram G. A1 - Puppe, Frank A1 - Meining, Alexander A1 - Hann, Alexander T1 - Pilot study of a new freely available computer-aided polyp detection system in clinical practice JF - International Journal of Colorectal Disease N2 - Purpose Computer-aided polyp detection (CADe) systems for colonoscopy have already been shown to increase the adenoma detection rate (ADR) in randomized clinical trials. These commercially available closed systems often do not allow for data collection and algorithm optimization, for example regarding the usage of different endoscopy processors. Here, we present the first clinical experiences with a CADe system that is publicly available for research purposes. Methods We developed an end-to-end data acquisition and polyp detection system named EndoMind. Examiners at four centers utilizing four different endoscopy processors used EndoMind during their clinical routine. Detected polyps, ADR, time to first detection of a polyp (TFD), and system usability were evaluated (NCT05006092). Results During 41 colonoscopies, EndoMind detected 29 of 29 adenomas and 66 of 66 polyps, resulting in an ADR of 41.5%. Median TFD was 130 ms (95%-CI, 80–200 ms) while maintaining a median false positive rate of 2.2% (95%-CI, 1.7–2.8%). The four participating centers rated the system using the System Usability Scale with a median of 96.3 (95%-CI, 70–100). Conclusion EndoMind’s ability to acquire data and detect polyps in real time, together with its high usability score, indicates substantial practical value for research and clinical practice. 
Still, the clinical benefit, measured by ADR, has to be determined in a prospective randomized controlled trial. KW - colonoscopy KW - polyp KW - artificial intelligence KW - deep learning KW - CADe Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-324459 VL - 37 IS - 6 ER - TY - JOUR A1 - Loda, Sophia A1 - Krebs, Jonathan A1 - Danhof, Sophia A1 - Schreder, Martin A1 - Solimando, Antonio G. A1 - Strifler, Susanne A1 - Rasche, Leo A1 - Kortüm, Martin A1 - Kerscher, Alexander A1 - Knop, Stefan A1 - Puppe, Frank A1 - Einsele, Hermann A1 - Bittrich, Max T1 - Exploration of artificial intelligence use with ARIES in multiple myeloma research JF - Journal of Clinical Medicine N2 - Background: Natural language processing (NLP) is a powerful tool supporting the generation of Real-World Evidence (RWE). There is no NLP system that enables the extensive querying of parameters specific to multiple myeloma (MM) from unstructured medical reports. We therefore created an MM-specific ontology to accelerate the information extraction (IE) from unstructured text. Methods: Our MM ontology consists of extensive MM-specific and hierarchically structured attributes and values. We implemented “A Rule-based Information Extraction System” (ARIES) that uses this ontology. We evaluated ARIES on 200 randomly selected medical reports of patients diagnosed with MM. Results: Our system achieved a high F1-score of 0.92 on the evaluation dataset, with a precision of 0.87 and a recall of 0.98. Conclusions: Our rule-based IE system enables the comprehensive querying of medical reports. It accelerates data extraction and enables clinicians to generate RWE on hematological issues faster. RWE helps clinicians to make decisions in an evidence-based manner. Our tool thus facilitates the integration of research evidence into everyday clinical practice. 
KW - natural language processing KW - ontology KW - artificial intelligence KW - multiple myeloma KW - real world evidence Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-197231 SN - 2077-0383 VL - 8 IS - 7 ER - TY - JOUR A1 - Kunz, Felix A1 - Stellzig-Eisenhauer, Angelika A1 - Boldt, Julian T1 - Applications of artificial intelligence in orthodontics — an overview and perspective based on the current state of the art JF - Applied Sciences N2 - Artificial intelligence (AI) has already arrived in many areas of our lives and, because of the increasing availability of computing power, can now be used for complex tasks in medicine and dentistry. This is reflected by an exponential increase in scientific publications aiming to integrate AI into everyday clinical routines. Applications of AI in orthodontics are already manifold and range from the identification of anatomical/pathological structures or reference points in imaging to the support of complex decision-making in orthodontic treatment planning. The aim of this article is to give the reader an overview of the current state of the art regarding applications of AI in orthodontics and to provide a perspective for the use of such AI solutions in clinical routine. For this purpose, we present various use cases for AI in orthodontics for which research is already available. Considering the current scientific progress, it is not unreasonable to assume that AI will become an integral part of orthodontic diagnostics and treatment planning in the near future. Although AI will likely not be able to replace the knowledge and experience of human experts in the foreseeable future, it will probably be able to support practitioners, thus serving as a quality-assuring component in orthodontic patient care. 
KW - orthodontics KW - artificial intelligence KW - machine learning KW - deep learning KW - cephalometry KW - age determination by skeleton KW - tooth extraction KW - orthognathic surgery Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-310940 SN - 2076-3417 VL - 13 IS - 6 ER - TY - JOUR A1 - Koshino, Kazuhiro A1 - Werner, Rudolf A. A1 - Toriumi, Fujio A1 - Javadi, Mehrbod S. A1 - Pomper, Martin G. A1 - Solnes, Lilja B. A1 - Verde, Franco A1 - Higuchi, Takahiro A1 - Rowe, Steven P. T1 - Generative Adversarial Networks for the Creation of Realistic Artificial Brain Magnetic Resonance Images JF - Tomography N2 - Even as medical data sets become more publicly accessible, most are restricted to specific medical conditions. Thus, data collection for machine learning approaches remains challenging, and synthetic data augmentation, such as generative adversarial networks (GAN), may overcome this hurdle. In the present quality control study, deep convolutional GAN (DCGAN)-based human brain magnetic resonance (MR) images were validated by blinded radiologists. In total, 96 T1-weighted brain images from 30 healthy individuals and 33 patients with cerebrovascular accident were included. A training data set was generated from the T1-weighted images, and DCGAN was applied to generate additional artificial brain images. The likelihood that images were DCGAN-created versus acquired was evaluated by 5 radiologists (2 neuroradiologists [NRs] vs 3 non-neuroradiologists [NNRs]) in a binary fashion to identify real vs created images. Images were selected randomly from the data set (variation of created images, 40%-60%). None of the investigated images was rated as unknown. Of the created images, the NRs rated 45% and 71% as real magnetic resonance imaging images (NNRs, 24%, 40%, and 44%). In contradistinction, 44% and 70% of the real images were rated as generated images by NRs (NNRs, 10%, 17%, and 27%). 
The accuracy for the NRs was 0.55 and 0.30 (NNRs, 0.83, 0.72, and 0.64). DCGAN-created brain MR images are similar enough to acquired MR images so as to be indistinguishable in some cases. Such an artificial intelligence algorithm may contribute to synthetic data augmentation for "data-hungry" technologies, such as supervised machine learning approaches, in various clinical applications. KW - AI KW - Magnetresonanztomografie KW - artificial intelligence KW - magnetic resonance imaging KW - MRI KW - DCGAN KW - GAN KW - stroke KW - machine learning Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-172185 VL - 4 IS - 4 ER - TY - JOUR A1 - Janiesch, Christian A1 - Zschech, Patrick A1 - Heinrich, Kai T1 - Machine learning and deep learning JF - Electronic Markets N2 - Today, intelligent systems that offer artificial intelligence capabilities often rely on machine learning. Machine learning describes the capacity of systems to learn from problem-specific training data to automate the process of analytical model building and solve associated tasks. Deep learning is a machine learning concept based on artificial neural networks. For many applications, deep learning models outperform shallow machine learning models and traditional data analysis approaches. In this article, we summarize the fundamentals of machine learning and deep learning to generate a broader understanding of the methodical underpinning of current intelligent systems. In particular, we provide a conceptual distinction between relevant terms and concepts, explain the process of automated analytical model building through machine learning and deep learning, and discuss the challenges that arise when implementing such intelligent systems in the field of electronic markets and networked business. These naturally go beyond technological aspects and highlight issues in human-machine interaction and artificial intelligence servitization. 
KW - analytical model building KW - machine learning KW - deep learning KW - artificial intelligence KW - artificial neural networks Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-270155 SN - 1422-8890 VL - 31 IS - 3 ER - TY - THES A1 - Höser, Thorsten T1 - Global Dynamics of the Offshore Wind Energy Sector Derived from Earth Observation Data - Deep Learning Based Object Detection Optimised with Synthetic Training Data for Offshore Wind Energy Infrastructure Extraction from Sentinel-1 Imagery T1 - Globale Dynamik des Offshore-Windenergiesektors abgeleitet aus Erdbeobachtungsdaten - Deep Learning-basierte Objekterkennung, optimiert mit synthetischen Trainingsdaten für die Extraktion von Offshore-Windenergieinfrastrukturen aus Sentinel-1 Bildern N2 - The expansion of renewable energies is being driven by the gradual phaseout of fossil fuels in order to reduce greenhouse gas emissions, the steadily increasing demand for energy and, more recently, by geopolitical events. The offshore wind energy sector is on the verge of a massive expansion in Europe, the United Kingdom, China, but also in the USA, South Korea and Vietnam. Accordingly, the largest marine infrastructure projects to date will be carried out in the upcoming decades, with thousands of offshore wind turbines being installed. In order to accompany this process globally and to provide a database for research, development and monitoring, this dissertation presents a deep learning-based approach for object detection that enables the derivation of spatiotemporal developments of offshore wind energy infrastructures from satellite-based radar data of the Sentinel-1 mission. For training the deep learning models for offshore wind energy infrastructure detection, an approach is presented that makes it possible to synthetically generate remote sensing data and the necessary annotation for the supervised deep learning process. 
In this synthetic data generation process, expert knowledge about image content and sensor acquisition techniques is made machine-readable. Finally, extensive and highly variable training data sets are generated from this knowledge representation, with which deep learning models can learn to detect objects in real-world satellite data. The method for the synthetic generation of training data based on expert knowledge offers great potential for deep learning in Earth observation. Applications of deep-learning-based methods can be developed and tested faster with this procedure. Furthermore, the synthetically generated and thus controllable training data offer the possibility to interpret the learning process of the optimised deep learning models. The method developed in this dissertation to create synthetic remote sensing training data was finally used to optimise deep learning models for the global detection of offshore wind energy infrastructure. For this purpose, images of the entire global coastline from ESA's Sentinel-1 radar mission were evaluated. The derived data set comprises 9,941 objects, distinguishing offshore wind turbines, transformer stations, and offshore wind energy infrastructure under construction from each other. In addition to this spatial detection, a quarterly time series from July 2016 to June 2021 was derived for all objects. This time series reveals the start of construction, the construction phase, and the time of completion with subsequent operation for each object. The derived offshore wind energy infrastructure data set provides the basis for an analysis of the development of the offshore wind energy sector from July 2016 to June 2021. For this analysis, further attributes of the detected offshore wind turbines were derived. The most important of these are the height and installed capacity of a turbine. 
The turbine height was calculated by a radargrammetric analysis of the previously detected Sentinel-1 signal and then used to statistically model the installed capacity. The results show that in June 2021, 8,885 offshore wind turbines with a total capacity of 40.6 GW were installed worldwide. The largest installed capacities are in the EU (15.2 GW), China (14.1 GW), and the United Kingdom (10.7 GW). From July 2016 to June 2021, China installed 13 GW of offshore wind energy capacity. The EU installed 8 GW and the UK 5.8 GW of offshore wind energy infrastructure in the same period. This temporal analysis shows that China was the main driver of the expansion of the offshore wind energy sector in the period under investigation. The derived data set for the description of the offshore wind energy sector was made publicly available. It is thus freely accessible to all decision-makers and stakeholders involved in the development of offshore wind energy projects. Especially in the scientific context, it serves as a database that enables a wide range of investigations. Research questions regarding offshore wind turbines themselves as well as the influence of the expansion in the coming decades can be investigated. This supports the imminent and urgently needed expansion of offshore wind energy and promotes sustainable growth alongside the expansion targets that have been set. N2 - Der Ausbau erneuerbarer Energien wird durch den sukzessiven Verzicht auf fossile Energieträger zur Reduktion der Treibhausgasemissionen, dem stetig steigenden Energiebedarf sowie, in jüngster Zeit, von geopolitischen Ereignissen stark vorangetrieben. Der offshore Windenergiesektor steht in Europa, dem Vereinigten Königreich, China, aber auch den USA, Süd-Korea und Vietnam vor einer massiven Expansion. In den nächsten Dekaden werden die bislang größten marinen Infrastrukturprojekte mit tausenden neu installierten offshore Windturbinen realisiert. 
To accompany this process globally and to provide a data basis for research, for decision-makers, and for continuous monitoring, this dissertation presents a deep-learning-based approach for detecting offshore wind turbines in satellite radar data from the Sentinel-1 mission. For the supervised training of the deep learning models used for object detection, an approach is presented that makes it possible to generate remote sensing data and the necessary labels synthetically. Expert knowledge about the image content, such as offshore wind turbines as well as their natural surroundings, such as coastlines or other infrastructure, is structured and made machine-readable together with information about the sensor. From this knowledge representation, extensive and highly variable training data are finally generated, with which deep learning models can learn to detect objects in satellite data. The procedure for the synthetic generation of training data based on expert knowledge offers great potential for deep learning in Earth observation. Deep learning approaches can thereby be developed and tested faster. Furthermore, the synthetically generated and thus controllable training data offer the possibility of interpreting the learning process of the optimised deep learning models. The procedure for creating synthetic training data developed in this dissertation for remote sensing data was finally used to optimise deep learning models for the global detection of offshore wind energy installations. For this purpose, images of the entire global coastline from ESA's Sentinel-1 mission were evaluated. The derived data set, which comprises 9,941 objects, distinguishes between offshore wind turbines, transformer stations, and offshore wind energy infrastructure under construction. 
In addition to this spatial detection, a quarterly time series from July 2016 to June 2021 was generated for all objects. This time series shows the start of construction, the construction phase, and the time of completion with subsequent operation for each object. The derived data set furthermore serves as the basis for an analysis of the development of the offshore wind energy sector from July 2016 to June 2021. For this analysis, further attributes of the turbines were derived. The turbine height was calculated in a radargrammetric procedure and then used to statistically model the installed capacity. The results show that in June 2021, 8,885 offshore wind turbines with a total installed capacity of 40.6 GW were installed worldwide. The largest installed capacities are found in the EU (15.2 GW), China (14.1 GW), and the United Kingdom (10.7 GW). From July 2016 to June 2021, China added 13 GW of installed capacity. In the same period, the EU installed 8 GW and the United Kingdom 5.8 GW of offshore wind energy infrastructure. This temporal analysis makes clear that China was the main driver of the expansion of the offshore wind energy sector in the period under investigation. The derived data set describing the offshore wind energy sector was made publicly available. It is thus freely accessible to all decision-makers and stakeholders involved in the expansion of offshore wind energy installations. Especially in the scientific context, it serves as a database that enables a wide range of investigations. Research questions concerning the offshore wind energy installations themselves as well as the impact of the expansion in the coming decades can be examined. This supports the imminent and urgently needed expansion of offshore wind energy and, beyond the targets that have been set, promotes its sustainable growth. 
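The dissertation record above describes deriving turbine height radargrammetrically and then statistically modeling installed capacity from that height. The abstract does not specify the model form; as a purely illustrative sketch, assuming a power-law relationship between hub height and capacity (all numbers below are synthetic stand-ins, not values from the thesis), such a height-to-capacity regression could look like:

```python
import numpy as np

# Hypothetical example: relate turbine hub height (m) to installed
# capacity (MW) with a power law, capacity = a * height^b.
# The data below are synthetic stand-ins, NOT values from the thesis.
heights = np.array([80.0, 90.0, 100.0, 110.0, 120.0, 135.0, 150.0])
capacity_mw = 2.0e-4 * heights ** 2.0  # synthetic ground truth with b = 2

# Fit in log-log space: log(capacity) = log(a) + b * log(height)
b, log_a = np.polyfit(np.log(heights), np.log(capacity_mw), 1)
a = np.exp(log_a)

def predict_capacity(height_m: float) -> float:
    """Predict installed capacity (MW) from hub height (m)."""
    return a * height_m ** b
```

With height estimates for all detected turbines, aggregating such per-turbine predictions by country would yield capacity totals of the kind reported in the abstract.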
KW - deep learning KW - offshore wind energy KW - artificial intelligence KW - earth observation KW - remote sensing Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-292857 ER - TY - JOUR A1 - Hoeser, Thorsten A1 - Kuenzer, Claudia T1 - Object detection and image segmentation with deep learning on Earth observation data: a review-part I: evolution and recent trends JF - Remote Sensing N2 - Deep learning (DL) has great influence on large parts of science and has increasingly established itself as an adaptive method for new challenges in the field of Earth observation (EO). Nevertheless, the entry barriers for EO researchers are high due to the dense and rapidly developing field, which is mainly driven by advances in computer vision (CV). To lower the barriers for researchers in EO, this review gives an overview of the evolution of DL with a focus on image segmentation and object detection in convolutional neural networks (CNN). The survey starts in 2012, when a CNN set new standards in image recognition, and extends to late 2019. In doing so, we highlight the connections between the most important CNN architectures and cornerstones from CV in order to facilitate the evaluation of modern DL models. Furthermore, we briefly outline the evolution of the most popular DL frameworks and provide a summary of datasets in EO. By discussing well-performing DL architectures on these datasets as well as reflecting on advances made in CV and their impact on future research in EO, we narrow the gap between the reviewed, theoretical concepts from CV and practical application in EO. 
KW - artificial intelligence KW - AI KW - machine learning KW - deep learning KW - neural networks KW - convolutional neural networks KW - CNN KW - image segmentation KW - object detection KW - Earth observation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-205918 SN - 2072-4292 VL - 12 IS - 10 ER - TY - JOUR A1 - Hoeser, Thorsten A1 - Bachofer, Felix A1 - Kuenzer, Claudia T1 - Object detection and image segmentation with deep learning on Earth Observation data: a review — part II: applications JF - Remote Sensing N2 - In Earth observation (EO), large-scale land-surface dynamics are traditionally analyzed by investigating aggregated classes. The increase in data with a very high spatial resolution enables investigations on a fine-grained feature level which can help us to better understand the dynamics of land surfaces by taking object dynamics into account. To extract fine-grained features and objects, the most popular deep-learning model for image analysis is commonly used: the convolutional neural network (CNN). In this review, we provide a comprehensive overview of the impact of deep learning on EO applications by reviewing 429 studies on image segmentation and object detection with CNNs. We extensively examine the spatial distribution of study sites, employed sensors, used datasets and CNN architectures, and give a thorough overview of applications in EO which used CNNs. Our main finding is that CNNs are in an advanced transition phase from computer vision to EO. Upon this, we argue that in the near future, investigations which analyze object dynamics with CNNs will have a significant impact on EO research. With a focus on EO applications in this Part II, we complete the methodological review provided in Part I. 
KW - artificial intelligence KW - AI KW - machine learning KW - deep learning KW - neural networks KW - convolutional neural networks KW - CNN KW - image segmentation KW - object detection KW - earth observation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-213152 SN - 2072-4292 VL - 12 IS - 18 ER - TY - JOUR A1 - Herm, Lukas-Valentin A1 - Steinbach, Theresa A1 - Wanner, Jonas A1 - Janiesch, Christian T1 - A nascent design theory for explainable intelligent systems JF - Electronic Markets N2 - Due to computational advances in the past decades, so-called intelligent systems can learn from increasingly complex data, analyze situations, and support users in their decision-making to address them. However, in practice, the complexity of these intelligent systems renders the user hardly able to comprehend the inherent decision logic of the underlying machine learning model. As a result, the adoption of this technology, especially for high-stake scenarios, is hampered. In this context, explainable artificial intelligence offers numerous starting points for making the inherent logic explainable to people. While research manifests the necessity for incorporating explainable artificial intelligence into intelligent systems, there is still a lack of knowledge about how to socio-technically design these systems to address acceptance barriers among different user groups. In response, we have derived and evaluated a nascent design theory for explainable intelligent systems based on a structured literature review, two qualitative expert studies, a real-world use case application, and quantitative research. Our design theory includes design requirements, design principles, and design features covering the topics of global explainability, local explainability, personalized interface design, as well as psychological/emotional factors. 
KW - artificial intelligence KW - explainable artificial intelligence KW - XAI KW - design science research KW - design theory KW - intelligent systems Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-323809 SN - 1019-6781 VL - 32 IS - 4 ER - TY - JOUR A1 - Henckert, David A1 - Malorgio, Amos A1 - Schweiger, Giovanna A1 - Raimann, Florian J. A1 - Piekarski, Florian A1 - Zacharowski, Kai A1 - Hottenrott, Sebastian A1 - Meybohm, Patrick A1 - Tscholl, David W. A1 - Spahn, Donat R. A1 - Roche, Tadzio R. T1 - Attitudes of anesthesiologists toward artificial intelligence in anesthesia: a multicenter, mixed qualitative–quantitative study JF - Journal of Clinical Medicine N2 - Artificial intelligence (AI) is predicted to play an increasingly important role in perioperative medicine in the very near future. However, little is known about what anesthesiologists know and think about AI in this context. This is important because the successful introduction of new technologies depends on the understanding and cooperation of end users. We sought to investigate how much anesthesiologists know about AI and what they think about the introduction of AI-based technologies into the clinical setting. In order to better understand what anesthesiologists think of AI, we recruited 21 anesthesiologists from 2 university hospitals for face-to-face structured interviews. The interview transcripts were subdivided sentence-by-sentence into discrete statements, and statements were then grouped into key themes. Subsequently, a survey of closed questions based on these themes was sent to 70 anesthesiologists from 3 university hospitals for rating. In the interviews, the base level of knowledge of AI was good at 86 of 90 statements (96%), although awareness of the potential applications of AI in anesthesia was poor at only 7 of 42 statements (17%). 
Regarding the implementation of AI in anesthesia, statements were split roughly evenly between pros (46 of 105, 44%) and cons (59 of 105, 56%). Interviewees considered that AI could usefully be used in diverse tasks such as risk stratification, the prediction of vital sign changes, or as a treatment guide. The validity of these themes was probed in a follow-up survey of 70 anesthesiologists with a response rate of 70%, which confirmed an overall positive view of AI in this group. Anesthesiologists hold a range of opinions, both positive and negative, regarding the application of AI in their field of work. Survey-based studies do not always uncover the full breadth of nuance of opinion amongst clinicians. Engagement with specific concerns, both technical and ethical, will prove important as this technology moves from research to the clinic. KW - artificial intelligence KW - machine learning KW - anesthesia KW - anesthesiology KW - qualitative research KW - clinical decision support Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-311189 SN - 2077-0383 VL - 12 IS - 6 ER - TY - THES A1 - Griebel, Matthias T1 - Applied Deep Learning: from Data to Deployment T1 - Deep Learning in der Praxis: von der Datenerhebung bis zum Einsatz N2 - Novel deep learning (DL) architectures, better data availability, and a significant increase in computing power have enabled scientists to solve problems that were considered unassailable for many years. A case in point is the “protein folding problem“, a 50-year-old grand challenge in biology that was recently solved by the DL-system AlphaFold. Other examples comprise the development of large DL-based language models that, for instance, generate newspaper articles that hardly differ from those written by humans. 
However, developing unbiased, reliable, and accurate DL models for various practical applications remains a major challenge - and many promising DL projects get stuck in the piloting stage, never to be completed. In light of these observations, this thesis investigates the practical challenges encountered throughout the life cycle of DL projects and proposes solutions to develop and deploy rigorous DL models. The first part of the thesis is concerned with prototyping DL solutions in different domains. First, we conceptualize guidelines for applied image recognition and showcase their application in a biomedical research project. Next, we illustrate the bottom-up development of a DL backend for an augmented intelligence system in the manufacturing sector. We then turn to the fashion domain and present an artificial curation system for individual fashion outfit recommendations that leverages DL techniques and unstructured data from social media and fashion blogs. After that, we showcase how DL solutions can assist fashion designers in the creative process. Finally, we present our award-winning DL solution for the segmentation of glomeruli in human kidney tissue images that was developed for the Kaggle data science competition HuBMAP - Hacking the Kidney. The second part continues the development path of the biomedical research project beyond the prototyping stage. Using data from five laboratories, we show that ground truth estimation from multiple human annotators and training of DL model ensembles help to establish objectivity, reliability, and validity in DL-based bioimage analyses. In the third part, we present deepflash2, a DL solution that addresses the typical challenges encountered during training, evaluation, and application of DL models in bioimaging. The tool facilitates the objective and reliable segmentation of ambiguous bioimages through multi-expert annotations and integrated quality assurance. 
It is embedded in an easy-to-use graphical user interface and offers best-in-class predictive performance for semantic and instance segmentation under economical usage of computational resources. N2 - The development of new deep learning (DL) architectures, flanked by better data availability and an enormous increase in computing power, enables scientists to solve problems that were long considered unsolvable. A prime example of this is the 50-year-old “protein folding problem” in biology, which was recently solved by the DL system AlphaFold. Other examples are modern DL-based language models. Among other things, these can write newspaper articles that are hard to distinguish from articles by human authors. However, developing unbiased, reliable, and precise DL models for practical applications remains a major challenge. This becomes apparent in the numerous promising DL projects that never make it beyond the pilot phase. Against this background, this dissertation investigates the challenges that arise during the life cycle of DL projects and proposes solutions for the development and deployment of reliable DL models. The first part of the thesis deals with prototyping DL solutions for various fields of application. First, guidelines for applied image recognition are conceived and their application is demonstrated in a biomedical research project. This is followed by the presentation of the bottom-up development of a DL backend for an augmented intelligence system in the manufacturing sector. Subsequently, the design of an artificial fashion curation system for individual outfit recommendations is presented, which uses DL techniques and unstructured data from social media and fashion blogs. A section follows on how DL solutions can support fashion designers in the creative process. 
Finally, I present my award-winning DL solution for the segmentation of glomeruli in images of human kidney tissue, which was developed for the Kaggle data science competition HuBMAP - Hacking the Kidney. The second part continues the development path of the biomedical research project beyond the prototyping stage. Using data from five laboratories, it is shown that estimating a ground truth from the annotations of multiple experts and training ensembles of DL models help to establish objectivity, reliability, and validity in DL-based analyses of microscopy images. In the third part of the dissertation, I present the DL solution deepflash2, which addresses the typical challenges in the training, evaluation, and application of DL models in biological imaging. The tool facilitates the objective and reliable segmentation of ambiguous microscopy images through multi-expert annotations and integrated quality assurance. KW - artificial intelligence KW - deep learning KW - bioimage analysis Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-277650 ER - TY - CHAP A1 - Davies, Richard A1 - Dewell, Nathan A1 - Harvey, Carlo T1 - A framework for interactive, autonomous and semantic dialogue generation in games T2 - Proceedings of the 1st Games Technology Summit N2 - Immersive virtual environments provide users with the opportunity to escape from the real world, but scripted dialogues can disrupt the sense of presence within the world into which the user is trying to escape. Both Non-Playable Character (NPC) to Player and NPC to NPC dialogue can feel unnatural, and the reliance on pre-defined dialogue does not always meet the player's emotional expectations or provide responses appropriate to the given context or world states. 
This paper investigates the application of Artificial Intelligence (AI) and Natural Language Processing to generate dynamic, human-like responses within a themed virtual world. Each theme has been analysed against human-generated responses for the same seed and demonstrates invariance of rating across a range of model sizes, but shows an effect of theme and of the size of the corpus used for fine-tuning the context for the game world. KW - natural language processing KW - interactive authoring system KW - semantic understanding KW - artificial intelligence Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-246023 ER - TY - JOUR A1 - Davidson, Padraig A1 - Düking, Peter A1 - Zinner, Christoph A1 - Sperlich, Billy A1 - Hotho, Andreas T1 - Smartwatch-Derived Data and Machine Learning Algorithms Estimate Classes of Ratings of Perceived Exertion in Runners: A Pilot Study JF - Sensors N2 - The rating of perceived exertion (RPE) is a subjective load marker and may assist in individualizing training prescription, particularly by adjusting running intensity. Unfortunately, RPE has shortcomings (e.g., underreporting) and cannot be monitored continuously and automatically throughout a training session. In this pilot study, we aimed to predict two classes of RPE (≤15, “somewhat hard to hard,” on Borg’s 6–20 scale vs. RPE >15) in runners by analyzing data recorded by a commercially available smartwatch with machine learning algorithms. Twelve trained and untrained runners performed long continuous runs at a constant self-selected pace to volitional exhaustion. Untrained runners reported their RPE each kilometer, whereas trained runners reported every five kilometers. The kinetics of heart rate, step cadence, and running velocity were recorded continuously (1 Hz) with a commercially available smartwatch (Polar V800). 
We trained different machine learning algorithms to estimate the two classes of RPE based on the time series sensor data derived from the smartwatch. Predictions were analyzed in different settings: accuracy overall and per runner type, i.e., accuracy for trained and untrained runners independently. We achieved top accuracies of 84.8% for the whole dataset, 81.8% for the trained runners, and 86.1% for the untrained runners. We can thus predict two classes of RPE with high accuracy using machine learning and smartwatch data. This approach might aid in individualizing training prescriptions. KW - artificial intelligence KW - endurance KW - exercise intensity KW - precision training KW - prediction KW - wearable Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-205686 SN - 1424-8220 VL - 20 IS - 9 ER - TY - CHAP ED - von Mammen, Sebastian ED - Klemke, Roland ED - Lorber, Martin T1 - Proceedings of the 1st Games Technology Summit BT - part of Clash of Realities 11th International Conference on the Technology and Theory of Digital Games N2 - As part of the Clash of Realities International Conference on the Technology and Theory of Digital Games, the Games Technology Summit is a premium venue that brings together experts from academia and industry to disseminate state-of-the-art research on trending technology topics in digital games. In this first iteration of the Games Technology Summit, we specifically paid attention to how the successes of AI in natural user interfaces have been impacting the games industry (industry track) and which scientific, state-of-the-art ideas and approaches are currently being pursued (scientific track). KW - Veranstaltung KW - Künstliche Intelligenz KW - Mensch-Maschine-Kommunikation KW - Computerspiel KW - natural user interfaces KW - artificial intelligence Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-245776 SN - 978-3-945459-36-2 ER -
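The Davidson et al. record above describes training classifiers on smartwatch-derived features (heart rate, step cadence, running velocity) to separate RPE ≤15 from RPE >15. As a minimal, purely illustrative sketch of such a binary classifier — the features, labeling rule, and data below are synthetic assumptions, not the study's data — a dependency-free k-nearest-neighbors baseline could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-kilometer smartwatch features:
# mean heart rate (bpm), step cadence (steps/min), velocity (m/s).
n = 400
X = np.column_stack([
    rng.normal(150, 15, n),   # heart rate
    rng.normal(170, 10, n),   # step cadence
    rng.normal(3.0, 0.5, n),  # running velocity
])
y = (X[:, 0] > 155).astype(int)  # toy label: "RPE > 15" at high heart rate

# Standardize features so distances are comparable across units.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
X_train, y_train = Xz[:300], y[:300]
X_test, y_test = Xz[300:], y[300:]

def knn_predict(x, k=5):
    """Majority vote among the k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return int(nearest.sum() * 2 > k)

preds = np.array([knn_predict(x) for x in X_test])
accuracy = (preds == y_test).mean()
```

The study evaluated several algorithm families and reported accuracy overall and per runner group; the same train/test split pattern above would simply be repeated per subgroup.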