TY - THES
A1 - Lehrieder, Frank
T1 - Performance Evaluation and Optimization of Content Distribution using Overlay Networks
T1 - Leistungsbewertung und Optimierung von Overlay Netzwerken zum Verteilen großer Datenmengen
N2 - The work presents a performance evaluation and optimization of so-called overlay networks for content distribution in the Internet. Chapter 1 describes the importance of such networks in today's Internet, for example for the transmission of video content. The focus of this work is on overlay networks based on the peer-to-peer principle. These are characterized by the fact that users who download content also contribute to the distribution process by sharing parts of the data with other users. This enables efficient content distribution because each user not only consumes resources in the system, but also contributes resources of their own. Chapter 2 of the monograph contains a detailed description of the functionality of today's most popular overlay network, BitTorrent. It explains the various components and their interaction. This is followed by an illustration of why such overlay networks are problematic for Internet service providers (ISPs). The reason lies in the large amount of inter-ISP traffic that these overlay networks produce. Since this inter-ISP traffic leads to high costs for ISPs, they try to reduce it through improved mechanisms for overlay networks. One optimization approach is the use of topology awareness within the overlay networks. It provides users of the overlay networks with information about the underlying physical network topology. This allows them to avoid inter-ISP traffic by exchanging data preferentially with other users that are connected to the same ISP. Another approach to saving inter-ISP traffic is caching. In this case, the ISP provides additional computers in its network, called caches, which store copies of popular content. The users of this ISP can then obtain such content from the cache. This prevents the content from having to be retrieved from locations outside the ISP's network and thus saves costly inter-ISP traffic. In the third chapter of the thesis, the results of a comprehensive measurement study of the overlay networks found in today's Internet are presented. After a short description of the measurement methodology, the results of the measurements are described. These results contain data on a variety of characteristics of current P2P overlay networks in the Internet. These include the popularity of content, i.e., how many users are interested in specific content, the evolution of this popularity over time, and the size of the files. The distribution of users within the Internet is investigated in detail. Special attention is given to the number of users that simultaneously exchange a particular file within the network of the same ISP. On the basis of these measurement results, an estimate of the traffic savings that can be achieved by topology awareness is derived. This new estimate is of scientific and practical importance, since it is not limited to individual ISPs and files, but considers the whole Internet and the total amount of data exchanged in overlay networks. Finally, the characteristics of regional content, whose popularity is limited to certain parts of the Internet, are considered. This is the case, for example, for videos in German, Italian, or French. Chapter 4 of the thesis is devoted to the optimization of overlay networks for content distribution through caching.
It presents a deterministic flow model that describes the influence of caches. On the basis of this model, an estimate is derived of the inter-ISP traffic that is generated by an overlay network and of the share that can be saved by caches. The results show that the influence of caches depends on the structure of the overlay networks, and that caches can also lead to an increase in inter-ISP traffic under certain circumstances. The described model is thus an important tool for ISPs to decide for which overlay networks caches are useful and to dimension them correctly. Chapter 5 summarizes the content of the work and emphasizes the importance of the findings. In addition, it explains how the findings can be applied to the optimization of future overlay networks. Special attention is given to the growing importance of video-on-demand and real-time video transmission.
N2 - The thesis deals with the performance evaluation and optimization of so-called overlay networks for distributing large volumes of data in the Internet. Chapter 1 explains the great importance that such networks have in today's Internet, for example for the transmission of video content. The thesis focuses on overlay networks that are based on the peer-to-peer principle. These are characterized by the fact that users who download content also take part in the distribution process at the same time by passing parts of the data on to other users. This enables efficient distribution of the data, because each user not only occupies resources in the system but also contributes resources of their own. Chapter 2 of the thesis contains a detailed description of how today's most popular overlay network, BitTorrent, works. The individual components and their interplay are explained. This is followed by an account of why such overlay networks are problematic for Internet service providers (ISPs). The reason lies in the large amount of inter-ISP traffic that these overlays generate. Since such inter-ISP traffic leads to high costs for ISPs, they try to reduce it by optimizing the mechanisms of the overlay networks. One optimization approach is the use of topology awareness within the overlay networks. Here, the users of the overlay networks receive information about the underlying physical network topology. This information enables them to avoid inter-ISP traffic by preferentially exchanging data with other users that are connected to the same ISP. Another approach to saving inter-ISP traffic is caching. Here, the ISP provides additional computers in its network, so-called caches, which store copies of popular content. The users of this ISP can then obtain such content from the caches. This prevents popular content from having to be retrieved repeatedly from outside the ISP's network and thus saves costly inter-ISP traffic. The third chapter of the thesis presents the results of a comprehensive measurement study of the overlay networks encountered in today's Internet. After a brief account of the measurement methodology, the results of the measurements are described. These results contain data on a variety of properties of today's P2P-based overlay networks in the Internet.
These include the popularity of content, i.e., how many users are interested in specific content, the evolution of this popularity over time, and the size of the files. The distribution of users across the Internet is also analyzed in detail. Special attention is paid to the number of users that simultaneously exchange a particular file within the network of the same ISP. On the basis of these measurement results, an estimate is derived of the savings potential offered by the optimization of overlay networks through topology awareness. This novel estimate is of scientific and practical importance, since it is not limited to individual ISPs and files but covers the entire Internet and the totality of the files available in overlay networks. Finally, the characteristics of regional content, whose popularity is limited to certain parts of the Internet, are considered. This is the case, for example, for videos in German, Italian, or French. Chapter 4 of the thesis is devoted to the optimization of overlay networks for distributing large volumes of data through caching. A deterministic flow model that describes the influence of caches is developed. On the basis of this model, an estimate is derived of the inter-ISP traffic that an overlay network generates and of the share that can be saved by caches. The results show that the influence of caches depends on the structure of the overlay networks and that caches can, under certain circumstances, even lead to increased inter-ISP traffic. The model described is thus an important tool for ISPs to decide for which overlay networks caches are useful, and then to dimension them correctly. Chapter 5 summarizes the content of the thesis and highlights the importance of the findings obtained. Finally, it is explained how the results described in the thesis will provide an important foundation for the optimization of future overlay networks. Particular attention is paid to the growing importance of video-on-demand and real-time video transmission.
T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/13
KW - Leistungsbewertung
KW - Verteiltes System
KW - Overlay-Netz
KW - Overlay Netzwerke
KW - Overlay networks
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-76018
ER -
TY - RPRT
A1 - Le, Duy Thanh
A1 - Großmann, Marcel
A1 - Krieger, Udo R.
T1 - Cloudless Resource Monitoring in a Fog Computing System Enabled by an SDN/NFV Infrastructure
T2 - Würzburg Workshop on Next-Generation Communication Networks (WueWoWas'22)
N2 - Today's advanced Internet-of-Things applications raise technical challenges on cloud, edge, and fog computing. The design of an efficient, virtualized, context-aware, self-configuring orchestration system for a fog computing system constitutes a major development effort within this very innovative area of research. In this paper we describe the architecture and relevant implementation aspects of a cloudless resource monitoring system interworking with an SDN/NFV infrastructure. It realizes the basic monitoring component of the fundamental MAPE-K principles employed in autonomic computing.
Here we present the hierarchical layering and functionality within the underlying fog nodes to generate a working prototype of an intelligent, self-managed orchestrator for advanced IoT applications and services. The latter system has the capability to automatically monitor various performance aspects of the resource allocation among multiple hosts of a fog computing system interconnected by SDN.
KW - Datennetz
KW - fog computing
KW - SDN/NFV
KW - container virtualization
KW - autonomic orchestration
KW - docker
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-280723
ER -
TY - JOUR
A1 - Latoschik, Marc Erich
A1 - Wienrich, Carolin
T1 - Congruence and plausibility, not presence: pivotal conditions for XR experiences and effects, a novel approach
JF - Frontiers in Virtual Reality
N2 - Presence is often considered the most important quale describing the subjective feeling of being in a computer-generated and/or computer-mediated virtual environment. The identification and separation of orthogonal presence components, i.e., the place illusion and the plausibility illusion, has been an accepted theoretical model describing Virtual Reality (VR) experiences for some time. This perspective article challenges this presence-oriented VR theory. First, we argue that a place illusion cannot be the major construct to describe the much wider scope of virtual, augmented, and mixed reality (VR, AR, MR: or XR for short). Second, we argue that there is no plausibility illusion but merely plausibility, and we derive the place illusion caused by the congruent and plausible generation of spatial cues, and similarly for all the current model's so-defined illusions. Finally, we propose congruence and plausibility to become the central essential conditions in a novel theoretical model describing XR experiences and effects.
KW - XR
KW - experience
KW - presence
KW - congruence
KW - plausibility
KW - coherence
KW - theory
KW - prediction
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-284787
SN - 2673-4192
VL - 3
ER -
TY - JOUR
A1 - Landeck, Maximilian
A1 - Alvarez Igarzábal, Federico
A1 - Unruh, Fabian
A1 - Habenicht, Hannah
A1 - Khoshnoud, Shiva
A1 - Wittmann, Marc
A1 - Lugrin, Jean-Luc
A1 - Latoschik, Marc Erich
T1 - Journey through a virtual tunnel: Simulated motion and its effects on the experience of time
JF - Frontiers in Virtual Reality
N2 - This paper examines the relationship between time and motion perception in virtual environments. Previous work has shown that the perception of motion can affect the perception of time. We developed a virtual environment that simulates motion in a tunnel and measured its effects on the estimation of the duration of time, the speed at which perceived time passes, and the illusion of self-motion, also known as vection. When large areas of the visual field move in the same direction, vection can occur; observers often perceive this as self-motion rather than motion of the environment. To generate different levels of vection and investigate its effects on time perception, we developed an abstract procedural tunnel generator. The generator can simulate different speeds and densities of tunnel sections (visibly distinguishable sections that form the virtual tunnel), as well as the degree of embodiment of the user avatar (with or without virtual hands). We exposed participants to various tunnel simulations with different durations, speeds, and densities in a remote desktop study and a virtual reality (VR) laboratory study.
Time passed subjectively faster under high-speed and high-density conditions in both studies. The experience of self-motion was also stronger under high-speed and high-density conditions. Both studies revealed a significant correlation between the perceived passage of time and perceived self-motion. Subjects in the virtual reality study reported a stronger self-motion experience, a faster perceived passage of time, and shorter time estimates than subjects in the desktop study. Our results suggest that a virtual tunnel simulation can manipulate time perception in virtual reality. In future work, we will explore these results for the development of virtual reality applications for therapeutic approaches. This could be particularly useful in treating disorders like depression, autism, and schizophrenia, which are known to be associated with distortions in time perception. For example, the tunnel could be applied therapeutically to reset patients' time perception by exposing them to it under different conditions, such as those that increase or decrease perceived time.
KW - passage of time
KW - illusion of self-motion
KW - vection
KW - virtual tunnel
KW - therapeutic application
KW - virtual reality
KW - extended reality (XR)
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-301519
SN - 2673-4192
VL - 3
ER -
TY - JOUR
A1 - Kunz, Meik
A1 - Liang, Chunguang
A1 - Nilla, Santosh
A1 - Cecil, Alexander
A1 - Dandekar, Thomas
T1 - The drug-minded protein interaction database (DrumPID) for efficient target analysis and drug development
JF - Database
N2 - The drug-minded protein interaction database (DrumPID) has been designed to provide fast, tailored information on drugs and their protein networks, including indications, protein targets and side-targets. Starting queries include compound, target and protein interactions and organism-specific protein families. Furthermore, drug name, chemical structures and their SMILES notation, affected proteins (potential drug targets), organisms as well as diseases can be queried, including various combinations and refinement of searches. Drugs and protein interactions are analyzed in detail with reference to protein structures and catalytic domains, related compound structures as well as potential targets in other organisms. DrumPID considers drug functionality, compound similarity, target structure, interactome analysis and organismic range for a compound, which is useful for drug development, predicting drug side-effects and structure–activity relationships.
KW - drug-minded protein
KW - database
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-147369
VL - 2016
ER -
TY - JOUR
A1 - Kuhn, Joachim
A1 - Gripp, Tatjana
A1 - Flieder, Tobias
A1 - Dittrich, Marcus
A1 - Hendig, Doris
A1 - Busse, Jessica
A1 - Knabbe, Cornelius
A1 - Birschmann, Ingvild
T1 - UPLC-MRM Mass Spectrometry Method for Measurement of the Coagulation Inhibitors Dabigatran and Rivaroxaban in Human Plasma and Its Comparison with Functional Assays
JF - PLOS ONE
N2 - Introduction: The fast, precise, and accurate measurement of the new generation of oral anticoagulants such as dabigatran and rivaroxaban in patients' plasma may provide important information in different clinical circumstances, such as suspected overdose, when patients switch from an existing oral anticoagulant, in patients with hepatic or renal impairment, in the case of concomitant use of interacting drugs, or to assess the anticoagulant concentration in a patient's blood before major surgery. Methods: Here, we describe a quick and precise method to measure the coagulation inhibitors dabigatran and rivaroxaban using ultra-performance liquid chromatography electrospray ionization-tandem mass spectrometry in multiple reaction monitoring (MRM) mode (UPLC-MRM MS). Internal standards (ISs) were added to the sample, and after protein precipitation the sample was separated on a reverse-phase column. After ionization of the analytes, the ions were detected using electrospray ionization-tandem mass spectrometry. Run time was 2.5 minutes per injection. Ion suppression was characterized by means of post-column infusion. Results: The calibration curves of dabigatran and rivaroxaban were linear over the working range between 0.8 and 800 µg/L (r > 0.99). Limits of detection (LOD) in the plasma matrix were 0.21 µg/L for dabigatran and 0.34 µg/L for rivaroxaban, and lower limits of quantification (LLOQ) in the plasma matrix were 0.46 µg/L for dabigatran and 0.54 µg/L for rivaroxaban. The intra-assay coefficients of variation (CVs) were < 4% for dabigatran and < 6% for rivaroxaban; the inter-assay CVs were < 6% for dabigatran and < 9% for rivaroxaban. Inaccuracy was < 5% for both substances. The mean recovery was 104.5% (range 83.8-113.0%) for dabigatran and 87.0% (range 73.6-105.4%) for rivaroxaban. No significant ion suppression was detected at the elution times of dabigatran or rivaroxaban. Both coagulation inhibitors were stable in citrate plasma at -20 °C, 4 °C, and even at room temperature for at least one week. A method comparison between our UPLC-MRM MS method, the commercially available automated direct thrombin inhibitor (DTI) assay for dabigatran measurement from CoaChrom Diagnostica, and the automated anti-Xa assay for rivaroxaban measurement from Chromogenix, both performed on an ACL-TOP analyzer, showed a high degree of correlation. However, UPLC-MRM MS measurement of dabigatran and rivaroxaban has a much better selectivity than classical functional assays, which measure the activities of various coagulation factors and are susceptible to interference by other coagulant drugs. Conclusions: Overall, we developed and validated a sensitive and specific UPLC-MRM MS assay for the quick and specific measurement of dabigatran and rivaroxaban in human plasma.
KW - LC-MS/MS
KW - validation
KW - serum
KW - quantification
KW - apixaban
KW - diagnostic accuracy
KW - performance liquid chromatography
KW - factor XA inhibitor
KW - direct oral anticoagulants
KW - direct thrombin inhibitor
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-136023
VL - 10
IS - 12
ER -
TY - JOUR
A1 - Krupitzer, Christian
A1 - Eberhardinger, Benedikt
A1 - Gerostathopoulos, Ilias
A1 - Raibulet, Claudia
T1 - Introduction to the special issue “Applications in Self-Aware Computing Systems and their Evaluation”
JF - Computers
N2 - The joint 1st Workshop on Evaluations and Measurements in Self-Aware Computing Systems (EMSAC 2019) and Workshop on Self-Aware Computing (SeAC) was held as part of the FAS* conference alliance in conjunction with the 16th IEEE International Conference on Autonomic Computing (ICAC) and the 13th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO) in Umeå, Sweden, on 20 June 2019. The goal of this one-day workshop was to bring together researchers and practitioners from academia and industry to share their solutions, ideas, visions, and doubts concerning self-aware computing systems in general, and the evaluation and measurement of such systems in particular. The workshop aimed to enable discussions, partnerships, and collaborations among the participants. This special issue follows the theme of the workshop. It contains extended versions of workshop presentations as well as additional contributions.
KW - self-aware computing systems
KW - quality evaluation
KW - measurements
KW - quality assurance
KW - autonomous
KW - self-adaptive
KW - self-managing systems
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-203439
SN - 2073-431X
VL - 9
IS - 1
ER -
TY - JOUR
A1 - Krueger, Beate
A1 - Friedrich, Torben
A1 - Förster, Frank
A1 - Bernhardt, Jörg
A1 - Gross, Roy
A1 - Dandekar, Thomas
T1 - Different evolutionary modifications as a guide to rewire two-component systems
JF - Bioinformatics and Biology Insights
N2 - Two-component systems (TCS) are short signalling pathways generally occurring in prokaryotes. They frequently regulate prokaryotic stimulus responses and are thus also of interest for engineering in biotechnology and synthetic biology. The aim of this study is to better understand and describe the rewiring of TCS while investigating different evolutionary scenarios. Based on large-scale screens of TCS in different organisms, this study provides detailed data, concrete alignments, and structure analysis on three general modification scenarios in which TCS were rewired for new responses and functions: (i) exchanges in the sequence within single TCS domains; (ii) exchange of whole TCS domains; (iii) addition of new components modulating TCS function. As a result, the replacement of stimulus and promoter cassettes to rewire TCS is well defined by exploiting the alignments given here. The diverged TCS examples are non-trivial and the design is challenging. Designed connector proteins may also be useful to modify TCS in selected cases.
KW - histidine kinase
KW - connector
KW - Mycoplasma
KW - engineering
KW - promoter
KW - sensor
KW - response regulator
KW - synthetic biology
KW - sequence alignment
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-123647
N1 - This is an open access article. Unrestricted non-commercial use is permitted provided the original work is properly cited.
VL - 6
ER -
TY - JOUR
A1 - Krenzer, Adrian
A1 - Makowski, Kevin
A1 - Hekalo, Amar
A1 - Fitting, Daniel
A1 - Troya, Joel
A1 - Zoller, Wolfram G.
A1 - Hann, Alexander
A1 - Puppe, Frank
T1 - Fast machine learning annotation in the medical domain: a semi-automated video annotation tool for gastroenterologists
JF - BioMedical Engineering OnLine
N2 - Background: Machine learning, especially deep learning, is becoming more and more relevant in research and development in the medical domain. For all supervised deep learning applications, data is the most critical factor in securing successful implementation and sustaining the progress of the machine learning model. Gastroenterological data in particular, which often involve endoscopic videos, are cumbersome to annotate. Domain experts are needed to interpret and annotate the videos. To support those domain experts, we developed a framework. With this framework, instead of annotating every frame in the video sequence, experts only perform key annotations at the beginning and the end of sequences with pathologies, e.g., visible polyps. Subsequently, non-expert annotators supported by machine learning add the missing annotations for the frames in between. Methods: In our framework, an expert reviews the video and annotates a few video frames to verify the object's annotations for the non-expert. In a second step, a non-expert has visual confirmation of the given object and can annotate all following and preceding frames with AI assistance. After the expert has finished, relevant frames are selected and passed on to an AI model. This information allows the AI model to detect and mark the desired object on all following and preceding frames with an annotation. The non-expert can then adjust and modify the AI predictions and export the results, which can be used to train the AI model. Results: Using this framework, we were able to reduce the workload of domain experts by a factor of 20 on average on our data. This is primarily due to the structure of the framework, which is designed to minimize the workload of the domain expert. Pairing the framework with a state-of-the-art semi-automated AI model enhances the annotation speed further. Through a prospective study with 10 participants, we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool. Conclusion: In summary, we introduce a framework for fast expert annotation for gastroenterologists, which reduces the workload of the domain expert considerably while maintaining a very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open-source.
KW - object detection
KW - machine learning
KW - deep learning
KW - annotation
KW - endoscopy
KW - gastroenterology
KW - automation
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-300231
VL - 21
IS - 1
ER -
TY - JOUR
A1 - Krenzer, Adrian
A1 - Heil, Stefan
A1 - Fitting, Daniel
A1 - Matti, Safa
A1 - Zoller, Wolfram G.
A1 - Hann, Alexander
A1 - Puppe, Frank
T1 - Automated classification of polyps using deep learning architectures and few-shot learning
JF - BMC Medical Imaging
N2 - Background: Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps carry the risk of becoming cancerous.
Therefore, polyps are classified using different classification systems, and further treatment and procedures are based on the classification of the polyp. Nevertheless, classification is not easy. We therefore suggest two novel automated classification systems that assist gastroenterologists in classifying polyps based on the NICE and Paris classifications. Methods: We build two classification systems. One classifies polyps based on their shape (Paris); the other classifies polyps based on their texture and surface patterns (NICE). A two-step process for the Paris classification is introduced: first, detecting and cropping the polyp in the image; second, classifying the polyp based on the cropped area with a transformer network. For the NICE classification, we design a few-shot learning algorithm based on the deep metric learning approach. The algorithm creates an embedding space for polyps that allows classification from only a few examples, in order to account for the scarcity of NICE-annotated images in our database. Results: For the Paris classification, we achieve an accuracy of 89.35%, surpassing all previous results in the literature and establishing a new state-of-the-art and baseline accuracy for other publications on a public data set. For the NICE classification, we achieve a competitive accuracy of 81.13% and thereby demonstrate the viability of the few-shot learning paradigm for polyp classification in data-scarce environments. Additionally, we show different ablations of the algorithms. Finally, we further elaborate on the explainability of the system by showing heat maps of the neural network that explain its activations. Conclusion: Overall, we introduce two polyp classification systems to assist gastroenterologists. We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning.
KW - machine learning
KW - deep learning
KW - endoscopy
KW - gastroenterology
KW - automation
KW - image classification
KW - transformer
KW - deep metric learning
KW - few-shot learning
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-357465
VL - 23
ER -