004 Data processing; computer science
Refine
Has Fulltext
- yes (38)
Year of publication
- 2023 (38)
Document Type
- Working Paper (19)
- Journal article (12)
- Doctoral Thesis (5)
- Conference Proceeding (1)
- Preprint (1)
Language
- English (38)
Keywords
- Deep learning (3)
- P4 (3)
- 5G (2)
- SDN (2)
- connected mobility applications (2)
- multipath scheduling (2)
- network calculus (2)
- 3D Reconstruction (1)
- 3D-Rekonstruktion (1)
- 4D-GIS (1)
- 5G core network (1)
- 6G (1)
- ATSSS (1)
- Accessibility (1)
- Add-on-Miss (1)
- BPM (1)
- BPMN (1)
- Benutzererlebnis (1)
- Benutzerforschung (1)
- Bildverarbeitung (1)
- CHI Conference (1)
- Computer Vision (1)
- Containerization (1)
- Deep Learning (1)
- Dijkstra’s algorithm (1)
- Domänenspezifische Sprache (1)
- Dreidimensionale Rekonstruktion (1)
- Edge-MEC-Cloud (1)
- Emotion inference (1)
- Emotionserkennung (1)
- Emotionsinterpretation (1)
- FIFO caching strategies (1)
- Gastroenterologische Endoskopie (1)
- Gefühl (1)
- Human-centered computing / Access (1)
- Human-centered computing / Human computer interaction (HCI) / Interaction paradigms / Mixed / augmented reality (1)
- Human-centered computing / Human computer interaction (HCI) / Interaction paradigms / Virtual reality (1)
- Human-centered computing / Human computer interaction (HCI) / Interaction devices (1)
- Human-centered computing / Human computer interaction (HCI) / Interaction techniques (1)
- IT security (1)
- Internet of Things (1)
- IoT (1)
- IoT-driven processes (1)
- JCAS (1)
- Kathará (1)
- Klima (1)
- Kryoelektronenmikroskopie (1)
- LFU (1)
- LRU (1)
- Linux (1)
- MP-DCCP (1)
- Machine Learning (1)
- Maschinelles Lernen (1)
- Maschinelles Sehen (1)
- Medical Image Analysis (1)
- Metaverse (1)
- Methode (1)
- Modell (1)
- Mycoplasma pneumoniae (1)
- Network Emulator (1)
- Neuronales Netz (1)
- Object Detection (1)
- P4-INT (1)
- PROLOG <Programmiersprache> (1)
- Polypektomie (1)
- Punktwolke (1)
- Selbstkalibrierung (1)
- Self-calibration (1)
- Sensing-aaS (1)
- Structure-from-Motion (1)
- TTL validation of data consistency (1)
- Tomografie (1)
- Underwater Mapping (1)
- Underwater Scanning (1)
- Visualized Kathará (1)
- WhatsApp (1)
- Wissenschaftliche Beobachtung (1)
- anthropomorphism (1)
- availability (1)
- background knowledge (1)
- baseline detection (1)
- bit (1)
- camera orientation (1)
- climate (1)
- cognitive impairment (1)
- communication models (1)
- communication networks (1)
- computer performance evaluation (1)
- content-based image retrieval (1)
- cosmology (1)
- cryo-EM (1)
- cryo-ET (1)
- data warehouse (1)
- decision support system (1)
- deep learning (1)
- definite clause grammars (1)
- delay constrained (1)
- dementia (1)
- digital twin (1)
- disjoint multi-paths (1)
- eHealth (1)
- electronic health records (1)
- emergent time (1)
- emulation (1)
- energy efficiency (1)
- extended reality (1)
- feature matching (1)
- federated learning (1)
- fog computing (1)
- fully convolutional neural networks (1)
- future energy grid exploration (1)
- global IPX network (1)
- group-based communication (1)
- hardware-in-the-loop simulation (1)
- hardware-in-the-loop streaming system (1)
- historical document analysis (1)
- historical images (1)
- hit ratio analysis and simulation (1)
- hospital data (1)
- human–computer interaction (1)
- informal education (1)
- information extraction (1)
- intelligent voice assistant (1)
- key-insight extraction (1)
- knowledge representation (1)
- layout recognition (1)
- least cost (1)
- local energy system (1)
- logic programming (1)
- long-term analysis (1)
- media analysis (1)
- medical records (1)
- membrane protein (1)
- misconceptions (1)
- mobile instant messaging (1)
- mobile messaging application (1)
- model output statistics (1)
- multipath (1)
- multipath packet scheduling (1)
- multiscale encoder (1)
- mycoplasma (1)
- neural networks (1)
- non-terrestrial networks (1)
- ontology (1)
- orchestration (1)
- packet reception method (1)
- particle picking (1)
- performance (1)
- performance monitoring (1)
- phase space (1)
- phase transition (1)
- pneumoniae (1)
- private chat groups (1)
- qubit (1)
- radiology (1)
- ransomware (1)
- satellite communication (1)
- scalability (1)
- scalability evaluation (1)
- sentinel (1)
- service-curve estimation (1)
- shortest path routing (1)
- signaling traffic (1)
- simulation (1)
- smart meter data utilization (1)
- smart speaker (1)
- social interaction (1)
- social relationship (1)
- social role (1)
- state management (1)
- statistics and numerical data (1)
- surface model (1)
- sustainability (1)
- table extraction (1)
- table understanding (1)
- text line detection (1)
- timestamping method (1)
- tomography (1)
- visual proteomics (1)
Institute
Other participating institutions
EU-Project number / Contract (GA) number
- 101069547 (1)
Fifth Generation (5G) communication technology, together with its infrastructure and architecture, is already deployed in campus and small-scale networks but still undergoing continuous change and research. Especially in light of future large-scale deployments and industrial use cases, a detailed analysis of performance and utilization with regard to latency and service-time constraints is crucial. To this end, a fine-grained investigation of the Network Function (NF)-based core system and of the duration of all tasks performed by these services is necessary. This work presents first steps towards analyzing the signaling traffic in 5G core networks and introduces a tool that automatically extracts sequence diagrams and service times for NF tasks from traffic traces.
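As a rough illustration of the kind of analysis such a tool enables (a sketch, not the authors' implementation), the following Python fragment pairs request and response events from a decoded signaling trace and derives per-NF service times; the record fields (timestamp, producer NF, transaction id) are assumptions about what a trace export could contain.

from collections import defaultdict

def service_times(events):
    """events: iterable of dicts with 'ts' (seconds), 'kind' ('request' or
    'response'), 'producer' (the NF handling the task) and 'tid' (transaction id)."""
    open_requests = {}
    durations = defaultdict(list)
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["kind"] == "request":
            open_requests[ev["tid"]] = ev
        elif ev["kind"] == "response" and ev["tid"] in open_requests:
            req = open_requests.pop(ev["tid"])
            durations[req["producer"]].append(ev["ts"] - req["ts"])
    # Mean observed service time per NF.
    return {nf: sum(d) / len(d) for nf, d in durations.items()}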
Packets sent over a network can either get lost or reach their destination. Protocols like TCP address this problem by retransmitting lost packets. However, retransmissions consume a lot of time and are cumbersome for the transmission of critical data. Multipath solutions are a common way to address this reliability issue and are available on almost every layer of the ISO/OSI model. We propose a solution based on a P4 network that duplicates packets in order to send them to their destination via multiple routes. The last network hop ensures that only a single copy of the traffic is forwarded onward to its destination by adopting a concept similar to Bloom filters. In addition, if fast delivery is requested, we provide a P4 prototype that randomly forwards packets over different transmission paths. For reproducibility, we implement our approach in the container-based network emulation system Kathará.
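To make the last-hop idea concrete, here is a minimal Python sketch of Bloom-filter-based duplicate suppression; it is not the authors' P4 code, and the filter size, hash construction, and the notion of a per-packet identifier are illustrative assumptions.

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 16, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key: bytes):
        # Derive num_hashes bit positions from salted hashes of the key.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(i.to_bytes(2, "big") + key).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def check_and_set(self, key: bytes) -> bool:
        """Return True if key was (probably) seen before; record it either way."""
        positions = list(self._positions(key))
        seen = all((self.bits[p // 8] >> (p % 8)) & 1 for p in positions)
        for p in positions:
            self.bits[p // 8] |= 1 << (p % 8)
        return seen

seen_packets = BloomFilter()

def forward(packet_id: bytes) -> bool:
    # Forward only the first copy of each duplicated packet towards the host.
    return not seen_packets.check_and_set(packet_id)

As with any Bloom filter, false positives are possible, so in rare cases even a first copy could be suppressed; sizing the filter generously and resetting it periodically are the usual ways to keep that probability small.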
Given the growing interest of corporate stakeholders in Metaverse applications, there is a need to understand the accessibility of these technologies for marginalized populations, such as people living with dementia, to ensure inclusive design. We assessed the accessibility of extended reality technology for people living with mild cognitive impairment and dementia in order to develop accessibility guidelines for these technologies. We used four strategies to synthesize evidence on barriers to and facilitators of accessibility: (1) findings from a non-systematic literature review, (2) guidelines from well-researched technology, (3) exploration of selected mixed reality technologies, and (4) observations from four sessions and video data of people living with dementia using mixed reality technologies. We utilized template analysis to develop codes and themes towards accessibility guidelines. Future work can validate our preliminary findings by applying them to video recordings or testing them in experiments.
Deep learning enables enormous progress in many computer-vision-related tasks. Artificial Intelligence (AI) steadily yields new state-of-the-art results in detection and classification, with performance that equals or exceeds human performance. These achievements have impacted many domains, including medical applications.
One particular field of medical applications is gastroenterology, where machine learning algorithms are used to assist examiners during interventions. One of the most critical concerns for gastroenterologists is the development of Colorectal Cancer (CRC), one of the leading causes of cancer-related deaths worldwide. Detecting polyps in screening colonoscopies is the essential procedure to prevent CRC. During a colonoscopy, the gastroenterologist uses an endoscope to screen the whole colon for polyps. Polyps are mucosal growths that can vary in severity.
This thesis supports gastroenterologists in their examinations with automated detection and classification systems for polyps. The main contribution is a real-time polyp detection system. The system is ready to be installed in any gastroenterology practice worldwide using open-source software, achieves state-of-the-art detection results, and is currently being evaluated in a clinical trial in four different centers in Germany.
The thesis presents two additional key contributions. The first is a polyp detection system with extended vision, tested in an animal trial. Polyps often hide behind folds or in uninvestigated areas. Therefore, the polyp detection system with extended vision uses an endoscope assisted by two additional cameras to see behind those folds. If a polyp is detected, the endoscopist receives a visual signal. While the detection system handles the two additional camera inputs, the endoscopist focuses on the main camera as usual.
The second consists of two polyp classification models, one for classification based on shape (Paris classification) and the other based on surface and texture (NBI International Colorectal Endoscopic (NICE) classification). Both classifications help the endoscopist with the treatment of, and decisions about, the detected polyp.
The key algorithms of the thesis achieve state-of-the-art performance. Notably, the polyp detection system, tested on a highly demanding video data set, reaches an F1 score of 90.25 % while working in real time, exceeding all real-time systems reported in the literature. Furthermore, the first preliminary results of the clinical trial of the polyp detection system suggest a high Adenoma Detection Rate (ADR). In the preliminary study, all polyps were detected by the polyp detection system, and the system achieved a high usability score of 96.3 (max 100). The Paris classification model achieved a state-of-the-art F1 score of 89.35 %, and the NICE classification model an F1 score of 81.13 %.
Furthermore, a large data set for polyp detection and classification was created during this thesis. To this end, a fast and robust annotation system called Fast Colonoscopy Annotation Tool (FastCAT) was developed. The system simplifies the annotation process for gastroenterologists: they annotate only key parts of the endoscopic video, those video parts are then pre-labeled by a polyp detection AI to speed up the process, and finally non-experts correct and finish the annotation. This annotation process is fast and ensures high quality. FastCAT reduces the overall workload of the gastroenterologist on average by a factor of 20 compared to an open-source state-of-the-art annotation tool.
The holy grail of structural biology is to study a protein in situ, and this goal has been fast approaching since the resolution revolution and the achievement of atomic resolution. A cell's interior is not a dilute environment, and proteins have evolved to fold and function as needed in that environment; as such, an investigation of a cellular component should ideally include the full complexity of the cellular environment. Imaging whole cells in three dimensions using electron cryotomography is the best method to accomplish this goal, but it comes with a limitation on sample thickness and produces noisy data that is not amenable to direct analysis. This thesis establishes a novel workflow to systematically analyse whole-cell electron cryotomography data in three dimensions and to find and identify instances of protein complexes in the data, setting up the determination of their structure and identity for success. Mycoplasma pneumoniae, a very small parasitic bacterium with fewer than 700 protein-coding genes, is thin and small enough to be imaged in large quantities by electron cryotomography and can grow directly on the grids used for imaging, making it ideal for exploratory studies in structural proteomics. As part of the workflow, a methodology for training deep-learning-based particle-picking models is established.
As a proof of principle, a dataset of whole-cell Mycoplasma pneumoniae tomograms is used with this workflow to characterize a novel membrane-associated complex observed in the data. Ultimately, 25,431 such particles are picked from 353 tomograms and refined to a density map with a resolution of 11 Å. Making good use of orthogonal datasets to filter the search space and verify results, structures were predicted for candidate proteins and checked for a suitable fit in the density map. In the end, with this approach, nine proteins were found to be part of the complex, which appears to be associated with chaperone activity and to interact with the translocon machinery.
Visual proteomics refers to the ultimate potential of in situ electron cryotomography: the comprehensive interpretation of tomograms. The workflow presented here is demonstrated to help in reaching that potential.
For formative evaluations of user experience (UX), a variety of methods has been developed over the years. However, most techniques require the users to interact with the study as a secondary task. This active involvement in the evaluation is not inclusive of all users and potentially biases the experience currently being studied. Yet there is a lack of methods for situations in which the user has no spare cognitive resources. This condition occurs when 1) users' cognitive abilities are impaired (e.g., people with dementia) or 2) users are confronted with very demanding tasks (e.g., air traffic controllers). In this work we focus on emotions as a key component of UX and propose the new structured observation method Proxemo for formative UX evaluations. Proxemo allows qualified observers to document users' emotions by proxy in real time and then directly link them to triggers. Technically this is achieved by synchronising the timestamps of emotions documented by observers with a video recording of the interaction.
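A minimal sketch of that synchronisation step, assuming the observer device and the video recorder share (or can be offset to) a common clock; the field names are illustrative and not taken from the Proxemo App.

def to_video_time(emotion_events, video_start_epoch):
    """emotion_events: list of (device_epoch_seconds, emotion_label) tuples.
    Returns (seconds_into_video, emotion_label) pairs so that each documented
    emotion can be linked to the interaction shown at that point in the video."""
    return [(ts - video_start_epoch, label)
            for ts, label in emotion_events
            if ts >= video_start_epoch]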
In order to facilitate the documentation of observed emotions in highly diverse contexts we conceptualise and implement two separate versions of a documentation aid named Proxemo App. For formative UX evaluations of technology-supported reminiscence sessions with people with dementia, we create a smartwatch app to discreetly document emotions from the categories anger, general alertness, pleasure, wistfulness and pride. For formative UX evaluations of prototypical user interfaces with air traffic controllers we create a smartphone app to efficiently document emotions from the categories anger, boredom, surprise, stress and pride. Descriptive case studies in both application domains indicate the feasibility and utility of the method Proxemo and the appropriateness of the respectively adapted design of the Proxemo App.
The third part of this work is a series of meta-evaluation studies to determine quality criteria of Proxemo. We evaluate Proxemo regarding its reliability, validity, thoroughness and effectiveness, and compare Proxemo's efficiency and the observers' experience to documentation with pen and paper. Proxemo is reliable, as well as more efficient, thorough and effective than handwritten notes and provides a better UX to observers. Proxemo compares well with existing methods where benchmarks are available.
With Proxemo we contribute a validated structured observation method that has been shown to meet the requirements of formative UX evaluations in the extreme contexts of users with cognitive impairments or high task demands. Proxemo is agnostic regarding researchers' theoretical approaches and unites reductionist and holistic perspectives within one method.
Future work should explore the applicability of Proxemo to further domains and extend the list of audited quality criteria to include, for instance, downstream utility. With respect to basic research, we strive to better understand the sources leading observers to empathic judgments and propose reminiscence and older adults as a model environment for investigating mixed emotions.
The landscape of today's programming languages is manifold. With the growing diversity of applications, it becomes increasingly difficult to address and specify the programs used adequately. This often leads to newly designed and implemented domain-specific languages. They enable domain experts to express knowledge in their preferred format, resulting in more readable and concise programs. Due to its flexible and declarative syntax without reserved keywords, the logic programming language Prolog is particularly suitable for defining and embedding domain-specific languages.
This thesis addresses the questions and challenges that arise when integrating domain-specific languages into Prolog. We compare the two approaches of defining them either externally or internally, and provide assisting tools for each. The grammar of a formal language is usually defined in the extended Backus–Naur form. In this work, we handle this formalism as a domain-specific language in Prolog and define term expansions that translate it into equivalent definite clause grammars. We present the package library(dcg4pt) for SWI-Prolog, which enriches them with an additional argument to automatically process the term's corresponding parse tree. To simplify the work with definite clause grammars, we visualise their application by a web-based tracer.
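The core idea of threading a parse tree through each grammar rule can be sketched outside Prolog as well; the following Python fragment (grammar and names invented, purely illustrative of the extra parse-tree result that dcg4pt adds to DCG rules, not the library itself) shows rules that both consume input and return the corresponding tree node.

def terminal(tok):
    def rule(tokens):
        # Match a single token and return its parse-tree leaf plus the rest.
        if tokens and tokens[0] == tok:
            return ("terminal", tok), tokens[1:]
        return None
    return rule

def sequence(name, *rules):
    def rule(tokens):
        # Apply sub-rules in order, collecting their parse trees as children.
        children, rest = [], tokens
        for r in rules:
            result = r(rest)
            if result is None:
                return None
            node, rest = result
            children.append(node)
        return (name, children), rest
    return rule

# greeting --> "hello", "world".
greeting = sequence("greeting", terminal("hello"), terminal("world"))
tree, remaining = greeting(["hello", "world"])  # tree records the full derivation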
The external integration of domain-specific languages requires the programmer to keep the grammar, parser, and interpreter in sync. In many cases, domain-specific languages can instead be directly embedded into Prolog by providing appropriate operator definitions. In addition, we propose syntactic extensions for Prolog to expand its expressiveness, for instance to state logic formulas with their connectives verbatim. This makes it possible to use all tools that were originally written for Prolog, for instance code linters and editors with syntax highlighting. We present the package library(plammar), a standard-compliant parser for Prolog source code, written in Prolog. It is able to automatically infer from example sentences the required operator definitions with their classes and precedences as well as the required Prolog language extensions. As a result, we can automatically answer the question: is it possible to model these example sentences as valid Prolog clauses, and how?
We discuss and apply the two approaches to internal and external integrations for several domain-specific languages, namely the extended Backus–Naur form, GraphQL, XPath, and a controlled natural language to represent expert rules in if-then form. The created toolchain with library(dcg4pt) and library(plammar) yields new application opportunities for static Prolog source code analysis, which we also present.
The phase space for the standard model of the basic four forces for n quanta includes all possible ensemble combinations of their quantum states m, a total of n^m states. Neighbor states are reached according to transition possibilities (S-matrix), with time emerging from entropic ensemble gradients.
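Restated compactly in our own notation (a hedged paraphrase; the abstract itself gives only the state count, the S-matrix transitions, and the entropic-time idea):

\[
  |\Omega| = n^{m}, \qquad
  P(\omega \to \omega') \sim \left| S_{\omega'\omega} \right|^{2}, \qquad
  \Delta t \;\propto\; \Delta S_{\text{ensemble}} .
\]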
We replace the “big bang” by a condensation event (interacting qubits become decoherent) and inflation by a crystallization event, whose crystal unit cell guarantees the same symmetries everywhere. Interacting qubits solidify and form a rapidly growing domain in which the n^m states become separated ensemble states; rising long-range forces ultimately stop further growth. After these very early events, standard cosmology with the hot fireball model takes over. Our theory agrees well with the lack of inflation traces in cosmic background measurements, the large-scale structure of voids and filaments, supercluster formation, galaxy formation, the dominance of matter, and life-friendliness.
We prove qubit interactions to be 1, 2, 4, or 8 dimensional (in agreement with the E8 symmetry of our universe). Repulsive forces at ultrashort distances result from quantization; long-range forces limit crystal growth. Crystals come and go in the qubit ocean. This selects for the ability to lay seeds for new crystals, for self-organization, and for life-friendliness.
We give energy estimates for free versus bound qubits, for misplacements in the qubit crystal, and for the entropy increase during qubit decoherence / crystal formation. Scalar fields for the color interaction and gravity derive from the permeating qubit-interaction field; hence, vacuum energy becomes low only inside the qubit crystal. Condensed mathematics may advantageously model free / bound qubits in phase space.