Utilizing multiple access technologies such as 5G, 4G, and Wi-Fi within a coherent framework is currently being standardized by 3GPP as 5G ATSSS. Indeed, distributing packets over multiple networks can lead to increased robustness, resiliency, and capacity. A key part of such a framework is the multi-access proxy, which transparently distributes packets over multiple paths. As the proxy needs to serve thousands of customers, scalability and performance are crucial for operator deployments. In this paper, we leverage recent advancements in data plane programming, implement a multi-access proxy based on the MP-DCCP tunneling approach in P4, and hardware-accelerate it by deploying the pipeline on a SmartNIC. This is challenging due to the complex scheduling and congestion control operations involved. We present our pipeline and data structure design for congestion control and packet scheduling state management. Initial measurements in our testbed show that packet latency is in the range of 25 μs, demonstrating the feasibility of our approach.
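To make the per-path state management concrete, the following is a minimal Python sketch of the bookkeeping such a multi-access scheduler maintains. All class, field, and policy names here are hypothetical; the actual system implements this logic in P4 registers on a SmartNIC, not in Python.

```python
from dataclasses import dataclass

@dataclass
class PathState:
    """Per-path state the proxy must track (field names are hypothetical)."""
    name: str
    cwnd: int = 10          # congestion window, in packets
    in_flight: int = 0      # packets sent but not yet acknowledged
    srtt_ms: float = 50.0   # smoothed round-trip-time estimate

class LowestLatencyScheduler:
    """Toy policy: send on the lowest-latency path with free cwnd capacity."""
    def __init__(self, paths):
        self.paths = paths

    def pick_path(self):
        candidates = [p for p in self.paths if p.in_flight < p.cwnd]
        if not candidates:
            return None  # all paths congestion-limited: queue or drop
        return min(candidates, key=lambda p: p.srtt_ms)

paths = [PathState("wifi", cwnd=32, srtt_ms=12.0),
         PathState("lte", cwnd=64, srtt_ms=45.0)]
scheduler = LowestLatencyScheduler(paths)
path = scheduler.pick_path()
if path is not None:
    path.in_flight += 1   # account for the packet now in flight
    print(f"next packet goes out on {path.name}")
```

The hard part on real hardware is exactly this state: congestion windows and RTT estimates must be read and updated per packet at line rate, which drives the register and pipeline design described in the paper.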
The rapid development of green and sustainable materials opens up new possibilities in the field of applied research. Such materials include nanocellulose composites, which can integrate many components and provide a good chassis for smart devices. In our study, we evaluate four approaches for turning a nanocellulose composite into an information storage or processing device: 1) nanocellulose can be a suitable carrier material and protect information stored in DNA. 2) Nucleotide-processing enzymes (polymerase and exonuclease) can be controlled by light after fusing them with light-gating domains; nucleotide substrate specificity can be changed by mutation or pH change (read-in and read-out of the information). 3) Semiconductor and electronic capabilities can be achieved: we show that iodine treatment renders nanocellulose electronic, allowing it to replace silicon, including in microstructures. Nanocellulose semiconductor properties are measured, and the resulting potential, including single-electron transistors (SETs) and their properties, is modeled. Electric current can also be transported by DNA through G-quadruplex DNA molecules; these, as well as classical silicon semiconductors, can easily be integrated into the nanocellulose composite. 4) To elaborate upon miniaturization and integration for a smart nanocellulose chip device, we demonstrate pH-sensitive dyes in nanocellulose, nanopore creation, and kinase micropatterning on bacterial membranes as well as digital PCR micro-wells. Future application potential includes nano-3D printing and fast molecular processors (e.g., SETs) integrated with DNA storage and conventional electronics. This would also lead to environment-friendly nanocellulose chips for information processing as well as smart nanocellulose composites for biomedical applications and nano-factories.
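As a concrete illustration of the DNA storage mentioned in point 1), the sketch below shows a common textbook scheme that maps two bits to one nucleotide. It is purely illustrative and is not the encoding used in the study.

```python
# Map 2 bits per nucleotide: a common textbook scheme for DNA data storage.
# Illustrative only; not the encoding used in the study.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
assert decode(strand) == b"hi"
print(strand)  # "CGGACGGC": four bases per stored byte
```

Real DNA storage schemes additionally constrain homopolymer runs and GC content and add error-correcting codes, which this two-bit mapping omits.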
Background
The efficacy of artificial intelligence-based computer-aided detection (CADe) systems for colorectal polyps has been demonstrated in several randomized trials. However, CADe systems generate many distracting detections, especially during interventions such as polypectomies. These distracting CADe detections are often induced by the introduction of snares or biopsy forceps, as the systems have not been trained for such situations. In addition, there is a significant number of detections that are not false but also not relevant, since the polyp has already been detected previously. All these detections have the potential to disturb the examiner's work.
Objectives
Development and evaluation of a convolutional neural network that recognizes instruments in the endoscopic image, suppresses distracting CADe detections, and reliably detects endoscopic interventions.
Methods
A total of 580 different examination videos from 9 different centers using 4 different processor types were screened for instruments and constituted the training dataset (519,856 images in total, 144,217 of which contained a visible instrument). The test dataset comprised 10 full-colonoscopy videos that were analyzed for the recognition of visible instruments and for detections by a commercially available CADe system (GI Genius, Medtronic).
Results
The test dataset contained 153,623 images, 8.84% of which showed visible instruments (12 interventions, 19 instruments used). The convolutional neural network reached an overall accuracy of 98.59% in the detection of visible instruments. Sensitivity and specificity were 98.55% and 98.92%, respectively. A mean of 462.8 frames containing distracting CADe detections per colonoscopy was avoided using the convolutional neural network, accounting for 95.6% of all distracting CADe detections.
Conclusions
Detection of endoscopic instruments in colonoscopy using artificial intelligence technology is reliable and achieves high sensitivity and specificity. Accordingly, the new convolutional neural network could be used to reduce distracting CADe detections during endoscopic procedures. Thus, our study demonstrates the great potential of artificial intelligence technology beyond mucosal assessment.
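The reported sensitivity and specificity follow the standard confusion-matrix definitions, and the suppression logic amounts to gating CADe output with a per-frame instrument classifier. The Python sketch below illustrates both under assumed interfaces: instrument_prob and cade_detections are hypothetical callables standing in for the instrument network and the CADe system, and the 0.5 threshold is an assumption, not the study's operating point.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard confusion-matrix definitions behind the reported figures."""
    return tp / (tp + fn), tn / (tn + fp)

def gate_cade_output(frames, instrument_prob, cade_detections, threshold=0.5):
    """Keep CADe detections only for frames without a visible instrument.

    instrument_prob and cade_detections are hypothetical callables standing
    in for the instrument CNN and the CADe system, respectively."""
    kept = []
    for frame in frames:
        if instrument_prob(frame) >= threshold:
            continue  # instrument visible: suppress distracting detections
        kept.append((frame, cade_detections(frame)))
    return kept
```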
A new underwater 3D scanning device based on structured illumination, designed for the continuous capture of object data in motion for deep-sea inspection applications, is introduced. The sensor permanently captures 3D data of the inspected surface and generates a 3D surface model in real time. Sensor velocities of up to 0.7 m/s are compensated directly while capturing camera images for the 3D reconstruction pipeline. The accuracy results of static measurements of special specimens in a water basin with clear water show the high accuracy potential of the scanner in the sub-millimeter range. Measurement examples with a moving sensor show the significance of the proposed motion compensation and the ability to generate a 3D model by merging individual scans. Future application tests in offshore environments will show the practical potential of the sensor for the desired inspection tasks.
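As a rough illustration of the idea behind such motion compensation: points captured at slightly different times can be shifted back to a common reference time if the sensor velocity is known. The Python sketch below assumes a constant, known velocity over the capture window, which is a strong simplification of the sensor's actual compensation pipeline.

```python
import numpy as np

def motion_compensate(points, timestamps, velocity, t_ref):
    """Shift 3D points captured at different times to a common reference
    time, assuming constant sensor velocity (an illustrative simplification).

    points: (N, 3) array in meters; timestamps: (N,) seconds;
    velocity: (3,) sensor velocity in m/s."""
    dt = np.asarray(timestamps) - t_ref           # time offset per point
    return np.asarray(points) - np.outer(dt, velocity)

pts = np.array([[0.000, 0.0, 2.0],
                [0.010, 0.0, 2.0]])
ts = [0.00, 0.02]                                 # captured 20 ms apart
out = motion_compensate(pts, ts, velocity=[0.7, 0.0, 0.0], t_ref=0.0)
print(out)  # second point shifted back by 0.7 m/s * 20 ms = 14 mm
```

At the stated 0.7 m/s, even a 20 ms capture offset displaces a point by 14 mm, far beyond the sub-millimeter accuracy target, which is why the compensation matters.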
This thesis covers the first part of a larger project that pursues the ultimate goal of implementing a software tool that creates a Mission Control Room in Virtual Reality. The software is to be used for the operation of spacecraft and is specially developed for the unique real-time requirements of unmanned satellite missions. From launch, throughout the whole mission, up to the recovery or disposal of the satellite, all systems need to be monitored and controlled at continuous intervals to ensure the mission's success. Mission operation is an essential part of every space mission and has been undertaken for decades. Recent technological advancements in the realm of immersive technologies pave the way for innovative methods to operate spacecraft. Virtual Reality has the capability to resolve the physical constraints set by traditional Mission Control Rooms and thereby delivers novel opportunities. The thesis highlights underlying theoretical aspects of Virtual Reality, Mission Control, and IP communication. However, the focus lies on the practical part, which revolves around the first steps of the implementation of the virtual Mission Control Room in the Unity game engine. Overall, this work serves as a demonstration of Virtual Reality technology and shows its possibilities with respect to the operation of spacecraft.
Purpose
To determine whether 24-h IOP monitoring can be a predictor for glaucoma progression and to analyze the inter-eye relationship of IOP, perfusion, and progression parameters.
Methods
We extracted data from manually drawn IOP curves with HIOP-Reader, a software suite we developed. The relationship of measured IOPs and mean ocular perfusion pressures (MOPP) to retinal nerve fiber layer (RNFL) thickness was analyzed. We determined the ROC curves for peak IOP (T\(_{max}\)), average IOP (T\(_{avg}\)), IOP variation (IOP\(_{var}\)), and historical IOP cut-off levels to detect glaucoma progression (rate of RNFL loss); a minimal illustration of this ROC analysis follows the abstract. Bivariate analysis was also conducted to check for various inter-eye relationships.
Results
Two hundred seventeen eyes were included. The average IOP was 14.8 ± 3.5 mmHg, with a 24-h variation of 5.2 ± 2.9 mmHg. A total of 52% of eyes with RNFL progression data showed disease progression. There was no significant difference in T\(_{max}\), T\(_{avg}\), and IOP\(_{var}\) between progressors and non-progressors (all p > 0.05). Except for T\(_{avg}\) and the temporal RNFL, there was no correlation between disease progression in any quadrant and T\(_{max}\), T\(_{avg}\), and IOP\(_{var}\). Twenty-four-hour and outpatient IOP variables had poor sensitivities and specificities in detecting disease progression. The correlation of inter-eye parameters was moderate; correlation with disease progression was weak.
Conclusion
In line with our previous study, IOP data obtained during a single visit (outpatient or inpatient monitoring) make for a poor diagnostic tool, no matter the method deployed. Glaucoma progression and perfusion pressure in left and right eyes correlated weakly to moderately with each other.
Key messages
What is known:
● Our prior study showed that manually obtained 24-hour inpatient IOP measurements in right eyes are poor predictors for glaucoma progression. The inter-eye relationship of 24-hour IOP parameters and disease progression on optical coherence tomography (OCT) has not been examined.
What we found:
● 24-hour IOP profiles of left eyes from the same study were a poor diagnostic tool to detect worsening glaucoma.
● Significant inter-eye correlations of various strengths were found for all tested parameters.
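The ROC analysis referenced in the Methods can be reproduced in miniature as follows. This Python sketch uses synthetic stand-in numbers, loosely echoing the reported overlap between progressors and non-progressors, not the study's data; roc_curve and roc_auc_score come from scikit-learn.

```python
# Illustrative only: evaluating a 24-h IOP summary statistic such as T_max
# as a classifier for RNFL progression, using synthetic stand-in data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
t_max = np.concatenate([rng.normal(17, 3, 100),    # non-progressors
                        rng.normal(18, 3, 100)])   # progressors
progressed = np.concatenate([np.zeros(100), np.ones(100)])

fpr, tpr, thresholds = roc_curve(progressed, t_max)
auc = roc_auc_score(progressed, t_max)
print(f"AUC = {auc:.2f}")  # heavy overlap yields an AUC near chance level
```

When the two distributions overlap this much, every cut-off trades sensitivity against specificity unfavorably, which is exactly the pattern the study reports for single-visit IOP variables.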
This paper gives an overview of our recent activities in the field of satellite communication networks, including an introduction to geostationary satellite systems and Low Earth Orbit mega-constellations. To mitigate the high latencies of geostationary satellite networks, TCP-splitting Performance Enhancing Proxies are deployed. However, these cannot be applied when transport headers are encrypted, as is the case for VPNs or QUIC. We summarize performance evaluation results from multiple measurement campaigns. In a recently concluded project, multipath communication was used to combine the advantages of very heterogeneous communication paths: low data rate and low latency (e.g., DSL light) versus high data rate and high latency (e.g., geostationary satellite).
Since the first CubeSat launch in 2003, the hardware and software complexity of nanosatellites has been increasing continuously.
To keep up with this increasing mission complexity and to retain the primary advantages of a CubeSat mission, a new approach for the overall space and ground software architecture and protocol configuration is elaborated in this work.
The aim of this thesis is to propose a uniform software and protocol architecture as a basis for software development, testing, simulation, and operation of multiple pico-/nanosatellites based on ultra-low-power components.
In contrast to single-CubeSat missions, current and upcoming nanosatellite formation missions require faster and more straightforward development, pre-flight testing, and calibration procedures, as well as the simultaneous operation of multiple satellites.
A dynamic and decentralized Compass mission network, consisting of uniformly accessible nodes, was established across multiple active CubeSat missions.
The Compass middleware was developed to unify the communication and functional interfaces between all involved mission-related software and hardware components.
All systems can access each other via dynamic routes to perform service-based M2M communication.
With the proposed model-based communication approach, all states, abilities, and functionalities of a system are accessed in a uniform way.
The Tiny scripting language was designed to allow dynamic code execution on ultra-low-power components as a basis for constraint-based in-orbit scheduling and experiment execution.
The implemented Compass Operations front-end enables far-reaching monitoring and control capabilities for all ground and space systems.
Its integrated constraint-based operations task scheduler allows complex satellite operations to be recorded and then conducted automatically during overpasses.
The outcome of this thesis became an enabling technology for the UWE-3, UWE-4, and NetSat CubeSat missions.
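A constraint-based overpass scheduler of the kind described above can be sketched in a few lines. The following Python toy uses greedy first-fit placement of contact-bound tasks into overpass windows; all names, fields, and the greedy policy are assumptions for illustration, not the Compass implementation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration_s: int
    needs_contact: bool   # constraint: must run during a ground-station pass

@dataclass
class Pass:
    start_s: int          # pass start, seconds from epoch
    length_s: int         # usable contact time
    used_s: int = 0

def schedule(tasks, passes):
    """Greedy first-fit: place contact-bound tasks into overpass windows.
    A toy stand-in for a constraint-based operations scheduler."""
    plan = []
    for task in sorted(tasks, key=lambda t: -t.duration_s):
        if not task.needs_contact:
            plan.append((task.name, "background"))   # no contact constraint
            continue
        window = next((p for p in passes
                       if p.length_s - p.used_s >= task.duration_s), None)
        if window is None:
            plan.append((task.name, "unscheduled"))  # constraint unsatisfiable
        else:
            window.used_s += task.duration_s
            plan.append((task.name, f"pass@{window.start_s}s"))
    return plan

print(schedule([Task("downlink_telemetry", 120, True),
                Task("run_experiment", 300, False)],
               [Pass(start_s=5400, length_s=480)]))
```

A production scheduler would handle many more constraint types (power, attitude, inter-satellite dependencies) and replan dynamically, but the core pattern of matching constrained tasks to contact windows is the same.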
Social robots in applied settings: a long-term study on adaptive robotic tutors in higher education
(2022)
Learning in higher education scenarios requires self-directed learning and the challenging task of self-motivation, while individual support is rare. The integration of social robots to support learners has already shown promise for benefiting the learning process in this area. In this paper, we focus on the applicability of an adaptive robotic tutor in a university setting. To this end, we conducted a long-term field study implementing an adaptive robotic tutor to support students with exam preparation over three sessions during one semester. In a mixed design, we compared the effect of an adaptive tutor to a control condition across all learning sessions. With the aim of benefiting not only motivation but also academic success and the learning experience in general, we draw from research in adaptive tutoring and social robots in education, as well as our own prior work in this field. Our results show that opting in to the robotic tutoring is beneficial for students. We found a significant subjective knowledge gain and increases in intrinsic motivation regarding the content of the course in general. Finally, participation resulted in a significantly better exam grade compared to students who did not participate. However, the extended adaptivity of the robotic tutor in the experimental condition did not seem to enhance learning, as we found no significant differences compared to a non-adaptive version of the robot.
An enduring engineering problem is the creation of unreliable software, which leads to unreliable systems. One reason for this is that source code is written in a complicated manner, making it too hard for humans to review and understand. Complicated code leads to issues beyond dependability, such as expanded development effort and ongoing difficulties with maintenance, ultimately costing developers and users more money.
There are many ideas regarding where the blame lies for the creation of buggy and unreliable systems. One prevalent idea is that the selected life cycle model is to blame. The oft-maligned “waterfall” life cycle model is a particularly popular recipient of blame. In response, many organizations have changed their life cycle model in hopes of addressing these issues. Agile life cycle models have become very popular, and they promote communication between team members and end users. In theory, this communication leads to fewer misunderstandings and should lead to less complicated and more reliable code.
Changing the life cycle model can indeed address communication issues, which can resolve many problems with understanding requirements. However, most life cycle models do not specifically address coding practices or software architecture. Since life cycle models do not address the structure of the code, they are often ineffective at addressing problems related to code complicacy.
This dissertation answers several research questions concerning software complicacy, beginning with an investigation of traditional metrics and static analysis to evaluate their usefulness as measurement tools. This dissertation also establishes a new concept in applied linguistics by creating a measurement of software complicacy based on linguistic economy. Linguistic economy describes the efficiencies of speech, and this thesis shows the applicability of linguistic economy to software. Embedded in each topic is a discussion of the ramifications of overly complicated software, including the relationship of complicacy to software faults. Image recognition using machine learning is also investigated as a potential method of identifying problematic source code.
The central part of the work focuses on analyzing the source code of hundreds of different projects from different areas. A static analysis was performed on the source code of each project, and traditional software metrics were calculated. The programs were also analyzed using techniques developed by linguists to measure expression and statement complicacy as well as identifier complicacy. Professional software engineers were also surveyed directly to understand mainstream perspectives.
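To give a feel for what a linguistic-economy-style measurement over source code might look like, here is a crude Python sketch: shorter, frequently reused identifiers are treated as "cheaper", mirroring economy in natural speech. This is an assumption-laden illustration, not the dissertation's actual metric.

```python
# A crude stand-in for a linguistic-economy measurement over source code.
# Not the dissertation's metric; purely illustrative.
import ast
from collections import Counter

def identifier_economy(source: str) -> float:
    """Average identifier cost: length discounted by reuse frequency."""
    names = [node.id for node in ast.walk(ast.parse(source))
             if isinstance(node, ast.Name)]
    counts = Counter(names)
    total = sum(len(name) / counts[name] for name in names)
    return total / len(names) if names else 0.0

snippet = "total = price * quantity\nprint(total)"
print(f"economy score: {identifier_economy(snippet):.2f}")
```

Lower scores indicate terser, more heavily reused vocabulary; a real metric would also weigh expression and statement structure, as the dissertation describes.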
This work shows that it is possible to use traditional metrics as indicators of potential project bugginess. This work also found that image recognition can identify problematic pieces of source code. Finally, this work found that linguistic methods can determine which statements and expressions are least desirable and most complicated for programmers.
This work’s principal conclusion is that there are multiple ways to discover traits indicating that a project or a piece of source code has characteristics of being buggy. Traditional metrics and static analysis can be used to gain some understanding of software complicacy and bugginess potential. Linguistic economy provides a new tool for measuring software complicacy, and machine learning can predict where bugs may lie in source code. The significant implication of this work is that developers can recognize when a project is becoming buggy and take practical steps to avoid creating buggy projects.