Refine
Has Fulltext
- yes (340)
Year of publication
Document Type
- Journal article (141)
- Doctoral Thesis (139)
- Working Paper (40)
- Conference Proceeding (9)
- Report (5)
- Bachelor Thesis (2)
- Master Thesis (2)
- Book (1)
- Study Thesis (term paper) (1)
Language
- English (340)
Keywords
- Leistungsbewertung (29)
- virtual reality (19)
- Datennetz (14)
- Quality of Experience (12)
- Netzwerk (10)
- Robotik (10)
- machine learning (9)
- Cloud Computing (7)
- Optimierung (7)
- Performance Evaluation (7)
- Autonomer Roboter (6)
- Kleinsatellit (6)
- Komplexitätstheorie (6)
- Maschinelles Lernen (6)
- Mobiler Roboter (6)
- Modellierung (6)
- SDN (6)
- Virtuelle Realität (6)
- artificial intelligence (6)
- deep learning (6)
- Graphenzeichnen (5)
- P4 (5)
- Rechnernetz (5)
- Routing (5)
- Software Defined Networking (5)
- Theoretische Informatik (5)
- Verteiltes System (5)
- graph drawing (5)
- Algorithmus (4)
- Approximationsalgorithmus (4)
- Crowdsourcing (4)
- Deep learning (4)
- Dienstgüte (4)
- Drahtloses Sensorsystem (4)
- Graph (4)
- IoT (4)
- Komplexität (4)
- Mensch-Maschine-Schnittstelle (4)
- Optimization (4)
- Overlay-Netz (4)
- QoE (4)
- Quadrocopter (4)
- Satellit (4)
- Simulation (4)
- Software Engineering (4)
- Telekommunikationsnetz (4)
- Virtualisierung (4)
- augmented reality (4)
- avatars (4)
- immersion (4)
- mapping (4)
- navigation (4)
- simulation (4)
- Algorithmische Geometrie (3)
- Autonomous UAV (3)
- Benchmarking (3)
- Computer Vision (3)
- CubeSat (3)
- Data Mining (3)
- Drahtloses lokales Netz (3)
- Echtzeitsystem (3)
- Energieeffizienz (3)
- Energy Efficiency (3)
- Information Extraction (3)
- Internet of Things (3)
- Latenz (3)
- LoRaWAN (3)
- Localization (3)
- Machine Learning (3)
- Mehrkriterielle Optimierung (3)
- Mensch-Maschine-Kommunikation (3)
- Mixed Reality (3)
- Netzwerkmanagement (3)
- Neuronales Netz (3)
- Peer-to-Peer-Netz (3)
- Punktwolke (3)
- Quadrotor (3)
- Ressourcenmanagement (3)
- Robotics (3)
- Software (3)
- Software-defined networking (3)
- UAV (3)
- Video Streaming (3)
- Videoübertragung (3)
- approximation algorithm (3)
- automation (3)
- complexity (3)
- crossing minimization (3)
- endoscopy (3)
- fully convolutional neural networks (3)
- gastroenterology (3)
- historical document analysis (3)
- human-computer interaction (3)
- information extraction (3)
- quality of experience (3)
- virtual environments (3)
- 3D model generation (2)
- 5G (2)
- Ausfallsicheres System (2)
- Ausfallsicherheit (2)
- Auto-Scaling (2)
- Benutzerschnittstelle (2)
- Berechnungskomplexität (2)
- Betriebssystem (2)
- Bildverarbeitung (2)
- CADe (2)
- Cloud Gaming (2)
- DNA storage (2)
- Deep Learning (2)
- Distributed computing (2)
- Dot-Depth Problem (2)
- Echtzeit (2)
- Effizienter Algorithmus (2)
- Entscheidbarkeit (2)
- Ethernet (2)
- Fernwartung (2)
- Forecasting (2)
- Framework <Informatik> (2)
- Future Internet (2)
- Hardware (2)
- Human-Robot-Interaction (2)
- IEEE 802.11 (2)
- Industrie 4.0 (2)
- Internet (2)
- Kommunikationsprotokoll (2)
- Komplexitätsklasse (2)
- Kreuzung (2)
- Künstliche Intelligenz (2)
- Lokalisation (2)
- MP-DCCP (2)
- Maschinelles Sehen (2)
- Mathematisches Modell (2)
- Mensch-Maschine-System (2)
- Mensch-Roboter-Interaktion (2)
- Metrics (2)
- Monitoring (2)
- NP-hardness (2)
- Ontologie <Wissensverarbeitung> (2)
- Optical Character Recognition (2)
- Optical Music Recognition (2)
- PROLOG <Programmiersprache> (2)
- Prognose (2)
- Raumfahrttechnik (2)
- Resilience (2)
- Resource Management (2)
- Self-Aware Computing (2)
- Sensor (2)
- Situation Awareness (2)
- Software Performance Engineering (2)
- Streaming <Kommunikationstechnik> (2)
- TSN (2)
- Teleoperation (2)
- Theoretical Computer Science (2)
- Travelling-salesman-Problem (2)
- Unmanned Aerial Vehicle (UAV) (2)
- User Interface (2)
- Venus (2)
- Verbotsmuster (2)
- Videospiel (2)
- Virtual Reality (2)
- Visualisierung (2)
- Wissensrepräsentation (2)
- XR (2)
- Zuverlässigkeit (2)
- agency (2)
- algorithms (2)
- autonomous (2)
- avatar embodiment (2)
- background knowledge (2)
- body weight modification (2)
- body weight perception (2)
- colonoscopy (2)
- communication networks (2)
- connected mobility applications (2)
- crowdsourcing (2)
- data warehouse (2)
- decidability (2)
- distance measurement (2)
- distributed control (2)
- dot-depth problem (2)
- education (2)
- educational tool (2)
- electronic health records (2)
- embodiment (2)
- emotions (2)
- endliche Automaten (2)
- endurance (2)
- exposure (2)
- finite automata (2)
- fog computing (2)
- forbidden patterns (2)
- formation control (2)
- games (2)
- graphs (2)
- immersive technologies (2)
- intrusion detection (2)
- jitter (2)
- knowledge acquisition (2)
- knowledge engineering (2)
- knowledge representation (2)
- knowledge-based systems (2)
- latency (2)
- locomotion (2)
- measurements (2)
- medieval manuscripts (2)
- mobile laser scanning (2)
- mobile networks (2)
- multipath (2)
- multipath scheduling (2)
- natural language processing (2)
- network calculus (2)
- neume notation (2)
- neural networks (2)
- object detection (2)
- ontology (2)
- optimization (2)
- performance evaluation (2)
- performance modeling (2)
- performance monitoring (2)
- pose estimation (2)
- prediction (2)
- real-time (2)
- regular languages (2)
- reguläre Sprachen (2)
- rehabilitation (2)
- satellite communication (2)
- self-adaptive systems (2)
- self-aware computing (2)
- sensor fusion (2)
- stroke (2)
- unmanned aerial vehicles (2)
- user experience (2)
- user study (2)
- virtual body ownership (2)
- wearable (2)
- 3D Laser Scanning (1)
- 3D Pointcloud (1)
- 3D Punktwolke (1)
- 3D Reconstruction (1)
- 3D Sensor (1)
- 3D Vision (1)
- 3D mapping (1)
- 3D object recognition (1)
- 3D point cloud (1)
- 3D thermal mapping (1)
- 3D-Rekonstruktion (1)
- 3D-reconstruction methods (1)
- 3DTK toolkit (1)
- 3d point clouds (1)
- 4D-GIS (1)
- 4G Networks (1)
- 5G core network (1)
- 5G-ATSSS (1)
- 5GC (1)
- 6DOF Pose Estimation (1)
- 6G (1)
- ATSSS (1)
- AVA (1)
- Abhängigkeitsgraph (1)
- Adaptive Video Streaming (1)
- Adaptives System (1)
- Adaptives Videostreaming (1)
- Add-on-Miss (1)
- Admission Control (1)
- Algorithmik (1)
- Alps (1)
- Alter Druck (1)
- Angewandte Mathematik (1)
- Anomalieerkennung (1)
- Anwendungsfall (1)
- Apple Watch 7 (1)
- Application-Aware Resource Management (1)
- Approximation (1)
- Arterie (1)
- Artery (1)
- Attitude Determination and Control (1)
- Attitude Dynamics (1)
- Attitude Heading Reference System (AHRS) (1)
- Automat <Automatentheorie> (1)
- Automata Theory (1)
- Automatentheorie (1)
- Automatic Text Recognition (1)
- Automation (1)
- Automatische Texterkennung (ATR) (1)
- Autonomic Computing (1)
- Autonomous Robot (1)
- Autonomous multi-vehicle systems (1)
- Autoreduzierbarkeit (1)
- Autorotation (1)
- Außerschulische Bildung (1)
- Avatar <Informatik> (1)
- Avionik (1)
- Backbone-Netz (1)
- Background Knowledge (1)
- Balloon (1)
- Base composition (1)
- Baseline Constrained LAMBDA (1)
- Bayes analysis (1)
- Bayes-Verfahren (1)
- Bayesian model comparison (1)
- Benutzerinteraktion (1)
- Berechenbarkeit (1)
- Bernoulli (1)
- Bernoulli Raum (1)
- Bernoulli Space (1)
- Beschriftung (1)
- Beschriftung von Straßen (1)
- Bestärkendes Lernen (1)
- Bestärkendes Lernen <Künstliche Intelligenz> (1)
- Bewegungskompensation (1)
- Bewegungskoordination (1)
- Beweissystem (1)
- Biased gene conversion (1)
- Bioinformatik (1)
- BitTorrent (1)
- Bodenstation (1)
- Boolean Grammar (1)
- Boolean equivalence (1)
- Boolean functions (1)
- Boolean hierarchy (1)
- Boolean isomorphism (1)
- Boolesche Funktionen (1)
- Boolesche Grammatik (1)
- Boolesche Hierarchie (1)
- Broadcast Growth Codes (BCGC) (1)
- CASE (1)
- CDN-Netzwerk (1)
- CEF (1)
- CLIP (1)
- Call Graph (1)
- Cellular Networks (1)
- Character Networks (1)
- Character Reference Detection (1)
- Chord (1)
- Clinical Data Warehouse (1)
- Clones (1)
- Cloud (1)
- Cloud computing (1)
- Cloud-native (1)
- Communication (1)
- Communication Networks (1)
- Compass framework (1)
- Compiler (1)
- Complexity Theory (1)
- Complicacy (1)
- Compression (1)
- Computational Geometry (1)
- Computational complexity (1)
- Computer Science education (1)
- Computerkartografie (1)
- Computersicherheit (1)
- Computersimulation (1)
- Computerspiel (1)
- Computerunterstütztes Lernen (1)
- Conjunction analysis (1)
- Containerization (1)
- Content Delivery Network (1)
- Content Distribution (1)
- Control room (1)
- Convoy Protection (1)
- Cooperative UAV (1)
- Coreference (1)
- Cost-benefit analysis (1)
- Couch tracking (1)
- Crowd sourcing (1)
- Crowdsensing (1)
- CubeSat GNSS (1)
- Cyber-physisches System (1)
- DASH (1)
- DHT (1)
- Daedalus-Projekt (1)
- Danish hernia database (1)
- Data Fusion (1)
- Data Science (1)
- Data Warehouse (1)
- Data-Warehouse-Konzept (1)
- Datenkommunikationsnetz (1)
- Datenübertragung (1)
- Debugging (1)
- DecaWave (1)
- Decentralized formation control (1)
- Decision Support (1)
- Declarative Performance Engineering (1)
- Deep Georeferencing (1)
- Deep Reinforcement Learning (1)
- Deflection routing (1)
- Delay Tolerant Network (1)
- Dependency Graph (1)
- Design (1)
- Design and Development (1)
- Design patterns (1)
- Desynchronisation (1)
- Desynchronization (1)
- Dezentrale Regelung (1)
- Dichotomy (1)
- Didaktik der Informatik (1)
- Differential GPS (DGPS) (1)
- Digital Humanities (1)
- Digitale Karte (1)
- Dijkstra’s algorithm (1)
- Directed Flight (1)
- Disjoint pair (1)
- Diskrete Simulation (1)
- Distributed Control (1)
- Distributed Space Systems (1)
- Distributed System (1)
- Document Analysis (1)
- Domain Knowledge (1)
- Domänenspezifische Sprache (1)
- Dot-Depth-Hierarchie (1)
- Drahtloses Sensornetz (1)
- Drahtloses vermaschtes Netz (1)
- Dreidimensionale Bildverarbeitung (1)
- Dreidimensionale Rekonstruktion (1)
- Drohne <Flugkörper> (1)
- Dynamic Memory Management (1)
- Dynamische Speicherverwaltung (1)
- E-Learning (1)
- EHS classification (1)
- EPM (1)
- EUROASPIRE survey (1)
- Earth Observation (1)
- Echtzeit-Netzwerke (1)
- Echtzeit (1)
- Edge-MEC-Cloud (1)
- Edge-based Intelligence (1)
- Educational robotics (1)
- Educational robotics competitions (1)
- Eindringerkennung (1)
- Eingebettetes System (1)
- Elasticity (1)
- Elasticity tensor (1)
- Elastizitätstensor (1)
- Elektrizitätsverbrauch (1)
- Embedded Systems (1)
- End-to-End Automation (1)
- Ende-zu-Ende Automatisierung (1)
- Endpoint Mobility (1)
- Energy efficiency (1)
- Enterprise application (1)
- Enterprise-Resource-Planning (1)
- Enthaltenseinproblem (1)
- Environmental (1)
- Erderkundungssatellit (1)
- Erfüllbarkeitsproblem (1)
- Error-State Extendend Kalman Filter (1)
- Erweiterte Realität (1)
- Erweiterte Realität <Informatik> (1)
- Euclidean plane (1)
- Euklidische Ebene (1)
- Euler equations (1)
- Euler-Lagrange-Gleichung (1)
- Evaluation (1)
- Evolution (1)
- Expected MOS (1)
- Expected QoE (1)
- Expert System (1)
- Expertensystem (1)
- Expresses genes (1)
- FIFO caching strategies (1)
- FPGA (1)
- FRAMEWORK <Programm> (1)
- Fachdidaktik (1)
- Failure Prediction (1)
- Fairness (1)
- Feature Based Registration (1)
- Feature Engineering & Extraction (1)
- Fehlertoleranz (1)
- Fehlervorhersage (1)
- Fernsteuerung (1)
- Field programmable gate array (1)
- Fitbit Sense (1)
- Flugkörper (1)
- Flugnavigation (1)
- Flugregelung (1)
- Forces (1)
- Formal analysis (1)
- Formal verification (1)
- Formale Sprache (1)
- Formation (1)
- Formation Flight (1)
- Formationsbewegung (1)
- Forschungssatellit (1)
- Fraud detection (1)
- Funkressourcenverwaltung (1)
- Funktechnik (1)
- GC-Content (1)
- GNSS/INS integrated navigation (1)
- GPS (1)
- GPS Receiver (1)
- Game mechanic (1)
- Gamification (1)
- Garmin Fenix 6 Pro (1)
- Geleitzug (1)
- Generalisierung <Kartografie> (1)
- Generation Problem (1)
- Generierungsproblem (1)
- Genetic Optimization (1)
- Genetische Optimierung (1)
- Geo-spatial behavior (1)
- Geoinformationssystem (1)
- Georeferenzierung (1)
- Geospatial (1)
- Geschäftsanwendung (1)
- Gimbaled tracking (1)
- Global Navigation Satellite System (GNSS) (1)
- Global Positioning System (GPS) (1)
- Good-or-Better (GoB) (1)
- Graphen (1)
- Gravitationsmodellunsicherheit (1)
- Gravity model uncertainty (1)
- Ground Station Networks (1)
- H-infinity (1)
- H.264 SVC (1)
- H.264/SVC (1)
- HMD (Head-Mounted Display) (1)
- HSPA (1)
- HTTP adaptive video streaming (1)
- Halbordnungen (1)
- Herzkatheter (1)
- Herzkathetereingriff (1)
- Higher rates (1)
- Hintergrundwissen (1)
- Historical Maps (1)
- Historical Printings (1)
- Historische Karte (1)
- Historische Landkarten (1)
- Human behavior (1)
- Human genome (1)
- Human-Computer Interaction (1)
- Humangenetik (1)
- Hyperbolische Differentialgleichung (1)
- Hypothesis comparison (1)
- ICD-coding of CKD (1)
- IEEE 802.11e (1)
- IEEE 802.15.4 (1)
- IEEE Std 802.15.4 (1)
- INS/LIDAR integrated navigation (1)
- IP (1)
- ISS <Raumfahrt> (1)
- IT security (1)
- Ignorance (1)
- Ignoranz (1)
- Image Aesthetic Assessment (1)
- Image Processing (1)
- Implementierung <Informatik> (1)
- In-Orbit demonstration (1)
- Industrial internet (1)
- Informatik (1)
- Information Retrieval (1)
- Instrument Control Toolbox (1)
- Integer Expression (1)
- Integer circuit (1)
- Intelligent Real-time Interactive System (1)
- Intelligent Realtime Interactive System (1)
- Intelligent Transportation Systems (1)
- Intelligent Virtual Agents (1)
- Intelligent Virtual Environment (1)
- Intelligent mobile system (1)
- InteractionSuitcase (1)
- Interaktion (1)
- Interaktive Karten (1)
- Interkulturelles Lernen (1)
- International Comparative Research (1)
- Internet Protokoll (1)
- Internet der Dinge (1)
- Intra-Spacecraft Communication (1)
- Isomorphie (1)
- Itinerare (1)
- Itineraries (1)
- JCAS (1)
- Bernoulli, Jakob <Mathematiker, 1655-1705> (1)
- Java <Programmiersprache> (1)
- Java Message Service (1)
- K band ranging (KBR) (1)
- Kademlia (1)
- Kalman-Filter (1)
- Kanalzugriff (1)
- Karte (1)
- Kathará (1)
- Kerneldensity estimation (1)
- Kinetische Gleichung (1)
- Klassendiagramm (1)
- Klima (1)
- Klinisches Experiment (1)
- Knowledge Discovery (1)
- Knowledge Representation Layer (1)
- Knowledge encoding (1)
- Knowledge engineering (1)
- Knowledge-based Systems Engineering (1)
- Kombinatorik (1)
- Kommunikation (1)
- Kommunikationsnetze (1)
- Komplexitätsklasse NP (1)
- Konjunktionsanalyse (1)
- Konvexe Zeichnungen (1)
- Konvoi (1)
- Kooperierende mobile Roboter (1)
- Kosten-Nutzen-Analyse (1)
- Kreuzungsminimierung (1)
- Kurve (1)
- LFU (1)
- LRU (1)
- LUMEN (1)
- Lageregelung (1)
- Landkartenbeschriftung (1)
- Landnutzungskartierung (1)
- Laser scanning (1)
- Latency Bound (1)
- Lava (1)
- Lehrerbildung (1)
- Leistungsbedarf (1)
- Lernen (1)
- Lidar (1)
- Lightning (1)
- Link rate adaptation (1)
- Linked Data (1)
- Linkratenanpassung (1)
- Linux (1)
- LoRa (1)
- LoRaWan (1)
- Logging (1)
- Logic Programming (1)
- Logische Programmierung (1)
- Logistik (1)
- Loose Coupling (1)
- Low Earth Orbit (1)
- Lunar Caves (1)
- Lunar Exploration (1)
- MAC (1)
- MAC Protocol (1)
- MASim (1)
- MEMS IMU (1)
- MHD equations (1)
- MLC tracking (1)
- MSC: 49M37 (1)
- MSC: 65K05 (1)
- MSC: 90C30 (1)
- MSC: 90C40 (1)
- MTC (1)
- Magnetohydrodynamische Gleichung (1)
- Mammalian genomes (1)
- Mapping (1)
- Markov model (1)
- Markovian and Non-Markovian systems (1)
- Mars (1)
- Mathematische Modellierung (1)
- Matlab (1)
- Measurement-based Analysis (1)
- Media Access Control (1)
- Medical Image Analysis (1)
- Medienkompetenz (1)
- Medium <Physik> (1)
- Medizin (1)
- Mehragentensystem (1)
- Mehrfahrzeugsysteme (1)
- Mehrpfadübertragung (1)
- Mehrschichtnetze (1)
- Mehrschichtsystem (1)
- Mensch (1)
- Mesh Augmentation (1)
- Mesh Networks (1)
- Mesh Netze (1)
- Meta-modeling (1)
- Microservice (1)
- Middleware (1)
- Mikroservice (1)
- Mini Unmanned Aerial Vehicle (1)
- Miniaturisierung (1)
- Minimally invasive vascular intervention (1)
- Mitotizität (1)
- Mobile Sensor Network (1)
- Mobile Telekommunikation (1)
- Mobiles Internet (1)
- Mobilfunk (1)
- Mobility (1)
- Mobilität (1)
- Model based communication (1)
- Model based mission realization (1)
- Model comparison (1)
- Model extraction (1)
- Model transformation (1)
- Model-Agnostic (1)
- Model-based Performance Prediction (1)
- Modeling (1)
- Modell (1)
- Modellgetriebene Entwicklung (1)
- Modellierungstechniken (1)
- Modelling (1)
- Modul <Software> (1)
- Modularität (1)
- Moment <Stochastik> (1)
- Mond (1)
- Mondfahrzeug (1)
- Multi-Hop Topologie (1)
- Multi-Hop Topology (1)
- Multi-Layer (1)
- Multi-Network Service (1)
- Multi-Netzwerk Dienste (1)
- Multi-Paradigm Programming (1)
- Multi-Paradigm Programming Framework (1)
- Multi-Stakeholder (1)
- Multimodal Processing (1)
- Multimodal System (1)
- Multimodales System (1)
- Multipath Transmission (1)
- Mustererkennung (1)
- NP (1)
- NP-Vollständigkeit (1)
- NP-complete sets (1)
- NP-hard (1)
- NP-hartes Problem (1)
- NP-schweres Problem (1)
- NP-vollständiges Problem (1)
- Nano-Satellite (1)
- NanoFEEP (1)
- Nanosatellit (1)
- Navigation analysis (1)
- Network Emulator (1)
- Network Experiments (1)
- Network Function Virtualization (1)
- Network Functions Virtualisation (1)
- Network Management (1)
- Network Measurements (1)
- Network Virtualization (1)
- Network routing (1)
- Network-on-Chip (1)
- Netzplantechnik (1)
- Netzplanung (1)
- Netzvirtualisierung (1)
- Netzwerkanalyse <Soziologie> (1)
- Netzwerkplanung (1)
- Netzwerktopologie (1)
- Netzwerkverwaltung (1)
- Netzwerkvirtualisierung (1)
- Neume Notation (1)
- Neumennotation (1)
- Neumenschrift (1)
- Next Generation Networks (1)
- Nichtholonome Fahrzeuge (1)
- Nichtlineare Regelung (1)
- Nutzerstudie (1)
- Nutzerstudien (1)
- OMICS (1)
- Object Detection (1)
- Object-Oriented Programming (1)
- Objektorientierte Programmierung (1)
- Onboard (1)
- Onboard Software (1)
- Open Innovation (1)
- OpenFlow (1)
- Operator (1)
- Optical Flow (1)
- Optimal control (1)
- Optimale Kontrolle (1)
- Optimale Regelung (1)
- Optimalwertregelung (1)
- Optimierung (1)
- Optimierungsproblem (1)
- Optische Musikerkennung (OMR) (1)
- Optische Zeichenerkennung (1)
- Optische Zeichenerkennung (OCR) (1)
- Orakel <Informatik> (1)
- Orbit determination (1)
- Orbitbestimmung (1)
- Organ motion (1)
- Overlay (1)
- Overlay Netzwerke (1)
- Overlay networks (1)
- Overlays (1)
- P-optimal (1)
- P4-INT (1)
- PMD (1)
- Panorama Images (1)
- Partition <Mengenlehre> (1)
- Partitionen (1)
- Path Computation Element (1)
- Pattern Mining (1)
- Pattern Recognition (1)
- Peer-to-Peer (1)
- Performance (1)
- Performance Analysis (1)
- Performance Enhancing Proxies (1)
- Performance Management (1)
- Performance Modeling (1)
- Performance analysis (1)
- Pfadberechnungselement (1)
- Phasenmehrdeutigkeit (1)
- Picosatellite (1)
- Planare Graphen (1)
- Planung (1)
- Plasmaantrieb (1)
- Platooning (1)
- Platzierungsalgorithmen (1)
- Poisson surface reconstruction (1)
- Polyeder (1)
- Polygonzüge (1)
- Positioning (1)
- Post's Classes (1)
- Postsche Klassen (1)
- Power Consumption (1)
- Prediction (1)
- Prediction Procedure (1)
- Problemlösefähigkeiten (1)
- Propositional proof system (1)
- Prospect Theory (1)
- Psychische Gesundheit (1)
- Publish-Subscribe-System (1)
- Punktbeschriftungen (1)
- Q-Learning (1)
- QUIC (1)
- QoE Monitoring (1)
- QoE estimation (1)
- QoE fundamentals (1)
- QoE-Abschätzung (1)
- QoS (1)
- QoS-QoE mapping functions (1)
- Qualitative representation and reasoning (1)
- Quality of Experience (QoE) (1)
- Quality of Experience QoE (1)
- Quality of Service (1)
- Quality of Service (QoS) (1)
- Quality-of-Experience (1)
- Quality-of-Service (1)
- Quality-of-Service (QoS) (1)
- Quantor (1)
- Queueing theory (1)
- Quotation Attribution (1)
- RAS Evaluation (1)
- RGB-D (1)
- RINEX Format (1)
- RLNC (1)
- RNA-SEQ (1)
- RRM (1)
- Randomness (1)
- Raumdaten (1)
- Raumfahrt (1)
- Raumfahrzeug (1)
- Raumverhalten (1)
- Real-Time Operating Systems (1)
- Real-Time-Networks (1)
- Real-time (1)
- Real-time Kinematics (RTK) (1)
- Rechenzentrum (1)
- Refactoring (1)
- Refaktorisierung (1)
- Regelbasiertes Modell (1)
- Regelung (1)
- Registration (1)
- Registrierung (1)
- Regression (1)
- Reguläre Sprache (1)
- Reinforcement Learning (1)
- Relation Detection (1)
- Rendezvous (1)
- Reproducibility (1)
- Research Station (1)
- Resource and Performance Management (1)
- Ressourcen Management (1)
- Ressourcenallokation (1)
- Rettungsroboter (1)
- Roboterwettbewerbe (1)
- Robotic tracking (1)
- Rodents (1)
- Rodos (1)
- Route Choice (1)
- Route Entscheidung (1)
- Räumliches Verhalten (1)
- SBA (1)
- SDN Controllers (1)
- SDN Switches (1)
- SDN/NVF (1)
- SLAM (1)
- ST-elevation myocardial infarction (1)
- SVC (1)
- Satellite Ground Station (1)
- Satellite Network (1)
- Satellite formation (1)
- Satellitenfunk (1)
- Scheduling (1)
- Search-and-Rescue (1)
- Selbstkalibrierung (1)
- Selbstorganisation (1)
- Self-calibration (1)
- Semantic Entity Model (1)
- Semantic Search (1)
- Semantic Technologies (1)
- Semantic Web (1)
- Semantics (1)
- Semantik (1)
- Semantische Analyse (1)
- Sensing-aaS (1)
- Sensorfusion (1)
- Serious game (1)
- Server (1)
- Service Mobility (1)
- Service-level Quality Index (SQI) (1)
- Sichtbarkeit (1)
- Similarity Measure (1)
- Simulator (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skype (1)
- Small Satellites (1)
- Smart User Interaction (1)
- Snow Line Elevation (1)
- Social Media (1)
- Social Web (1)
- Software Architecture (1)
- Software Performance Modeling (1)
- Software Quality (1)
- Software-based Networks (1)
- Software-defined Networking (1)
- Softwareentwicklung (1)
- Softwaremetrie (1)
- Softwaresystem (1)
- Softwaretest (1)
- Softwarewartung (1)
- Softwarewiederverwendung (1)
- Softwarisierte Netze (1)
- Source Code Generation (1)
- Source Code Visualization (1)
- Soziale Software (1)
- Soziales Netzwerk (1)
- Space Debris (1)
- SpaceWire (1)
- Spacecrafts (1)
- Spam Detection (1)
- Spatial behavior (1)
- Spherical Robot (1)
- Spielmechanik (1)
- Standardisierung (1)
- Standortproblem (1)
- Statische Analyse (1)
- Statistische Hypothese (1)
- Sternfreie Sprache (1)
- Steuerung (1)
- Stiffness (1)
- Stochastik (1)
- Strahlentherapie (1)
- Straubing-Thérien-Hierarchie (1)
- Strecken (1)
- Structure-from-Motion (1)
- Strukturelle Komplexität (1)
- Studie (1)
- Subgroup Discovery (1)
- Subgroup Mining (1)
- Subgruppenentdeckung (1)
- System-on-Chip (1)
- TETCs (1)
- TTL (1)
- TTL validation of data consistency (1)
- Tagging (1)
- Technical Documentation (1)
- Technische Unterlage (1)
- Telematik (1)
- Telemedizin (1)
- Telemetrie (1)
- Terramechanics (1)
- Testbed (1)
- Textanalyse (1)
- Theoretical computer science (1)
- Thermografie (1)
- Thermospheric density uncertainty (1)
- Thermosphärische Dichteunsicherheit (1)
- Thrust Vector Control (1)
- Time-Sensitive Networking (1)
- Time-Sensitive-Networking (1)
- Torque (1)
- Traffic (1)
- Traffic Management (1)
- Trainingssystem (1)
- Trajectory tracking (1)
- Transportsystem (1)
- Triangulation (1)
- Tumor motion (1)
- Tumorbewegung (1)
- U-Bahnlinienplan (1)
- UAP (1)
- UFO (1)
- UI and Interaction Design (1)
- UML Klassendiagramm (1)
- UML class diagram (1)
- UMTS (1)
- URL (1)
- URLLC (1)
- UWB (1)
- UWE-4 (1)
- Ultra-Wideband (UWB) radio ranging (1)
- Ultraweitband (1)
- Umfrage (1)
- Umwelt (1)
- Uncertainty (1)
- Uncertainty realism (1)
- Underwater Mapping (1)
- Underwater Scanning (1)
- Unified Monitoring (1)
- Unmanned Aerial Vehicle (1)
- Unsicherheit (1)
- Unsicherheitsrealismus (1)
- Unstetige Regelung (1)
- Usability (1)
- Use case (1)
- User Behavior (1)
- User Participation (1)
- User interfaces (1)
- User studies (1)
- VHDL (1)
- VNF (1)
- VPN (1)
- Validation (1)
- Vehicle Routing Problem (1)
- Veranstaltung (1)
- Verbotenes Muster (1)
- Verbände (1)
- Verifikation (1)
- Verkehrsleitsystem (1)
- Verkehrslenkung (1)
- Verkehrsmanagement (1)
- Verkehrsregelung (1)
- Verteiltes Datenbanksystem (1)
- Verteilung von Inhalten (1)
- Video Game QoS (1)
- Video Quality Monitoring (1)
- Virtuelles Netz (1)
- Virtuelles Netzwerk (1)
- Visibility (1)
- Vision Based (1)
- Visual Tracking (1)
- Visualization (1)
- Visualized Kathará (1)
- Voice-over-IP (VoIP) (1)
- Vorhersage (1)
- Vorhersagetheorie (1)
- Vorhersageverfahren (1)
- WLAN (1)
- Wahrscheinlichkeitsverteilung (1)
- Warteschlangentheorie (1)
- Wartung (1)
- Web navigation (1)
- Web2.0 (1)
- WhatsApp (1)
- Wheel (1)
- Winkel (1)
- Wire relaxation (1)
- Wireless LAN (1)
- Wireless Mesh Networks (1)
- Wireless Network (1)
- Wireless Sensor/Actuator Systems (1)
- Wissensakquisition (1)
- Wissensbasiertes System (1)
- Wissensencodierung (1)
- Wissensentdeckung (1)
- Wissensentwicklung (1)
- Wissensextraktion (1)
- Wissenstechnik (1)
- Withings ScanWatch (1)
- Worterweiterungen (1)
- XR-artificial intelligence combination (1)
- XR-artificial intelligence continuum (1)
- YouTube (1)
- Zeichnen von Graphen (1)
- Zeitdiskretes System (1)
- Zeitreihe (1)
- Zeitreihenanalyse (1)
- Zeitreihenvorhersage (1)
- Zufall (1)
- Zugangskontrolle (1)
- Zugangsnetz (1)
- Zählprobleme (1)
- abdominal wall hernia (1)
- abdominal wall surgery (1)
- abgeschlossene Klassen (1)
- acrophobia (1)
- adaptation models (1)
- adaptive network coding (1)
- adaptive tutoring (1)
- administrative boundary (1)
- admission control (1)
- adult learning (1)
- aerodynamic drag reduction (1)
- aerodynamics (1)
- aerospace (1)
- aerospace engineering (1)
- affective appraisal (1)
- affective computing (1)
- agent-based models (1)
- agents (1)
- agile Prozesse (1)
- agile processes (1)
- ancillary services (1)
- angular schematization (1)
- annotation (1)
- anomaly detection (1)
- anomaly prediction (1)
- ant-colony optimization (1)
- antenna phase center calibration (1)
- anthropomorphism (1)
- antibiotic prophylaxis (1)
- anxiety (1)
- application design (1)
- approximation algorithms (1)
- arithmetic calculations (1)
- asymptotic preserving (1)
- attack-aware (1)
- attitude determination (1)
- auction based task assignment (1)
- authoring environment (1)
- authoring platform (1)
- automated map labeling (1)
- automatic Layout (1)
- automatische Beschriftungsplatzierung (1)
- automatisches Layout (1)
- autonomic orchestration (1)
- autonomous UAV (1)
- autorotation (1)
- availability (1)
- backpack mobile mapping (1)
- baseline detection (1)
- behaviometric (1)
- behavior perception (1)
- beyond planarity (1)
- binary tanglegram (1)
- bioelectronics (1)
- biomechanic (1)
- biomechanical engineering (1)
- biomimetics (1)
- biosignals (1)
- blood coagulation factor XIII (1)
- body awareness (1)
- body image distortion (1)
- body image disturbance (1)
- boundary labeling (1)
- building (1)
- calibration (1)
- camera orientation (1)
- car-like robots (1)
- carbon (1)
- cardiac magnetic resonance imaging (1)
- cardiac surgery (1)
- cardiac training group (1)
- cardiorespiratory fitness (1)
- cartographic requirements (1)
- certifying algorithm (1)
- chain cover (1)
- change detection (1)
- channel management (1)
- characterization (1)
- chronic kidney disease (1)
- circular layouts (1)
- circular-arc drawings (1)
- climate (1)
- clinical data warehouse (1)
- clinical measurement in health technology (1)
- clinical study (1)
- co-authorships (1)
- co-inventorships (1)
- coherence (1)
- collaboration (1)
- collaborative interaction (1)
- collision (1)
- collision avoidance (1)
- collision detection (1)
- communication models (1)
- communication network (1)
- competitive location (1)
- computational complexity (1)
- computer performance evaluation (1)
- computergestützte Softwaretechnik (1)
- concurrent design facility (1)
- congruence (1)
- consensus (1)
- constrained forest (1)
- contact representation (1)
- container virtualization (1)
- content-based image retrieval (1)
- continuous-time SLAM (1)
- controller failure recovery (1)
- convex bipartite graph (1)
- coronary heart disease (1)
- cost-sensitive learning (1)
- counting problems (1)
- crowdsensing (1)
- crowdsourced QoE measurements (1)
- crowdsourced measurements (1)
- crowdsourced network measurements (1)
- cultural and media studies (1)
- culturally aware (1)
- curves (1)
- cybersickness (1)
- cycling (1)
- d3web.Train (1)
- data fusion (1)
- data plane programming (1)
- data structure (1)
- dataplane programming (1)
- dataset (1)
- decision support (1)
- decision support system (1)
- decision-making (1)
- decoding error rate (1)
- deep metric learning (1)
- definite clause grammars (1)
- deformation-based method (1)
- delay QoS exponent (1)
- delay bound violation probability (1)
- delay constrained (1)
- denial of service (1)
- dependable software (1)
- descent (1)
- descriptors (1)
- design (1)
- design cycle (1)
- detection time simulation (1)
- dial a ride (1)
- digital twin (1)
- dimensions of proximity (1)
- discrete-time analysis (1)
- discrete-time models and analysis (1)
- disjoint multi-paths (1)
- distance compression (1)
- distraction (1)
- docker (1)
- document analysis (1)
- documents (1)
- drag area (1)
- dynamic adaptive streaming over http (1)
- dynamic flow migration (1)
- dynamic programming (1)
- dynamic protein-protein interactions (1)
- eHealth (1)
- eating and body weight disorders (1)
- edge labeled graphs (1)
- educational games (1)
- effective Bandwidth (1)
- efficient algorithm (1)
- electric propulsion (1)
- electric vehicles (1)
- electronic data capture (1)
- elevated plus-maze (1)
- embedding techniques (1)
- emulation (1)
- encryption (1)
- energy efficiency (1)
- environmental modeling (1)
- epigastric hernia (1)
- ethics (1)
- evaluation (1)
- event detection (1)
- exercise intensity (1)
- experience (1)
- experimental evaluation (1)
- expert systems (1)
- explainable AI (1)
- explanation complexity (1)
- extended Kalman filter (1)
- extended reality (XR) (1)
- failure prediction (1)
- fast reroute (1)
- fault detection (1)
- feature matching (1)
- federated learning (1)
- femoral hernia (1)
- few-shot learning (1)
- finite recurrent systems (1)
- fitness trackers (1)
- fixed-parameter tractability (1)
- food quality (1)
- force feedback (1)
- forecast (1)
- foreign language learning and teaching (1)
- formation driving (1)
- formation flight (1)
- fractionated spacecraft (1)
- fruit temperature (1)
- future Internet architecture (1)
- future energy grid exploration (1)
- gait disorder (1)
- gambling (1)
- game mechanics (1)
- gamification (1)
- genetic algorithm (1)
- glaucoma progression (1)
- global IPX network (1)
- granular (1)
- graph (1)
- graph algorithm (1)
- graph decomposition (1)
- group-based communication (1)
- hackathons (1)
- handwriting (1)
- haptic data (1)
- hardness (1)
- hardware-in-the-loop simulation (1)
- hardware-in-the-loop streaming system (1)
- harness free satellite (1)
- hazard avoidance (1)
- head-mounted display (1)
- healing and remodelling processes (1)
- health monitoring (1)
- health sciences (1)
- health tracker (1)
- healthcare (1)
- healthcare professionals (1)
- heart failure (1)
- heart failure training group (1)
- heat transfer (1)
- helicopters (1)
- hernia defect (1)
- hernia repair material (1)
- heterogeneous background (1)
- hierarchy (1)
- high-accuracy 3D measurements (1)
- higher education (1)
- historical images (1)
- historical printings (1)
- hit ratio analysis and simulation (1)
- hospital data (1)
- human behaviour (1)
- human body weight (1)
- human computer interaction (HCI) (1)
- human-artificial intelligence interaction (1)
- human-artificial intelligence interface (1)
- human-centered AI (1)
- human-centered design (1)
- human-centered, human-robot (1)
- human-robot interaction (1)
- human–computer interaction (1)
- hybrid access (1)
- hybrid avatar-agent systems (1)
- hyperbolic partial differential equations (1)
- illusion of self-motion (1)
- image classification (1)
- image processing (1)
- imbalanced regression (1)
- immersive classroom (1)
- immersive classroom management (1)
- immersive interfaces (1)
- immersive learning technologies (1)
- implicit association test (1)
- in-orbit experiments (1)
- incisional abdominal wall hernia (1)
- incisional hernia (1)
- independent crossing (1)
- individual differences (1)
- induced matching (1)
- informal education (1)
- information retrieval (1)
- information systems and information technology (1)
- infrared (1)
- infrared detectors (1)
- inguinal hernia (1)
- insect tracking (1)
- instrument (1)
- integer linear programming (1)
- intelligent transportation systems (1)
- intelligent vehicles (1)
- intelligent virtual agents (1)
- intelligent voice assistant (1)
- intelligente Applikationen (1)
- interactive authoring system (1)
- interactive maps (1)
- intercultural learning and teaching (1)
- interdisciplinary education (1)
- internet of things (1)
- internet protocol (1)
- internet traffic (1)
- intervention (1)
- invasive vascular interventions (1)
- iowa gambling task (1)
- isentropic Euler equations (1)
- k-d tree (1)
- key-insight extraction (1)
- kinect (1)
- kinetic equations (1)
- labeling (1)
- land-cover area (1)
- landing (1)
- language-image pre-training (1)
- laser ranging (1)
- laser scanner (1)
- laserscanner (1)
- latency cybersickness (1)
- laterality (1)
- lattices (1)
- layout recognition (1)
- learning environments (1)
- least cost (1)
- lidar (1)
- light-gated proteins (1)
- load balancing (1)
- local energy system (1)
- logic programming (1)
- logistics (1)
- long-term analysis (1)
- lunar rover (1)
- exercise training (1)
- magnetometer (1)
- maintenance (1)
- man-portable mapping (1)
- map labeling (1)
- map projections (1)
- marine navigation (1)
- mathematical model (1)
- mechanical engineering (1)
- mechanics (1)
- media analysis (1)
- medical records (1)
- medication extraction (1)
- meditation (1)
- membership problem (1)
- mesh augmentation (1)
- mesh repair (1)
- metro map (1)
- micrometre level microwave ranging (1)
- mindfulness (1)
- minimal triangulations (1)
- minimale Triangulationen (1)
- misconceptions (1)
- mixed reality (1)
- mixed-cultural (1)
- mixed-cultural settings (1)
- mobile instant messaging (1)
- mobile messaging application (1)
- mobile robots (1)
- mobile streaming (1)
- model following (1)
- model output statistics (1)
- model predictive control (1)
- model-based diagnosis (1)
- modeling techniques (1)
- monotone drawing (1)
- morphing (1)
- motion compensation (1)
- motivation (1)
- mountains (1)
- movement ecology (1)
- multi-source multi-sink problem (1)
- multi-vehicle formations (1)
- multi-vehicle rendezvous (1)
- multidisciplinary (1)
- multimodal fusion (1)
- multimodal interface (1)
- multimodal learning (1)
- multipath communication (1)
- multipath packet scheduling (1)
- multiple myeloma (1)
- multiple sclerosis (1)
- multirotors (1)
- multiscale encoder (1)
- nano-satellite (1)
- nanocellulose (1)
- natural environment (1)
- natural interfaces (1)
- natural language processing (1)
- natural user interfaces (1)
- negation detection (1)
- network (1)
- network design (1)
- network function virtualization (1)
- network planning (1)
- network simulation (1)
- network softwarization (1)
- network upgrade (1)
- network virtualization (1)
- networked predictive control (1)
- networked robotics (1)
- networking (1)
- networks (1)
- neural architecture (1)
- neural network (1)
- non-native accent (1)
- non-rigid registration (1)
- non-terrestrial networks (1)
- nonholonomic vehicles (1)
- normal distribution transform (1)
- nosocomial infection (1)
- nycthemeral intraocular pressure (1)
- object reconstruction (1)
- obstacle detection (1)
- octree (1)
- oncolytic virus (1)
- online survey (1)
- ontologies (1)
- optical character recognition (1)
- optical music recognition (1)
- optical underwater 3D sensor (1)
- optogenetics (1)
- orchestration (1)
- overprovisioning (1)
- packet reception method (1)
- parastomal hernia (1)
- partitions (1)
- passage of time (1)
- passive haptic feedback (1)
- path computation (1)
- patients’ awareness (1)
- perception (1)
- performance (1)
- performance analysis (1)
- performance parameters (1)
- performance prediction (1)
- personal laser scanning (1)
- personalized medicine (1)
- personalized training (1)
- phase unwrapping (1)
- photoplethysmography (1)
- physicians’ awareness (1)
- physiological dataset (1)
- physiology (1)
- place-illusion (1)
- plain orchestrating service (1)
- plausibility (1)
- plausibility-illusion (1)
- point cloud (1)
- point cloud compression (1)
- point cloud registration (1)
- point labeling (1)
- point-feature label placement (1)
- point-to-plane measure (1)
- point-to-point measure (1)
- pollution (1)
- polylines (1)
- polyp (1)
- pos (1)
- pose tracking (1)
- posets (1)
- positioning (1)
- power consumption (1)
- precision horticulture (1)
- precision training (1)
- presence (1)
- primary ventral hernia (1)
- private chat groups (1)
- problem solving skills (1)
- procedural content generation (1)
- procedural fusion methods (1)
- progressive download (1)
- prompt engineering (1)
- protein analysis (1)
- protein chip (1)
- psychomotor training (1)
- psychophysiology (1)
- public speaking (1)
- qoe (1)
- quadcopter (1)
- quadcopters (1)
- quadrocopter (1)
- quadrotor (1)
- quality assurance (1)
- quality evaluation (1)
- quality of experience prediction (1)
- quality of life (1)
- queueing theory (1)
- radio resource management (1)
- radiology (1)
- ransomware (1)
- real world evidence (1)
- real-world application (1)
- realism (1)
- receding horizon control (1)
- recommender agent (1)
- recommender system (1)
- reconfiguration (1)
- recurrent abdominal wall hernia (1)
- refactoring (1)
- regenerative cooling (1)
- registries (1)
- reinforcement learning (1)
- rekurrente Systeme (1)
- reload cost (1)
- remote control (1)
- rendezvous and docking (1)
- requirements management (1)
- research methods (1)
- resilience (1)
- rich vehicle routing problem (1)
- right angle crossing (1)
- right-left comparison (1)
- risks (1)
- robot-supported training (1)
- robotic (1)
- robotic tutor (1)
- robotics (1)
- robust control (1)
- robustness (1)
- rocket engine (1)
- rotorcraft (1)
- rotors (1)
- routing (1)
- rule-based analysis (1)
- sample weighting (1)
- sandfish (1)
- satellite formation flying (1)
- satellite technology (1)
- satisfiability problems (1)
- scalability (1)
- scalability evaluation (1)
- scalable quadcopter (1)
- scheduling (1)
- science, technology and society (1)
- secondary data usage (1)
- secure group communication (1)
- segmentation (1)
- self-adaptive (1)
- self-assembly (1)
- self-aware (1)
- self-aware computing systems (1)
- self-managing systems (1)
- self-organization (1)
- self-supervised learning (1)
- semantic fusion (1)
- semantic technologies (1)
- semantic understanding (1)
- semantic web (1)
- semantical aesthetic (1)
- semantische Ästhetik (1)
- sensor (1)
- sensor devices (1)
- sentinel (1)
- serious games (1)
- service-curve estimation (1)
- sensors (1)
- short block-length (1)
- shortest path routing (1)
- signaling traffic (1)
- signalling pathways (1)
- simulation system (1)
- simulator sickness (1)
- simultaneous embedding (1)
- single-electron transistors (1)
- site mapping (1)
- sketching (1)
- slip (1)
- smart charging (1)
- smart grid (1)
- smart meter data utilization (1)
- smart speaker (1)
- smartwatch (1)
- smooth orthogonal drawing (1)
- snow shoveling (1)
- social VR (1)
- social artificial intelligence (1)
- social interaction (1)
- social relationship (1)
- social robot (1)
- social robotics (1)
- social role (1)
- socially interactive agents (1)
- software defined network (1)
- software engineering (1)
- software performance (1)
- software-defined networking (1)
- space missions phases (1)
- spacecraft control (1)
- space–terrestrial networks (1)
- spanning tree (1)
- spatial presence (1)
- specular reflective (1)
- sports technology (1)
- standardization (1)
- state management (1)
- stationary preserving (1)
- statistical methods (1)
- statistical validity (1)
- statistics and numerical data (1)
- stereotypes (1)
- stochastic processes (1)
- straight-line segments (1)
- street labeling (1)
- structural battery (1)
- structural complexity (1)
- structured illumination (1)
- structured light illumination (1)
- student simulation (1)
- study design (1)
- stylus (1)
- sunburn (1)
- supervised learning (1)
- surface model (1)
- surrogate model (1)
- survey (1)
- sustainability (1)
- switching navigation (1)
- system simulation (1)
- systematic literature review (1)
- systematic review (1)
- table extraction (1)
- table understanding (1)
- taxonomy (1)
- teacher education (1)
- technology acceptance (1)
- technology-supported education (1)
- technology-supported learning (1)
- telematics (1)
- telemedicine (1)
- text line detection (1)
- text supervision (1)
- theory (1)
- therapeutic application (1)
- therapy (1)
- thermal camera (1)
- thermal point cloud (1)
- thrust direction (1)
- thrust vector control (1)
- time calibration (1)
- time perception (1)
- time series (1)
- timestamping method (1)
- tools (1)
- topology (1)
- traffic damping (1)
- training systems (1)
- trait anxiety (1)
- trajectory planning (1)
- transformer (1)
- translational neuroscience (1)
- transparent (1)
- transport microenvironments (1)
- transport protocols (1)
- transportation (1)
- tree (1)
- ultrasonic autonomous aerial vehicles (1)
- umbilical hernia (1)
- uncooperative space rendezvous (1)
- underwater 3D scanning (1)
- unmanned aerial vehicle (1)
- usability evaluation (1)
- use cases (1)
- user identification (1)
- user interaction (1)
- user-generated content (1)
- v (1)
- validation (1)
- vection (1)
- vehicle dynamics (1)
- vehicular navigation (1)
- ventral hernia (1)
- ventral hernia model (1)
- verbal behaviour (1)
- vernetzte Roboter (1)
- video QoE (1)
- video game QoE (1)
- video game context factors (1)
- video object detection (1)
- video streaming (1)
- virtual agent (1)
- virtual agent interaction (1)
- virtual embodiment (1)
- virtual human (1)
- virtual humans (1)
- virtual queue (1)
- virtual reality training (1)
- virtual social interaction (1)
- virtual stimuli (1)
- virtual tunnel (1)
- virtual-reality-continuum (1)
- virtualized environments (1)
- virtual reality (1)
- visualization (1)
- vom Nutzer erfahrene Dienstgüte QoE (1)
- voting location (1)
- waypoint parameter (1)
- wearable technologies (1)
- well-balanced scheme (1)
- wheel (1)
- wireless communication (1)
- wireless sensor network (1)
- wireless-bus (1)
- word clouds (1)
- word extensions (1)
- zooming (1)
- zukünftige Kommunikationsnetze (1)
- zukünftiges Internet (1)
- Ähnlichkeitsmaß (1)
- Änderungserkennung (1)
- Überwachungstechnik (1)
Institute
- Institut für Informatik (340) (remove)
Schriftenreihe
Sonstige beteiligte Institutionen
- Cologne Game Lab (3)
- Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Raumfahrtsysteme (2)
- Open University of the Netherlands (2)
- Siemens AG (2)
- Zentrum für Telematik e.V. (2)
- Airbus Defence and Space GmbH (1)
- Birmingham City University (1)
- California Institute of Technology (1)
- DLR (1)
- Deutsches Zentrum für Luft- und Raumfahrt e.V. (1)
The steadily increasing deployment of smart meters generates valuable high-resolution data about the individual energy consumption and production of local energy systems. Private households install more and more photovoltaic systems, battery storage, and large consumers such as heat pumps. Our vision is therefore to augment the collected smart meter time series of a complete system (e.g., a city, a town, or a complex institution such as an airport) with simulated versions of these components. We propose a novel digital twin of such an energy system, based solely on a complete set of smart meter data together with additional building data. Using the additional geospatial data, the twin is intended to represent the addition of the abovementioned components as realistically as possible. Its outputs can serve as decision support, either for system operators deciding where to strengthen the system or for individual households deciding where and how to install photovoltaic systems and batteries. Meanwhile, the first local energy system operators have had such smart meter data for almost all residential consumers for several years. We acquired the data of one exemplary operator and discuss a case study that presents some features of our digital twin and highlights the value of combining smart meter and geospatial data.
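The augmentation idea can be sketched in a few lines. This is a toy model under our own assumptions: the PV profile shape, peak power, and hourly resolution are illustrative and not part of the twin's actual simulation.

```python
import math

def simulated_pv_output(hour: float, peak_kw: float) -> float:
    """Toy clear-sky PV profile: zero at night, sinusoidal bump
    between 06:00 and 18:00 (shape and peak are assumptions)."""
    if hour < 6 or hour > 18:
        return 0.0
    return peak_kw * math.sin(math.pi * (hour - 6) / 12)

def augment_with_pv(load_kw, peak_kw):
    """Subtract simulated PV generation from measured smart meter
    load (hourly resolution) to obtain a hypothetical net load."""
    return [load - simulated_pv_output(h % 24, peak_kw)
            for h, load in enumerate(load_kw)]

# 24 hourly smart meter readings of a household (kW), flat 0.5 kW base load
measured = [0.5] * 24
net = augment_with_pv(measured, peak_kw=3.0)
print(min(net))  # negative at midday: PV surplus fed back into the grid
```

A real twin would replace the toy profile with irradiance data derived from the geospatial and building information (roof orientation, shading), but the augmentation step itself stays a per-timestamp combination of measured and simulated series.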
Purpose: A study of real-time adaptive radiotherapy systems was performed to test the hypothesis that, across delivery systems and institutions, dosimetric accuracy is improved by adaptive treatments over non-adaptive radiotherapy in the presence of patient-measured tumor motion. Methods and materials: Ten institutions with robotic (2), gimbaled (2), MLC (4), or couch tracking (2) used common materials, including CT and structure sets, motion traces, and planning protocols, to create a lung and a prostate plan. For each motion trace, the plan was delivered twice to a moving dosimeter, with and without real-time adaptation. Each measurement was compared to a static measurement, and the percentage of failed points in gamma tests was recorded. Results: For all lung traces, all measurement sets show improved dose accuracy, with a mean 2%/2 mm gamma-fail rate of 1.6% with adaptation and 15.2% without adaptation (p < 0.001). For all prostate traces, the mean 2%/2 mm gamma-fail rate was 1.4% with adaptation and 17.3% without adaptation (p < 0.001). The difference between the four systems was small, with an average 2%/2 mm gamma-fail rate below 3% for all systems with adaptation for both lung and prostate. Conclusions: The investigated systems all accounted for realistic tumor motion accurately and performed to a similarly high standard, with real-time adaptation significantly outperforming non-adaptive delivery methods.
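The 2%/2 mm gamma criterion reported above can be illustrated with a minimal 1D sketch. This is purely illustrative and not the study's analysis code: a measured point passes if some reference point is simultaneously close in position (distance-to-agreement) and close in dose.

```python
import math

def gamma_fail_rate(ref, meas, spacing_mm, dose_tol=0.02, dta_mm=2.0):
    """Toy 1D gamma analysis (2%/2 mm): percentage of measured points
    for which no reference point is close enough in both position and
    dose (dose differences taken relative to the reference maximum)."""
    fails = 0
    for i, d_meas in enumerate(meas):
        gammas = []
        for j, d_ref in enumerate(ref):
            dist = abs(i - j) * spacing_mm
            dose_diff = (d_meas - d_ref) / max(ref)
            gammas.append(math.hypot(dist / dta_mm, dose_diff / dose_tol))
        if min(gammas) > 1.0:
            fails += 1
    return 100.0 * fails / len(meas)

profile = [0.0, 0.5, 1.0, 0.5, 0.0]                        # reference dose
print(gamma_fail_rate(profile, profile, spacing_mm=1.0))   # identical: 0.0
shifted = [d + 0.05 for d in profile]                      # 5% global error
print(gamma_fail_rate(profile, shifted, spacing_mm=1.0))   # all points fail
```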
Immersive virtual environments give users the opportunity to escape from the real world, but scripted dialogues can disrupt presence within the very world the user is trying to escape into. Both Non-Playable Character (NPC)-to-player and NPC-to-NPC dialogue can feel unnatural, and the reliance on pre-defined responses does not always meet the player's emotional expectations or provide answers appropriate to the given context or world state. This paper investigates the application of Artificial Intelligence (AI) and Natural Language Processing to generate dynamic, human-like responses within a themed virtual world. Each thematic setting was analysed against human-generated responses for the same seed and demonstrates invariance of rating across a range of model sizes, but shows an effect of the theme and of the size of the corpus used for fine-tuning the context for the game world.
A number of public codes exist for GPS positioning and baseline determination in off-line mode. However, no software code exists for DGPS that exploits correction factors from base stations without relying on double-difference information. To accomplish this, a methodology is introduced in the MATLAB environment for DGPS using C/A pseudoranges on the single frequency L1 only, making it feasible for low-cost GPS receivers. Our base station is located at an accurately surveyed reference point. Pseudoranges and geometric ranges are compared at the base station to compute the correction factors, which are then handed over to the rover for all valid satellites observed during an epoch. The rover takes them into account for its own true position determination for the corresponding epoch. To validate the proposed algorithm, the rover is also placed at a pre-determined location. The proposed code is an appropriate and simple-to-use tool for post-processing of raw GPS data for accurate position determination of a rover, e.g., an Unmanned Aerial Vehicle, during post-mission analysis.
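The core of the correction scheme can be sketched as follows; this is a minimal illustration with made-up coordinates, not the MATLAB code itself.

```python
import math

def range_correction(base_pos, sat_pos, measured_pseudorange):
    """At the surveyed base station, the correction factor is the
    geometric range to the satellite minus the measured C/A pseudorange;
    it absorbs errors common to base and rover (satellite clock,
    atmospheric delays)."""
    return math.dist(base_pos, sat_pos) - measured_pseudorange

def corrected_pseudorange(rover_pseudorange, correction):
    """The rover applies the per-satellite correction to its own raw
    pseudorange before solving for its position."""
    return rover_pseudorange + correction

# purely illustrative coordinates in meters, not real ephemeris data
base = (0.0, 0.0, 0.0)                       # surveyed reference point
sat = (20_200_000.0, 0.0, 0.0)
pr_base = 20_200_030.0                       # 30 m common-mode error
corr = range_correction(base, sat, pr_base)  # → -30.0
pr_rover = 20_201_030.0                      # rover ~1 km farther, same error
print(corrected_pseudorange(pr_rover, corr))  # → 20201000.0
```

Because the same 30 m error appears in both receivers' pseudoranges, applying the base station's correction at the rover removes it without any double-difference formation.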
Today, knowledge base authoring for the engineering of intelligent systems is performed mainly with tools that have graphical user interfaces. An alternative human-computer interaction paradigm is the maintenance and manipulation of electronic documents, which provides several advantages with respect to the social aspects of knowledge acquisition. Until today, it has hardly found any attention as a method for knowledge engineering.
This thesis provides a comprehensive discussion of document-centered knowledge acquisition with knowledge markup languages. There, electronic documents are edited by the knowledge authors, and the executable knowledge base entities are captured by markup language expressions within the documents. The analysis of this approach reveals significant advantages as well as new challenges compared to the use of traditional GUI-based tools.
Some advantages of the approach are the low barriers for domain expert participation, the simple integration of informal descriptions, and the possibility of incremental knowledge formalization. It therefore provides good conditions for building up a knowledge acquisition process based on the mixed-initiative strategy, a flexible combination of direct and indirect knowledge acquisition. Further, it turns out that document-centered knowledge acquisition with knowledge markup languages provides high potential for creating customized knowledge authoring environments, tailored to the needs of the current knowledge engineering project and its participants. The thesis derives a process model to optimally exploit this customization potential, evolving a project-specific authoring environment by an agile process on the meta level. This meta-engineering process continuously refines the three aspects of the document space: the employed markup languages, the scope of the informal knowledge, and the structuring and organization of the documents. The evolution of the first aspect, the markup languages, plays a key role, implying the design of project-specific markup languages that are easily understood by the knowledge authors and that are suitable to capture the required formal knowledge precisely. The goal of the meta-engineering process is to create a knowledge authoring environment where the structure and presentation of the domain knowledge comply well with the users' mental model of the domain. In that way, the approach can help to ease major issues of knowledge-based system development, such as high initial development costs and long-term maintenance problems.
In practice, the application of the meta-engineering approach for document-centered knowledge acquisition poses several technical challenges that need to be addressed by appropriate tool support. This thesis presents KnowWE, an extensible document-centered knowledge acquisition environment. The system is designed to support the technical tasks implied by the meta-engineering approach, for instance the design and implementation of new markup languages, content refactoring, and authoring support. It is used to evaluate the approach in several real-world case studies from different domains, such as medicine and engineering.
The thesis ends with a summary and points out further interesting research questions concerning the document-centered knowledge acquisition approach.
A new innovative real-time tracking method for flying insects applicable under natural conditions
(2021)
Background
Sixty percent of all species are insects, yet despite global efforts to monitor animal movement patterns, insects remain continuously underrepresented. This striking difference between species richness and the number of monitored species is not due to a lack of interest but rather to a lack of technical solutions. Often the accuracy and speed of established tracking methods are not high enough to record behavior and react to it experimentally in real time, which applies in particular to small flying animals.
Results
Our new method of real-time tracking relies on frequencies of solar radiation that are almost completely absorbed when traveling through the atmosphere. For tracking, photoluminescent tags with a peak emission at 1400 nm, which lies in such a region of strong atmospheric absorption, were attached to the animals. The photoluminescent properties of passivated lead sulphide quantum dots are responsible for the light emitted by the tags and provide a superb signal-to-noise ratio. We developed prototype markers with a weight of 12.5 mg and a diameter of 5 mm. Furthermore, we developed a short-wave infrared detection system that can record and determine the position of an animal in a heterogeneous environment with a delay smaller than 10 ms. With this method we were able to track tagged bumblebees as well as hawk moths in a flight arena placed outside on a natural meadow.
Conclusion
Our new method eliminates the necessity of a constant or predictable environment for many experimental setups. Furthermore, we postulate that the developed matrix detector, mounted on a multicopter, will enable the tracking of small flying insects over medium-range distances (>1000 m) in the near future because: (a) the matrix detector equipped with a 70 mm interchangeable lens weighs less than 380 g, (b) it evaluates the position of an animal in real time, and (c) it can directly control and communicate with electronic devices.
A new underwater 3D scanning device based on structured illumination, designed for the continuous capture of object data in motion for deep-sea inspection applications, is introduced. The sensor permanently captures 3D data of the inspected surface and generates a 3D surface model in real time. Sensor velocities of up to 0.7 m/s are directly compensated while capturing camera images for the 3D reconstruction pipeline. The accuracy results of static measurements of special specimens in a clear-water basin show the high accuracy potential of the scanner in the sub-millimeter range. Measurement examples with a moving sensor show the significance of the proposed motion compensation and the ability to generate a 3D model by merging individual scans. Future application tests in offshore environments will show the practical potential of the sensor for the desired inspection tasks.
Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy, during which the gastroenterologist searches for polyps. However, there is a risk of polyps being missed by the gastroenterologist. Automated detection of polyps can assist the gastroenterologist during a colonoscopy. Publications examining the problem of polyp detection already exist in the literature; nevertheless, most of these systems are used only in a research context and are not implemented for clinical application. Therefore, we introduce the first fully open-source automated polyp-detection system that scores best on current benchmark data and is implemented ready for clinical application. To create the polyp-detection system (ENDOMIND-Advanced), we combined our own data, collected from different hospitals and practices in Germany, with open-source datasets to create a dataset with over 500,000 annotated images. ENDOMIND-Advanced leverages a post-processing technique based on video detection to work in real time on a stream of images. It is integrated into a prototype ready for application in clinical interventions. We achieve better performance than the best system in the literature and score an F1-score of 90.24% on the open-source CVC-VideoClinicDB benchmark.
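The paper's abstract does not spell out its exact post-processing rule, but a generic temporal-consistency filter of this kind (an assumption on our part, not ENDOMIND-Advanced's method) shows the principle of using video context to suppress single-frame false positives:

```python
from collections import deque

def temporal_filter(frame_detections, window=5, min_hits=3):
    """Report a polyp only once it has been detected in at least
    `min_hits` of the last `window` frames, suppressing isolated
    single-frame detections; parameters are illustrative."""
    history = deque(maxlen=window)
    reported = []
    for detected in frame_detections:
        history.append(detected)
        reported.append(sum(history) >= min_hits)
    return reported

# one spurious hit is suppressed; a stable run of detections is reported
print(temporal_filter([0, 1, 0, 0, 1, 1, 1, 1]))
```

Because a real polyp stays visible across consecutive frames while detector noise typically does not, even such a simple sliding-window vote trades a few frames of reporting delay for a markedly cleaner real-time signal.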
Mini Unmanned Aerial Vehicles (MUAVs) are becoming a popular research platform and have drawn considerable attention, particularly during the last decade, due to their multi-dimensional applications in almost every walk of life. MUAVs range from simple toys found in electronics supermarkets for entertainment purposes to highly sophisticated commercial platforms performing novel assignments such as offshore wind power station inspection and 3D modelling of buildings. This paper presents an overview of the main aspects of the distributed control of cooperating MUAVs to guide potential users in this fascinating field. Furthermore, it gives an overview of the state of the art in MUAV technologies, e.g., Photonic Mixer Device (PMD) cameras and distributed control methods, as well as ongoing work and challenges, which motivate many researchers all over the world to work in this field.
A simple test setup has been developed at the Institute of Aerospace Information Technology, University of Würzburg, Germany, to realize basic functionalities for the formation flight of quadrocopters. The test environment is intended to be used for developing and validating algorithms for formation flying in a real environment as well as for educational purposes. An existing test bed for a single quadrocopter was extended with the necessary inter-communication and distributed control mechanisms to test algorithms for formation flight in two degrees of freedom (roll/pitch). This study spans the domains of communication, control engineering, and embedded systems programming. The Bluetooth protocol is used for inter-communication between the two quadrocopters. A simple approach of PID control in combination with a Kalman filter has been exploited. The MATLAB Instrument Control Toolbox is used for data display, plotting, and analysis. Plots can be drawn in real time, and received information can also be stored in files for later use and analysis. The test setup has been developed in-house and at considerably low cost. Emphasis has been placed on simplicity to facilitate the students' learning process. Several lessons were learnt during the development of this setup. The proposed setup is quite flexible and can be modified as requirements change.
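The control approach can be sketched as a minimal discrete PID loop; the gains, time step, and first-order plant model below are illustrative and not those of the test bed (which additionally filters its measurements with a Kalman filter).

```python
class PID:
    """Minimal discrete PID controller (gains and time step are
    illustrative and not those of the described test bed)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# drive a toy roll-angle integrator toward a 10-degree setpoint
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(2000):                  # 20 s of simulated time
    angle += pid.update(10.0, angle) * 0.01
print(round(angle, 2))                 # settles close to 10.0
```

In the real setup the same loop runs per axis on each quadrocopter, with the measured angle replaced by the Kalman-filtered attitude estimate and the setpoint supplied over the Bluetooth link by the formation logic.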
A key feature of the Internet of Things (IoT) is controlling what content is available to each user. To handle this access management, encryption schemes can be used. Due to the diverse usage of encryption schemes, there are various realizations of 1-to-1, 1-to-n, and n-to-n schemes in the literature. This multitude of encryption methods with a wide variety of properties presents developers with the challenge of selecting the optimal method for a particular use case, which is further complicated by the fact that there is no overview of existing encryption schemes. To fill this gap, we envision a cryptography encyclopedia providing such an overview. In this survey paper, we take a first step towards such an encyclopedia by creating a sub-encyclopedia for secure group communication (SGC) schemes, which belong to the n-to-n category. We extensively surveyed the state of the art and classified 47 different schemes. More precisely, we provide (i) a comprehensive overview of the relevant security features, (ii) a set of relevant performance metrics, (iii) a classification of secure group communication schemes, and (iv) workflow descriptions of the 47 schemes. Moreover, we perform a detailed performance and security evaluation of the 47 secure group communication schemes. Based on this evaluation, we create a guideline for the selection of secure group communication schemes.
This study provides a systematic literature review of research (2001–2020) in the field of teaching and learning a foreign language and intercultural learning using immersive technologies. Based on 2507 sources, 54 articles were selected according to predefined selection criteria. The review aims to provide information about which immersive interventions are being used for foreign language learning and teaching and where potential research gaps exist. The papers were analyzed and coded according to the following categories: (1) investigation form and education level, (2) degree of immersion and technology used, (3) predictors, and (4) criteria. The review identified key research findings relating to the use of immersive technologies for learning and teaching a foreign language and for intercultural learning at the cognitive, affective, and conative levels. The findings revealed research gaps concerning teachers as a target group and virtual reality (VR) as a fully immersive intervention form. Furthermore, the reviewed studies rarely examined behavior and implicit measurements related to inter- and transcultural learning and teaching; inter- and transcultural learning and teaching in particular is an underrepresented subject of investigation. Finally, concrete suggestions for future research are given. This systematic review contributes to the challenge of interdisciplinary cooperation between pedagogy, foreign language didactics, and Human-Computer Interaction to achieve innovative teaching-learning formats and a successful digital transformation.
Measurements of physiological parameters provide an objective, often non-intrusive, and (at least semi-)automatic evaluation and utilization of user behavior. In addition, specific Virtual Reality (VR) hardware devices often ship with built-in sensors, e.g., eye-tracking and movement sensors. Hence, the combination of physiological measurements and VR applications seems promising. Several approaches have investigated the applicability and benefits of this combination for various fields of application. However, the range of possible application fields, coupled with potentially useful and beneficial physiological parameters, types of sensors, target variables and factors, and analysis approaches and techniques, is manifold. This article provides a systematic overview and an extensive state-of-the-art review of the usage of physiological measurements in VR. We identified 1,119 works that make use of physiological measurements in VR. Among these, we identified 32 approaches that focus on the classification of characteristics of experience common in VR applications. The first part of this review categorizes the 1,119 works by field of application, i.e., therapy, training, entertainment, and communication and interaction, as well as by the specific target factors and variables measured via the physiological parameters. An additional category summarizes general VR approaches applicable to all specific fields of application, since they target typical VR qualities. In the second part of this review, we analyze the target factors and variables with regard to the respective methods used for automatic analysis and, potentially, classification. For example, we highlight which measurement setups have proven sensitive enough to distinguish different levels of arousal, valence, anxiety, stress, or cognitive workload in the virtual realm.
This work may prove useful for all researchers who want to use physiological data in VR and seek a good overview of prior approaches, their benefits, and potential drawbacks.
Failure prediction is an important aspect of self-aware computing systems. Therefore, a multitude of different approaches has been proposed in the literature over the past few years. In this work, we propose a taxonomy for organizing works focusing on the prediction of Service Level Objective (SLO) failures. Our taxonomy classifies related work along the dimensions of the prediction target (e.g., anomaly detection, performance prediction, or failure prediction), the time horizon (e.g., detection or prediction, online or offline application), and the applied modeling type (e.g., time series forecasting, machine learning, or queueing theory). The classification is derived based on a systematic mapping of relevant papers in the area. Additionally, we give an overview of different techniques in each sub-group and address remaining challenges in order to guide future research.
In today's Internet, services differ widely in their requirements on the underlying transport network. In the future, this diversity will increase, and it will become more difficult to accommodate all services in a single network. A possible approach to cope with this diversity in future networks is to support running isolated networks for different services on top of a single shared physical substrate. This would also enable easy network management and ensure economically sound operation. End customers will readily adopt this approach, as it enables new and innovative services without being expensive. In order to arrive at a concept that enables this kind of network, it needs to be designed around, and constantly checked against, realistic use cases. In this contribution, we present three use cases for future networks. We describe the functional blocks of a virtual network architecture that are necessary to support these use cases within the network. Furthermore, we discuss the interfaces needed between the functional blocks and consider the standardization issues that arise in order to achieve a globally consistent control and management structure for virtual networks.
Utilizing multiple access technologies such as 5G, 4G, and Wi-Fi within a coherent framework is currently being standardized by 3GPP within 5G ATSSS. Indeed, distributing packets over multiple networks can lead to increased robustness, resiliency, and capacity. A key part of such a framework is the multi-access proxy, which transparently distributes packets over multiple paths. As the proxy needs to serve thousands of customers, scalability and performance are crucial for operator deployments. In this paper, we leverage recent advancements in data plane programming, implement a multi-access proxy based on the MP-DCCP tunneling approach in P4, and hardware-accelerate it by deploying the pipeline on a smartNIC. This is challenging due to the complex scheduling and congestion control operations involved. We present our pipeline and data structures design for congestion control and packet scheduling state management. Initial measurements in our testbed show that packet latency is in the range of 25 μs, demonstrating the feasibility of our approach.
Utilizing multiple access networks such as 5G, 4G, and Wi-Fi simultaneously can lead to increased robustness, resiliency, and capacity for mobile users. However, transparently implementing packet distribution over multiple paths within the core of the network faces multiple challenges, including scalability to a large number of customers, low latency, and high-capacity packet processing requirements. In this paper, we offload congestion-aware multipath packet scheduling to a smartNIC. However, such hardware acceleration faces multiple challenges due to programming language and platform limitations. We implement different multipath schedulers in P4 with different complexity in order to cope with dynamically changing path capacities. Using testbed measurements, we show that our CMon scheduler, which monitors path congestion in the data plane and dynamically adjusts scheduling weights for the different paths based on path state information, can process more than 3.5 Mpps at a latency of 25 μs.
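The abstracts above do not include the P4 pipeline itself. As a rough, language-shifted illustration of the idea behind congestion-aware weighted scheduling, the following Python sketch (all class, method, and path names are hypothetical, not taken from the papers) assigns packets to paths in proportion to monitored capacity estimates:

```python
import random

random.seed(0)  # fixed seed so the example run is reproducible

class WeightedMultipathScheduler:
    """Illustrative model only, not the paper's P4/CMon pipeline:
    a monitor reports per-path capacity estimates, and packets are
    assigned to paths with probability proportional to those estimates."""

    def __init__(self, paths):
        # paths: dict mapping path name -> current capacity estimate (Mbit/s)
        self.paths = dict(paths)

    def update_path_state(self, path, capacity):
        # called whenever the congestion monitor reports a new estimate
        self.paths[path] = capacity

    def pick_path(self):
        # weighted random choice proportional to estimated capacity
        names = list(self.paths)
        weights = [self.paths[name] for name in names]
        return random.choices(names, weights=weights, k=1)[0]

sched = WeightedMultipathScheduler({"5G": 80.0, "LTE": 15.0, "WiFi": 5.0})
sched.update_path_state("WiFi", 40.0)  # Wi-Fi capacity recovered
counts = {"5G": 0, "LTE": 0, "WiFi": 0}
for _ in range(10_000):
    counts[sched.pick_path()] += 1
print(counts)  # most packets on 5G, fewest on LTE
```

A hardware data plane cannot draw random numbers per packet this freely; the P4 implementations described above instead keep scheduling state in registers and adjust it from monitored path information.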
Background: Over the recent years, technological advances of wrist-worn fitness trackers heralded a new era in the continuous monitoring of vital signs. So far, these devices have primarily been used for sports.
Objective: For the use of these technologies in health care, however, further validation of the measurement accuracy in hospitalized patients is essential but lacking to date.
Methods: We conducted a prospective validation study with 201 patients after moderate to major surgery in a controlled setting to benchmark the accuracy of heart rate measurements in 4 consumer-grade fitness trackers (Apple Watch 7, Garmin Fenix 6 Pro, Withings ScanWatch, and Fitbit Sense) against the clinical gold standard (electrocardiography).
Results: All devices exhibited high correlation (r≥0.95; P<.001) and concordance (rc≥0.94) coefficients, with a low relative error (mean absolute percentage error <5%) based on 1630 valid measurements. We identified confounders that significantly biased the measurement accuracy, although not at clinically relevant levels (mean absolute error <5 beats per minute).
Conclusions: Consumer-grade fitness trackers appear promising in hospitalized patients for monitoring heart rate.
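The accuracy metrics used in this validation (mean absolute error in beats per minute, mean absolute percentage error in percent) can be stated compactly; the following sketch computes both for hypothetical paired readings (the numbers are illustrative, not study data):

```python
def mae(reference, measured):
    """Mean absolute error in the unit of the measurements (here: bpm)."""
    return sum(abs(r - m) for r, m in zip(reference, measured)) / len(reference)

def mape(reference, measured):
    """Mean absolute percentage error relative to the reference values."""
    return 100.0 * sum(abs(r - m) / r for r, m in zip(reference, measured)) / len(reference)

ecg =     [60, 72, 80, 55, 90]   # hypothetical ECG heart rates (bpm)
tracker = [62, 70, 81, 54, 93]   # hypothetical tracker readings (bpm)
print(round(mae(ecg, tracker), 2))   # → 1.8
print(round(mape(ecg, tracker), 2))  # → 2.5
```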
The importance of Clinical Data Warehouses (CDW) has increased significantly in recent years as they support or enable many applications such as clinical trials, data mining, and decision making.
CDWs integrate Electronic Health Records which still contain a large amount of text data, such as discharge letters or reports on diagnostic findings in addition to structured and coded data like ICD-codes of diagnoses.
Existing CDWs hardly support features to access the information contained in texts.
Information extraction methods offer a solution to this problem, but their development requires a high effort over a long time and can only be carried out by computer scientists.
Moreover, such systems only exist for a few medical domains.
This paper presents a method empowering clinicians to extract information from texts on their own. Medical concepts can be extracted ad hoc from, e.g., discharge letters, so physicians can work promptly and autonomously. The proposed system achieves these improvements through efficient data storage, preprocessing, and powerful query features. Negations in texts are recognized and automatically excluded; furthermore, the context of information is determined, and undesired facts such as historical events or references to other persons (family history) are filtered out.
Context-sensitive queries ensure the semantic integrity of the concepts to be extracted.
A new feature not available in other CDWs is to query numerical concepts in texts and even filter them (e.g. BMI > 25).
The retrieved values can be extracted and exported for further analysis.
This technique is implemented within the efficient architecture of the PaDaWaN CDW and evaluated with comprehensive and complex tests.
The results outperform similar approaches reported in the literature.
Ad hoc IE determines the results within (milli-)seconds, and a user-friendly GUI enables interactive work, allowing flexible adaptation of the extraction.
In addition, the applicability of this system is demonstrated in three real-world applications at the Würzburg University Hospital (UKW).
Several drug trend studies are replicated: findings of five studies on high blood pressure, atrial fibrillation, and chronic renal failure can be partially or completely confirmed at the UKW. Another case study evaluates the prevalence of heart failure among hospital inpatients using an algorithm that extracts information with ad hoc IE from discharge letters, echocardiogram reports (e.g., LVEF < 45), and other sources of the hospital information system.
This study reveals that the use of ICD codes leads to a significant underestimation (31%) of the true prevalence of heart failure.
The third case study evaluates the consistency of diagnoses by comparing structured ICD-10-coded diagnoses with the diagnoses described in the diagnostic section of the discharge letter.
These diagnoses are extracted from texts with ad hoc IE, using synonyms generated with a novel method.
The developed approach can extract diagnoses from the discharge letter with a high accuracy and furthermore it can prove the degree of consistency between the coded and reported diagnoses.
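The ad hoc extraction of numerical concepts with filters (e.g., BMI > 25) described above can be illustrated with a simple regular-expression sketch; the pattern, function, and example texts below are hypothetical and not the PaDaWaN implementation:

```python
import re

# Hypothetical sketch: find a numeric concept (here "BMI") in free text
# and filter the hits by a comparison, e.g. BMI > 25.
CONCEPT_PATTERN = re.compile(r"\bBMI\D{0,10}?(\d+(?:[.,]\d+)?)", re.IGNORECASE)

def query_bmi(texts, threshold=25.0):
    hits = []
    for text in texts:
        for match in CONCEPT_PATTERN.finditer(text):
            # normalize German decimal commas before converting
            value = float(match.group(1).replace(",", "."))
            if value > threshold:
                hits.append(value)
    return hits

letters = [
    "Patient admitted with dyspnea. BMI: 31,2. No fever.",
    "Follow-up visit, bmi 24.0, blood pressure normal.",
    "History: obesity (BMI of 28).",
]
print(query_bmi(letters))  # → [31.2, 28.0]
```

A production system additionally needs the negation and context handling described in the abstract; this sketch covers only the numeric-filter idea.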
The progress which has been made in semiconductor chip production in recent years enables a multitude of cores on a single die. However, due to further decreasing structure sizes, fault tolerance and energy consumption will represent key challenges. Furthermore, an efficient communication infrastructure is indispensable due to the high parallelism of these systems. The predominant communication system in such highly parallel systems is a Network on Chip (NoC). The focus of this thesis is on NoCs which are based on deflection routing. In this context, contributions are made to two domains: fault tolerance and dimensioning of the optimal link width. Both aspects are essential for the application of reliable, energy-efficient, deflection-routing-based NoCs.
It is expected that future semiconductor systems will have to cope with high fault probabilities. The inherently high connectivity of most NoC topologies can be exploited to tolerate the breakdown of links and other components. In this thesis, a fault-tolerant router architecture has been developed, which stands out for the deployed interconnection architecture and the method to overcome complex fault situations. The presented simulation results show that all data packets arrive at their destination, even at high fault probabilities. In contrast to routing-table-based architectures, the hardware costs of the herein presented architecture are lower and, in particular, independent of the number of components in the network.
Besides fault tolerance, hardware costs and energy efficiency are of great importance. The utilized link width has a decisive influence on these aspects. In particular, in deflection-routing-based NoCs, over- and under-sizing of the link width leads to unnecessarily high hardware costs and bad performance, respectively. In the second part of this thesis, the optimal link width of deflection-routing-based NoCs is investigated. Additionally, a method to reduce the link width is introduced. Simulation and synthesis results show that the herein presented method allows a significant reduction of hardware costs at comparable performance.
This work takes a close look at several quite different research areas related to the design of networked embedded sensor/actuator systems. The variety of the topics illustrates the potential complexity of current sensor network applications, especially when enriched with actuators for proactivity and environmental interaction. Besides their conception, development, installation, and long-term operation, we will mainly focus on more "low-level" aspects: compositional hardware and software design, task cooperation and collaboration, memory management, and real-time operation will be addressed from a local node perspective. In contrast, inter-node synchronization and communication, as well as sensor data acquisition, aggregation, and fusion, will be discussed from a rather global network view. The diversity in the concepts was intentionally accepted to finally facilitate the reliable implementation of truly complex systems. In particular, these should go beyond the usual "sense and transmit" of sensor data and show how powerful today's networked sensor/actuator systems can be despite their low computational performance and constrained hardware, provided their resources are coordinated efficiently.
An approach to aerodynamically optimizing cycling posture and reducing drag in an Ironman (IM) event was elaborated. To this end, four commonly used positions in cycling were investigated and simulated for a flow velocity of 10 m/s and yaw angles of 0–20° using the OpenFoam-based Nabla Flow CFD simulation software. A cyclist was scanned using an iPhone 12, and the meshing software BLENDER was used. Significant differences were observed by changing and optimizing the cyclist's posture. The aerodynamic drag coefficient (CdA) varies by more than a factor of 2, ranging from 0.214 to 0.450. Within a position, the CdA tends to increase slightly at yaw angles of 5–10° and decrease at higher yaw angles compared to a straight head wind, except for the time trial (TT) position. The results were applied to the IM Hawaii bike course (180 km), assuming a constant power output of 300 W. Including the wind distributions, two different bike split models for performance prediction were applied. A significant time saving of roughly 1 h was found. Finally, a machine learning approach to deduce 3D triangulation for specific body shapes from 2D pictures was tested.
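The reported time saving of roughly 1 h is plausible even from a back-of-the-envelope model that considers aerodynamic drag only (no rolling resistance, drivetrain losses, or wind, unlike the bike split models of the study), assuming an air density of 1.2 kg/m³:

```python
# Back-of-the-envelope check of the ~1 h saving: aerodynamic drag only.
RHO = 1.2             # air density in kg/m^3 (assumption)
POWER = 300.0         # constant rider power in W (from the study)
DISTANCE = 180_000.0  # IM Hawaii bike course length in m

def speed(cda):
    # P = 0.5 * rho * CdA * v^3  =>  v = (2 P / (rho CdA))^(1/3)
    return (2 * POWER / (RHO * cda)) ** (1 / 3)

def ride_time_h(cda):
    return DISTANCE / speed(cda) / 3600

worst, best = 0.450, 0.214   # CdA range reported in the abstract
saving = ride_time_h(worst) - ride_time_h(best)
print(f"best posture:  {ride_time_h(best):.2f} h")   # → 3.77 h
print(f"worst posture: {ride_time_h(worst):.2f} h")  # → 4.83 h
print(f"saving:        {saving:.2f} h")              # → 1.06 h
```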
In recent years, visual methods have been introduced in industrial software production and the teaching of software engineering. In particular, the international standardization of a graphical software engineering language, the Unified Modeling Language (UML), was a reason for this tendency. Unfortunately, various problems exist in concrete realizations of tools, e.g., due to missing compliance with the standard. One problem is the automatic layout, which is required for consistent automatic software design. This thesis derives reasons and criteria for an automatic layout method that produces drawings of UML class diagrams according to the UML specification and issues of human-computer interaction, e.g., readability. A unique set of aesthetic criteria is combined from four different disciplines involved in this topic. Based on these aesthetic rules, a hierarchical layout algorithm is developed, analyzed, measured by specialized measuring techniques, and compared to related work. Then, the realization of the algorithm as a Java framework is given as an architectural description. Finally, adaptations to anticipated future changes of the UML, improvements of the framework, and example drawings of the implementation are given.
Realistic and lifelike 3D-reconstruction of virtual humans has various exciting and important use cases. Our and others’ appearances have notable effects on ourselves and our interaction partners in virtual environments, e.g., on acceptance, preference, trust, believability, behavior (the Proteus effect), and more. Today, multiple approaches for the 3D-reconstruction of virtual humans exist. They significantly vary in terms of the degree of achievable realism, the technical complexities, and finally, the overall reconstruction costs involved. This article compares two 3D-reconstruction approaches with very different hardware requirements. The high-cost solution uses a typical complex and elaborated camera rig consisting of 94 digital single-lens reflex (DSLR) cameras. The recently developed low-cost solution uses a smartphone camera to create videos that capture multiple views of a person. Both methods use photogrammetric reconstruction and template fitting with the same template model and differ in their adaptation to the method-specific input material. Each method generates high-quality virtual humans ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity. We compare the results of the two 3D-reconstruction methods in an immersive virtual environment against each other in a user study. Our results indicate that the virtual humans from the low-cost approach are perceived similarly to those from the high-cost approach regarding the perceived similarity to the original, human-likeness, beauty, and uncanniness, despite significant differences in the objectively measured quality. The perceived feeling of change of the own body was higher for the low-cost virtual humans. Quality differences were perceived more strongly for one’s own body than for other virtual humans.
This document presents a networking latency measurement setup that focuses on affordability and universal applicability, and can provide sub-microsecond accuracy. It explains the prerequisites, hardware choices, and considerations to respect during measurement. In addition, it discusses the necessity for exhaustive latency measurements when dealing with high availability and low latency requirements. Preliminary results show that the accuracy is within ±0.02 μs when used with the Intel I350-T2 network adapter.
The success of diagnostic knowledge systems has been proven over the last decades. Nowadays, intelligent systems are embedded in machines within various domains or are used in interaction with a user for solving problems. However, although such systems have been applied very successfully, the development of a knowledge system is still a critical issue. Similarly to projects dealing with customized software at a highly innovative level, a precise specification often cannot be given in advance. Moreover, necessary requirements of the knowledge system often cannot be defined until the project has started, or they change during the development phase. Many success factors depend on the feedback given by users, which can be provided if preliminary demonstrations of the system are delivered as soon as possible, e.g., the duration of the system dialog for validating interactive systems. This thesis motivates that classical, document-centered approaches cannot be applied in such a setting. We cope with this problem by introducing an agile process model for developing diagnostic knowledge systems, mainly inspired by the ideas of the eXtreme Programming methodology known in software engineering. The main aim of the presented work is to simplify the engineering process for domain specialists formalizing the knowledge themselves. The engineering process is supported at a primary level by the introduction of knowledge containers, which define an organized view of the knowledge contained in the system. Consequently, we provide structured procedures as a recommendation for filling these containers. The actual knowledge is acquired and formalized right from the start, and the integration into runnable knowledge systems is done continuously in order to allow for early and concrete feedback. In contrast to related prototyping approaches, the validity and maintainability of the collected knowledge are ensured by appropriate test methods and restructuring techniques, respectively.
Additionally, we propose learning methods to effectively support the knowledge acquisition process. The practical significance of the process model strongly depends on the tools available to support its application. We present the system family d3web, and especially the system d3web.KnowME, as a highly integrated development environment for diagnostic knowledge systems. The process model and its activities are evaluated in two real-life applications: in a medical and in an environmental project, the benefits of agile development are clearly demonstrated.
We use algebraic closures and structures which are derived from these in complexity theory. We classify problems with Boolean circuits and Boolean constraints according to their complexity. We transfer algebraic structures to structural complexity. We use the generation problem to classify important complexity classes.
Mobile laser scanning puts high requirements on the accuracy of the positioning systems and the calibration of the measurement system. We present a novel algorithmic approach for calibration with the goal of improving the measurement accuracy of mobile laser scanners. We describe a general framework for calibrating mobile sensor platforms that estimates all configuration parameters for any arrangement of positioning sensors, including odometry. In addition, we present a novel semi-rigid Simultaneous Localization and Mapping (SLAM) algorithm that corrects the vehicle position at every point in time along its trajectory, while simultaneously improving the quality and precision of the entire acquired point cloud. Using this algorithm, the temporary failure of accurate external positioning systems or the lack thereof can be compensated for. We demonstrate the capabilities of the two newly proposed algorithms on a wide variety of datasets.
Graphs provide a key means to model relationships between entities. They consist of vertices representing the entities, and edges representing relationships between pairs of entities. To help people grasp the structure of a graph, it is almost inevitable to visualize it. We call such a visualization a graph drawing. Moreover, we have a straight-line graph drawing if each vertex is represented as a point (or a small geometric object, e.g., a rectangle) and each edge is represented as a line segment between its two vertices. A polyline is a very simple straight-line graph drawing, where the vertices form a sequence according to which they are connected by edges. An example of a polyline in practice is a GPS trajectory. The underlying road network, in turn, can be modeled as a graph.

This book addresses problems that arise when working with straight-line graph drawings and polylines. In particular, we study algorithms for recognizing certain graphs representable with line segments, for generating straight-line graph drawings, and for abstracting polylines.

In the first part, we examine how, and in which time, we can decide whether a given graph is a stick graph, that is, whether its vertices can be represented as vertical and horizontal line segments on a diagonal line which intersect if and only if there is an edge between them. We then consider the visual complexity of graphs. Specifically, we investigate, for certain classes of graphs, how many line segments are necessary for any straight-line graph drawing, and whether three (or more) different slopes of the line segments are sufficient to draw all edges. Last, we study the question of how to assign (ordered) colors to the vertices of a graph with both directed and undirected edges such that no neighboring vertices get the same color and colors are ascending along directed edges. Here, the special property of the considered graphs is that the vertices can be represented as intervals that overlap if and only if there is an edge between them.

The latter problem is motivated by an application in the automated drawing of cable plans with vertical and horizontal line segments, which we cover in the second part. We describe an algorithm that gets the abstract description of a cable plan as input and generates a drawing that takes into account the special properties of these cable plans, like plugs and groups of wires. We then experimentally evaluate the quality of the resulting drawings.

In the third part, we study the problem of abstracting (or simplifying) a single polyline and a bundle of polylines. Here, the objective is to remove as many vertices as possible from the given polyline(s) while keeping each resulting polyline sufficiently similar to its original course (according to a given similarity measure).
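A classic instance of such polyline simplification is the Douglas-Peucker algorithm, which removes vertices as long as the simplified course stays within a distance tolerance of the original. The sketch below is this generic textbook algorithm, not the book's similarity-measure-driven method:

```python
def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dx * (ay - py) - dy * (ax - px)) / norm

def douglas_peucker(points, epsilon):
    """Remove vertices while staying within distance epsilon of the original."""
    if len(points) < 3:
        return list(points)
    # find the inner vertex farthest from the segment between the endpoints
    dists = [perpendicular_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    i, dmax = max(enumerate(dists, start=1), key=lambda t: t[1])
    if dmax <= epsilon:
        return [points[0], points[-1]]        # all inner vertices removable
    left = douglas_peucker(points[:i + 1], epsilon)
    right = douglas_peucker(points[i:], epsilon)
    return left[:-1] + right                  # join, avoiding a duplicate

# an invented GPS-like track: 8 points reduce to 4 within tolerance 1.0
track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(track, epsilon=1.0))  # → [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```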
Heat and excessive solar radiation can produce abiotic stress during apple maturation, resulting in reduced fruit quality. Therefore, monitoring the temperature on the fruit surface (FST) over the growing period can help to identify thresholds above which several physiological disorders, such as sunburn, may occur in apples.
Current approaches neglect the spatial variation of FST and have reduced repeatability, resulting in unreliable predictions. In this study, LiDAR laser scanning and thermal imaging were employed to detect the temperature on the fruit surface by means of a 3D point cloud. A process for calibrating the two sensors based on an active board target and producing a 3D thermal point cloud is suggested. After calibration, the sensor system was utilised to scan fruit trees, with temperature values assigned to the corresponding 3D point cloud based on the extrinsic calibration. Finally, a fruit detection algorithm was applied to segment the FST of each apple.
• The approach allows the calibration of a LiDAR laser scanner with a thermal camera in order to produce a 3D thermal point cloud.
• The method can be applied in apple trees for segmenting FST in 3D. Moreover, the approach can be utilised to predict several physiological disorders, including sunburn, on the fruit surface.
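The fusion step, assigning a thermal pixel value to each LiDAR point via the extrinsic calibration, amounts to a pinhole projection. The following minimal sketch uses hypothetical intrinsics K and extrinsics R, t (identity/zero for the demonstration), not the calibrated values of the study:

```python
def mat_vec(M, v):
    """Multiply a 3x3 matrix (list of rows) with a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def colorize_point_cloud(points, temps, K, R, t):
    """points: list of (x, y, z) LiDAR points; temps: 2D image grid (deg C)."""
    h, w = len(temps), len(temps[0])
    result = []
    for p in points:
        # LiDAR frame -> thermal camera frame via extrinsics (R, t)
        cam = [c + t[i] for i, c in enumerate(mat_vec(R, p))]
        if cam[2] <= 0:                     # point behind the camera
            continue
        u_h, v_h, w_h = mat_vec(K, cam)     # pinhole projection with intrinsics K
        u, v = int(u_h / w_h), int(v_h / w_h)
        if 0 <= u < w and 0 <= v < h:
            result.append((*p, temps[v][u]))  # (x, y, z, temperature)
    return result

K = [[100.0, 0.0, 32.0], [0.0, 100.0, 24.0], [0.0, 0.0, 1.0]]   # toy intrinsics
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]         # frames aligned
t = [0.0, 0.0, 0.0]
temps = [[20.0] * 64 for _ in range(48)]
temps[24][32] = 35.0                         # one hot pixel at the image center
cloud = [(0.0, 0.0, 2.0), (0.0, 0.0, -1.0)]  # second point lies behind the camera
print(colorize_point_cloud(cloud, temps, K, R, t))  # → [(0.0, 0.0, 2.0, 35.0)]
```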
With the introduction of OpenFlow by Stanford University in 2008, a process began in the area of network research that questions the predominant approach of fully distributed network control. OpenFlow is a communication protocol that allows the network control plane to be externalized from the network devices, such as routers, and realized as a logically centralized entity in software. For this concept, the term "Software Defined Networking" (SDN) was coined during the scientific discourse.
For network operators, this concept has several advantages. The two most important can be summarized under the points of cost savings and flexibility. Firstly, the uniform interface for network hardware ("Southbound API"), as implemented by OpenFlow, makes it possible to combine devices and software from different manufacturers, which increases the innovation and price pressure on them. Secondly, the realization of the network control plane as freely programmable software with open interfaces ("Northbound API") provides the opportunity to adapt it to the individual circumstances of the operator's network and to exchange information with the applications it serves. This allows the network to be more flexible, to react more quickly to changing circumstances, and to transport the traffic more effectively, tailored to the user's "Quality of Experience" (QoE).
The approach of a separate network control layer for packet-based networks is not new and has already been proposed several times in the past. Therefore, the SDN approach has raised many questions about its feasibility in terms of efficiency and applicability. These questions are caused to some extent by the fact that there is no generally accepted definition of the SDN concept to date. It is therefore one part of this thesis to derive such a definition. In addition, several of the open issues are investigated. These investigations follow three aspects: the performance evaluation of Software Defined Networking, applications on the SDN control layer, and the usability of the SDN Northbound API for creating application-awareness in network operation.
Performance Evaluation of Software Defined Networking: The question of the efficiency of an SDN-based system was one of the most important from the beginning. In this thesis, experimental measurements of the performance of OpenFlow-enabled switch hardware and control software were conducted to answer this question. The results of these measurements were used as input parameters for establishing an analytical model of the reactive SDN approach. Through the model it could be determined that the performance of the software control layer, often called "Controller", is crucial for the overall performance of the system, but that the approach is generally viable. Based on this finding, a software for analyzing the performance of SDN controllers was developed. This software allows the emulation of the forwarding layer of an SDN network towards the control software and can thus determine its performance in different situations and configurations. The measurements with this software showed that there are quite significant differences in the behavior of different control software implementations. Among other things, it has been shown that some implementations exhibit different characteristics with various switches, in particular in terms of message processing speed. Under certain circumstances, this can lead to network failures.
Applications on the SDN control layer: The core piece of software defined networking are the intelligent network applications that operate on the control layer. However, their development is still in its infancy and little is known about the technical possibilities and their limitations. Therefore, the relationship between an SDN-based and classical implementation of a network function is investigated in this thesis. This function is the monitoring of network links and the traffic they carry. A typical approach for this task has been built based on Wiretapping and specialized measurement hardware and compared with an implementation based on OpenFlow switches and a special SDN control application. The results of the comparison show that the SDN version can compete in terms of measurement accuracy for bandwidth and delay estimation with the traditional measurement set-up. However, a compromise has to be found for measurements below the millisecond range.
Another question regarding the SDN control applications is whether and how well they can solve existing problems in networks. In this thesis, two programs have been developed based on SDN to solve two typical network issues. The first is the tool "IPOM", which provides considerably more flexibility in studying the effects of network structures to a researcher who is confined to a fixed physical test network topology.
The second software provides an interface between the Cloud Orchestration Software "OpenNebula" and an OpenFlow controller. The purpose of this software was to investigate experimentally whether a pre-notification of the network of an impending relocation of a virtual service in a data center is sufficient to ensure the continuous operation of that service. This was demonstrated on the example of a video service.
Usability of the SDN Northbound API for creating application-awareness in network operation: Currently, the fact that the network and the applications that run on it are developed and operated separately leads to problems in network operation. With the Northbound API, SDN offers an open interface that enables the exchange of information between both worlds during operation. One aim of this thesis was to investigate whether this interface can be exploited so that the QoE experienced by the user can be maintained at a high level. For this purpose, the QoE influence factors of a challenging application were determined by means of a subjective survey study. The application is cloud gaming, in which the calculation of video game environments takes place in the cloud and the result is transported via video over the network to the user. It was shown that, apart from the most important QoS influence factor, i.e., packet loss on the downlink, also the type of game and its speed play a role. This demonstrates that in addition to QoS the application state is important and should be communicated to the network. Since an implementation of such a state-aware SDN was not possible for the example of cloud gaming due to its proprietary implementation, the application "YouTube video streaming" was chosen as an alternative in this thesis. For this application, status information is retrievable via the "Yomo" tool and can be used for network control. It was shown that an SDN-based implementation of an application-aware network has distinct advantages over traditional network management methods, and the user-perceived quality can be maintained despite disturbances.
A procedure to control all six DOF (degrees of freedom) of a UAV (unmanned aerial vehicle) without an external reference system and to enable fully autonomous flight is presented here. For 2D positioning, the principle of optical flow is used. Together with the output of height estimation, fusing ultrasonic, infrared, inertial, and pressure sensor data, the 3D position of the UAV can be computed, controlled, and steered. All data processing is done on the UAV. An external computer with a pathway planning interface is used for commanding purposes only. The presented system is part of the AQopterI8 project, which aims to develop an autonomously flying quadrocopter for indoor applications. The focus of this paper is 2D positioning using an optical flow sensor. As a result of the performed evaluation, it can be concluded that for position hold, the standard deviation of the position error is 10 cm, and after landing the position error is about 30 cm.
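The core of optical-flow-based 2D positioning is scaling pixel displacements by the measured height and integrating over time. The following sketch uses illustrative numbers (focal length, flow values) that are not taken from the AQopterI8 system:

```python
# Minimal dead-reckoning sketch of the 2D-positioning idea: an optical-flow
# sensor yields image displacement (pixels per frame); scaled by the measured
# height and the focal length of a downward-facing camera, this becomes a
# ground displacement, which is accumulated to a position estimate.
FOCAL_PX = 400.0  # assumed focal length of the flow camera, in pixels

def integrate_flow(samples):
    """samples: iterable of (flow_x_px, flow_y_px, height_m) per frame."""
    x = y = 0.0
    for flow_x, flow_y, height in samples:
        # pixel shift -> metric ground shift (small-angle approximation)
        x += flow_x * height / FOCAL_PX
        y += flow_y * height / FOCAL_PX
    return x, y

# drift forward at a constant 4 px/frame while hovering at 1 m, 100 frames
samples = [(4.0, 0.0, 1.0)] * 100
print(integrate_flow(samples))  # ≈ (1.0, 0.0): one meter of drift detected
```

A real system must additionally compensate for camera rotation using the inertial sensors; this sketch assumes a perfectly level camera.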
An Intelligent Semi-Automatic Workflow for Optical Character Recognition of Historical Printings
(2020)
Optical Character Recognition (OCR) on historical printings is a challenging task mainly due to the complexity of the layout and the highly variant typography. Nevertheless, in the last few years great progress has been made in the area of historical OCR resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, Automatic Text Recognition (ATR) and postcorrection. Their major drawback is that they only offer limited applicability by non-technical users like humanist scholars, in particular when it comes to the combined use of several tools in a workflow. Furthermore, depending on the material, these tools are usually not able to fully automatically achieve sufficiently low error rates, let alone perfect results, creating a demand for an interactive postcorrection functionality which, however, is generally not incorporated.
This thesis addresses these issues by presenting an open-source OCR software called OCR4all which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users. This is mostly due to the fact that the Ground Truth (GT) required for training stronger mixed models (for segmentation as well as text recognition) is not yet available in the desired quantity or quality.
To deal with this issue in the short run, OCR4all offers better recognition capabilities in combination with a very comfortable Graphical User Interface (GUI) that allows error corrections not only in the final output, but already in early stages to minimize error propagation. In the long run this constant manual correction produces large quantities of valuable, high quality training material which can be used to improve fully automatic approaches. Further on, extensive configuration capabilities are provided to set the degree of automation of the workflow and to make adaptations to the carefully selected default parameters for specific printings, if necessary. The architecture of OCR4all allows for an easy integration (or substitution) of newly developed tools for its main components by supporting standardized interfaces like PageXML, thus aiming at continual higher automation for historical printings.
In addition to OCR4all, several methodical extensions in the form of accuracy improving techniques for training and recognition are presented. Most notably an effective, sophisticated, and adaptable voting methodology using a single ATR engine, a pretraining procedure, and an Active Learning (AL) component are proposed. Experiments showed that combining pretraining and voting significantly improves the effectiveness of book-specific training, reducing the obtained Character Error Rates (CERs) by more than 50%.
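The core idea behind voting can be sketched in a few lines of Python. This is a toy illustration only (the actual voting methodology aligns output sequences of different lengths and exploits per-character confidences of the ATR engine); here, predictions from several cross-fold models are assumed to be already aligned:

```python
from collections import Counter

def vote(predictions):
    """Majority vote per character position across aligned OCR predictions.

    A toy stand-in for confidence voting: several models trained on
    different folds predict the same text line, and disagreements are
    resolved by majority. Real voting also aligns sequences of different
    lengths and weighs character confidences.
    """
    assert len({len(p) for p in predictions}) == 1, "toy version needs aligned strings"
    return "".join(
        Counter(chars).most_common(1)[0][0]
        for chars in zip(*predictions)
    )

# Three fold models disagree on single characters; voting recovers the line.
preds = ["Histori5al", "Historical", "Histor1cal"]
print(vote(preds))  # -> Historical
```

Even this naive scheme shows why voting helps: individual models make largely uncorrelated single-character errors, which the majority eliminates.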
The proposed extensions were further evaluated in two real-world case studies: First, the voting and pretraining techniques were transferred to the task of constructing so-called mixed models, which are trained on a variety of different fonts. This was done using 19th century Fraktur script as an example, resulting in a considerable improvement over a variety of existing open-source and commercial engines and models. Second, the extension from ATR on raw text to the adjacent topic of typography recognition was successfully addressed by thoroughly indexing a historical lexicon that heavily relies on different font types to encode its complex semantic structure.
During the main experiments on very complex early printed books even users with minimal or no experience were able to not only comfortably deal with the challenges presented by the complex layout, but also to recognize the text with manageable effort and great quality, achieving excellent CERs below 0.5%. Furthermore, the fully automated application on 19th century novels showed that OCR4all (average CER of 0.85%) can considerably outperform the commercial state-of-the-art tool ABBYY Finereader (5.3%) on moderate layouts if suitably pretrained mixed ATR models are available.
Maps are the main tool to represent geographical information. Users often zoom in and out to access maps at different scales. Continuous map generalization tries to make the changes between different scales smooth, which is essential to provide users with a comfortable zooming experience.
In order to achieve continuous map generalization with high quality, we optimize some important aspects of maps. In this book, we have used optimization in the generalization of land-cover areas, administrative boundaries, buildings, and coastlines. According to our experiments, continuous map generalization indeed benefits from optimization.
An Overview of Design Patterns for Self-Adaptive Systems in the Context of the Internet of Things
(2020)
The Internet of Things (IoT) requires the integration of all available, highly specialized, and heterogeneous devices, ranging from embedded sensor nodes to servers in the cloud. The self-adaptive research domain provides adaptive capabilities that can support the integration in IoT systems. However, developing such systems is a challenging, error-prone, and time-consuming task. In this context, design patterns propose already used and optimized solutions to specific problems in various contexts. Applying design patterns might help to reuse existing knowledge about similar development issues. However, so far, there is a lack of taxonomies on design patterns for self-adaptive systems. To tackle this issue, in this paper, we provide a taxonomy on design patterns for self-adaptive systems that can be transferred to support adaptivity in IoT systems. Besides describing the taxonomy and the design patterns, we discuss their applicability in an Industrial IoT case study.
Routing is one of the most important issues in any communication network. It defines on which path packets are transmitted from the source of a connection to the destination. It makes it possible to control the distribution of flows between different locations in the network and is thereby a means to influence the load distribution or to meet certain constraints imposed by particular applications. As failures in communication networks occur regularly and cannot be completely avoided, routing is required to be resilient against such outages, i.e., routing still has to be able to forward packets on backup paths even if primary paths are no longer working.
Throughout the years, various routing technologies have been introduced that are very different in their control structure, in their way of working, and in their ability to handle certain failure cases. Each of the different routing approaches opens up their own specific questions regarding configuration, optimization, and inclusion of resilience issues. This monograph investigates, with the example of three particular routing technologies, some concrete issues regarding the analysis and optimization of resilience. It thereby contributes to a better general, technology-independent understanding of these approaches and of their diverse potential for the use in future network architectures.
The first routing type considered is decentralized intra-domain routing based on administrative IP link costs and the shortest path principle. Typical examples are today's common intra-domain routing protocols OSPF and IS-IS. This type of routing includes automatic restoration abilities in case of failures, which makes it generally very robust, even in the case of severe network outages involving several failed components. Furthermore, special IP Fast Reroute mechanisms allow for a faster reaction to outages. For routing based on link costs, traffic engineering, e.g., the optimization of the maximum relative link load in the network, can be done indirectly by changing the administrative link costs to adequate values.
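The principle of indirect traffic engineering via link costs can be illustrated with a small sketch (a toy Dijkstra computation; real IGPs compute full shortest-path trees and use equal-cost multipath, and the topology below is invented):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra on administrative link costs, as used by OSPF/IS-IS.

    graph: {node: {neighbor: cost}}. Returns (total_cost, path).
    """
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, c in graph[u].items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Traffic engineering by cost tuning: raising the A-B cost diverts traffic.
g = {"A": {"B": 1, "C": 2}, "B": {"A": 1, "D": 1},
     "C": {"A": 2, "D": 1}, "D": {"B": 1, "C": 1}}
print(shortest_path(g, "A", "D"))  # routed via B
g["A"]["B"] = 10                   # administrator raises the link cost
print(shortest_path(g, "A", "D"))  # now routed via C
```

Changing a single administrative cost reroutes all affected flows, which is exactly the indirect lever used when optimizing the maximum relative link load.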
The second routing type considered, MPLS-based routing, is based on the a priori configuration of primary and backup paths, so-called Label Switched Paths. The routing layout of MPLS paths offers more freedom compared to IP-based routing, as it is not restricted by any shortest path constraints; arbitrary paths can be set up. However, this generally involves a higher configuration effort.
Finally, in the third routing type considered, typically centralized routing using a Software Defined Networking (SDN) architecture, simple switches only forward packets according to routing decisions made by centralized controller units. SDN-based routing layouts offer the same freedom as explicit paths configured using MPLS. In case of a failure, new rules can be set up by the controllers to continue routing in the reduced topology. However, new resilience issues arise from the centralized architecture: if controllers are no longer reachable, the forwarding rules in the individual nodes cannot be adapted. This might render rerouting infeasible in case of connection problems in severe failure scenarios.
Knowledge about ransomware is important for protecting sensitive data and for participating in public debates about suitable regulation regarding its security. However, as of now, this topic has received little to no attention in most school curricula. As such, it is desirable to analyze what citizens can learn about this topic outside of formal education, e.g., from news articles. This analysis is relevant both to analyzing the public discourse about ransomware and to identifying which aspects of this topic should be included in the limited time available for it in formal education. Thus, this paper was motivated by both educational and media research. The central goal is to explore how the media reports on this topic and, additionally, to identify potential misconceptions that could stem from this reporting. To do so, we conducted an exploratory case study into the reporting of 109 media articles regarding a high-impact ransomware event: the shutdown of the Colonial Pipeline (located in the east of the USA). We analyzed how the articles introduced central terminology, what details were provided, what details were not, and what (mis-)conceptions readers might take away from them. Our results show that the introduction of the terminology and technical concepts of security is insufficient for a complete understanding of the incident. Most importantly, the articles may lead to four misconceptions about ransomware that are likely to produce misleading conclusions about the responsibility for the incident and possible political and technical options to prevent such attacks in the future.
Graphs are a frequently used tool to model relationships among entities. A graph is a binary relation between objects, that is, it consists of a set of objects (vertices) and a set of pairs of objects (edges).
Networks are common examples of modeling data as a graph. For example, relationships between persons in a social network, or network links between computers in a telecommunication network can be represented by a graph.
The clearest way to illustrate the modeled data is to visualize the graphs. The field of Graph Drawing deals with the problem of finding algorithms to automatically generate graph visualizations. The task is to find a "good" drawing, which can be measured by different criteria such as the number of crossings between edges or the area used. In this thesis, we study Angular Schematization in Graph Drawing. By this, we mean drawings with large angles (for example, between the edges at common vertices or at crossing points).
The thesis consists of three parts. First, we deal with the placement of boxes. Boxes are axis-parallel rectangles that can, for example, contain text. They can be placed on a map to label important sites, or can be used to describe semantic relationships between words in a word network. In the second part of the thesis, we consider graph drawings that visually guide the viewer. These drawings generally induce large angles between edges that meet at a vertex. Furthermore, the edges are drawn crossing-free and in a way that makes them easy to follow for the human eye. The third and final part is devoted to crossings with large angles. In drawings with crossings, it is important to have large angles between edges at their crossing point, preferably right angles.
This thesis contributes to several issues in the context of SDN and NFV, with an emphasis on performance and management.
The main contributions are guidelines for operators migrating to software-based networks, as well as an analytical model for the packet processing in a Linux system using the Kernel NAPI.
In this thesis, we are interested in numerically preserving stationary solutions of balance laws. We start by developing finite volume well-balanced schemes for the system of Euler equations and the system of MHD equations with gravitational source term. Since fluid models and kinetic models are related, this leads us to investigate asymptotic-preserving (AP) schemes for kinetic equations and their ability to preserve stationary solutions. Kinetic models typically have a stiff term, thus AP schemes are needed to capture good solutions of the model. For such kinetic models, equilibrium solutions are reached after large times. Thus we need a new technique to numerically preserve stationary solutions for AP schemes. We find a criterion for stationarity-preserving (SP) schemes for kinetic equations which states that AP schemes under a particular discretization are also SP. In an attempt to mimic our result for kinetic equations in the context of fluid models, we developed an AP scheme for the isentropic Euler equations in the limit of the Mach number going to zero. Our AP scheme is proven to have the SP property under the condition that the pressure is a function of the density and the latter is obtained as the solution of an elliptic equation. The properties of the schemes we developed and their criteria are validated numerically by various test cases from the literature.
Asynchronous Traffic Shaping (ATS) enables bounded latency with low complexity for time-sensitive networking without the need for time synchronization. However, its main focus is the guaranteed maximum delay; jitter-sensitive applications may still be forced towards synchronization. This work proposes traffic damping to reduce end-to-end delay jitter. It discusses its application and shows that both the prerequisites and the guaranteed delay of traffic damping and ATS are very similar. Finally, it presents a brief evaluation of delay jitter in an example topology by means of simulation and worst-case estimation.
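The principle of a traffic damper can be sketched as delay equalization (a minimal illustration under the assumption of known per-hop delay bounds; the numbers and the interface are invented for this sketch):

```python
def damp(arrivals, min_delay, max_delay):
    """Toy delay equalization as performed by a traffic damper.

    Each packet experienced some queuing delay in [min_delay, max_delay].
    The damper holds every packet until the worst-case delay has elapsed,
    so all packets depart with delay == max_delay and jitter shrinks to 0.
    arrivals: list of (send_time, experienced_delay).
    """
    departures = []
    for send, delay in arrivals:
        assert min_delay <= delay <= max_delay
        departures.append(send + max_delay)  # hold for the residual max_delay - delay
    return departures

pkts = [(0, 2), (10, 5), (20, 3)]  # 3 time units of jitter on arrival
out = damp(pkts, min_delay=2, max_delay=5)
delays = [o - s for (s, _), o in zip(pkts, out)]
print(out, max(delays) - min(delays))  # all delays equal max_delay, jitter 0
```

The trade-off mirrors the one discussed above: jitter is removed, but every packet now experiences the guaranteed maximum delay.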
Over the last decades, cybersecurity has become an increasingly important issue. Between 2011 and 2019 alone, the losses from cyberattacks in the United States grew by 6217%. At the same time, attacks became not only more intensive but also more and more versatile and diverse. Cybersecurity has become everyone’s concern. Today, service providers require sophisticated and extensive security infrastructures comprising many security functions dedicated to various cyberattacks. Still, attacks become more violent to a level where infrastructures can no longer keep up. Simply scaling up is no longer sufficient. To address this challenge, in a whitepaper, the Cloud Security Alliance (CSA) proposed multiple work packages for security infrastructure, leveraging the possibilities of Software-defined Networking (SDN) and Network Function Virtualization (NFV).
Security functions require a more sophisticated modeling approach than regular network functions. Notably, the property to drop packets deemed malicious has a significant impact on Security Service Function Chains (SSFCs)—service chains consisting of multiple security functions to protect against multiple attack vectors. Under attack, the order of these chains influences the end-to-end system performance depending on the attack type. Unfortunately, it is hard to predict the attack composition at system design time. Thus, we make a case for dynamic attack-aware SSFC reordering. Also, we tackle the issues of the lack of integration between security functions and the surrounding network infrastructure, the insufficient use of short-term CPU frequency boosting, and the lack of Intrusion Detection and Prevention Systems (IDPS) against database ransomware attacks.
Current works focus on characterizing the performance of security functions and their behavior under overload without considering the surrounding infrastructure. Other works aim at replacing security functions using network infrastructure features but do not consider integrating security functions within the network. Further publications deal with using SDN for security or how to deal with new vulnerabilities introduced through SDN. However, they do not take security function performance into account. NFV is a popular field for research dealing with frameworks, benchmarking methods, the combination with SDN, and implementing security functions as Virtualized Network Functions (VNFs). Research in this area brought forth the concept of Service Function Chains (SFCs) that chain multiple network functions after one another. Nevertheless, they still do not consider the specifics of security functions. The mentioned CSA whitepaper proposes many valuable ideas but leaves their realization open to others.
This thesis presents solutions to increase the performance of single security functions using SDN, performance modeling, a framework for attack-aware SSFC reordering, a solution to make better use of CPU frequency boosting, and an IDPS against database ransomware.
Specifically, the primary contributions of this work are:
• We present approaches to dynamically bypass Intrusion Detection Systems (IDS) in order to increase their performance without reducing the security level. To this end, we develop and implement three SDN-based approaches (two dynamic and one static).
We evaluate the proposed approaches regarding security and performance and show that they significantly increase the performance compared to an inline IDS without significant security deficits. We show that using software switches can further increase the performance of the dynamic approaches up to a point where they can eliminate any throughput drawbacks when using the IDS.
• We design a DDoS Protection System (DPS) against TCP SYN flood attacks in the form of a VNF that works inside an SDN-enabled network. This solution eliminates known scalability and performance drawbacks of existing solutions for this attack type.
Then, we evaluate this solution showing that it correctly handles the connection establishment and present solutions for an observed issue. Next, we evaluate the performance showing that our solution increases performance up to three times. Parallelization and parameter tuning yields another 76% performance boost. Based on these findings, we discuss optimal deployment strategies.
• We introduce the idea of attack-aware SSFC reordering and explain its impact in a theoretical scenario. Then, we discuss the required information to perform this process.
We validate our claim of the importance of the SSFC order by analyzing the behavior of single security functions and SSFCs. Based on the results, we conclude that there is a massive impact on the performance of up to three orders of magnitude, and we find contradicting optimal orders for different workloads. Thus, we demonstrate the need for dynamic reordering.
Last, we develop a model for SSFCs regarding traffic composition and resource demands. We classify the traffic into multiple classes and model the effect of single security functions on the traffic and their generated resource demands as functions of the incoming network traffic. Based on our model, we propose three approaches to determine optimal orders for reordering.
• We implement a framework for attack-aware SSFC reordering based on this knowledge. The framework places all security functions inside an SDN-enabled network and reorders them using SDN flows.
Our evaluation shows that the framework can enforce all routes as desired. It correctly adapts to all attacks and returns to the original state after the attacks cease. We find possible security issues at the moment of reordering and present solutions to eliminate them.
• Next, we design and implement an approach to load balance servers while taking into account their ability to go into a state of Central Processing Unit (CPU) frequency boost. To this end, the approach collects temperature information from available hosts and places services on the host that can attain the boosted mode the longest.
We evaluate this approach and show its effectiveness. For high load scenarios, the approach increases the overall performance and the performance per watt. Even better results show up for low load workloads, where not only all performance metrics improve but also the temperatures and total power consumption decrease.
• Last, we design an IDPS protecting against database ransomware attacks that comprise multiple queries to attain their goal. Our solution models these attacks using a Colored Petri Net (CPN).
A proof-of-concept implementation shows that our approach is capable of detecting attacks without creating false positives for benign scenarios. Furthermore, our solution creates only a small performance impact.
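The ordering effect behind attack-aware SSFC reordering can be made concrete with a small sketch. This is not one of the three proposed approaches but a brute-force stand-in under simplified assumptions (each function filters exactly one traffic class, with invented per-packet costs and rates):

```python
from itertools import permutations

def chain_cost(order, traffic, functions):
    """Total resource demand of an SSFC for a given traffic mix.

    traffic: {traffic_class: packet_rate}; functions: {name: (drops, cost)},
    where `drops` is the traffic class the function filters out and `cost`
    its per-packet processing cost. Packets dropped early never load later
    functions, which is why the optimal order depends on the attack mix.
    """
    load, total = dict(traffic), 0.0
    for name in order:
        drops, cost = functions[name]
        total += cost * sum(load.values())
        load[drops] = 0.0  # this attack class is filtered here
    return total

def best_order(traffic, functions):
    return min(permutations(functions), key=lambda o: chain_cost(o, traffic, functions))

funcs = {"syn_proxy": ("syn_flood", 1.0), "ids": ("intrusion", 5.0)}
# Under a SYN flood, the cheap SYN proxy should come first; under an
# intrusion-heavy workload, the expensive IDS should drop traffic early.
print(best_order({"syn_flood": 900, "intrusion": 10, "benign": 90}, funcs))
print(best_order({"syn_flood": 10, "intrusion": 900, "benign": 90}, funcs))
```

Even this toy model reproduces the key finding above: the optimal order flips with the workload, so a static chain cannot be optimal for all attacks.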
Our contributions can help to improve the performance of security infrastructures. We see multiple application areas from data center operators over software and hardware developers to security and performance researchers. Most of the above-listed contributions found use in several research publications.
Regarding future work, we see the need to better integrate SDN-enabled security functions and SSFC reordering in data center networks. Future SSFC should discriminate between different traffic types, and security frameworks should support automatically learning models for security functions. We see the need to consider energy efficiency when regarding SSFCs and take CPU boosting technologies into account when designing performance models as well as placement, scaling, and deployment strategies. Last, for a faster adaptation against recent ransomware attacks, we propose machine-assisted learning for database IDPS signatures.
In this paper we study connectivity augmentation problems. Given a connected graph G with some desirable property, we want to make G 2-vertex connected (or 2-edge connected) by adding edges such that the resulting graph keeps the property. The aim is to add as few edges as possible. The property that we consider is planarity, both in an abstract graph-theoretic and in a geometric setting, where vertices correspond to points in the plane and edges to straight-line segments.
We show that it is NP-hard to find a minimum-cardinality augmentation that makes a planar graph 2-edge connected. For making a planar graph 2-vertex connected this was known. We further show that both problems are hard in the geometric setting, even when restricted to trees. The problems remain hard for higher degrees of connectivity. On the other hand we give polynomial-time algorithms for the special case of convex geometric graphs.
We also study the following related problem. Given a planar (plane geometric) graph G, two vertices s and t of G, and an integer c, how many edges have to be added to G such that G is still planar (plane geometric) and contains c edge- (or vertex-) disjoint s–t paths? For the planar case we give a linear-time algorithm for c = 2. For the plane geometric case we give optimal worst-case bounds for c = 2; for c = 3 we characterize the cases that have a solution.
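As background for the connectivity notions used here: a graph is 2-edge connected exactly when it is connected and contains no bridge (an edge whose removal disconnects it), so bridge finding identifies where augmentation edges are still needed. A standard sketch (an iterative Tarjan-style DFS, not an algorithm from this work):

```python
def bridges(graph):
    """Find all bridges of an undirected graph given as {vertex: [neighbors]}.

    Uses DFS discovery times and low-links: a tree edge (parent, u) is a
    bridge iff no back edge from u's subtree reaches parent or above.
    """
    disc, low, result, time = {}, {}, [], 0
    for root in graph:
        if root in disc:
            continue
        disc[root] = low[root] = time
        time += 1
        stack = [(root, None, iter(graph[root]))]
        while stack:
            u, parent, it = stack[-1]
            advanced = False
            for v in it:
                if v not in disc:
                    disc[v] = low[v] = time
                    time += 1
                    stack.append((v, u, iter(graph[v])))
                    advanced = True
                    break
                elif v != parent:
                    low[u] = min(low[u], disc[v])  # back edge
            if not advanced:
                stack.pop()
                if parent is not None:
                    low[parent] = min(low[parent], low[u])
                    if low[u] > disc[u - u + u] if False else low[u] > disc[parent]:
                        result.append((parent, u))
    return result

# A triangle with a pendant edge: only the pendant edge is a bridge.
g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(bridges(g))  # -> [(3, 4)]
```

Adding any edge incident to vertex 4 and a non-neighbor would remove the bridge, which is the elementary step the augmentation problems above seek to perform with as few edges as possible.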
Background
Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps have the risk of becoming cancerous. Therefore, polyps are classified using different classification systems, and further treatment and procedures are based on the classification of the polyp. Nevertheless, classification is not easy. Therefore, we suggest two novel automated classification systems assisting gastroenterologists in classifying polyps based on the NICE and Paris classifications.
Methods
We built two classification systems. One classifies polyps based on their shape (Paris); the other classifies polyps based on their texture and surface patterns (NICE). A two-step process is introduced for the Paris classification: first, detecting and cropping the polyp in the image, and second, classifying the polyp based on the cropped area with a transformer network. For the NICE classification, we design a few-shot learning algorithm based on the Deep Metric Learning approach. The algorithm creates an embedding space for polyps, which allows classification from a few examples, to account for the scarcity of NICE-annotated images in our database.
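The few-shot idea can be illustrated with a nearest-class-centroid sketch (the embedding vectors and class labels below are invented; in the actual system, a metric-learning network produces the embeddings):

```python
def classify_few_shot(embedding, support):
    """Nearest-class-centroid classification in an embedding space.

    A minimal sketch of few-shot classification: a metric-learning
    network maps images to vectors; a new polyp is assigned the class
    whose support-example centroid is closest in the embedding space.
    support: {label: [embedding, ...]} with a few examples per class.
    """
    def centroid(vecs):
        return [sum(c) / len(vecs) for c in zip(*vecs)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centroids = {label: centroid(vecs) for label, vecs in support.items()}
    return min(centroids, key=lambda label: dist2(embedding, centroids[label]))

support = {  # a few annotated examples per NICE type (hypothetical vectors)
    "NICE-1": [[0.1, 0.9], [0.2, 0.8]],
    "NICE-2": [[0.9, 0.1], [0.8, 0.3]],
}
print(classify_few_shot([0.15, 0.85], support))  # -> NICE-1
```

Because only centroids are needed, new classes can be added from a handful of examples without retraining, which is exactly what makes the paradigm attractive in data-scarce settings.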
Results
For the Paris classification, we achieve an accuracy of 89.35%, surpassing all previously published results and establishing a new state-of-the-art and baseline accuracy for other publications on a public data set. For the NICE classification, we achieve a competitive accuracy of 81.13% and thereby demonstrate the viability of the few-shot learning paradigm for polyp classification in data-scarce environments. Additionally, we show different ablations of the algorithms. Finally, we further elaborate on the explainability of the system by showing heat maps of the neural network explaining neural activations.
Conclusion
Overall we introduce two polyp classification systems to assist gastroenterologists. We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning.
These days, we are living in a digitalized world. Both our professional and private lives are pervaded by various IT services, which are typically operated using distributed computing systems (e.g., cloud environments). Due to the high level of digitalization, the operators of such systems are confronted with fast-paced and changing requirements. In particular, cloud environments have to cope with load fluctuations and respective rapid and unexpected changes in the computing resource demands. To face this challenge, so-called auto-scalers, such as the threshold-based mechanism in Amazon Web Services EC2, can be employed to enable elastic scaling of the computing resources. However, despite this opportunity, business-critical applications are still run with highly overprovisioned resources to guarantee a stable and reliable service operation. This strategy is pursued due to the lack of trust in auto-scalers and the concern that inaccurate or delayed adaptations may result in financial losses.
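The threshold-based mechanism mentioned above can be sketched in a few lines (an illustrative reactive scaler with invented thresholds and cooldown, not the EC2 implementation):

```python
def threshold_scaler(utilization, instances, low=0.3, high=0.7, cooldown=2):
    """Minimal reactive threshold auto-scaler.

    Adds an instance when observed utilization exceeds `high`, removes one
    when it falls below `low`, and honors a cooldown between adaptations.
    Returns the instance count after each observation. Note the inherent
    delay: scaling up happens only *after* overload has been observed.
    """
    history, last_action = [], -cooldown
    for t, u in enumerate(utilization):
        if t - last_action >= cooldown:
            if u > high:
                instances += 1
                last_action = t
            elif u < low and instances > 1:
                instances -= 1
                last_action = t
        history.append(instances)
    return history

print(threshold_scaler([0.5, 0.8, 0.9, 0.6, 0.2, 0.2], instances=2))
# -> [2, 3, 3, 3, 2, 2]
```

The trace shows both the reactive delay (the overload at t=1 is served by the old capacity) and the cooldown suppressing a second adaptation at t=2, which is precisely the behavior that motivates proactive, forecast-based scaling.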
To adapt the resource capacity in time, the future resource demands must be "foreseen", as reacting to changes once they are observed introduces an inherent delay. In other words, accurate forecasting methods are required to adapt systems proactively. A powerful approach in this context is time series forecasting, which is also applied in many other domains. The core idea is to examine past values and predict how these values will evolve as time progresses. According to the "No-Free-Lunch Theorem", there is no algorithm that performs best for all scenarios. Therefore, selecting a suitable forecasting method for a given use case is a crucial task. Simply put, each method has its benefits and drawbacks, depending on the specific use case. The choice of the forecasting method is usually based on expert knowledge, which cannot be fully automated, or on trial-and-error. In both cases, this is expensive and prone to error.
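To make the core idea of time series forecasting concrete, consider one of the simplest methods, the seasonal-naive forecast, which just repeats the last observed season (the load values below are invented):

```python
def seasonal_naive(series, season, horizon):
    """Seasonal-naive forecast: repeat the last observed season.

    One of the simplest forecasting methods. Its accuracy depends heavily
    on how strongly seasonal the series is, illustrating why, per the
    "No-Free-Lunch Theorem", no single method fits every use case.
    """
    last_season = series[-season:]
    return [last_season[h % season] for h in range(horizon)]

# A load series with a rough period of 4 observations.
history = [10, 50, 80, 40, 12, 52, 83, 41]
print(seasonal_naive(history, season=4, horizon=6))
# -> [12, 52, 83, 41, 12, 52]
```

This method is excellent for strictly periodic workloads and useless for trending or irregular ones, which is exactly the method-selection dilemma described above.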
Although auto-scaling and time series forecasting are established research fields, existing approaches cannot fully address the mentioned challenges: (i) In our survey on time series forecasting, we found that publications on time series forecasting typically consider only a small set of (mostly related) methods and evaluate their performance on a small number of time series with only a few error measures while providing no information on the execution time of the studied methods. Therefore, such articles cannot be used to guide the choice of an appropriate method for a particular use case; (ii) Existing open-source hybrid forecasting methods that take advantage of at least two methods to tackle the "No-Free-Lunch Theorem" are computationally intensive, poorly automated, designed for a particular data set, or they lack a predictable time-to-result. Methods exhibiting a high variance in the time-to-result cannot be applied for time-critical scenarios (e.g., auto-scaling), while methods tailored to a specific data set introduce restrictions on the possible use cases (e.g., forecasting only annual time series); (iii) Auto-scalers typically scale an application either proactively or reactively. Even though some hybrid auto-scalers exist, they lack sophisticated solutions to combine reactive and proactive scaling. For instance, resources are only released proactively while resource allocation is entirely done in a reactive manner (inherently delayed); (iv) The majority of existing mechanisms do not take the provider's pricing scheme into account while scaling an application in a public cloud environment, which often results in excessive charged costs. Even though some cost-aware auto-scalers have been proposed, they only consider the current resource demands, neglecting their development over time. For example, resources are often shut down prematurely, even though they might be required again soon.
To address the mentioned challenges and the shortcomings of existing work, this thesis presents three contributions: (i) The first contribution-a forecasting benchmark-addresses the problem of limited comparability between existing forecasting methods; (ii) The second contribution-Telescope-provides an automated hybrid time series forecasting method addressing the challenge posed by the "No-Free-Lunch Theorem"; (iii) The third contribution-Chamulteon-provides a novel hybrid auto-scaler for coordinated scaling of applications comprising multiple services, leveraging Telescope to forecast the workload intensity as a basis for proactive resource provisioning. In the following, the three contributions of the thesis are summarized:
Contribution I - Forecasting Benchmark
To establish a level playing field for evaluating the performance of forecasting methods in a broad setting, we propose a novel benchmark that automatically evaluates and ranks forecasting methods based on their performance in a diverse set of evaluation scenarios. The benchmark comprises four different use cases, each covering 100 heterogeneous time series taken from different domains. The data set was assembled from publicly available time series and was designed to exhibit much higher diversity than existing forecasting competitions. Besides proposing a new data set, we introduce two new measures that describe different aspects of a forecast. We applied the developed benchmark to evaluate Telescope.
Contribution II - Telescope
To provide a generic forecasting method, we introduce a novel machine learning-based forecasting approach that automatically retrieves relevant information from a given time series. More precisely, Telescope automatically extracts intrinsic time series features and then decomposes the time series into components, building a forecasting model for each of them. Each component is forecast by applying a different method and then the final forecast is assembled from the forecast components by employing a regression-based machine learning algorithm. In more than 1300 hours of experiments benchmarking 15 competing methods (including approaches from Uber and Facebook) on 400 time series, Telescope outperformed all methods, exhibiting the best forecast accuracy coupled with a low and reliable time-to-result. Compared to the competing methods that exhibited, on average, a forecast error (more precisely, the symmetric mean absolute forecast error) of 29%, Telescope exhibited an error of 20% while being 2556 times faster. In particular, the methods from Uber and Facebook exhibited an error of 48% and 36%, and were 7334 and 19 times slower than Telescope, respectively.
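The decomposition idea can be sketched in simplified form. This is not Telescope itself (which extracts features, chooses per-component methods, and recombines them with a learned regression model) but a toy additive decomposition: fit a linear trend, average the seasonal effect per phase, forecast each component, and recombine:

```python
def decompose_and_forecast(series, season, horizon):
    """Toy decomposition-based forecast: linear trend + additive season.

    Splits the series into a least-squares linear trend and an average
    seasonal pattern, forecasts both components, and adds them back up.
    """
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    # Least-squares slope and intercept of the trend component.
    slope_num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    slope_den = sum((t - t_mean) ** 2 for t in range(n))
    slope = slope_num / slope_den
    intercept = y_mean - slope * t_mean
    # Average seasonal effect per phase of the detrended series.
    detrended = [y - (intercept + slope * t) for t, y in enumerate(series)]
    seasonal = [sum(detrended[p::season]) / len(detrended[p::season])
                for p in range(season)]
    # Recombine: extrapolated trend plus the repeating seasonal pattern.
    return [intercept + slope * (n + h) + seasonal[(n + h) % season]
            for h in range(horizon)]

series = [i + [-5, 5, 5, -5][i % 4] for i in range(16)]  # trend + season
print([round(v, 1) for v in decompose_and_forecast(series, season=4, horizon=4)])
# -> [11.0, 22.0, 23.0, 14.0]
```

Forecasting each component with a method suited to it, instead of one method for the raw series, is the design choice that lets hybrid approaches sidestep the "No-Free-Lunch Theorem".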
Contribution III - Chamulteon
To enable reliable auto-scaling, we present a hybrid auto-scaler that combines proactive and reactive techniques to scale distributed cloud applications comprising multiple services in a coordinated and cost-effective manner. More precisely, proactive adaptations are planned based on forecasts of Telescope, while reactive adaptations are triggered based on actual observations of the monitored load intensity. To solve occurring conflicts between reactive and proactive adaptations, a complex conflict resolution algorithm is implemented. Moreover, when deployed in public cloud environments, Chamulteon reviews adaptations with respect to the cloud provider's pricing scheme in order to minimize the charged costs. In more than 400 hours of experiments evaluating five competing auto-scaling mechanisms in scenarios covering five different workloads, four different applications, and three different cloud environments, Chamulteon exhibited the best auto-scaling performance and reliability while at the same time reducing the charged costs. The competing methods provided insufficient resources for (on average) 31% of the experimental time; in contrast, Chamulteon cut this time to 8% and the SLO (service level objective) violations from 18% to 6% while using up to 15% less resources and reducing the charged costs by up to 45%.
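The interplay of reactive and proactive decisions can be sketched with a deliberately simple resolution rule (Chamulteon's actual conflict resolution algorithm is considerably more involved; the capacity figures are invented):

```python
import math

def hybrid_decision(observed, forecast, capacity_per_instance):
    """Toy conflict resolution between reactive and proactive scaling.

    The reactive part sizes capacity for the load observed right now, the
    proactive part for the forecast load; the simple rule below resolves
    conflicts in favor of safety by never scaling below what the current
    observation requires.
    """
    reactive = math.ceil(observed / capacity_per_instance)
    proactive = math.ceil(forecast / capacity_per_instance)
    return max(reactive, proactive)

# Forecast above observation: scale out proactively, ahead of the load.
print(hybrid_decision(observed=450, forecast=800, capacity_per_instance=100))  # -> 8
# Forecast below observation: the reactive floor prevents premature scale-in.
print(hybrid_decision(observed=450, forecast=200, capacity_per_instance=100))  # -> 5
```

The second case hints at the cost-awareness discussed above: whether to keep or release the extra instances also depends on the provider's billing granularity, not only on the instantaneous demand.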
The contributions of this thesis can be seen as major milestones in the domain of time series forecasting and cloud resource management. (i) This thesis is the first to present a forecasting benchmark that covers a variety of different domains with a high diversity between the analyzed time series. Based on the provided data set and the automatic evaluation procedure, the proposed benchmark contributes to enhance the comparability of forecasting methods. The benchmarking results for different forecasting methods enable the selection of the most appropriate forecasting method for a given use case. (ii) Telescope provides the first generic and fully automated time series forecasting approach that delivers both accurate and reliable forecasts while making no assumptions about the analyzed time series. Hence, it eliminates the need for expensive, time-consuming, and error-prone procedures, such as trial-and-error searches or consulting an expert. This opens up new possibilities especially in time-critical scenarios, where Telescope can provide accurate forecasts with a short and reliable time-to-result.
Although Telescope was applied for this thesis in the field of cloud computing, its applicability is not limited to this domain, as demonstrated in the evaluation. Moreover, Telescope, which was made available on GitHub, is already used in a number of interdisciplinary data science projects, for instance, predictive maintenance in an Industry 4.0 context, heart failure prediction in medicine, or as a component of predictive models of beehive development. (iii) In the context of cloud resource management, Chamulteon is a major milestone for increasing the trust in cloud auto-scalers. The conflict resolution algorithm enables reliable and accurate scaling behavior that reduces losses caused by excessive resource allocation or SLO violations. In other words, Chamulteon provides reliable online adaptations, minimizing charged costs while at the same time maximizing user experience.
A deep integration of routine care and research remains challenging in many respects. We aimed to show the feasibility of an automated transformation and transfer process feeding deeply structured data with a high level of granularity, collected for a clinical prospective cohort study, from our hospital information system to the study's electronic data capture system, while accounting for study-specific data and visits. We developed a system integrating all necessary software and organizational processes, which was then used in the study. The process and key system components are described together with descriptive statistics to show its feasibility in general and to identify individual challenges in particular. Data of 2051 patients enrolled between 2014 and 2020 was transferred. We were able to automate the transfer of approximately 11 million individual data values, representing 95% of all entered study data. These were recorded in n = 314 variables (28% of all variables), with some variables being used multiple times for follow-up visits. Our validation approach allowed for consistently good data quality over the course of the study. In conclusion, the automated transfer of multi-dimensional routine medical data from HIS to study databases using specific study data and visit structures is complex, yet viable.
Automation in Software Performance Engineering Based on a Declarative Specification of Concerns
(2019)
Software performance is of particular relevance to software system design, operation, and evolution because it has a significant impact on key business indicators. During the life-cycle of a software system, its implementation, configuration, and deployment are subject to multiple changes that may affect the end-to-end performance characteristics. Consequently, performance analysts continually need to provide answers to and act based on performance-relevant concerns. To ensure a desired level of performance, software performance engineering provides a plethora of methods, techniques, and tools for measuring, modeling, and evaluating performance properties of software systems. However, answering performance concerns is subject to a significant semantic gap between the level on which performance concerns are formulated and the technical level on which performance evaluations are actually conducted. Performance evaluation approaches come with different strengths and limitations concerning, for example, accuracy, time-to-result, or system overhead. For the involved stakeholders, it can be an elaborate process to reasonably select, parameterize, and correctly apply performance evaluation approaches, and to filter and interpret the obtained results. An additional challenge is that available performance evaluation artifacts may change over time, which requires switching between different measurement-based and model-based performance evaluation approaches during the system evolution. In model-based analysis, the effort involved in creating performance models can also outweigh their benefits.
To overcome these deficiencies and enable an automatic and holistic evaluation of performance throughout the software engineering life-cycle requires an approach that: (i) integrates multiple types of performance concerns and evaluation approaches, (ii) automates performance model creation, and (iii) automatically selects an evaluation methodology tailored to a specific scenario. This thesis presents a declarative approach, called Declarative Performance Engineering (DPE), to automate performance evaluation based on a human-readable specification of performance-related concerns. To this end, we separate the definition of performance concerns from their solution. The primary scientific contributions presented in this thesis are:
A declarative language to express performance-related concerns and a corresponding processing framework:
We provide a language to specify performance concerns independent of a concrete performance evaluation approach. Besides the specification of functional aspects, the language optionally allows non-functional trade-offs to be included. To answer these concerns, we provide a framework architecture and a corresponding reference implementation to process performance concerns automatically. It allows arbitrary performance evaluation approaches to be integrated and is accompanied by reference implementations for model-based and measurement-based performance evaluation.
Automated creation of architectural performance models from execution traces:
The creation of performance models can involve significant effort, outweighing the benefits of model-based performance evaluation. We provide a model extraction framework that creates architectural performance models based on execution traces provided by monitoring tools. The framework separates the derivation of generic information from model creation routines. To derive generic information, the framework combines state-of-the-art extraction and estimation techniques. We isolate object creation routines in a generic model builder interface based on concepts present in multiple performance-annotated architectural modeling formalisms. When reusing the generic framework, developers only need to write object creation routines to support model extraction for a novel performance modeling formalism, instead of creating model extraction software from scratch.
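The separation between generic trace analysis and formalism-specific model creation can be sketched as follows. The interface names are hypothetical; the framework's actual builder interface may differ:

```python
from abc import ABC, abstractmethod

class ModelBuilder(ABC):
    """Generic builder interface: the extraction framework derives services,
    call relations, and resource demands from execution traces and hands them
    to a formalism-specific builder. Supporting a new performance modeling
    formalism then only requires implementing these creation routines."""
    @abstractmethod
    def add_service(self, name: str): ...
    @abstractmethod
    def add_call(self, caller: str, callee: str, probability: float): ...
    @abstractmethod
    def add_resource_demand(self, service: str, cpu_seconds: float): ...

class PlainTextBuilder(ModelBuilder):
    """Toy builder that renders the extracted architecture as text lines."""
    def __init__(self):
        self.lines = []
    def add_service(self, name):
        self.lines.append(f"service {name}")
    def add_call(self, caller, callee, probability):
        self.lines.append(f"call {caller} -> {callee} (p={probability})")
    def add_resource_demand(self, service, cpu_seconds):
        self.lines.append(f"demand {service}: {cpu_seconds}s CPU")

def extract(traces, builder: ModelBuilder):
    # Framework side: derive generic information from the traces (here
    # trivially) and delegate all model element creation to the builder.
    for caller, callee, demand in traces:
        builder.add_service(callee)
        builder.add_call(caller, callee, probability=1.0)
        builder.add_resource_demand(callee, demand)
```

Swapping `PlainTextBuilder` for a builder targeting a concrete performance modeling formalism reuses the same `extract` logic unchanged, which is where the claimed reduction in implementation effort comes from.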
Automated and extensible decision support for performance evaluation approaches:
We present a methodology and tooling for the automated selection of a performance evaluation approach tailored to the user concerns and application scenario. To this end, we propose to decouple the complexity of selecting a performance evaluation approach for a given scenario by providing solution approach capability models and a generic decision engine. The proposed capability meta-model enables the description of functional and non-functional capabilities of performance evaluation approaches and tools at different granularities. In contrast to existing tree-based decision support mechanisms, the decoupling approach allows characteristics of solution approaches to be updated easily and new rating criteria to be appended, thereby staying abreast of the evolution in performance evaluation tooling and system technologies.
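The idea of matching user concerns against capability models in a generic decision engine can be illustrated as follows. All names, the capability sets, and the scoring rule are assumptions for illustration, not the thesis's actual meta-model:

```python
def select_approach(concern, capability_models):
    """Pick the evaluation approach whose capability model covers all required
    functional capabilities; among those, prefer the one that best matches the
    non-functional priority (e.g. smallest time-to-result or overhead)."""
    candidates = [
        (name, caps) for name, caps in capability_models.items()
        if concern["required"] <= caps["functional"]  # functional coverage
    ]
    if not candidates:
        return None
    key = concern["minimize"]  # non-functional criterion, e.g. "time_to_result"
    return min(candidates, key=lambda nc: nc[1][key])[0]

# Hypothetical capability models for two solution approaches.
capability_models = {
    "measurement": {"functional": {"response_time", "utilization"},
                    "time_to_result": 3600, "overhead": 0.2},
    "simulation":  {"functional": {"response_time", "utilization", "what_if"},
                    "time_to_result": 300, "overhead": 0.0},
}
concern = {"required": {"response_time", "what_if"}, "minimize": "time_to_result"}
```

Because the capability models are plain data, updating a tool's characteristics or adding a new rating criterion only touches the dictionaries, not the decision logic, which is the advantage over hard-coded decision trees.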
Time-to-result estimation for model-based performance prediction:
The time required to execute a model-based analysis plays an important role in different decision processes. For example, evaluation scenarios might require the prediction results to be available in a limited period of time such that the system can be adapted in time to ensure the desired quality of service. We propose a method to estimate the time-to-result for model-based performance prediction based on model characteristics and analysis parametrization. We learn a prediction model using performance-relevant features that we determined using statistical tests. We implement the approach and demonstrate its practicability by applying it to analyze a simulation-based multi-step performance evaluation approach for a representative architectural performance modeling formalism.
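As a minimal illustration of learning a time-to-result predictor from model characteristics, the sketch below fits a single feature with ordinary least squares; the thesis uses several statistically selected features and richer machine-learning models, and the training data here is hypothetical:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for a single feature (e.g. number of model
    elements) predicting analysis time in seconds."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical training data: (model size, observed analysis duration).
sizes   = [10, 20, 40, 80]
seconds = [1.2, 2.1, 4.3, 8.2]
slope, intercept = fit_linear(sizes, seconds)

def predict(size):
    """Estimated time-to-result for a model of the given size."""
    return slope * size + intercept
```

Such an estimate can then feed the decision engine above it in the pipeline, e.g. to rule out analyses that cannot deliver results within the available time budget.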
We validate each of the contributions based on representative case studies. The evaluation of automatic performance model extraction for two case study systems shows that the resulting models can accurately predict the performance behavior. Prediction errors are below 3% for resource utilization and mostly less than 20% for service response time. The separate evaluation of the reusability shows that the presented approach lowers the implementation effort for automated model extraction tools by up to 91%. Based on two case studies applying measurement-based and model-based performance evaluation techniques, we demonstrate the suitability of the declarative performance engineering framework to answer multiple kinds of performance concerns customized to non-functional goals. Subsequently, we discuss the reduced effort of applying performance analyses using the integrated and automated declarative approach. Also, the evaluation of the declarative framework reviews the benefits and savings of integrating performance evaluation approaches into the declarative performance engineering framework. We demonstrate the applicability of the decision framework for performance evaluation approaches by applying it to depict existing decision trees. Then, we show how we can quickly adapt to the evolution of performance evaluation methods, which is challenging for static tree-based decision support systems. In doing so, we show how to cope with the evolution of functional and non-functional capabilities of performance evaluation software and explain how to integrate new approaches. Finally, we evaluate the accuracy of the time-to-result estimation for a set of machine-learning algorithms and different training datasets. The predictions exhibit a mean percentage error below 20%, which can be further improved by including performance evaluations of the considered model into the training data.
The presented contributions represent a significant step towards an integrated performance engineering process that combines the strengths of model-based and measurement-based performance evaluation. The proposed performance concern language, in conjunction with the processing framework, significantly reduces the complexity of applying performance evaluations for all stakeholders and thereby enables performance awareness throughout the software engineering life-cycle. It also removes the semantic gap between the level on which users formulate performance concerns and the technical level on which performance evaluations are actually conducted.
The development of ICT infrastructures has facilitated the emergence of new paradigms for looking at society and the environment over the last few years. Participatory environmental sensing, i.e. directly involving citizens in environmental monitoring, is one example, which is hoped to encourage learning and enhance awareness of environmental issues. In this paper, an analysis of the behaviour of individuals involved in noise sensing is presented. Citizens have been involved in noise measuring activities through the WideNoise smartphone application. This application has been designed to record both objective (noise samples) and subjective (opinions, feelings) data. The application has been open to be used freely by anyone and has been widely employed worldwide. In addition, several test cases have been organised in European countries. Based on the information submitted by users, an analysis of emerging awareness and learning is performed. The data show that changes in the way the environment is perceived do appear after repeated usage of the application. Specifically, users learn how to recognise the different noise levels they are exposed to. Additionally, the subjective data collected indicate increased user involvement over time and a categorisation effect between pleasant and less pleasant environments.
This thesis is divided into two parts.
In the first part we contribute to a working program initiated by Pudlák (2017) who lists several major complexity theoretic conjectures relevant to proof complexity and asks for oracles that separate pairs of corresponding relativized conjectures. Among these conjectures are:
- \(\mathsf{CON}\) and \(\mathsf{SAT}\): coNP (resp., NP) does not contain complete sets that have P-optimal proof systems.
- \(\mathsf{CON}^{\mathsf{N}}\): coNP does not contain complete sets that have optimal proof systems.
- \(\mathsf{TFNP}\): there do not exist complete total polynomial search problems (also known as total NP search problems).
- \(\mathsf{DisjNP}\) and \(\mathsf{DisjCoNP}\): there do not exist complete disjoint NP pairs (resp., disjoint coNP pairs).
- \(\mathsf{UP}\): UP does not contain complete problems.
- \(\mathsf{NP}\cap\mathsf{coNP}\): \(\mathrm{NP}\cap\mathrm{coNP}\) does not contain complete problems.
- \(\mathrm{P}\ne\mathrm{NP}\).
We construct several of the oracles that Pudlák asks for.
In the second part we investigate the computational complexity of balance problems for \(\{-,\cdot\}\)-circuits computing finite sets of natural numbers (note that \(-\) denotes the set difference). These problems naturally build on problems for integer expressions and integer circuits studied by Stockmeyer and Meyer (1973), McKenzie and Wagner (2007), and Glaßer et al. (2010).
Our work shows that the balance problem for \(\{-,\cdot\}\)-circuits is undecidable. This is the first natural problem for integer circuits or related constraint satisfaction problems that admits only one arithmetic operation and is proven to be undecidable.
Starting from this result we precisely characterize the complexity of balance problems for proper subsets of \(\{-,\cdot\}\). These problems turn out to be complete for one of the classes L, NL, and NP.
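A \(\{-,\cdot\}\)-circuit over finite sets of naturals can be evaluated bottom-up, as the following sketch shows. The encoding of circuits as gate lists is purely illustrative and not the one used in the thesis:

```python
def evaluate(circuit):
    """Evaluate a {-, .}-circuit bottom-up on finite sets of naturals.
    '-' is set difference; '*' is the pointwise product
    A*B = {a*b : a in A, b in B}. The circuit is a list of gates in
    topological order; each gate is ('in', set), ('-', i, j), or ('*', i, j),
    where i and j index earlier gates. Returns the set computed by the
    last gate."""
    values = []
    for gate in circuit:
        if gate[0] == "in":
            values.append(frozenset(gate[1]))
        elif gate[0] == "-":
            values.append(values[gate[1]] - values[gate[2]])
        else:  # "*"
            a, b = values[gate[1]], values[gate[2]]
            values.append(frozenset(x * y for x in a for y in b))
    return values[-1]

# Example: ({1,2,3} * {2}) - {1,2,3} = {2,4,6} - {1,2,3} = {4,6}
circuit = [("in", {1, 2, 3}), ("in", {2}), ("*", 0, 1), ("-", 2, 0)]
```

Evaluating a given circuit on concrete input sets is straightforward; the hardness results concern questions about the set a circuit computes, which cannot in general be settled by enumeration.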
Computer games are highly immersive, engaging, and motivating learning environments. By providing a tutorial at the start of a new game, players learn the basics of the game's underlying principles as well as practice how to successfully play the game. During the actual gameplay, players repetitively apply this knowledge, thus improving it through repetition. Computer games also confront players with a constant stream of new challenges which increase in difficulty over time. As a result, computer games even require players to transfer their knowledge to master these new challenges. A computer game consists of several game mechanics. Game mechanics are the rules of a computer game and encode the game's underlying principles. They create the virtual environments, generate a game's challenges, and allow players to interact with the game. Game mechanics can also encode real-world knowledge, which players may acquire via gameplay. However, the actual process of encoding and learning knowledge using game mechanics has not yet been thoroughly defined. This thesis therefore proposes a theoretical model to define knowledge learning using game mechanics: the Gamified Knowledge Encoding. The model is applied to design a serious game for affine transformations, namely GEtiT, and to predict the learning outcome of playing a computer game that encodes orbital mechanics in its game mechanics, namely Kerbal Space Program. To assess the effects of different visualization technologies on the overall learning outcome, GEtiT visualizes the gameplay in desktop-3D and immersive virtual reality. The model's applicability for effective game design as well as GEtiT's overall design are evaluated in a usability study. The learning outcome of playing GEtiT and Kerbal Space Program is assessed in four additional user studies.
The studies' results validate the use of the Gamified Knowledge Encoding for developing effective serious games and for predicting the learning outcome of existing serious games. Compared to a traditional learning method, GEtiT and Kerbal Space Program yield a similar training effect but a higher motivation to tackle the assignments. In conclusion, this thesis expands the understanding of using game mechanics for the effective learning of knowledge. The presented results are of high importance for researchers, educators, and developers, as they also provide guidelines for the development of effective serious games.