Refine
Has Fulltext
- yes (341)
Document Type
- Journal article (142)
- Doctoral Thesis (139)
- Working Paper (40)
- Conference Proceeding (9)
- Report (5)
- Bachelor Thesis (2)
- Master Thesis (2)
- Book (1)
- Study Thesis (term paper) (1)
Language
- English (341)
Keywords
- Leistungsbewertung (29)
- virtual reality (19)
- Datennetz (14)
- Quality of Experience (12)
- Netzwerk (10)
- Robotik (10)
- machine learning (9)
- Cloud Computing (7)
- Optimierung (7)
- Performance Evaluation (7)
- Autonomer Roboter (6)
- Kleinsatellit (6)
- Komplexitätstheorie (6)
- Maschinelles Lernen (6)
- Mobiler Roboter (6)
- Modellierung (6)
- SDN (6)
- Virtuelle Realität (6)
- artificial intelligence (6)
- deep learning (6)
- Graphenzeichnen (5)
- P4 (5)
- Rechnernetz (5)
- Routing (5)
- Software Defined Networking (5)
- Theoretische Informatik (5)
- Verteiltes System (5)
- graph drawing (5)
- Algorithmus (4)
- Approximationsalgorithmus (4)
- Crowdsourcing (4)
- Deep learning (4)
- Dienstgüte (4)
- Drahtloses Sensorsystem (4)
- Graph (4)
- IoT (4)
- Komplexität (4)
- Mensch-Maschine-Schnittstelle (4)
- Optimization (4)
- Overlay-Netz (4)
- QoE (4)
- Quadrocopter (4)
- Satellit (4)
- Simulation (4)
- Software Engineering (4)
- Telekommunikationsnetz (4)
- Virtualisierung (4)
- augmented reality (4)
- avatars (4)
- immersion (4)
- mapping (4)
- navigation (4)
- simulation (4)
- Algorithmische Geometrie (3)
- Autonomous UAV (3)
- Benchmarking (3)
- Computer Vision (3)
- CubeSat (3)
- Data Mining (3)
- Drahtloses lokales Netz (3)
- Echtzeitsystem (3)
- Energieeffizienz (3)
- Energy Efficiency (3)
- Information Extraction (3)
- Internet of Things (3)
- Latenz (3)
- LoRaWAN (3)
- Localization (3)
- Machine Learning (3)
- Mehrkriterielle Optimierung (3)
- Mensch-Maschine-Kommunikation (3)
- Mixed Reality (3)
- Netzwerkmanagement (3)
- Neuronales Netz (3)
- Peer-to-Peer-Netz (3)
- Punktwolke (3)
- Quadrotor (3)
- Ressourcenmanagement (3)
- Robotics (3)
- Software (3)
- Software-defined networking (3)
- UAV (3)
- Video Streaming (3)
- Videoübertragung (3)
- approximation algorithm (3)
- automation (3)
- complexity (3)
- crossing minimization (3)
- endoscopy (3)
- fully convolutional neural networks (3)
- gastroenterology (3)
- historical document analysis (3)
- human-computer interaction (3)
- information extraction (3)
- quality of experience (3)
- virtual environments (3)
- 3D model generation (2)
- 5G (2)
- Ausfallsicheres System (2)
- Ausfallsicherheit (2)
- Auto-Scaling (2)
- Benutzerschnittstelle (2)
- Berechnungskomplexität (2)
- Betriebssystem (2)
- Bildverarbeitung (2)
- CADe (2)
- Cloud Gaming (2)
- DNA storage (2)
- Deep Learning (2)
- Distributed computing (2)
- Dot-Depth Problem (2)
- Echtzeit (2)
- Effizienter Algorithmus (2)
- Entscheidbarkeit (2)
- Ethernet (2)
- Fernwartung (2)
- Forecasting (2)
- Framework <Informatik> (2)
- Future Internet (2)
- Hardware (2)
- Human-Robot-Interaction (2)
- IEEE 802.11 (2)
- Industrie 4.0 (2)
- Internet (2)
- Kommunikationsprotokoll (2)
- Komplexitätsklasse (2)
- Kreuzung (2)
- Künstliche Intelligenz (2)
- Lokalisation (2)
- MP-DCCP (2)
- Maschinelles Sehen (2)
- Mathematisches Modell (2)
- Mensch-Maschine-System (2)
- Mensch-Roboter-Interaktion (2)
- Metrics (2)
- Monitoring (2)
- NP-hardness (2)
- Ontologie <Wissensverarbeitung> (2)
- Optical Character Recognition (2)
- Optical Music Recognition (2)
- PROLOG <Programmiersprache> (2)
- Prognose (2)
- Raumfahrttechnik (2)
- Resilience (2)
- Resource Management (2)
- Self-Aware Computing (2)
- Sensor (2)
- Situation Awareness (2)
- Software Performance Engineering (2)
- Streaming <Kommunikationstechnik> (2)
- TSN (2)
- Teleoperation (2)
- Theoretical Computer Science (2)
- Travelling-salesman-Problem (2)
- Unmanned Aerial Vehicle (UAV) (2)
- User Interface (2)
- Venus (2)
- Verbotsmuster (2)
- Videospiel (2)
- Virtual Reality (2)
- Visualisierung (2)
- Wissensrepräsentation (2)
- XR (2)
- Zuverlässigkeit (2)
- agency (2)
- algorithms (2)
- autonomous (2)
- avatar embodiment (2)
- background knowledge (2)
- body weight modification (2)
- body weight perception (2)
- colonoscopy (2)
- communication networks (2)
- connected mobility applications (2)
- crowdsourcing (2)
- data warehouse (2)
- decidability (2)
- distance measurement (2)
- distributed control (2)
- dot-depth problem (2)
- education (2)
- educational tool (2)
- electronic health records (2)
- embodiment (2)
- emotions (2)
- endliche Automaten (2)
- endurance (2)
- exposure (2)
- finite automata (2)
- fog computing (2)
- forbidden patterns (2)
- formation control (2)
- games (2)
- graphs (2)
- immersive technologies (2)
- intrusion detection (2)
- jitter (2)
- knowledge acquisition (2)
- knowledge engineering (2)
- knowledge representation (2)
- knowledge-based systems (2)
- latency (2)
- locomotion (2)
- measurements (2)
- medieval manuscripts (2)
- mobile laser scanning (2)
- mobile networks (2)
- multipath (2)
- multipath scheduling (2)
- natural language processing (2)
- network calculus (2)
- neume notation (2)
- neural networks (2)
- object detection (2)
- ontology (2)
- optimization (2)
- performance (2)
- performance evaluation (2)
- performance modeling (2)
- performance monitoring (2)
- pose estimation (2)
- prediction (2)
- real-time (2)
- regular languages (2)
- reguläre Sprachen (2)
- rehabilitation (2)
- satellite communication (2)
- self-adaptive systems (2)
- self-aware computing (2)
- sensor fusion (2)
- stroke (2)
- unmanned aerial vehicles (2)
- user experience (2)
- user study (2)
- virtual body ownership (2)
- wearable (2)
- 3D Laser Scanning (1)
- 3D Pointcloud (1)
- 3D Punktwolke (1)
- 3D Reconstruction (1)
- 3D Sensor (1)
- 3D Vision (1)
- 3D mapping (1)
- 3D object recognition (1)
- 3D point cloud (1)
- 3D thermal mapping (1)
- 3D-Rekonstruktion (1)
- 3D-reconstruction methods (1)
- 3DTK toolkit (1)
- 3d point clouds (1)
- 4D-GIS (1)
- 4G Networks (1)
- 5G core network (1)
- 5G-ATSSS (1)
- 5GC (1)
- 6DOF Pose Estimation (1)
- 6G (1)
- ATSSS (1)
- AVA (1)
- Abhängigkeitsgraph (1)
- Adaptive Video Streaming (1)
- Adaptives System (1)
- Adaptives Videostreaming (1)
- Add-on-Miss (1)
- Admission Control (1)
- Algorithmik (1)
- Alps (1)
- Alter Druck (1)
- Angewandte Mathematik (1)
- Anomalieerkennung (1)
- Anwendungsfall (1)
- Apple Watch 7 (1)
- Application-Aware Resource Management (1)
- Approximation (1)
- Arterie (1)
- Artery (1)
- Attitude Determination and Control (1)
- Attitude Dynamics (1)
- Attitude Heading Reference System (AHRS) (1)
- Automat <Automatentheorie> (1)
- Automata Theory (1)
- Automatentheorie (1)
- Automatic Text Recognition (1)
- Automation (1)
- Automatische Texterkennung (ATR) (1)
- Autonomic Computing (1)
- Autonomous Robot (1)
- Autonomous multi-vehicle systems (1)
- Autoreduzierbarkeit (1)
- Autorotation (1)
- Außerschulische Bildung (1)
- Avatar <Informatik> (1)
- Avionik (1)
- Backbone-Netz (1)
- Background Knowledge (1)
- Balloon (1)
- Base composition (1)
- Baseline Constrained LAMBDA (1)
- Bayes analysis (1)
- Bayes-Verfahren (1)
- Bayesian model comparison (1)
- Benutzerinteraktion (1)
- Berechenbarkeit (1)
- Bernoulli (1)
- Bernoulli Raum (1)
- Bernoulli Space (1)
- Beschriftung (1)
- Beschriftung von Straßen (1)
- Bestärkendes Lernen (1)
- Bestärkendes Lernen <Künstliche Intelligenz> (1)
- Bewegungskompensation (1)
- Bewegungskoordination (1)
- Beweissystem (1)
- Biased gene conversion (1)
- Bioinformatik (1)
- BitTorrent (1)
- Bodenstation (1)
- Boolean Grammar (1)
- Boolean equivalence (1)
- Boolean functions (1)
- Boolean hierarchy (1)
- Boolean isomorphism (1)
- Boolesche Funktionen (1)
- Boolesche Grammatik (1)
- Boolesche Hierarchie (1)
- Broadcast Growth Codes (BCGC) (1)
- CASE (1)
- CDN-Netzwerk (1)
- CEF (1)
- CLIP (1)
- Call Graph (1)
- Cellular Networks (1)
- Character Networks (1)
- Character Reference Detection (1)
- Chord (1)
- Clinical Data Warehouse (1)
- Clones (1)
- Cloud (1)
- Cloud computing (1)
- Cloud-native (1)
- Communication (1)
- Communication Networks (1)
- Compass framework (1)
- Compiler (1)
- Complexity Theory (1)
- Complicacy (1)
- Compression (1)
- Computational Geometry (1)
- Computational complexity (1)
- Computer Science education (1)
- Computerkartografie (1)
- Computersicherheit (1)
- Computersimulation (1)
- Computerspiel (1)
- Computerunterstütztes Lernen (1)
- Conjunction analysis (1)
- Containerization (1)
- Content Delivery Network (1)
- Content Distribution (1)
- Control room (1)
- Convoy Protection (1)
- Cooperative UAV (1)
- Coreference (1)
- Cost-benefit analysis (1)
- Couch tracking (1)
- Crowd sourcing (1)
- Crowdsensing (1)
- CubeSat GNSS (1)
- Cyber-physisches System (1)
- DASH (1)
- DHT (1)
- Daedalus-Projekt (1)
- Danish hernia database (1)
- Data Fusion (1)
- Data Science (1)
- Data Warehouse (1)
- Data-Warehouse-Konzept (1)
- Datenkommunikationsnetz (1)
- Datenübertragung (1)
- Debugging (1)
- DecaWave (1)
- Decentralized formation control (1)
- Decision Support (1)
- Declarative Performance Engineering (1)
- Deep Georeferencing (1)
- Deep Reinforcement Learning (1)
- Deflection routing (1)
- Delay Tolerant Network (1)
- Dependency Graph (1)
- Design (1)
- Design and Development (1)
- Design patterns (1)
- Desynchronisation (1)
- Desynchronization (1)
- Dezentrale Regelung (1)
- Dichotomy (1)
- Didaktik der Informatik (1)
- Differential GPS (DGPS) (1)
- Digital Humanities (1)
- Digitale Karte (1)
- Dijkstra’s algorithm (1)
- Directed Flight (1)
- Disjoint pair (1)
- Diskrete Simulation (1)
- Distributed Control (1)
- Distributed Space Systems (1)
- Distributed System (1)
- Document Analysis (1)
- Domain Knowledge (1)
- Domänenspezifische Sprache (1)
- Dot-Depth-Hierarchie (1)
- Drahtloses Sensornetz (1)
- Drahtloses vermaschtes Netz (1)
- Dreidimensionale Bildverarbeitung (1)
- Dreidimensionale Rekonstruktion (1)
- Drohne <Flugkörper> (1)
- Dynamic Memory Management (1)
- Dynamische Speicherverwaltung (1)
- E-Learning (1)
- EHS classification (1)
- EPM (1)
- EUROASPIRE survey (1)
- Earth Observation (1)
- Echtzeit-Netzwerke (1)
- Echtzeit (1)
- Edge-MEC-Cloud (1)
- Edge-based Intelligence (1)
- Educational robotics (1)
- Educational robotics competitions (1)
- Eindringerkennung (1)
- Eingebettetes System (1)
- Elasticity (1)
- Elasticity tensor (1)
- Elastizitätstensor (1)
- Elektrizitätsverbrauch (1)
- Embedded Systems (1)
- End-to-End Automation (1)
- Ende-zu-Ende Automatisierung (1)
- Endpoint Mobility (1)
- Energy efficiency (1)
- Enterprise application (1)
- Enterprise-Resource-Planning (1)
- Enthaltenseinproblem (1)
- Environmental (1)
- Erderkundungssatellit (1)
- Erfüllbarkeitsproblem (1)
- Error-State Extended Kalman Filter (1)
- Erweiterte Realität (1)
- Erweiterte Realität <Informatik> (1)
- Euclidean plane (1)
- Euklidische Ebene (1)
- Euler equations (1)
- Euler-Lagrange-Gleichung (1)
- Evaluation (1)
- Evolution (1)
- Expected MOS (1)
- Expected QoE (1)
- Expert System (1)
- Expertensystem (1)
- Expresses genes (1)
- FIFO caching strategies (1)
- FPGA (1)
- FRAMEWORK <Programm> (1)
- Fachdidaktik (1)
- Failure Prediction (1)
- Fairness (1)
- Feature Based Registration (1)
- Feature Engineering & Extraction (1)
- Fehlertoleranz (1)
- Fehlervorhersage (1)
- Fernsteuerung (1)
- Field programmable gate array (1)
- Fitbit Sense (1)
- Flugkörper (1)
- Flugnavigation (1)
- Flugregelung (1)
- Forces (1)
- Formal analysis (1)
- Formal verification (1)
- Formale Sprache (1)
- Formation (1)
- Formation Flight (1)
- Formationsbewegung (1)
- Forschungssatellit (1)
- Fraud detection (1)
- Funkressourcenverwaltung (1)
- Funktechnik (1)
- GC-Content (1)
- GNSS/INS integrated navigation (1)
- GPS (1)
- GPS Receiver (1)
- Game mechanic (1)
- Gamification (1)
- Garmin Fenix 6 Pro (1)
- Gastroenterologische Endoskopie (1)
- Geleitzug (1)
- Generalisierung <Kartografie> (1)
- Generation Problem (1)
- Generierungsproblem (1)
- Genetic Optimization (1)
- Genetische Optimierung (1)
- Geo-spatial behavior (1)
- Geoinformationssystem (1)
- Georeferenzierung (1)
- Geospatial (1)
- Geschäftsanwendung (1)
- Gimbaled tracking (1)
- Global Navigation Satellite System (GNSS) (1)
- Global Positioning System (GPS) (1)
- Good-or-Better (GoB) (1)
- Graphen (1)
- Gravitationsmodellunsicherheit (1)
- Gravity model uncertainty (1)
- Ground Station Networks (1)
- H-infinity (1)
- H.264 SVC (1)
- H.264/SVC (1)
- HMD (Head-Mounted Display) (1)
- HSPA (1)
- HTTP adaptive video streaming (1)
- Halbordnungen (1)
- Herzkatheter (1)
- Herzkathetereingriff (1)
- Higher rates (1)
- Hintergrundwissen (1)
- Historical Maps (1)
- Historical Printings (1)
- Historische Karte (1)
- Historische Landkarten (1)
- Human behavior (1)
- Human genome (1)
- Human-Computer Interaction (1)
- Humangenetik (1)
- Hyperbolische Differentialgleichung (1)
- Hypothesis comparison (1)
- ICD-coding of CKD (1)
- IEEE 802.11e (1)
- IEEE 802.15.4 (1)
- IEEE Std 802.15.4 (1)
- INS/LIDAR integrated navigation (1)
- IP (1)
- ISS <Raumfahrt> (1)
- IT security (1)
- Ignorance (1)
- Ignoranz (1)
- Image Aesthetic Assessment (1)
- Image Processing (1)
- Implementierung <Informatik> (1)
- In-Orbit demonstration (1)
- Industrial internet (1)
- Informatik (1)
- Information Retrieval (1)
- Instrument Control Toolbox (1)
- Integer Expression (1)
- Integer circuit (1)
- Intelligent Real-time Interactive System (1)
- Intelligent Realtime Interactive System (1)
- Intelligent Transportation Systems (1)
- Intelligent Virtual Agents (1)
- Intelligent Virtual Environment (1)
- Intelligent mobile system (1)
- InteractionSuitcase (1)
- Interaktion (1)
- Interaktive Karten (1)
- Interkulturelles Lernen (1)
- International Comparative Research (1)
- Internet Protokoll (1)
- Internet der Dinge (1)
- Intra-Spacecraft Communication (1)
- Isomorphie (1)
- Itinerare (1)
- Itineraries (1)
- JCAS (1)
- Jakob <Mathematiker, 1655-1705> (1)
- Java <Programmiersprache> (1)
- Java Message Service (1)
- K band ranging (KBR) (1)
- Kademlia (1)
- Kalman-Filter (1)
- Kanalzugriff (1)
- Karte (1)
- Kathará (1)
- Kerneldensity estimation (1)
- Kinetische Gleichung (1)
- Klassendiagramm (1)
- Klima (1)
- Klinisches Experiment (1)
- Knowledge Discovery (1)
- Knowledge Representation Layer (1)
- Knowledge encoding (1)
- Knowledge engineering (1)
- Knowledge-based Systems Engineering (1)
- Kombinatorik (1)
- Kommunikation (1)
- Kommunikationsnetze (1)
- Komplexitätsklasse NP (1)
- Konjunktionsanalyse (1)
- Konvexe Zeichnungen (1)
- Konvoi (1)
- Kooperierende mobile Roboter (1)
- Kosten-Nutzen-Analyse (1)
- Kreuzungsminimierung (1)
- Kurve (1)
- LFU (1)
- LRU (1)
- LUMEN (1)
- Lageregelung (1)
- Landkartenbeschriftung (1)
- Landnutzungskartierung (1)
- Laser scanning (1)
- Latency Bound (1)
- Lava (1)
- Lehrerbildung (1)
- Leistungsbedarf (1)
- Lernen (1)
- Lidar (1)
- Lightning (1)
- Link rate adaptation (1)
- Linked Data (1)
- Linkratenanpassung (1)
- Linux (1)
- LoRa (1)
- LoRaWAN (1)
- Logging (1)
- Logic Programming (1)
- Logische Programmierung (1)
- Logistik (1)
- Loose Coupling (1)
- Low Earth Orbit (1)
- Lunar Caves (1)
- Lunar Exploration (1)
- MAC (1)
- MAC Protocol (1)
- MASim (1)
- MEMS IMU (1)
- MHD equations (1)
- MLC tracking (1)
- MSC: 49M37 (1)
- MSC: 65K05 (1)
- MSC: 90C30 (1)
- MSC: 90C40 (1)
- MTC (1)
- Magnetohydrodynamische Gleichung (1)
- Mammalian genomes (1)
- Mapping (1)
- Markov model (1)
- Markovian and Non-Markovian systems (1)
- Mars (1)
- Mathematische Modellierung (1)
- Matlab (1)
- Measurement-based Analysis (1)
- Media Access Control (1)
- Medical Image Analysis (1)
- Medienkompetenz (1)
- Medium <Physik> (1)
- Medizin (1)
- Mehragentensystem (1)
- Mehrfahrzeugsysteme (1)
- Mehrpfadübertragung (1)
- Mehrschichtnetze (1)
- Mehrschichtsystem (1)
- Mensch (1)
- Mesh Augmentation (1)
- Mesh Networks (1)
- Mesh Netze (1)
- Meta-modeling (1)
- Microservice (1)
- Middleware (1)
- Mikroservice (1)
- Mini Unmanned Aerial Vehicle (1)
- Miniaturisierung (1)
- Minimally invasive vascular intervention (1)
- Mitotizität (1)
- Mobile Sensor Network (1)
- Mobile Telekommunikation (1)
- Mobiles Internet (1)
- Mobilfunk (1)
- Mobility (1)
- Mobilität (1)
- Model based communication (1)
- Model based mission realization (1)
- Model comparison (1)
- Model extraction (1)
- Model transformation (1)
- Model-Agnostic (1)
- Model-based Performance Prediction (1)
- Modeling (1)
- Modell (1)
- Modellgetriebene Entwicklung (1)
- Modellierungstechniken (1)
- Modelling (1)
- Modul <Software> (1)
- Modularität (1)
- Moment <Stochastik> (1)
- Mond (1)
- Mondfahrzeug (1)
- Multi-Hop Topologie (1)
- Multi-Hop Topology (1)
- Multi-Layer (1)
- Multi-Network Service (1)
- Multi-Netzwerk Dienste (1)
- Multi-Paradigm Programming (1)
- Multi-Paradigm Programming Framework (1)
- Multi-Stakeholder (1)
- Multimodal Processing (1)
- Multimodal System (1)
- Multimodales System (1)
- Multipath Transmission (1)
- Mustererkennung (1)
- NP (1)
- NP-Vollständigkeit (1)
- NP-complete sets (1)
- NP-hard (1)
- NP-hartes Problem (1)
- NP-schweres Problem (1)
- NP-vollständiges Problem (1)
- Nano-Satellite (1)
- NanoFEEP (1)
- Nanosatellit (1)
- Navigation analysis (1)
- Network Emulator (1)
- Network Experiments (1)
- Network Function Virtualization (1)
- Network Functions Virtualisation (1)
- Network Management (1)
- Network Measurements (1)
- Network Virtualization (1)
- Network routing (1)
- Network-on-Chip (1)
- Netzplantechnik (1)
- Netzplanung (1)
- Netzvirtualisierung (1)
- Netzwerkanalyse <Soziologie> (1)
- Netzwerkplanung (1)
- Netzwerktopologie (1)
- Netzwerkverwaltung (1)
- Netzwerkvirtualisierung (1)
- Neume Notation (1)
- Neumennotation (1)
- Neumenschrift (1)
- Next Generation Networks (1)
- Nichtholonome Fahrzeuge (1)
- Nichtlineare Regelung (1)
- Nutzerstudie (1)
- Nutzerstudien (1)
- OMICS (1)
- Object Detection (1)
- Object-Oriented Programming (1)
- Objektorientierte Programmierung (1)
- Onboard (1)
- Onboard Software (1)
- Open Innovation (1)
- OpenFlow (1)
- Operator (1)
- Optical Flow (1)
- Optimal control (1)
- Optimale Kontrolle (1)
- Optimale Regelung (1)
- Optimalwertregelung (1)
- Optimierung (1)
- Optimierungsproblem (1)
- Optische Musikerkennung (OMR) (1)
- Optische Zeichenerkennung (1)
- Optische Zeichenerkennung (OCR) (1)
- Orakel <Informatik> (1)
- Orbit determination (1)
- Orbitbestimmung (1)
- Organ motion (1)
- Overlay (1)
- Overlay Netzwerke (1)
- Overlay networks (1)
- Overlays (1)
- P-optimal (1)
- P4-INT (1)
- PMD (1)
- Panorama Images (1)
- Partition <Mengenlehre> (1)
- Partitionen (1)
- Path Computation Element (1)
- Pattern Mining (1)
- Pattern Recognition (1)
- Peer-to-Peer (1)
- Performance (1)
- Performance Analysis (1)
- Performance Enhancing Proxies (1)
- Performance Management (1)
- Performance Modeling (1)
- Performance analysis (1)
- Pfadberechnungselement (1)
- Phasenmehrdeutigkeit (1)
- Picosatellite (1)
- Planare Graphen (1)
- Planung (1)
- Plasmaantrieb (1)
- Platooning (1)
- Platzierungsalgorithmen (1)
- Poisson surface reconstruction (1)
- Polyeder (1)
- Polygonzüge (1)
- Polypektomie (1)
- Positioning (1)
- Post's Classes (1)
- Postsche Klassen (1)
- Power Consumption (1)
- Prediction (1)
- Prediction Procedure (1)
- Problemlösefähigkeiten (1)
- Propositional proof system (1)
- Prospect Theory (1)
- Psychische Gesundheit (1)
- Publish-Subscribe-System (1)
- Punktbeschriftungen (1)
- Q-Learning (1)
- QUIC (1)
- QoE Monitoring (1)
- QoE estimation (1)
- QoE fundamentals (1)
- QoE-Abschätzung (1)
- QoS (1)
- QoS-QoE mapping functions (1)
- Qualitative representation and reasoning (1)
- Quality of Experience (QoE) (1)
- Quality of Experience QoE (1)
- Quality of Service (1)
- Quality of Service (QoS) (1)
- Quality-of-Experience (1)
- Quality-of-Service (1)
- Quality-of-Service (QoS) (1)
- Quantor (1)
- Queueing theory (1)
- Quotation Attribution (1)
- RAS Evaluation (1)
- RGB-D (1)
- RINEX Format (1)
- RLNC (1)
- RNA-SEQ (1)
- RRM (1)
- Randomness (1)
- Raumdaten (1)
- Raumfahrt (1)
- Raumfahrzeug (1)
- Raumverhalten (1)
- Real-Time Operating Systems (1)
- Real-Time-Networks (1)
- Real-time (1)
- Real-time Kinematics (RTK) (1)
- Rechenzentrum (1)
- Refactoring (1)
- Refaktorisierung (1)
- Regelbasiertes Modell (1)
- Regelung (1)
- Registration (1)
- Registrierung (1)
- Regression (1)
- Reguläre Sprache (1)
- Reinforcement Learning (1)
- Relation Detection (1)
- Rendezvous (1)
- Reproducibility (1)
- Research Station (1)
- Resource and Performance Management (1)
- Ressourcen Management (1)
- Ressourcenallokation (1)
- Rettungsroboter (1)
- Roboterwettbewerbe (1)
- Robotic tracking (1)
- Rodents (1)
- Rodos (1)
- Route Choice (1)
- Route Entscheidung (1)
- Räumliches Verhalten (1)
- SBA (1)
- SDN Controllers (1)
- SDN Switches (1)
- SDN/NVF (1)
- SLAM (1)
- ST-elevation myocardial infarction (1)
- SVC (1)
- SWOT (1)
- Satellite Ground Station (1)
- Satellite Network (1)
- Satellite formation (1)
- Satellitenfunk (1)
- Scheduling (1)
- Search-and-Rescue (1)
- Selbstkalibrierung (1)
- Selbstorganisation (1)
- Self-calibration (1)
- Semantic Entity Model (1)
- Semantic Search (1)
- Semantic Technologies (1)
- Semantic Web (1)
- Semantics (1)
- Semantik (1)
- Semantische Analyse (1)
- Sensing-aaS (1)
- Sensorfusion (1)
- Serious game (1)
- Server (1)
- Service Mobility (1)
- Service-level Quality Index (SQI) (1)
- Sichtbarkeit (1)
- Similarity Measure (1)
- Simulator (1)
- Situationsbewusstsein (1)
- Skalierbarkeit (1)
- Skype (1)
- Small Satellites (1)
- Smart User Interaction (1)
- Snow Line Elevation (1)
- Social Media (1)
- Social Web (1)
- Software Architecture (1)
- Software Performance Modeling (1)
- Software Quality (1)
- Software-based Networks (1)
- Software-defined Networking (1)
- Softwareentwicklung (1)
- Softwaremetrie (1)
- Softwaresystem (1)
- Softwaretest (1)
- Softwarewartung (1)
- Softwarewiederverwendung (1)
- Softwarisierte Netze (1)
- Source Code Generation (1)
- Source Code Visualization (1)
- Soziale Software (1)
- Soziales Netzwerk (1)
- Space Debris (1)
- SpaceWire (1)
- Spacecrafts (1)
- Spam Detection (1)
- Spatial behavior (1)
- Spherical Robot (1)
- Spielmechanik (1)
- Standardisierung (1)
- Standortproblem (1)
- Statische Analyse (1)
- Statistische Hypothese (1)
- Sternfreie Sprache (1)
- Steuerung (1)
- Stiffness (1)
- Stochastik (1)
- Strahlentherapie (1)
- Straubing-Thérien-Hierarchie (1)
- Strecken (1)
- Structure-from-Motion (1)
- Strukturelle Komplexität (1)
- Studie (1)
- Subgroup Discovery (1)
- Subgroup Mining (1)
- Subgruppenentdeckung (1)
- System-on-Chip (1)
- TETCs (1)
- TTL (1)
- TTL validation of data consistency (1)
- Tagging (1)
- Technical Documentation (1)
- Technische Unterlage (1)
- Telematik (1)
- Telemedizin (1)
- Telemetrie (1)
- Terramechanics (1)
- Testbed (1)
- Textanalyse (1)
- Theoretical computer science (1)
- Thermografie (1)
- Thermospheric density uncertainty (1)
- Thermosphärische Dichteunsicherheit (1)
- Thrust Vector Control (1)
- Time-Sensitive Networking (1)
- Time-Sensitive-Networking (1)
- Torque (1)
- Traffic (1)
- Traffic Management (1)
- Trainingssystem (1)
- Trajectory tracking (1)
- Transportsystem (1)
- Triangulation (1)
- Tumor motion (1)
- Tumorbewegung (1)
- U-Bahnlinienplan (1)
- UAP (1)
- UFO (1)
- UI and Interaction Design (1)
- UML Klassendiagramm (1)
- UML class diagram (1)
- UMTS (1)
- URL (1)
- URLLC (1)
- UWB (1)
- UWE-4 (1)
- Ultra-Wideband (UWB) radio ranging (1)
- Ultraweitband (1)
- Umfrage (1)
- Umwelt (1)
- Uncertainty (1)
- Uncertainty realism (1)
- Underwater Mapping (1)
- Underwater Scanning (1)
- Unified Monitoring (1)
- Unmanned Aerial Vehicle (1)
- Unsicherheit (1)
- Unsicherheitsrealismus (1)
- Unstetige Regelung (1)
- Usability (1)
- Use case (1)
- User Behavior (1)
- User Participation (1)
- User interfaces (1)
- User studies (1)
- VHDL (1)
- VNF (1)
- VPN (1)
- Validation (1)
- Vehicle Routing Problem (1)
- Veranstaltung (1)
- Verbotenes Muster (1)
- Verbände (1)
- Verifikation (1)
- Verkehrsleitsystem (1)
- Verkehrslenkung (1)
- Verkehrsmanagement (1)
- Verkehrsregelung (1)
- Verteiltes Datenbanksystem (1)
- Verteilung von Inhalten (1)
- Video Game QoS (1)
- Video Quality Monitoring (1)
- Virtuelles Netz (1)
- Virtuelles Netzwerk (1)
- Visibility (1)
- Vision Based (1)
- Visual Tracking (1)
- Visualization (1)
- Visualized Kathará (1)
- Voice-over-IP (VoIP) (1)
- Vorhersage (1)
- Vorhersagetheorie (1)
- Vorhersageverfahren (1)
- WLAN (1)
- Wahrscheinlichkeitsverteilung (1)
- Warteschlangentheorie (1)
- Wartung (1)
- Web navigation (1)
- Web2.0 (1)
- WhatsApp (1)
- Wheel (1)
- Winkel (1)
- Wire relaxation (1)
- Wireless LAN (1)
- Wireless Mesh Networks (1)
- Wireless Network (1)
- Wireless Sensor/Actuator Systems (1)
- Wissensakquisition (1)
- Wissensbasiertes System (1)
- Wissensencodierung (1)
- Wissensentdeckung (1)
- Wissensentwicklung (1)
- Wissensextraktion (1)
- Wissenstechnik (1)
- Withings ScanWatch (1)
- Worterweiterungen (1)
- XAI (1)
- XAI and explainable artificial intelligence (1)
- XR-artificial intelligence combination (1)
- XR-artificial intelligence continuum (1)
- YouTube (1)
- Zeichnen von Graphen (1)
- Zeitdiskretes System (1)
- Zeitreihe (1)
- Zeitreihenanalyse (1)
- Zeitreihenvorhersage (1)
- Zufall (1)
- Zugangskontrolle (1)
- Zugangsnetz (1)
- Zählprobleme (1)
- abdominal wall hernia (1)
- abdominal wall surgery (1)
- abgeschlossene Klassen (1)
- acrophobia (1)
- adaptation models (1)
- adaptive network coding (1)
- adaptive tutoring (1)
- administrative boundary (1)
- admission control (1)
- adult learning (1)
- aerodynamic drag reduction (1)
- aerodynamics (1)
- aerospace (1)
- aerospace engineering (1)
- affective appraisal (1)
- affective computing (1)
- agent-based models (1)
- agents (1)
- agile Prozesse (1)
- agile processes (1)
- ancillary services (1)
- angular schematization (1)
- annotation (1)
- anomaly detection (1)
- anomaly prediction (1)
- ant-colony optimization (1)
- antenna phase center calibration (1)
- anthropomorphism (1)
- antibiotic prophylaxis (1)
- anxiety (1)
- application design (1)
- approximation algorithms (1)
- arithmetic calculations (1)
- artificial intelligence (1)
- asymptotic preserving (1)
- attack-aware (1)
- attitude determination (1)
- auction based task assignment (1)
- authoring environment (1)
- authoring platform (1)
- automated map labeling (1)
- automatic Layout (1)
- automatische Beschriftungsplatzierung (1)
- automatisches Layout (1)
- autonomic orchestration (1)
- autonomous UAV (1)
- autorotation (1)
- availability (1)
- backpack mobile mapping (1)
- baseline detection (1)
- behaviometric (1)
- behavior perception (1)
- beyond planarity (1)
- binary tanglegram (1)
- bioelectronics (1)
- biomechanic (1)
- biomechanical engineering (1)
- biomimetics (1)
- biosignals (1)
- blood coagulation factor XIII (1)
- body awareness (1)
- body image distortion (1)
- body image disturbance (1)
- boundary labeling (1)
- building (1)
- calibration (1)
- camera orientation (1)
- car-like robots (1)
- carbon (1)
- cardiac magnetic resonance imaging (1)
- cardiac surgery (1)
- cardiac training group (1)
- cardiorespiratory fitness (1)
- cartographic requirements (1)
- certifying algorithm (1)
- chain cover (1)
- change detection (1)
- channel management (1)
- characterization (1)
- chronic kidney disease (1)
- circular layouts (1)
- circular-arc drawings (1)
- climate (1)
- clinical data warehouse (1)
- clinical measurement in health technology (1)
- clinical study (1)
- co-authorships (1)
- co-inventorships (1)
- coherence (1)
- collaboration (1)
- collaborative interaction (1)
- collision (1)
- collision avoidance (1)
- collision detection (1)
- communication models (1)
- communication network (1)
- competitive location (1)
- computational complexity (1)
- computer performance evaluation (1)
- computergestützte Softwaretechnik (1)
- concurrent design facility (1)
- congruence (1)
- consensus (1)
- constrained forest (1)
- contact representation (1)
- container virtualization (1)
- content-based image retrieval (1)
- continuous-time SLAM (1)
- controller failure recovery (1)
- convex bipartite graph (1)
- coronary heart disease (1)
- cost-sensitive learning (1)
- counting problems (1)
- crowdsensing (1)
- crowdsourced QoE measurements (1)
- crowdsourced measurements (1)
- crowdsourced network measurements (1)
- cultural and media studies (1)
- culturally aware (1)
- curves (1)
- cybersickness (1)
- cycling (1)
- d3web.Train (1)
- data fusion (1)
- data plane programming (1)
- data structure (1)
- dataplane programming (1)
- dataset (1)
- decision support (1)
- decision support system (1)
- decision-making (1)
- decoding error rate (1)
- deep metric learning (1)
- definite clause grammars (1)
- deformation-based method (1)
- delay QoS exponent (1)
- delay bound violation probability (1)
- delay constrained (1)
- denial of service (1)
- dependable software (1)
- descent (1)
- descriptors (1)
- design (1)
- design cycle (1)
- detection time simulation (1)
- dial a ride (1)
- digital twin (1)
- dimensions of proximity (1)
- discrete-time analysis (1)
- discrete-time models and analysis (1)
- disjoint multi-paths (1)
- distance compression (1)
- distraction (1)
- docker (1)
- document analysis (1)
- documents (1)
- drag area (1)
- dynamic adaptive streaming over http (1)
- dynamic flow migration (1)
- dynamic programming (1)
- dynamic protein-protein interactions (1)
- eHealth (1)
- eating and body weight disorders (1)
- edge labeled graphs (1)
- educational games (1)
- effective Bandwidth (1)
- efficient algorithm (1)
- electric propulsion (1)
- electric vehicles (1)
- electronic data capture (1)
- elevated plus-maze (1)
- elite sport (1)
- embedding techniques (1)
- emulation (1)
- encryption (1)
- energy efficiency (1)
- environmental modeling (1)
- epigastric hernia (1)
- ethics (1)
- evaluation (1)
- event detection (1)
- exercise intensity (1)
- exercise science (1)
- experience (1)
- experimental evaluation (1)
- expert systems (1)
- explainable AI (1)
- explanation complexity (1)
- extended Kalman filter (1)
- extended reality (XR) (1)
- failure prediction (1)
- fast reroute (1)
- fault detection (1)
- feature matching (1)
- federated learning (1)
- femoral hernia (1)
- few-shot learning (1)
- finite recurrent systems (1)
- fitness trackers (1)
- fixed-parameter tractability (1)
- food quality (1)
- force feedback (1)
- forecast (1)
- foreign language learning and teaching (1)
- formation driving (1)
- formation flight (1)
- fractionated spacecraft (1)
- fruit temperature (1)
- future Internet architecture (1)
- future energy grid exploration (1)
- gait disorder (1)
- gambling (1)
- game mechanics (1)
- gamification (1)
- genetic algorithm (1)
- glaucoma progression (1)
- global IPX network (1)
- granular (1)
- graph (1)
- graph algorithm (1)
- graph decomposition (1)
- group-based communication (1)
- hackathons (1)
- handwriting (1)
- haptic data (1)
- hardness (1)
- hardware-in-the-loop simulation (1)
- hardware-in-the-loop streaming system (1)
- harness free satellite (1)
- hazard avoidance (1)
- head-mounted display (1)
- healing and remodelling processes (1)
- health monitoring (1)
- health sciences (1)
- health tracker (1)
- healthcare (1)
- healthcare professionals (1)
- heart failure (1)
- heart failure training group (1)
- heat transfer (1)
- helicopters (1)
- hernia defect (1)
- hernia repair material (1)
- heterogeneous background (1)
- hierarchy (1)
- high-accuracy 3D measurements (1)
- higher education (1)
- historical images (1)
- historical printings (1)
- hit ratio analysis and simulation (1)
- hospital data (1)
- human behaviour (1)
- human body weight (1)
- human computer interaction (HCI) (1)
- human-artificial intelligence interaction (1)
- human-artificial intelligence interface (1)
- human-centered AI (1)
- human-centered design (1)
- human-centered, human-robot (1)
- human-robot interaction (1)
- human–computer interaction (1)
- hybrid access (1)
- hybrid avatar-agent systems (1)
- hyperbolic partial differential equations (1)
- illusion of self-motion (1)
- image classification (1)
- image processing (1)
- imbalanced regression (1)
- immersive classroom (1)
- immersive classroom management (1)
- immersive interfaces (1)
- immersive learning technologies (1)
- implicit association test (1)
- in-orbit experiments (1)
- incisional abdominal wall hernia (1)
- incisional hernia (1)
- independent crossing (1)
- individual differences (1)
- induced matching (1)
- informal education (1)
- information retrieval (1)
- information systems and information technology (1)
- infrared (1)
- infrared detectors (1)
- inguinal hernia (1)
- insect tracking (1)
- instrument (1)
- integer linear programming (1)
- intelligent transportation systems (1)
- intelligent vehicles (1)
- intelligent virtual agents (1)
- intelligent voice assistant (1)
- intelligente Applikationen (1)
- interactive authoring system (1)
- interactive maps (1)
- intercultural learning and teaching (1)
- interdisciplinary education (1)
- internet of things (1)
- internet protocol (1)
- internet traffic (1)
- intervention (1)
- invasive vascular interventions (1)
- iowa gambling task (1)
- isentropic Euler equations (1)
- k-d tree (1)
- key-insight extraction (1)
- kinect (1)
- kinetic equations (1)
- labeling (1)
- land-cover area (1)
- landing (1)
- language-image pre-training (1)
- laser ranging (1)
- laser scanner (1)
- laserscanner (1)
- latency cybersickness (1)
- laterality (1)
- lattices (1)
- layout recognition (1)
- learning environments (1)
- least cost (1)
- lidar (1)
- light-gated proteins (1)
- load balancing (1)
- local energy system (1)
- logic programming (1)
- logistics (1)
- long-term analysis (1)
- lunar rover (1)
- exercise training (1)
- magnetometer (1)
- maintenance (1)
- man-portable mapping (1)
- map labeling (1)
- map projections (1)
- marine navigation (1)
- mathematical model (1)
- mechanical engineering (1)
- mechanics (1)
- media analysis (1)
- medical records (1)
- medication extraction (1)
- meditation (1)
- membership problem (1)
- mesh augmentation (1)
- mesh repair (1)
- metro map (1)
- micrometre level microwave ranging (1)
- mindfulness (1)
- minimal triangulations (1)
- minimale Triangulationen (1)
- misconceptions (1)
- mixed reality (1)
- mixed-cultural (1)
- mixed-cultural settings (1)
- mobile instant messaging (1)
- mobile messaging application (1)
- mobile robots (1)
- mobile streaming (1)
- model following (1)
- model output statistics (1)
- model predictive control (1)
- model-based diagnosis (1)
- modeling techniques (1)
- monotone drawing (1)
- morphing (1)
- motion compensation (1)
- motivation (1)
- mountains (1)
- movement ecology (1)
- multi-source multi-sink problem (1)
- multi-vehicle formations (1)
- multi-vehicle rendezvous (1)
- multidisciplinary (1)
- multimodal fusion (1)
- multimodal interface (1)
- multimodal learning (1)
- multipath communication (1)
- multipath packet scheduling (1)
- multiple myeloma (1)
- multiple sclerosis (1)
- multirotors (1)
- multiscale encoder (1)
- nano-satellite (1)
- nanocellulose (1)
- natural environment (1)
- natural interfaces (1)
- natural language processing (1)
- natural user interfaces (1)
- negation detection (1)
- network (1)
- network design (1)
- network function virtualization (1)
- network planning (1)
- network simulation (1)
- network softwarization (1)
- network upgrade (1)
- network virtualization (1)
- networked predictive control (1)
- networked robotics (1)
- networking (1)
- networks (1)
- neural architecture (1)
- neural network (1)
- non-native accent (1)
- non-rigid registration (1)
- non-terrestrial networks (1)
- nonholonomic vehicles (1)
- normal distribution transform (1)
- nosocomial infection (1)
- nycthemeral intraocular pressure (1)
- object reconstruction (1)
- obstacle detection (1)
- octree (1)
- oncolytic virus (1)
- online survey (1)
- ontologies (1)
- optical character recognition (1)
- optical music recognition (1)
- optical underwater 3D sensor (1)
- optogenetics (1)
- orchestration (1)
- overprovisioning (1)
- packet reception method (1)
- parastomal hernia (1)
- partitions (1)
- passage of time (1)
- passive haptic feedback (1)
- path computation (1)
- patients’ awareness (1)
- perception (1)
- performance analysis (1)
- performance parameters (1)
- performance prediction (1)
- personal laser scanning (1)
- personalized medicine (1)
- personalized training (1)
- phase unwrapping (1)
- photoplethysmography (1)
- physicians’ awareness (1)
- physiological dataset (1)
- physiology (1)
- place-illusion (1)
- plain orchestrating service (1)
- plausibility (1)
- plausibility-illusion (1)
- point cloud (1)
- point cloud compression (1)
- point cloud registration (1)
- point labeling (1)
- point-feature label placement (1)
- point-to-plane measure (1)
- point-to-point measure (1)
- pollution (1)
- polylines (1)
- polyp (1)
- pos (1)
- pose tracking (1)
- posets (1)
- positioning (1)
- power consumption (1)
- precision horticulture (1)
- precision training (1)
- presence (1)
- primary ventral hernia (1)
- private chat groups (1)
- problem solving skills (1)
- procedural content generation (1)
- procedural fusion methods (1)
- progressive download (1)
- prompt engineering (1)
- protein analysis (1)
- protein chip (1)
- psychomotor training (1)
- psychophysiology (1)
- public speaking (1)
- qoe (1)
- quadcopter (1)
- quadcopters (1)
- quadrocopter (1)
- quadrotor (1)
- quality assurance (1)
- quality evaluation (1)
- quality of experience prediction (1)
- quality of life (1)
- queueing theory (1)
- radio resource management (1)
- radiology (1)
- ransomware (1)
- real world evidence (1)
- real-world application (1)
- realism (1)
- receding horizon control (1)
- recommender agent (1)
- recommender system (1)
- reconfiguration (1)
- recurrent abdominal wall hernia (1)
- refactoring (1)
- regenerative cooling (1)
- registries (1)
- reinforcement learning (1)
- rekurrente Systeme (1)
- reload cost (1)
- remote control (1)
- rendezvous and docking (1)
- requirements management (1)
- research methods (1)
- resilience (1)
- rich vehicle routing problem (1)
- right angle crossing (1)
- right-left comparison (1)
- risks (1)
- robot-supported training (1)
- robotic (1)
- robotic tutor (1)
- robotics (1)
- robust control (1)
- robustness (1)
- rocket engine (1)
- rotorcraft (1)
- rotors (1)
- routing (1)
- rule-based analysis (1)
- sample weighting (1)
- sandfish (1)
- satellite formation flying (1)
- satellite technology (1)
- satisfiability problems (1)
- scalability (1)
- scalability evaluation (1)
- scalable quadcopter (1)
- scheduling (1)
- science, technology and society (1)
- secondary data usage (1)
- secure group communication (1)
- segmentation (1)
- self-adaptive (1)
- self-assembly (1)
- self-aware (1)
- self-aware computing systems (1)
- self-managing systems (1)
- self-organization (1)
- self-supervised learning (1)
- semantic fusion (1)
- semantic technologies (1)
- semantic understanding (1)
- semantic web (1)
- semantical aesthetic (1)
- semantische Ästhetik (1)
- sensor (1)
- sensor devices (1)
- sentinel (1)
- serious games (1)
- service-curve estimation (1)
- sensors (1)
- short block-length (1)
- shortest path routing (1)
- signaling traffic (1)
- signalling pathways (1)
- simulation system (1)
- simulator sickness (1)
- simultaneous embedding (1)
- single-electron transistors (1)
- site mapping (1)
- sketching (1)
- slip (1)
- smart charging (1)
- smart grid (1)
- smart meter data utilization (1)
- smart speaker (1)
- smartwatch (1)
- smooth orthogonal drawing (1)
- snow shoveling (1)
- social VR (1)
- social artificial intelligence (1)
- social interaction (1)
- social relationship (1)
- social robot (1)
- social robotics (1)
- social role (1)
- socially interactive agents (1)
- software defined network (1)
- software engineering (1)
- software performance (1)
- software-defined networking (1)
- space missions phases (1)
- spacecraft control (1)
- space–terrestrial networks (1)
- spanning tree (1)
- spatial presence (1)
- specular reflective (1)
- sports technology (1)
- standardization (1)
- state management (1)
- stationary preserving (1)
- statistical methods (1)
- statistical validity (1)
- statistics and numerical data (1)
- stereotypes (1)
- stochastic processes (1)
- straight-line segments (1)
- street labeling (1)
- structural battery (1)
- structural complexity (1)
- structured illumination (1)
- structured light illumination (1)
- student simulation (1)
- study design (1)
- stylus (1)
- sunburn (1)
- supervised learning (1)
- surface model (1)
- surrogate model (1)
- survey (1)
- sustainability (1)
- switching navigation (1)
- system simulation (1)
- systematic literature review (1)
- systematic review (1)
- table extraction (1)
- table understanding (1)
- taxonomy (1)
- teacher education (1)
- technology acceptance (1)
- technology-supported education (1)
- technology-supported learning (1)
- telematics (1)
- telemedicine (1)
- text line detection (1)
- text supervision (1)
- theory (1)
- therapeutic application (1)
- therapy (1)
- thermal camera (1)
- thermal point cloud (1)
- thrust direction (1)
- thrust vector control (1)
- time calibration (1)
- time perception (1)
- time series (1)
- timestamping method (1)
- tools (1)
- topology (1)
- traffic damping (1)
- training systems (1)
- trait anxiety (1)
- trajectory planning (1)
- transformer (1)
- translational neuroscience (1)
- transparent (1)
- transport microenvironments (1)
- transport protocols (1)
- transportation (1)
- tree (1)
- ultrasonic autonomous aerial vehicles (1)
- umbilical hernia (1)
- uncooperative space rendezvous (1)
- underwater 3D scanning (1)
- unmanned aerial vehicle (1)
- usability evaluation (1)
- use cases (1)
- user identification (1)
- user interaction (1)
- user-generated content (1)
- v (1)
- validation (1)
- vection (1)
- vehicle dynamics (1)
- vehicular navigation (1)
- ventral hernia (1)
- ventral hernia model (1)
- verbal behaviour (1)
- vernetzte Roboter (1)
- video QoE (1)
- video game QoE (1)
- video game context factors (1)
- video object detection (1)
- video streaming (1)
- virtual agent (1)
- virtual agent interaction (1)
- virtual embodiment (1)
- virtual human (1)
- virtual humans (1)
- virtual queue (1)
- virtual reality training (1)
- virtual social interaction (1)
- virtual stimuli (1)
- virtual tunnel (1)
- virtual-reality-continuum (1)
- virtualized environments (1)
- virtual reality (1)
- visualization (1)
- vom Nutzer erfahrene Dienstgüte QoE (1)
- voting location (1)
- waypoint parameter (1)
- wearable technologies (1)
- well-balanced scheme (1)
- wheel (1)
- wireless communication (1)
- wireless sensor network (1)
- wireless-bus (1)
- word clouds (1)
- word extensions (1)
- zooming (1)
- zukünftige Kommunikationsnetze (1)
- zukünftiges Internet (1)
- Ähnlichkeitsmaß (1)
- Änderungserkennung (1)
- Überwachungstechnik (1)
Institute
- Institut für Informatik (341) (remove)
Series
Other participating institutions
- Cologne Game Lab (3)
- Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Raumfahrtsysteme (2)
- Open University of the Netherlands (2)
- Siemens AG (2)
- Zentrum für Telematik e.V. (2)
- Airbus Defence and Space GmbH (1)
- Birmingham City University (1)
- California Institute of Technology (1)
- DLR (1)
- Deutsches Zentrum für Luft- und Raumfahrt e.V. (1)
Latency is an inherent problem of computing systems: each computation takes time until its result is available. Virtual reality systems use elaborate computing resources to create virtual experiences, yet the latency of those systems is often ignored or assumed to be small enough to provide a good experience.
This cumulative thesis comprises published, peer-reviewed research papers exploring the behaviour and effects of latency. Contrary to the common description of latency as time-invariant, latency is shown to fluctuate. Few other researchers have looked into this time-variant behaviour. This thesis explores time-variant latency with a focus on randomly occurring latency spikes. Latency spikes are observed both for small algorithms and as end-to-end latency in complete virtual reality systems. Most latency measurements cluster close to the mean latency, with potentially multiple smaller clusters of larger latency values and rare extreme outliers. The latency behaviour differs between implementations of the same algorithm: operating system schedulers and programming language runtimes, such as garbage collectors, contribute to the overall latency behaviour. The thesis demonstrates these influences using different implementations of message passing as an example.
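The influence of the runtime environment on message-passing latency can be observed even with a minimal benchmark. The following sketch (a hypothetical setup, not the thesis' measurement apparatus) times individual message passes between two threads; rare maxima far above the mean are the latency spikes described above:

```python
import time
import queue
import threading
import statistics

def measure_queue_latency(n_messages=10000):
    """Time individual message passes through a thread-safe queue."""
    q = queue.Queue()
    latencies = []

    def consumer():
        for _ in range(n_messages):
            sent_at = q.get()  # payload is the send timestamp
            latencies.append(time.perf_counter() - sent_at)

    t = threading.Thread(target=consumer)
    t.start()
    for _ in range(n_messages):
        q.put(time.perf_counter())
    t.join()
    return latencies

lat = measure_queue_latency()
print(f"mean: {statistics.mean(lat) * 1e6:.1f} us, "
      f"max: {max(lat) * 1e6:.1f} us")
```

On most systems the maximum is orders of magnitude above the mean, illustrating that a single mean value hides the spike behaviour.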
The plethora of latency sources results in unpredictable latency behaviour, so measuring and reporting it in scientific experiments is important. This thesis describes established approaches to measuring latency and proposes an enhanced setup to gather detailed information. It further proposes to dissect the measured data with a stacked z-outlier test to separate the clusters of latency measurements for better reporting.
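One plausible reading of such a stacked outlier test is to apply a z-score criterion repeatedly, re-testing the separated outliers as their own population. This is an illustrative interpretation, not the exact procedure from the thesis:

```python
import statistics

def stacked_z_outlier_split(samples, z=3.0):
    """Repeatedly split off z-score outliers and re-test them as their
    own population, yielding a stack of clusters: the main body first,
    then ever rarer groups of larger latency values."""
    clusters = []
    remaining = list(samples)
    while True:
        if len(remaining) < 3 or statistics.stdev(remaining) == 0:
            clusters.append(remaining)
            break
        mean = statistics.mean(remaining)
        sd = statistics.stdev(remaining)
        inliers = [x for x in remaining if abs(x - mean) <= z * sd]
        outliers = [x for x in remaining if abs(x - mean) > z * sd]
        clusters.append(inliers)
        if not outliers:
            break
        remaining = outliers  # re-test the spikes as their own cluster
    return clusters
```

Applied to a latency trace, the first cluster holds the bulk of the measurements near the mean, and subsequent clusters hold the increasingly extreme spikes, which can then be reported separately.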
Latency in virtual reality applications can degrade the experience in multiple ways; the thesis focuses on cybersickness as a major detrimental effect. An approach to simulating time-variant latency is proposed, making latency available as an independent variable in experiments so that its effects can be studied. An experiment with modified latency shows that latency spikes can contribute to cybersickness, and a review of related research shows that time-invariant latency behaviour also contributes to cybersickness.
In the future Internet, the people-centric communication paradigm will be complemented by ubiquitous communication among people and devices, or even communication between devices. This comes along with the need for more flexible, cheap, and widely available Internet access. Two types of wireless networks are considered most appropriate for attaining those goals: while wireless sensor networks (WSNs) enhance the Internet's reach by providing data about the properties of the environment, wireless mesh networks (WMNs) extend the Internet access possibilities beyond the wired backbone. This monograph contains four chapters that present modeling and optimization methods for WSNs and WMNs. Minimizing energy consumption is the most important goal of WSN optimization, and the literature consequently provides countless energy consumption models. The first part of the monograph studies to what extent the chosen energy consumption model influences the outcome of analytical WSN optimizations. These considerations enable the second contribution, namely overcoming the obstacles on the way to a standardized energy-efficient WSN communication stack based on IEEE 802.15.4 and ZigBee. For WMNs, both problems are of minor interest, whereas network performance carries more weight. The third part of the work therefore presents algorithms for calculating the max-min fair network throughput in WMNs with multiple link rates and Internet gateways. The last contribution of the monograph investigates the impact of the LRA concept, which proposes to systematically assign more robust link rates than actually necessary, thereby exploiting the trade-off between spatial reuse and per-link throughput. A systematic study shows that a network-wide LRA slightly more conservative than necessary increases the throughput of a WMN in which max-min fairness is guaranteed.
It moreover turns out that LRA is suitable for increasing the performance of a contention-based WMN and is a valuable optimization tool.
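The monograph's max-min fair throughput calculation targets WMNs with multiple link rates; the underlying fairness notion can be sketched for the special case of a single shared bottleneck link with the classic progressive-filling algorithm (an illustrative sketch, not the algorithms from the monograph):

```python
def max_min_fair_allocation(capacity, demands):
    """Progressive filling for max-min fairness on one shared link:
    repeatedly grant every unsatisfied flow an equal share; flows whose
    demand fits are fixed at their demand, and the freed capacity is
    redistributed among the remaining flows."""
    alloc = {}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining:
        fair_share = cap / len(remaining)
        satisfied = {f: d for f, d in remaining.items() if d <= fair_share}
        if not satisfied:
            # every remaining flow wants more than the fair share
            for f in remaining:
                alloc[f] = fair_share
            break
        for f, d in satisfied.items():
            alloc[f] = d
            cap -= d
            del remaining[f]
    return alloc
```

With a capacity of 10 and demands {a: 2, b: 4, c: 10}, flow a is satisfied first, b next, and c receives the remaining capacity of 4; no flow can gain without reducing a flow with an equal or smaller allocation.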
We consider competitive location problems in which two competing providers place their facilities sequentially and users decide between the competitors. We assume that both competitors act non-cooperatively and aim at maximizing their own benefit. We investigate the complexity and approximability of such problems on graphs, in particular on simple graph classes such as trees and paths. We also develop fast algorithms for single competitive location problems in which each provider places a single facility. Voting location, in contrast, aims at identifying locations that meet social criteria: the provider wants to satisfy the users (customers) of the facility to be opened. In general, there is no location favored by all users, so a satisfactory compromise has to be found. To this end, criteria arising from voting theory are considered, and the solution of the location problem is understood as the winner of a virtual election among the users of the facilities, in which the potential locations play the role of the candidates and the users represent the voters. Competitive and voting location problems turn out to be closely related.
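The virtual election described above can be sketched for the simplest setting: users and candidate locations on a line (e.g., a path graph) with plurality voting. Function names and tie-breaking rules are illustrative choices, not the thesis' algorithms:

```python
def plurality_location_winner(user_positions, candidate_positions):
    """Virtual election for voting location on a line: each user votes
    for the nearest candidate location, and the plurality winner is
    opened. Ties are broken toward the smaller position."""
    votes = {c: 0 for c in candidate_positions}
    for u in user_positions:
        nearest = min(candidate_positions, key=lambda c: (abs(c - u), c))
        votes[nearest] += 1
    # candidate with the most votes, smallest position on a tie
    return min(votes, key=lambda c: (-votes[c], c))
```

For users at 0, 1, 2, and 10 and candidates at 1 and 9, three users are closer to position 1, so it wins the election even though the user at 10 strongly prefers 9.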
Enterprise applications in virtualized data centers are often subject to time-varying workloads, i.e., the load intensity and request mix change over time due to seasonal patterns and trends, or unpredictable bursts in user requests. Varying workloads result in frequently changing resource demands on the underlying hardware infrastructure. Virtualization technologies enable sharing and on-demand allocation of hardware resources among multiple applications. In this context, the resource allocations of virtualized applications should be continuously adapted in an elastic fashion, so that "at each point in time the available resources match the current demand as closely as possible" (Herbst et al., 2013). Autonomic approaches to resource management promise significant increases in resource efficiency while avoiding violations of performance and availability requirements during peak workloads.
Traditional approaches to autonomic resource management use threshold-based rules (e.g., Amazon EC2) that execute pre-defined reconfiguration actions when a metric reaches a certain threshold (e.g., high resource utilization or load imbalance). However, many business-critical applications are subject to Service Level Objectives (SLOs) defined on an application performance metric (e.g., response time or throughput). Determining thresholds such that the end-to-end application SLO is fulfilled poses a major challenge due to the complex relationship between the resource allocation of an application and the application performance. Furthermore, threshold-based approaches are inherently prone to oscillating behavior, resulting in unnecessary reconfigurations.
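A minimal sketch of such a threshold-based rule illustrates the issue; the thresholds, the utilization metric, and the scaling unit (vCPUs) are assumptions for illustration, not a real cloud provider's policy:

```python
def threshold_scaler(utilization_trace, up=0.8, down=0.3, start_vcpus=2):
    """Threshold-based rule of the kind criticised above: add a vCPU
    when utilization exceeds `up`, remove one when it drops below
    `down`. The gap between the two thresholds (hysteresis) is the only
    guard against oscillation; with up == down the rule would
    reconfigure on almost every sample."""
    vcpus, actions = start_vcpus, []
    for u in utilization_trace:
        if u > up:
            vcpus += 1
            actions.append(("scale_up", vcpus))
        elif u < down and vcpus > 1:
            vcpus -= 1
            actions.append(("scale_down", vcpus))
    return vcpus, actions
```

Note that the rule reacts only to the resource metric: nothing connects the chosen thresholds to an end-to-end SLO such as a response time target, which is exactly the gap the model-based approaches below address.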
In order to overcome the deficiencies of threshold-based approaches and enable a fully automated approach to dynamically controlling the resource allocations of virtualized applications, model-based approaches are required that can predict the impact of a reconfiguration on application performance in advance. However, existing model-based approaches are severely limited in their learning capabilities: they either require complete performance models of the application as input, or use a pre-identified model structure and only learn certain model parameters from empirical data at run-time. The former requires high manual effort and deep system knowledge to create the performance models; the latter does not provide the flexibility to capture the specifics of complex and heterogeneous system architectures.
This thesis presents a self-aware approach to the resource management in virtualized data centers. In this context, self-aware means that it automatically learns performance models of the application and the virtualized infrastructure and reasons based on these models to autonomically adapt the resource allocations in accordance with given application SLOs. Learning a performance model requires the extraction of the model structure representing the system architecture as well as the estimation of model parameters, such as resource demands. The estimation of resource demands is a key challenge as they cannot be observed directly in most systems.
The major scientific contributions of this thesis are:
- A reference architecture for online model learning in virtualized systems. Our reference architecture is based on a set of model extraction agents. Each agent focuses on specific tasks to automatically create and update model skeletons capturing its local knowledge of the system and collaborates with other agents to extract the structural parts of a global performance model of the system. We define different agent roles in the reference architecture and propose a model-based collaboration mechanism for the agents. The agents may be bundled within virtual appliances and may be tailored to include knowledge about the software stack deployed in a specific virtual appliance.
- An online method for the statistical estimation of resource demands. For a given request processed by an application, the resource time consumed for a specified resource within the system (e.g., CPU or I/O device), referred to as resource demand, is the total average time the resource is busy processing the request. A request could be any unit of work (e.g., web page request, database transaction, batch job) processed by the system. We provide a systematization of existing statistical approaches to resource demand estimation and conduct an extensive experimental comparison to evaluate the accuracy of these approaches. We propose a novel method to automatically select estimation approaches and demonstrate that it increases the robustness and accuracy of the estimated resource demands significantly.
- Model-based controllers for autonomic vertical scaling of virtualized applications. We design two controllers based on online model-based reasoning techniques in order to vertically scale applications at run-time in accordance with application SLOs. The controllers exploit the knowledge from the automatically extracted performance models when determining necessary reconfigurations. The first controller adds and removes virtual CPUs to an application depending on the current demand. It uses a layered performance model to also consider the physical resource contention when determining the required resources. The second controller adapts the resource allocations proactively to ensure the availability of the application during workload peaks and avoid reconfiguration during phases of high workload.
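As an illustration of the second contribution, resource demands can be estimated from indirectly observable metrics. The following sketch uses the well-known Utilization Law with a single request class; it is a simplified special case of the regression-based approaches compared in the thesis, not the proposed selection method:

```python
def estimate_resource_demand(utilizations, throughputs):
    """Least-squares fit of the Utilization Law U = X * D through the
    origin: D = sum(U_i * X_i) / sum(X_i^2), where U_i is the measured
    resource utilization (0..1), X_i the request throughput (req/s),
    and D the resource demand in seconds per request."""
    num = sum(u * x for u, x in zip(utilizations, throughputs))
    den = sum(x * x for x in throughputs)
    return num / den
```

Given utilization samples of 20%, 40%, and 60% at throughputs of 10, 20, and 30 requests per second, the estimated demand is 20 ms per request, even though the demand itself was never measured directly.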
We demonstrate the applicability of our approach in current virtualized environments and show its effectiveness, leading to significant increases in resource efficiency and improvements of application performance and availability under time-varying workloads. The evaluation of our approach is based on two case studies representative of widely used enterprise applications in virtualized data centers. In our case studies, we were able to reduce the amount of required CPU resources by up to 23% and the number of reconfigurations by up to 95% compared to a rule-based approach, while ensuring full compliance with the application SLOs. Furthermore, using workload forecasting techniques, we were able to schedule expensive reconfigurations (e.g., changes to the memory size) during phases of low load and thus reduce their impact on application availability by over 80%, while significantly improving application performance compared to a reactive controller. The methods and techniques for resource demand estimation and vertical application scaling were developed and evaluated in close collaboration with VMware and Google.
Here, we performed a non-systematic analysis of the strengths, weaknesses, opportunities, and threats (SWOT) associated with the application of artificial intelligence (AI) to sports research, coaching, and the optimization of athletic performance. The strengths of AI with regard to applied sports research, coaching, and athletic performance include the automation of time-consuming tasks, the processing and analysis of large amounts of data, and the recognition of complex patterns and relationships. However, it is also essential to be aware of the weaknesses associated with the integration of AI into this field. For instance, it is imperative that the data employed to train the AI system be diverse, complete, and as unbiased as possible with respect to factors such as the gender, level of performance, and experience of an athlete. Other challenges include limited adaptability to novel situations and the cost and other resources required. Opportunities include the possibility to monitor athletes both long-term and in real time, the potential discovery of novel indicators of performance, and the prediction of risk for future injury. Leveraging these opportunities can transform athletic development and the practice of sports science in general. Threats include over-dependence on technology, reduced involvement of human expertise, risks with respect to data privacy, breaches of data integrity and manipulation of data, and resistance to adopting such new technology. Understanding and addressing these SWOT factors is essential for maximizing the benefits of AI while mitigating its risks, thereby paving the way for its successful integration into sports science research, coaching, and the optimization of athletic performance.
State management at line rate is crucial for critical applications in next-generation networks. P4 is a language used in software-defined networking to program the data plane. The data plane profits in many circumstances when it is allowed to manage its state without any detour over a controller. This work builds on a previous study by investigating the potential and performance of add-on-miss insertions of state by the data plane. The state-keeping capabilities of P4 are limited with regard to the amount of data and the update frequency. We follow the tentative specification of the upcoming Portable NIC Architecture and implement these changes in the software P4 target T4P4S. We show that insertions are possible with only a slight overhead compared to lookups and evaluate the influence of the insertion rate on insertion latency.
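The add-on-miss behaviour under study can be illustrated with a small Python stand-in; the actual implementation is in P4 on the T4P4S target, and the names here are hypothetical:

```python
class AddOnMissTable:
    """Python stand-in (not P4) for add-on-miss semantics: a lookup
    that misses installs an initial entry directly on the data-plane
    fast path, without a detour over the controller."""

    def __init__(self, initial_state=0):
        self.entries = {}
        self.initial_state = initial_state

    def lookup(self, flow_key):
        if flow_key in self.entries:          # hit: use existing state
            return self.entries[flow_key], True
        # miss: insert initial state immediately (add on miss)
        self.entries[flow_key] = self.initial_state
        return self.initial_state, False
```

The first packet of a flow misses and creates the per-flow state; every subsequent packet hits, which is why the insertion-vs-lookup overhead measured in the paper matters mostly for flow arrival rates.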
Streaming of videos has become the major traffic generator in today's Internet, and the video traffic share is still increasing. According to Cisco's annual Visual Networking Index report, in 2012, 60% of the global Internet IP traffic was generated by video streaming services, and the study predicts a further increase to 73% by 2017. At the same time, advances in the fields of mobile communications and embedded devices have led to a widespread adoption of Internet-video-enabled mobile and wireless devices (e.g., smartphones). The report predicts that by 2017, the traffic originating from mobile and wireless devices will exceed the traffic from wired devices, and states that mobile video traffic was the source of roughly half of the mobile IP traffic at the end of 2012.
With the increasing importance of Internet video streaming in today's world, video content providers find themselves in a highly competitive market where user expectations are high and customer loyalty depends strongly on the user's satisfaction with the provided service. In particular, paying customers expect their viewing experience to be the same across all their viewing devices and independently of their currently utilized Internet access technology. However, providing video streaming services is costly in terms of storage space, required bandwidth, and generated traffic. Therefore, content providers face a trade-off between the user-perceived Quality of Experience (QoE) and the costs of providing the service.
Today, a variety of transport and application protocols exist for providing video streaming services, and which one is utilized depends on the scenario in mind. Video streaming services can be divided into three categories: video conferencing, IPTV, and Video-on-Demand services. IPTV and video conferencing have severe real-time constraints and thus mostly utilize datagram-based protocols like RTP/UDP for the video transmission. Video-on-Demand services, in contrast, can profit from pre-encoded content and buffers at the end user's device, and mostly utilize TCP-based protocols in combination with progressive streaming for the media delivery.
In recent years, the HTTP protocol on top of the TCP protocol gained widespread popularity as a cost-efficient way to distribute pre-encoded video content to customers via progressive streaming. This is due to the fact that HTTP-based video streaming profits from a well-established infrastructure which was originally implemented to efficiently satisfy the increasing demand for web browsing and file downloads. Large Content Delivery Networks (CDN) are the key components of that distribution infrastructure. CDNs prevent expensive long-haul data traffic and delays by distributing HTTP content to world-wide locations close to the customers. As of 2012, already 53% of the global video traffic in the Internet originates from Content Delivery Networks and that percentage is expected to increase to 65% by the year 2017. Furthermore, HTTP media streaming profits from existing HTTP caching infrastructure, ease of NAT and proxy traversal and firewall friendliness.
Video delivery through heterogeneous wired and wireless communications networks is prone to distortions due to insufficient network resources. This is especially true in wireless scenarios, where user mobility and insufficient signal strength can result in a very poor transport service performance (e.g. high packet loss, delays and low and varying bandwidth). A poor performance of the transport in turn may degrade the Quality of Experience as perceived by the user, either due to buffer underruns (i.e. playback interruptions) for TCP-based delivery or image distortions for datagram-based real-time video delivery.
In order to overcome QoE degradations due to insufficient network resources, content providers have to consider adaptive video streaming. One possibility to implement this for HTTP/TCP streaming is to partition the content into small segments, encode the segments into different quality levels, and provide access to the segments and the quality level details (e.g., resolution, average bitrate). During the streaming session, a client-centric adaptation algorithm can use the supplied details to adapt the playback to the current environment. However, the lack of a common HTTP adaptive streaming standard led to multiple proprietary solutions, developed by major Internet companies like Microsoft (Smooth Streaming), Apple (HTTP Live Streaming) and Adobe (HTTP Dynamic Streaming), loosely based on the aforementioned principle. In 2012, the ISO/IEC published the Dynamic Adaptive Streaming over HTTP (MPEG-DASH) standard. As of today, DASH is becoming widely accepted, with major companies announcing their support or having already implemented the standard in their products. MPEG-DASH is typically used with single-layer codecs like H.264/AVC, but recent publications show that scalable video coding can use the existing HTTP infrastructure more efficiently. Furthermore, the layered approach of scalable video coding extends the adaptation options for the client, since already downloaded segments can be enhanced at a later time.
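The client-centric adaptation principle can be sketched with a deliberately simple throughput-based rule; this is an assumption for illustration, not one of the proprietary or standardized algorithms named above:

```python
def select_representation(bitrates_kbps, throughput_kbps, safety=0.8):
    """Pick the highest representation whose bitrate fits within a
    safety margin of the measured throughput; fall back to the lowest
    representation otherwise. Called before each segment download."""
    budget = throughput_kbps * safety
    feasible = [b for b in bitrates_kbps if b <= budget]
    return max(feasible) if feasible else min(bitrates_kbps)
```

With representations at 500, 1000, 2000, and 4000 kbps and a measured throughput of 2600 kbps, the rule selects the 2000 kbps level, keeping a 20% margin against throughput fluctuations that would otherwise deplete the playback buffer.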
The influence of distortions on the perceived QoE for non-adaptive video streaming is well reviewed and published. For HTTP streaming, the QoE of the user is influenced by the initial delay (i.e., the time the client pre-buffers video data) and by the length and frequency of playback interruptions due to a depleted video playback buffer. Studies highlight that even low stalling times and frequencies have a negative impact on the QoE of the user and should therefore be avoided. The first contribution of this thesis is the identification of QoE influence factors of adaptive video streaming by means of crowdsourcing and a laboratory study.
MPEG-DASH does not specify how to adapt the playback to the available bandwidth, so the design of a download/adaptation algorithm is left to the developer of the client logic. The second contribution of this thesis is the design of a novel user-centric adaptation logic for DASH with SVC. Other download algorithms for segmented HTTP streaming with single-layer and scalable video coding have been published lately; however, there is little information about the behavior of these algorithms regarding the identified QoE influence factors. The third contribution is a user-centric performance evaluation of three existing adaptation algorithms and a comparison to the proposed algorithm. In the performance evaluation, we also evaluate the fairness of the algorithms. In one fairness scenario, two clients deploy the same adaptation algorithm and share one Internet connection; for a fair adaptation algorithm, we expect the behavior of the two clients to be identical. In a second fairness scenario, one client shares the Internet connection with a large HTTP file download, and we expect an even bandwidth distribution between the video stream and the file download. The fourth contribution of this thesis is an evaluation of the behavior of the algorithms in a two-client and HTTP cross-traffic scenario.
The remainder of this thesis is structured as follows. Chapter II gives a brief introduction to video coding with H.264, the HTTP adaptive streaming standard MPEG-DASH, the investigated adaptation algorithms and metrics of Quality of Experience (QoE) for video streaming. Chapter III presents the methodology and results of the subjective studies conducted in the course of this thesis to identify the QoE influence factors of adaptive video streaming. In Chapter IV, we introduce the proposed adaptation algorithm and the methodology of the performance evaluation. Chapter V highlights the results of the performance evaluation and compares the investigated adaptation algorithms. Chapter VI summarizes the main findings and gives an outlook towards QoE-centric management of DASH with SVC.
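The client-centric adaptation principle described in this abstract can be illustrated with a minimal, purely throughput-based quality selection. This is a hedged sketch with assumed bitrate levels and an assumed safety margin, not the user-centric algorithm proposed in the thesis:

```python
def pick_quality(bitrates_bps, measured_throughput_bps, margin=0.8):
    """Return the index of the highest quality level whose average
    bitrate fits into the measured throughput times a safety margin.
    Assumes bitrates_bps is sorted ascending; falls back to the
    lowest level if nothing fits."""
    budget = measured_throughput_bps * margin
    best = 0
    for i, rate in enumerate(bitrates_bps):
        if rate <= budget:
            best = i
    return best

# Illustrative quality levels at 0.5, 1, 2.5 and 5 Mbit/s.
levels = [500_000, 1_000_000, 2_500_000, 5_000_000]
print(pick_quality(levels, 3_000_000))  # 1 -- the 1 Mbit/s level fits the 2.4 Mbit/s budget
```

Real adaptation logics additionally consider the buffer fill level and the cost of quality switches, which a pure rate-matching rule like this ignores.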
The thesis focuses on Quality of Experience (QoE) of HTTP adaptive video streaming (HAS) and traffic management in access networks to improve the QoE of HAS. First, the QoE impact of adaptation parameters and time on layer was investigated with subjective crowdsourcing studies. The results were used to compute a QoE-optimal adaptation strategy for given video and network conditions. This allows video service providers to develop and benchmark improved adaptation logics for HAS. Furthermore, the thesis investigated concepts to monitor video QoE on application and network layer, which can be used by network providers in the QoE-aware traffic management cycle. Moreover, an analytic and simulative performance evaluation of QoE-aware traffic management on a bottleneck link was conducted. Finally, the thesis investigated socially-aware traffic management for HAS via Wi-Fi offloading of mobile HAS flows. A model for the distribution of public Wi-Fi hotspots and a platform for socially-aware traffic management on private home routers was presented. A simulative performance evaluation investigated the impact of Wi-Fi offloading on the QoE and energy consumption of mobile HAS.
Due to biased assumptions on the underlying ordinal rating scale in subjective Quality of Experience (QoE) studies, Mean Opinion Score (MOS)-based evaluations provide results that are hard to interpret and can be misleading. This paper proposes to consider the full QoE distribution for evaluating, reporting, and modeling QoE results instead of relying on MOS-based metrics derived from results based on ordinal rating scales. The QoE distribution can be represented in a concise way by the parameters of a multinomial distribution without losing any information about the underlying QoE ratings, and it even keeps backward compatibility with previous, biased MOS-based results. Considering QoE results as realizations of a multinomial distribution makes it possible to rely on a well-established theoretical background, which enables meaningful evaluations also for ordinal rating scales. Moreover, QoE models based on QoE distributions keep detailed information from the results of a QoE study of a technical system and thus give an unprecedented richness of insights into the end users’ experience with the technical system. In this work, existing and novel statistical methods for QoE distributions are summarized and exemplary evaluations are outlined. Furthermore, using the novel concept of quality steps, simulative and analytical QoE models based on QoE distributions are presented and showcased. The goal is to demonstrate the fundamental advantages of considering QoE distributions over MOS-based evaluations if the underlying rating data is ordinal in nature.
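The core idea of representing QoE results by the parameters of a multinomial distribution can be sketched in a few lines. The ratings below are made-up illustration data, and the 5-point scale is an assumption:

```python
# Hedged sketch: represent a QoE sample as relative frequencies over
# the rating categories instead of collapsing it to a single MOS.
from collections import Counter

def qoe_distribution(ratings, categories=(1, 2, 3, 4, 5)):
    """Relative frequency of each rating category -- the estimated
    parameters of the multinomial distribution behind the sample."""
    counts = Counter(ratings)
    n = len(ratings)
    return [counts.get(c, 0) / n for c in categories]

ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 5]
dist = qoe_distribution(ratings)
mos = sum(c * p for c, p in zip((1, 2, 3, 4, 5), dist))
print(dist)  # [0.0, 0.1, 0.2, 0.3, 0.4]
print(mos)   # 4.0
```

Note that the MOS is still derivable from the distribution (backward compatibility), while the distribution cannot be recovered from the MOS.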
To deliver the best user experience (UX), the human-centered design cycle (HCDC) serves as a well-established guideline for application developers. However, it does not yet cover network-specific requirements, which become increasingly crucial, as most applications deliver their experience over the Internet. The missing network-centric view is provided by Quality of Experience (QoE), which could team up with UX towards an improved overall experience. By considering QoE aspects during the development process, applications can be made network-aware by design. In this paper, the Quality of Experience Centered Design Cycle (QoE-CDC) is proposed, which provides guidelines on how to design applications with respect to network-specific requirements and QoE. Its practical value is showcased for popular application types and validated by outlining the design of a new smartphone application. We show that combining HCDC and QoE-CDC results in an application design that reaches a high UX and avoids QoE degradation.
Group-based communication is a highly popular communication paradigm, which is especially prominent in mobile instant messaging (MIM) applications, such as WhatsApp. Chat groups in MIM applications facilitate the sharing of various types of messages (e.g., text, voice, image, video) among a large number of participants. As each message has to be transmitted to every other member of the group, which multiplies the traffic, this has a massive impact on the underlying communication networks. However, most chat groups are private and network operators cannot obtain deep insights into MIM communication via network measurements due to end-to-end encryption. Thus, the generation of traffic is not well understood, given that it depends on sizes of communication groups, speed of communication, and exchanged message types. In this work, we provide a huge data set of 5,956 private WhatsApp chat histories, which contains over 76 million messages from more than 117,000 users. We describe and model the properties of chat groups and users, and the communication within these chat groups, which gives unprecedented insights into private MIM communication. In addition, we conduct exemplary measurements for the most popular message types, which empower the provided models to estimate the traffic over time in a chat group.
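The traffic multiplication effect described above can be sketched as follows; the per-type message sizes are illustrative assumptions, not the measured values from the paper:

```python
# Hedged sketch: every message sent in a chat group must be delivered
# to all other members, so downstream traffic scales with group size.
SIZE_BYTES = {"text": 300, "image": 150_000, "video": 4_000_000}  # assumed sizes

def group_traffic(messages, group_size):
    """Total bytes carried by the network for a list of message types
    sent in a group of the given size (one upload, n-1 downloads)."""
    upstream = sum(SIZE_BYTES[m] for m in messages)
    downstream = upstream * (group_size - 1)  # one copy per recipient
    return upstream + downstream

print(group_traffic(["text", "image"], 10))  # 1503000 -- 150300 bytes uploaded, delivered 10x in total
```

Even this toy model shows why group size and message-type mix dominate the traffic estimate: a single image in a 10-member group outweighs thousands of text messages.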
The strict restrictions introduced by the COVID-19 lockdowns, which started from March 2020, changed people’s daily lives and habits on many different levels. In this work, we investigate the impact of the lockdown on the communication behavior in the mobile instant messaging application WhatsApp. Our evaluations are based on a large dataset of 2577 private chat histories with 25,378,093 messages from 51,973 users. The analysis of the one-to-one and group conversations confirms that the lockdown severely altered the communication in WhatsApp chats compared to pre-pandemic time ranges. In particular, we observe short-term effects, which caused an increased message frequency in the first lockdown months and a shifted communication activity during the day in March and April 2020. Moreover, we also see long-term effects of the ongoing pandemic situation until February 2021, which indicate a change of communication behavior towards more regular messaging, as well as a persisting change in activity during the day. The results of our work show that even anonymized chat histories can tell us a lot about people’s behavior and especially behavioral changes during the COVID-19 pandemic and thus are of great relevance for behavioral researchers. Furthermore, looking at the pandemic from an Internet provider perspective, these insights can be used during the next pandemic, or if the current COVID-19 situation worsens, to adapt communication networks to the changed usage behavior early on and thus avoid network congestion.
In time-sensitive networks (TSN) based on 802.1Qbv, i.e., the Time-Aware Shaper (TAS) protocol, precise transmission schedules and paths are used to ensure end-to-end deterministic communication. Such resource reservations for data flows are usually established at the startup time of an application and remain untouched until the flow ends. There is no way to migrate existing flows easily to alternative paths without inducing additional delay or wasting resources. Therefore, some new flows cannot be embedded due to capacity limitations on certain links, which leads to sub-optimal flow assignment. As future networks will need to support a large number of low-latency flows, accommodating new flows at runtime and adapting existing flows accordingly becomes a challenging problem. In this extended abstract we summarize our previously published paper [1]. We combine software-defined networking (SDN), which provides better control of network flows, with TSN to be able to seamlessly migrate time-sensitive flows. To this end, we formulate an optimization problem and propose different dynamic path configuration strategies under deterministic communication requirements. Our simulation results indicate that regularly reconfiguring the flow assignments can improve the latency of time-sensitive flows and can increase the number of flows embedded in the network by around 4% in worst-case scenarios while still satisfying individual flow deadlines.
Today's Internet is no longer only controlled by a single stakeholder, e.g. a standard body or a telecommunications company.
Rather, the interests of a multitude of stakeholders, e.g. application developers, hardware vendors, cloud operators, and network operators, collide during the development and operation of applications in the Internet.
Each of these stakeholders considers different KPIs to be important and attempts to optimise scenarios in its favour.
This results in different, often opposing views and can cause problems for the complete network ecosystem.
One example of such a scenario is signalling storms in the mobile Internet, one of the largest of which occurred in Japan in 2012 due to the release and high popularity of a free instant messaging application.
The network traffic generated by the application caused a high number of connections to the Internet being established and terminated.
This resulted in a similarly high number of signalling messages in the mobile network, causing overload and a loss of service for 2.5 million users over 4 hours.
While the network operator suffers the largest impact of this signalling overload, it does not control the application.
Thus, the network operator cannot change the application traffic characteristics to generate less network signalling traffic.
The stakeholders who could prevent, or at least reduce, such behaviour, i.e. application developers or hardware vendors, have no direct benefit from modifying their products in such a way.
This results in a clash of interests which negatively impacts the network performance for all participants.
The goal of this monograph is to provide an overview of the complex structures of stakeholder relationships in today's Internet applications in mobile networks.
To this end, we study different scenarios where such interests clash and suggest methods by which trade-offs can be optimised for all participants.
If such an optimisation is not possible or attempts at it might lead to adverse effects, we discuss the reasons.
The general map-labeling problem is as follows: given a set of geometric objects to be labeled, or features, in the plane, and for each feature a set of label positions, maximize the number of placed labels such that there is at most one label per feature and no two labels overlap. There are three types of features in a map: point, line, and area features. Unfortunately, one cannot expect to find efficient algorithms that solve the labeling problem optimally.
Interactive maps are digital maps that show only a small part of the entire map, while the user can manipulate the shown part, the view, by continuously panning, zooming, rotating, and tilting (that is, changing the perspective between a top and a bird's-eye view). Navigation devices are one example application of interactive maps. Interactive maps are challenging in that the labeling must be updated whenever labels leave the view and, while zooming, the label size must remain constant on the screen (which either makes space for further labels or makes labels overlap when zooming in or out, respectively). These updates must be computed in real time, that is, the computation must be so fast that the user does not notice that we spend time on it. Additionally, labels must not jump or flicker, that is, labels must not suddenly change their positions and, while zooming out, a vanished label must not appear again.
In this thesis, we present efficient algorithms that dynamically label point and line features in interactive maps. We try to label as many features as possible while we prohibit labels that overlap, jump, and flicker. We have implemented all our approaches and tested them on real-world data. We conclude that our algorithms are indeed real-time capable.
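As a baseline illustration of the labeling problem defined above, a simple greedy heuristic for point features with axis-aligned rectangular labels might look as follows. This is not one of the real-time algorithms from the thesis; the candidate positions and label sizes are assumed inputs:

```python
# Hedged sketch: greedy point-feature label placement. Each feature
# brings a list of candidate label rectangles; we place the first
# candidate that does not overlap an already placed label.

def overlaps(a, b):
    """Axis-aligned rectangles given as (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def greedy_labeling(features):
    """features: list of candidate-position lists; place at most one
    non-overlapping label per feature, scanning features in order."""
    placed = []
    for candidates in features:
        for rect in candidates:
            if not any(overlaps(rect, p) for p in placed):
                placed.append(rect)
                break
    return placed

# Two features whose first candidates collide; the second feature
# falls back to its alternative position above the first.
f1 = [(0, 0, 2, 1)]
f2 = [(1, 0, 2, 1), (1, 1, 2, 1)]
print(greedy_labeling([f1, f2]))  # [(0, 0, 2, 1), (1, 1, 2, 1)]
```

A greedy scan gives no optimality guarantee, which matches the abstract's remark that efficient optimal algorithms cannot be expected; practical methods trade label count against running time.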
The thesis addresses the question of the computability of the dot-depth of star-free regular languages. Here, one has to determine for a given star-free regular language the minimal number of alternations between concatenation on the one hand, and intersection, union, and complement on the other hand. This question was first raised in 1971 (Brzozowski/Cohen) and, besides the extended star-height problem, is usually referred to as one of the most difficult open questions on regular languages. The dot-depth problem can be captured formally by hierarchies of classes of star-free regular languages B(0), B(1/2), B(1), B(3/2), ... and L(0), L(1/2), L(1), L(3/2), ..., which are defined via alternating the closure under concatenation and Boolean operations, beginning with single alphabet letters. The question of dot-depth is then the question whether these hierarchy classes have decidable membership problems. The thesis makes progress on this question using the so-called forbidden-pattern approach: classes of regular languages are characterized in terms of patterns in finite automata (subgraphs in the transition graph) that are not allowed. Such a characterization immediately implies the decidability of the respective class, since the absence of a certain pattern in a given automaton can be verified effectively. Before this work, the decidability of B(0), B(1/2), B(1) and of L(0), L(1/2), L(1), L(3/2) was known. Here, a detailed study of these classes with the help of forbidden patterns is given, which leads to new insights into their inner structure. Furthermore, the decidability of B(3/2) is proven. Based on these results, a theory of pattern iteration is developed which leads to the introduction of two new hierarchies of star-free regular languages. On the one hand, these hierarchies are decidable; on the other hand, they are in close connection to the classes B(n) and L(n). It remains an open question here whether they may in fact coincide.
Some evidence is given in favour of this conjecture which opens a new way to attack the dot-depth problem. Moreover, it is shown that the class L(5/2) is decidable in the restricted case of a two-letter alphabet.
Today's cloud data centers consume an enormous amount of energy, and their energy consumption will rise in the future. An estimate from 2012 found that data centers consume about 30 billion watts of power, resulting in about 263 TWh of energy usage per year. Energy consumption is projected to rise to 1929 TWh by 2030. This projected rise in energy demand is fueled by a growing number of services deployed in the cloud: 50% of enterprise workloads have been migrated to the cloud in the last decade. Additionally, an increasing number of devices use the cloud to provide functionality, causing data centers to grow. Estimates say more than 75 billion IoT devices will be in use by 2025.
The growing energy demand also increases the amount of CO2 emissions. Assuming a CO2 intensity of 200 g CO2 per kWh, this amounts to close to 227 billion tons of CO2, which is more than the emissions of all energy-producing power plants in Germany in 2020.
However, data centers consume energy because they respond to service requests that are fulfilled through computing resources. Hence, it is not the users and devices that consume the energy in the data center but the software that controls the hardware. While the hardware is physically consuming energy, it is not always responsible for wasting energy. The software itself plays a vital role in reducing the energy consumption and CO2 emissions of data centers. The scenario of our thesis is, therefore, focused on software development.
Nevertheless, we must first show developers that software contributes to energy consumption by providing evidence of its influence. The second step is to provide methods to assess an application's power consumption during different phases of the development process and to support modern DevOps and agile development methods. We therefore need an automatic selection of system-level energy-consumption models that can accommodate rapid changes in the source code, as well as application-level models that allow developers to locate power-consuming software parts for continuous improvement. Afterward, we need emulation to assess the energy efficiency before the actual deployment.
The field of small satellite formations and constellations has attracted growing attention, based on recent advances in small satellite engineering. The utilization of distributed space systems allows the realization of innovative applications and will enable improved temporal and spatial resolution in observation scenarios. On the other hand, this new paradigm imposes a variety of research challenges. In this monograph new networking concepts for space missions are presented, using networks of ground stations. The developed approaches combine ground station resources in a coordinated way to achieve more robust and efficient communication links. Within this thesis, the following topics were elaborated to improve the performance in distributed space missions: Appropriate scheduling of contact windows in a distributed ground system is a necessary process to avoid low utilization of ground stations. The theoretical basis for the novel concept of redundant scheduling was elaborated in detail. In addition to the presented algorithm, a scheduling system was implemented, and its performance was tested extensively on real-world scheduling problems. In the scope of data management, a system was developed which autonomously synchronizes data frames in ground station networks and uses this information to detect and correct transmission errors. The system was validated with hardware-in-the-loop experiments, demonstrating the benefits of the developed approach.
Neural networks have to capture mathematical relationships in order to learn various tasks. They approximate these relations implicitly and therefore often do not generalize well. The recently proposed Neural Arithmetic Logic Unit (NALU) is a novel neural architecture which is able to represent mathematical relationships explicitly in the units of the network, in order to learn operations such as summation, subtraction or multiplication. Although NALUs have been shown to perform well on various downstream tasks, an in-depth analysis reveals practical shortcomings by design, such as the inability to multiply or divide negative input values or training stability issues for deeper networks. We address these issues and propose an improved model architecture. We evaluate our model empirically in various settings, from learning basic arithmetic operations to more complex functions. Our experiments indicate that our model solves the stability issues and outperforms the original NALU model in terms of arithmetic precision and convergence.
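For context, a hedged sketch of the original NALU cell (as described by Trask et al.) that this work improves upon; the saturated weights below are hand-picked for illustration, not learned:

```python
# Hedged sketch of the original NALU cell: a gated mix of an additive
# path and a log-space multiplicative path.
import numpy as np

def nalu(x, W_hat, M_hat, G, eps=1e-7):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    W = np.tanh(W_hat) * sigmoid(M_hat)      # weights pushed toward {-1, 0, 1}
    a = x @ W                                # additive path: sums/differences
    m = np.exp(np.log(np.abs(x) + eps) @ W)  # multiplicative path; note |x| --
                                             # one reason negative-input
                                             # multiplication fails by design
    g = sigmoid(x @ G)                       # learned gate between the paths
    return g * a + (1 - g) * m

# With saturated weights selecting both inputs and the gate forced to
# the additive path, the unit computes an (almost) exact sum:
x = np.array([[3.0, 4.0]])
W_hat = np.full((2, 1), 10.0)    # tanh(10) ~ 1
M_hat = np.full((2, 1), 10.0)    # sigmoid(10) ~ 1
G = np.full((2, 1), 100.0)       # gate ~ 1 -> additive path
print(nalu(x, W_hat, M_hat, G))  # ~ [[7.]]
```

The |x| in the multiplicative path and the exp/log round trip are exactly the design points the abstract criticizes: they block negative products and destabilize training of deeper stacks.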
Detecting anomalies in transaction data is an important task with a high potential to avoid financial loss due to irregularities carried out deliberately or inadvertently, such as credit card fraud, occupational fraud in companies, or ordering and accounting errors. With the ongoing digitization of our world, data-driven approaches, including machine learning, can draw benefit from data with less manual effort and feature engineering. A large variety of machine learning-based anomaly detection methods approach this by learning a precise model of normality from which anomalies can be distinguished. Modeling normality in transactional data, however, requires capturing distributions and dependencies within the data precisely, with special attention to numerical dependencies such as quantities, prices or amounts.
To implicitly model numerical dependencies, Neural Arithmetic Logic Units have been proposed as a neural architecture. In practice, however, these have stability and precision issues.
Therefore, we first develop an improved neural network architecture, iNALU, which is designed to better model numerical dependencies as found in transaction data. We compare this architecture to the previous approach and show in several experiments of varying complexity that our novel architecture provides better precision and stability.
We integrate this architecture into two generative neural network models adapted for transaction data and investigate how well normal behavior is modeled. We show that both architectures can successfully model normal transaction data, with our neural architecture improving generative performance for one model.
Since categorical and numerical variables are common in transaction data, but many machine learning methods only process numerical representations, we explore different representation learning techniques to transform categorical transaction data into dense numerical vectors. We extend this approach by proposing an outlier-aware discretization, thus incorporating numerical attributes into the computation of categorical embeddings, and investigate latent spaces, as well as quantitative performance for anomaly detection.
Next, we evaluate different scenarios for anomaly detection on transaction data. We extend our iNALU architecture to a neural layer that can model both numerical and non-numerical dependencies and evaluate it in a supervised and one-class setting. We investigate the stability and generalizability of our approach and show that it outperforms a variety of models in the balanced supervised setting and performs comparably in the one-class setting. Finally, we evaluate three approaches to using a generative model as an anomaly detector and compare the anomaly detection performance.
Besides the integration of renewable energies, electric vehicles pose an additional challenge to modern power grids. However, electric vehicles can also be a source of flexibility and contribute to power system stability. Today, the power system still relies heavily on conventional technologies to stay stable. In order to operate a future power system based on renewable energies only, we need to understand the flexibility potential of assets such as electric vehicles and become able to use their flexibility. In this paper, we analyzed how vast numbers of coordinated charging processes can be used to provide frequency containment reserve power, one of the most important ancillary services for system stability. To this end, we used an extensive simulation model of a virtual power plant of millions of electric vehicles. The model considers not only technical components but also the stochastic behavior of electric vehicle drivers based on real data. Our results show that, in 2030, electric vehicles have the potential to serve the whole frequency containment reserve power market in Germany. We differentiate between using unidirectional and bidirectional chargers. Bidirectional chargers have a larger potential but also result in unwanted battery degradation. Unidirectional chargers are more constrained in terms of flexibility, but do not lead to additional battery degradation. We conclude that using a mix of both can combine the advantages of both worlds. Thereby, average private cars can provide the service without any notable additional battery degradation and achieve yearly earnings between EUR 200 and EUR 500, depending on the volatile market prices. Commercial vehicles have an even higher potential, as the earnings increase with vehicle utilization and consumption.
In today's Internet, services are very different in their requirements on the underlying transport network. In the future, this diversity will increase and it will be more difficult to accommodate all services in a single network. A possible approach to cope with this diversity within future networks is the introduction of support for running isolated networks for different services on top of a single shared physical substrate. This would also enable easy network management and ensure an economically sound operation. End-customers will readily adopt this approach as it enables new and innovative services without being expensive. In order to arrive at a concept that enables this kind of network, it needs to be designed around and constantly checked against realistic use cases. In this contribution, we present three use cases for future networks. We describe functional blocks of a virtual network architecture, which are necessary to support these use cases within the network. Furthermore, we discuss the interfaces needed between the functional blocks and consider standardization issues that arise in order to achieve a global consistent control and management structure of virtual networks.
Currently, we observe a strong growth of services and applications which use the Internet for data transport. However, the network requirements of these applications differ significantly. This makes network management difficult, since it is complicated to separate network flows into application classes without inspecting application layer data. Network virtualization is a promising solution to this problem. It enables running different virtual networks on the same physical substrate. Separating networks based on the service supported within allows controlling each network according to the specific needs of the application. The aim of such network control is to optimize the user-perceived quality as well as the cost efficiency of the data transport. Furthermore, network virtualization abstracts the network functionality from the underlying implementation and facilitates the split of the currently tightly integrated roles of Internet Service Provider and network owner. Additionally, network virtualization guarantees that different virtual networks running on the same physical substrate do not interfere with each other. This thesis discusses different aspects of the network virtualization topic. It focuses on how to manage and control a virtual network to guarantee the best Quality of Experience for the user. To this end, a top-down approach is chosen. Starting with use cases of virtual networks, a possible architecture is derived, and current implementation options based on hardware virtualization are explored. In the following, this thesis focuses on assessing the Quality of Experience perceived by the user and how it can be optimized on the application layer. Furthermore, options for measuring and monitoring significant network parameters of virtual networks are considered.
The safety of future spaceflight depends on space surveillance and space traffic management, as the density of objects in Earth orbit has reached a level that requires collision avoidance maneuvers to be performed on a regular basis to avoid mission-endangering or, in the context of human space flight, life-endangering threats. Driven by enhanced sensor systems capable of detecting centimeter-sized debris, megaconstellations, and satellite miniaturization, the space debris problem has revealed many parallels to the plastic waste in our oceans, however with much less visibility to the eye. Future catalog sizes are expected to increase drastically, making it even more important to detect potentially dangerous encounters as early as possible.
Due to the limited number of monitoring sensors, continuous observation of all objects is impossible, resulting in the need to predict the orbital paths and their uncertainty via models to perform collision risk assessment and space object catalog maintenance. For many years the uncertainty models used for orbit determination neglected any uncertainty in the astrodynamic force models, thereby implicitly assuming them to be flawless descriptions of the true space environment. This assumption is known to result in overly optimistic uncertainty estimates, which in turn complicate collision risk analysis.
The central theme of this doctoral thesis is to establish uncertainty realism for satellites in low Earth orbit via a physically connected quantification of the dominant force model uncertainties, in particular multiple sources of atmospheric density uncertainty and orbital gravity uncertainty.
The resulting process noise models are subsequently integrated into classical and state-of-the-art orbit determination algorithms. Their positive impact is demonstrated via numerical orbit determination simulations and a collision risk assessment study using all non-restricted objects in the official United States space catalogs. It is shown that the consideration of atmospheric density uncertainty and gravity uncertainty significantly improves the quality of the orbit determination and thus contributes to future spaceflight safety by increasing the reliability of the uncertainty estimates used for collision risk assessment.
Affordable prices for 3D laser range finders and mature software solutions for registering multiple point clouds in a common coordinate system paved the way for new areas of application for 3D point clouds. Nowadays we see 3D laser scanners being used not only by digital surveying experts but also by law enforcement officials, construction workers or archaeologists. Whether the purpose is digitizing factory production lines, preserving historic sites as digital heritage or recording environments for gaming or virtual reality applications -- it is hard to imagine a scenario in which the final point cloud must also contain the points of "moving" objects like factory workers, pedestrians, cars or flocks of birds. For most post-processing tasks, moving objects are undesirable not least because moving objects will appear in scans multiple times or are distorted due to their motion relative to the scanner rotation.
The main contributions of this work are two post-processing steps for already registered 3D point clouds. The first method is a new change detection approach based on a voxel grid, which allows partitioning the input points into static and dynamic points using explicit change detection and subsequently removing the latter to obtain a "cleaned" point cloud. The second method uses this cleaned point cloud as input for detecting collisions between points of the environment point cloud and a point cloud of a model that is moved through the scene.
Our approach to explicit change detection is compared to the state of the art using multiple datasets, including the popular KITTI dataset. We show how our solution achieves similar or better F1 scores than an existing solution while at the same time being faster.
To detect collisions, we do not produce a mesh but approximate the raw point cloud data by spheres or cylindrical volumes. We show how our data structures allow efficient nearest-neighbor queries that make our CPU-only approach comparable to a massively parallel algorithm running on a GPU. The utilized algorithms and data structures are discussed in detail. All our software is freely available for download under the terms of the GNU General Public License. Most of the datasets used in this thesis are freely available as well. We provide shell scripts that allow one to directly reproduce the quantitative results shown in this thesis for easy verification of our findings.
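The voxel-grid partitioning into static and dynamic points can be illustrated with a strongly simplified sketch. It uses only voxel occupancy across scans, not the explicit free-space-based change detection of the thesis; the voxel size and the points are assumptions:

```python
# Hedged sketch: points whose voxel is occupied in every scan are kept
# as static; points in voxels occupied only in some scans are flagged
# dynamic and could be removed for a "cleaned" cloud.
VOXEL = 0.5  # voxel edge length in meters, an arbitrary choice

def voxel_of(p):
    return tuple(int(c // VOXEL) for c in p)

def split_static_dynamic(scans):
    """scans: list of point lists (one per time step), each point (x, y, z)."""
    occupied = [set(map(voxel_of, scan)) for scan in scans]
    always = set.intersection(*occupied)
    static, dynamic = [], []
    for scan in scans:
        for p in scan:
            (static if voxel_of(p) in always else dynamic).append(p)
    return static, dynamic

scan_a = [(0.1, 0.1, 0.0), (2.1, 0.0, 0.0)]  # e.g. wall point + car point
scan_b = [(0.2, 0.2, 0.0)]                   # the car has moved away
static, dynamic = split_static_dynamic([scan_a, scan_b])
print(len(static), len(dynamic))  # 2 1
```

Occupancy intersection alone cannot distinguish "changed" from "merely unobserved" voxels, which is exactly why the thesis relies on explicit change detection (free-space traversal) instead.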
Combining Distributed Consensus with Robust H-infinity-Control for Satellite Formation Flying
(2019)
Control methods that guarantee stability in the presence of uncertainties are mandatory in space applications. Furthermore, distributed control approaches are beneficial in terms of scalability and for achieving common goals, especially in multi-agent setups like formation control. This paper presents a combination of robust H-infinity control and distributed control using the consensus approach, by deriving a distributed consensus-based generalized plant description that can be used in H-infinity synthesis. Special focus was placed on space applications, namely satellite formation flying. The presented results show the applicability of the developed distributed robust control method to a simple, though realistic space scenario, namely a spaceborne distributed telescope. Using this approach, an arbitrary number of satellites/agents can be controlled towards an arbitrary formation geometry. Because of the combination with robust H-infinity control, the presented method satisfies the high stability and robustness demands found, e.g., in space applications.
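A hedged sketch of the plain consensus ingredient only (the robust H-infinity synthesis layered on top in the paper is not reproduced): each agent steers toward agreement with its neighbours up to a formation offset. The gain, offsets, and topology below are illustrative assumptions:

```python
# Hedged sketch: discrete-time consensus toward a formation geometry.
import numpy as np

def consensus_step(x, offsets, A, gain=0.3):
    """x: (n, d) agent states; offsets: desired positions relative to
    the formation centre; A: (n, n) adjacency matrix. Each agent moves
    along the sum of neighbour disagreements in formation coordinates."""
    e = x - offsets                                   # formation error
    update = A @ e - A.sum(axis=1, keepdims=True) * e  # = -(Laplacian @ e)
    return x + gain * update

# Three fully connected agents converging to a line formation with
# unit spacing (offsets -1, 0, +1 along one axis).
A = np.ones((3, 3)) - np.eye(3)
offsets = np.array([[-1.0], [0.0], [1.0]])
x = np.array([[4.0], [-2.0], [7.0]])
for _ in range(50):
    x = consensus_step(x, offsets, A)
print(round(float(x[1, 0] - x[0, 0]), 3), round(float(x[2, 0] - x[1, 0]), 3))  # 1.0 1.0
```

The absolute final positions depend on the initial conditions (consensus fixes only the relative geometry), which mirrors the paper's goal of driving an arbitrary number of agents into an arbitrary formation shape.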
An approach to aerodynamically optimizing cycling posture and reducing drag in an Ironman (IM) event was elaborated. To this end, four commonly used cycling positions were investigated and simulated for a flow velocity of 10 m/s and yaw angles of 0–20° using the OpenFoam-based Nabla Flow CFD simulation software. A cyclist was scanned using an iPhone 12, and the meshing software Blender was used. Significant differences were observed by changing and optimizing the cyclist's posture. The aerodynamic drag coefficient (CdA) varies by more than a factor of 2, ranging from 0.214 to 0.450. Within a position, the CdA tends to increase slightly at yaw angles of 5–10° and to decrease at higher yaw angles compared to a straight head wind, except for the time trial (TT) position. The results were applied to the IM Hawaii bike course (180 km), assuming a constant power output of 300 W. Including the wind distributions, two different bike split models for performance prediction were applied. A significant time saving of roughly 1 h was found. Finally, a machine learning approach to deduce 3D triangulation for specific body shapes from 2D pictures was tested.
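The reported effect of CdA on the bike split can be reproduced with a simple power-balance model. This is a flat, windless sketch with assumed air density, rolling resistance, and system mass; the paper's bike split models additionally include wind distributions and the course profile.

```python
def speed_from_power(power, cda, rho=1.2, crr=0.004, mass=80.0, g=9.81):
    """Solve power = 0.5*rho*CdA*v^3 + Crr*m*g*v for speed v by bisection."""
    def demand(v):
        return 0.5 * rho * cda * v ** 3 + crr * mass * g * v
    lo, hi = 0.0, 30.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if demand(mid) < power:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

distance_m = 180_000  # IM Hawaii bike course length
for cda in (0.214, 0.450):  # best and worst CdA reported above
    v = speed_from_power(300.0, cda)
    print(f"CdA={cda}: {v:.2f} m/s, bike split {distance_m / v / 3600:.2f} h")
```

Even with these assumed parameters, the model predicts a difference of roughly one hour between the two extreme CdA values at 300 W, consistent with the reported time saving.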
The increasing adoption of Software-Defined Networking (SDN) not only improves the dynamics and maintenance of network architectures, but also opens up new use cases and application possibilities. Based on these observations, we propose a new network topology consisting of a star and a ring topology. This hybrid topology is called the wheel topology in this paper. We analyze the static characteristics of the wheel topology and compare them with other well-known topologies.
With the progress in robotics research, human-machine interfaces are increasingly becoming the major limiting factor for the overall performance of systems for remote navigation and coordination of robots. This monograph elaborates how mixed reality technologies can be applied to user interfaces in order to increase the overall system performance. Concepts, technologies, and frameworks are developed and evaluated in user studies which enable novel user-centered approaches to the design of mixed reality user interfaces for remote robot operation. Both the technological requirements and the human factors are considered to achieve a consistent system design. Novel technologies like 3D time-of-flight cameras are investigated for application in navigation tasks and in the developed concept of a generic mixed reality user interface. In addition, it is shown how the network traffic of a video stream can be shaped on the application layer in order to reach a stable frame rate in dynamic networks. The elaborated generic mixed reality framework enables an integrated 3D graphical user interface. The realized spatial integration and visualization of available information reduces the demand for mental transformations for the human operator and supports the use of immersive stereo devices. The developed concepts also make use of the fact that robust local autonomy components can be realized and thus incorporated as assistance systems for the human operators. A sliding autonomy concept is introduced which combines force feedback and visual augmented reality feedback. The force feedback component renders the robot's current navigation intention to the human operator, such that a true sliding autonomy with seamless transitions is achieved. The user studies demonstrate a significant increase in navigation performance when this concept is applied.
The generic mixed reality user interface together with robust local autonomy enables a further extension of the teleoperation system to a short-term predictive mixed reality user interface. With the presented concept of operation, it is possible to significantly reduce the visibility of system delays for the human operator. In addition, both advantageous characteristics of a 3D graphical user interface for robot teleoperation (an exocentric view and an augmented reality view) can be combined.
In this thesis, we present novel approaches for formation driving of nonholonomic robots and optimal trajectory planning to reach a target region. The methods consider a static known map of the environment as well as unknown and dynamic obstacles detected by sensors of the formation. The algorithms are based on leader-following techniques, where the formation of car-like robots is maintained in a shape determined by curvilinear coordinates. Beyond this, the general methods of formation driving are specialized and extended for an application of airport snow shoveling. Detailed descriptions of the algorithms, complemented by relevant stability and convergence studies, are provided in the following chapters. Furthermore, the applicability of the methods is verified by various simulations in existing robotic environments as well as by a hardware experiment.
Modern immersive multimodal technologies enable learners to become fully immersed in various learning situations in a way that feels like experiencing an authentic learning environment. These environments also allow the collection of multimodal data, which can be used with artificial intelligence to further improve immersion and learning outcomes. The use of artificial intelligence has been widely explored for the interpretation of multimodal data collected from multiple sensors, giving insights that support learners' performance by providing personalised feedback. In this paper, we present a conceptual approach for creating immersive learning environments integrated with a multi-sensor setup to help learners improve their psychomotor skills in a remote setting.
PRO-Simat is a simulation tool for analysing protein interaction networks, their dynamic change, and pathway engineering. It provides GO enrichment, KEGG pathway analyses, and network visualisation from an integrated database of more than 8 million protein-protein interactions across 32 model organisms and the human proteome. We integrated dynamical network simulation using the Jimena framework, which quickly and efficiently simulates Boolean genetic regulatory networks. It enables simulation outputs with in-depth analysis of the type, strength, duration, and pathway of the protein interactions on the website. Furthermore, the user can efficiently edit and analyse the effect of network modifications and engineering experiments. In case studies, applications of PRO-Simat are demonstrated: (i) understanding mutually exclusive differentiation pathways in Bacillus subtilis, (ii) making Vaccinia virus oncolytic by switching on its viral replication mainly in cancer cells and triggering cancer cell apoptosis, and (iii) optogenetic control of nucleotide processing protein networks to operate DNA storage. Multilevel communication between components is critical for efficient network switching, as demonstrated by a general census of prokaryotic and eukaryotic networks and by comparing designs with synthetic networks using PRO-Simat. The tool is available at https://prosimat.heinzelab.de/ as a web-based query server.
Nowadays, data centers are becoming increasingly dynamic due to the common adoption of virtualization technologies. Systems can scale their capacity on demand by growing and shrinking their resources dynamically based on the current load. However, the complexity and performance of modern data centers are influenced not only by the software architecture, middleware, and computing resources, but also by network virtualization, network protocols, network services, and configuration. The field of network virtualization is not as mature as server virtualization, and there are multiple competing approaches and technologies. Performance modeling and prediction techniques provide a powerful tool to analyze the performance of modern data centers. However, given the wide variety of network virtualization approaches, no common approach exists for modeling and evaluating the performance of virtualized networks.
The performance community has proposed multiple formalisms and models for evaluating the performance of infrastructures based on different network virtualization technologies. The existing performance models can be divided into two main categories: coarse-grained analytical models and highly detailed simulation models. Analytical performance models are normally defined at a high level of abstraction; they abstract away many details of the real network and therefore have limited predictive power. On the other hand, simulation models are normally focused on a selected networking technology and take into account many specific performance-influencing factors, resulting in detailed models that are tightly bound to a given technology, infrastructure setup, or protocol stack.
Existing models are inflexible, that is, they provide a single solution method without giving the user any means to influence the solution accuracy and solution overhead. To gain flexibility in the performance prediction, the user is thus required to build multiple different performance models, obtaining multiple performance predictions. Each performance prediction may then have a different focus, different performance metrics, prediction accuracy, and solving time.
The goal of this thesis is to develop a modeling approach that does not require the user to have experience in any of the applied performance modeling formalisms. The approach offers flexibility in modeling and analysis by balancing between: (a) the generic character and low overhead of coarse-grained analytical models, and (b) the higher prediction accuracy of more detailed simulation models.
The contributions of this thesis intersect with technologies and research areas, such as: software engineering, model-driven software development, domain-specific modeling, performance modeling and prediction, networking and data center networks, network virtualization, Software-Defined Networking (SDN), Network Function Virtualization (NFV). The main contributions of this thesis compose the Descartes Network Infrastructure (DNI) approach and include:
• Novel modeling abstractions for virtualized network infrastructures. This includes two meta-models that define modeling languages for modeling data center network performance. The DNI and miniDNI meta-models provide means for representing network infrastructures at two different abstraction levels. Regardless of which variant of the DNI meta-model is used, the modeling language provides generic modeling elements for describing the majority of existing and future network technologies, while at the same time abstracting factors that have low influence on the overall performance. I focus on SDN and NFV as examples of modern virtualization technologies.
• Network deployment meta-model: an interface between DNI and other meta-models that allows defining mappings between DNI and other descriptive models. The integration with other domain-specific models allows capturing behaviors that are not reflected in the DNI model, for example, software bottlenecks, server virtualization, and middleware overheads.
• Flexible model solving with model transformations. The transformations enable solving a DNI model by transforming it into a predictive model. The model transformations vary in size and complexity depending on the amount of data abstracted in the transformation process and provided to the solver. In this thesis, I contribute six transformations that transform DNI models into various predictive models based on the following modeling formalisms: (a) OMNeT++ simulation, (b) Queueing Petri Nets (QPNs), (c) Layered Queueing Networks (LQNs). For each of these formalisms, multiple predictive models are generated (e.g., models with different levels of detail): (a) two for OMNeT++, (b) two for QPNs, (c) two for LQNs. Some predictive models can be solved using multiple alternative solvers, resulting in up to ten different automated solving methods for a single DNI model.
• A model extraction method that supports the modeler in the modeling process by automatically prefilling the DNI model with the network traffic data. The contributed traffic profile abstraction and optimization method provides a trade-off by balancing between the size and the level of detail of the extracted profiles.
• A method for selecting feasible solving methods for a DNI model. The method proposes a set of solvers based on trade-off analysis characterizing each transformation with respect to various parameters such as its specific limitations, expected prediction accuracy, expected run-time, required resources in terms of CPU and memory consumption, and scalability.
• An evaluation of the approach in the context of two realistic systems. I evaluate the approach with a focus on factors such as: prediction of network capacity and interface throughput, applicability, and flexibility in trading off between prediction accuracy and solving time. Despite not focusing on maximizing prediction accuracy, I demonstrate that in the majority of cases the prediction error is low: up to 20% for uncalibrated models and up to 10% for calibrated models, depending on the solving technique.
In summary, this thesis presents the first approach to flexible run-time performance prediction in data center networks, including networks based on SDN. It provides the ability to flexibly balance between performance prediction accuracy and solving overhead. The approach provides the following key benefits:
• It is possible to predict the impact of changes in the data center network on performance. These changes include changes in network topology, hardware configuration, traffic load, and application deployment.
• DNI can successfully model and predict the performance of multiple different network infrastructures, including proactive SDN scenarios.
• The prediction process is flexible, that is, it provides a balance between the granularity of the predictive models and the solving time. Decreased prediction accuracy is usually rewarded with savings in solving time and in the resources required for solving.
• Users can conduct performance analyses using multiple different prediction methods without requiring expertise and experience in each of the modeling formalisms.
The components of the DNI approach can also be applied to scenarios that are not considered in this thesis. The approach is generalizable and applicable, for example, as follows: (a) networks outside of data centers may be analyzed with DNI as long as the background traffic profile is known; (b) uncalibrated DNI models may serve as a basis for design-time performance analysis; (c) the method for extracting and compacting traffic profiles may be used for other, non-network workloads as well.
The application of Wireless Sensor Networks (WSNs) with a large number of tiny, cost-efficient, battery-powered sensor nodes that are able to communicate directly with each other poses many challenges.
Due to the large number of communicating objects, many signal collisions may occur despite the use of a CSMA/CA MAC protocol.
In addition, WSNs frequently operate under harsh conditions and nodes are often prone to failure, for example, due to a depleted battery or unreliable components.
Thus, nodes or even large parts of the network can fail.
These aspects make reliable data dissemination and data storage a key issue.
Therefore, these issues are addressed herein while keeping latency low, throughput high, and energy consumption low.
Furthermore, simplicity as well as robustness to changes in conditions are essential here.
In order to achieve these aims, a certain amount of redundancy has to be included.
This can be realized, for example, by using network coding.
Existing approaches, however, often only perform well under certain conditions or for a specific scenario, have to perform a time-consuming initialization, require complex calculations, or do not provide the possibility of early decoding.
Therefore, we developed a network coding procedure called Broadcast Growth Codes (BCGC) for reliable data dissemination, which performs well under a broad range of diverse conditions.
These can be a high probability of signal collisions, any degree of nodes' mobility, a large number of nodes, or occurring node failures, for example.
BCGC do not require complex initialization and only use simple XOR operations for encoding and decoding.
Furthermore, decoding can be started as soon as a first packet/codeword has been received.
Evaluations using an in-house implemented network simulator as well as a real-world testbed showed that BCGC enhance reliability and enable dependable data retrieval despite an unreliable network.
In terms of latency, throughput, and energy consumption, BCGC can match or even significantly outperform existing procedures, depending on the conditions and the procedure being compared, while remaining robust to changes in conditions and allowing low node complexity as well as early decoding.
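The XOR-only encoding and early (peeling) decoding that BCGC rely on can be illustrated with a generic fountain-code-style sketch. This is illustrative only; BCGC's actual codeword selection and broadcast strategy are not reproduced here.

```python
def xor(a, b):
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def peel_decode(codewords, n):
    """Iteratively resolve codewords with exactly one unknown source packet.
    Decoding can start as soon as the first degree-1 codeword arrives."""
    decoded = {}
    progress = True
    while progress and len(decoded) < n:
        progress = False
        for idx, cw in codewords:
            unknown = [i for i in idx if i not in decoded]
            if len(unknown) == 1:
                val = cw
                for i in idx:
                    if i in decoded:
                        val = xor(val, decoded[i])  # XOR out already-known packets
                decoded[unknown[0]] = val
                progress = True
    return decoded

packets = [b"aa", b"bb", b"cc"]
codewords = [
    ({0}, packets[0]),                      # degree 1: decodable immediately
    ({0, 1}, xor(packets[0], packets[1])),  # degree 2
    ({1, 2}, xor(packets[1], packets[2])),  # degree 2
]
decoded = peel_decode(codewords, 3)  # recovers all three source packets
```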
The progress made in semiconductor chip production in recent years enables a multitude of cores on a single die. However, due to further decreasing structure sizes, fault tolerance and energy consumption will represent key challenges. Furthermore, an efficient communication infrastructure is indispensable due to the high parallelism in those systems. The predominant communication system in such highly parallel systems is a Network on Chip (NoC). The focus of this thesis is on NoCs which are based on deflection routing. In this context, contributions are made in two domains: fault tolerance and dimensioning of the optimal link width. Both aspects are essential for the application of reliable, energy-efficient, deflection-routing-based NoCs.
It is expected that future semiconductor systems will have to cope with high fault probabilities. The inherently high connectivity of most NoC topologies can be exploited to tolerate the breakdown of links and other components. In this thesis, a fault-tolerant router architecture has been developed, which stands out for its interconnection architecture and its method of overcoming complex fault situations. The presented simulation results show that all data packets arrive at their destination, even at high fault probabilities. In contrast to routing-table-based architectures, the hardware costs of the architecture presented herein are lower and, in particular, independent of the number of components in the network.
Besides fault tolerance, hardware costs and energy efficiency are of great importance. The utilized link width has a decisive influence on these aspects. In particular, in deflection-routing-based NoCs, over- and under-sizing of the link width leads to unnecessarily high hardware costs and bad performance, respectively. In the second part of this thesis, the optimal link width of deflection-routing-based NoCs is investigated. Additionally, a method to reduce the link width is introduced. Simulation and synthesis results show that the method presented herein allows a significant reduction of hardware costs at comparable performance.
Virtual reality and related media and communication technologies have a growing impact on professional application fields and our daily life. Virtual environments have the potential to change the way we perceive ourselves and how we interact with others. In comparison to other technologies, virtual reality allows for the convincing display of a virtual self-representation, an avatar, to oneself and also to others. This is referred to as user embodiment. Avatars can be of varying realism and abstraction in their appearance and in the behaviors they convey. Such user-embodying interfaces, in turn, can impact the perception of the self as well as the perception of interactions. For researchers, designers, and developers it is of particular interest to understand these perceptual impacts, to apply them to therapy, assistive applications, social platforms, or games, for example. The present thesis investigates and relates these impacts with regard to three areas: intrapersonal effects, interpersonal effects, and effects of social augmentations provided by the simulation.
With regard to intrapersonal effects, we specifically explore which simulation properties impact the illusion of owning and controlling a virtual body, as well as a perceived change in body schema. Our studies lead to the construction of an instrument to measure these dimensions, and our results indicate that these dimensions are especially affected by the level of immersion, the simulation latency, as well as the level of personalization of the avatar.
With regard to interpersonal effects, we compare physical and user-embodied social interactions, as well as different degrees of freedom in the replication of nonverbal behavior. Our results suggest that functional levels of interaction are maintained, whereas aspects of presence can be affected by avatar-mediated interactions, and collaborative motor coordination can be disturbed by immersive simulations.
Social interaction is composed of many unknown symbols and harmonic patterns that define our understanding and interpersonal rapport. For successful virtual social interactions, a mere replication of physical-world behaviors in virtual environments may seem feasible. However, the potential of mediated social interactions goes beyond this mere replication. In a third vein of research, we propose and evaluate alternative concepts on how computers can be used to actively engage in mediating social interactions, namely hybrid avatar-agent technologies. Specifically, we investigated the possibilities to augment social behaviors by modifying and transforming user input according to social phenomena and behavior, such as nonverbal mimicry, directed gaze, joint attention, and grouping. Based on our results, we argue that such technologies could be beneficial for computer-mediated social interactions, for example to compensate for lacking sensory input and disturbances in data transmission, or to increase aspects of social presence by visual substitution or amplification of social behaviors.
Based on related work and the presented findings, the present thesis proposes the perspective of considering computers as social mediators. Concluding from prototypes and empirical studies, the potential of technology to be an active mediator of social perception, with regard to the perception of the self as well as the perception of social interactions, may benefit our society by enabling further methods for diagnosis, treatment, and training, as well as the inclusion of individuals with social disorders. In this regard, we discuss implications for our society and ethical aspects. This thesis extends previous empirical work and presents novel instruments, concepts, and implications to open up new perspectives for the development of virtual reality, mixed reality, and augmented reality applications.
The DAEDALUS mission concept aims at exploring and characterising the entrance and initial part of lunar lava tubes with a compact, tightly integrated spherical robotic device carrying a complementary payload set and autonomous capabilities.
The mission concept specifically addresses the identification and characterisation of potential resources for future ESA exploration, as well as the local environment of the subsurface and its geologic and compositional structure.
A sphere is ideally suited to protect sensors and scientific equipment in rough, uneven environments.
It will house laser scanners, cameras and ancillary payloads.
The sphere will be lowered into the skylight and will explore the entrance shaft, associated caverns and conduits. Lidar (light detection and ranging) systems produce 3D models with high spatial accuracy independent of lighting conditions and visible features.
Hence this will be the primary exploration toolset within the sphere.
The additional payload that can be accommodated in the robotic sphere consists of camera systems with panoramic lenses and scanners such as multi-wavelength or single-photon scanners.
Movements of the sphere will be triggered by an internal moving mass.
The tether for lowering the sphere will be used for data communication and powering the equipment during the descending phase.
Furthermore, the tether-sphere connector will host a Wi-Fi access point, so that data from the conduit can be transferred to the surface relay station. During the exploration phase, the robot will be disconnected from the cable and will use wireless communication.
Emergency autonomy software will ensure that in case of loss of communication, the robot will continue the nominal mission.
Frequently, port scans are early indicators of more serious attacks. Unfortunately, the detection of slow port scans in company networks is challenging due to the massive amount of network data. This paper proposes an innovative approach for preprocessing flow-based data which is specifically tailored to the detection of slow port scans. The preprocessing chain generates new objects based on flow-based data aggregated over time windows, while taking domain knowledge as well as additional knowledge about the network structure into account. The computed objects are used as input for the further analysis. Based on these objects, we propose two different approaches for the detection of slow port scans. One approach is unsupervised and uses sequential hypothesis testing, whereas the other approach is supervised and uses classification algorithms. We compare both approaches with existing port scan detection algorithms on the flow-based CIDDS-001 data set. Experiments indicate that the proposed approaches achieve better detection rates and produce fewer false alarms than similar algorithms.
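The unsupervised variant based on sequential hypothesis testing can be illustrated with a threshold-random-walk-style detector in the spirit of Jung et al. This is a generic sketch with illustrative success probabilities and error rates, not the paper's exact formulation on aggregated flow objects.

```python
import math

def trw_detector(outcomes, p_benign=0.8, p_scan=0.2, alpha=0.01, beta=0.01):
    """Sequential hypothesis test over connection outcomes (True = success).
    The log-likelihood ratio walks up on failures and down on successes until
    a threshold derived from the target error rates alpha/beta is crossed."""
    upper = math.log((1 - beta) / alpha)  # accept H1: host is a scanner
    lower = math.log(beta / (1 - alpha))  # accept H0: host is benign
    llr = 0.0
    for success in outcomes:
        if success:
            llr += math.log(p_scan / p_benign)          # scans rarely succeed
        else:
            llr += math.log((1 - p_scan) / (1 - p_benign))
        if llr >= upper:
            return "scanner"
        if llr <= lower:
            return "benign"
    return "undecided"

# A run of failed connection attempts is flagged after only a few observations.
print(trw_detector([False] * 10))  # -> scanner
```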
The first step towards aerial planetary exploration has been made: Ingenuity shows extremely promising results, and new missions are already underway. Rotorcraft have proven capable of flight on other planets. This capability could be utilized to support the last stages of Entry, Descent, and Landing, allowing mass and complexity to be scaled down.
Autorotation is one method of descent: it describes unpowered descent and landing, typically performed by helicopters in case of an engine failure. MAPLE is suggested to test these procedures and to understand autorotation on other planets. In this series of experiments, the Ingenuity helicopter is utilized: Ingenuity would perform an autorotation to a "mid-air landing" before continuing with normal flight. Ultimately, the collected data shall help to understand autorotation on Mars and its utilization for interplanetary exploration.
Lightning has fascinated humanity since the beginning of our existence. Different types of lightning like sprites and blue jets have been discovered, and many more are theorized. However, it is very likely that these phenomena are not exclusive to our home planet. Venus's dense and active atmosphere is a place where lightning is to be expected. Missions like Venera, Pioneer, and Galileo have carried instruments to measure electromagnetic activity, and these measurements have indeed delivered results. However, these results are not conclusive: they could be explained by other effects like cosmic rays, plasma noise, or spacecraft noise. Furthermore, this lightning seems different from the lightning we know from our home planet. In order to tackle these issues, a different approach to measurement is proposed. When multiple devices on different spacecraft or at different locations can measure the same atmospheric discharge, most other explanations become far less likely. Thus, the instrument and method suggested for VELEX incorporate multiple spacecraft. With this approach, the question about the existence of lightning on Venus could be settled.
Learning is a central component of human life and essential for personal development. Therefore, utilizing new technologies in the learning context and exploring their combined potential are considered essential to support self-directed learning in a digital age. A learning environment can be expanded by various technical and content-related aspects. Gamification in the form of elements from video games offers a potential concept to support the learning process. This can be supplemented by technology-supported learning. While the use of tablets is already widespread in the learning context, the integration of a social robot can provide new perspectives on the learning process. However, simply adding new technologies such as social robots or gamification to existing systems may not automatically result in a better learning environment. In the present study, game elements as well as a social robot were integrated separately and conjointly into a learning environment for basic Spanish skills, with a follow-up on retained knowledge. This allowed us to investigate the respective and combined effects of both expansions on motivation, engagement and learning effect. This approach should provide insights into the integration of both additions in an adult learning context. We found that the additions of game elements and the robot did not significantly improve learning, engagement or motivation. Based on these results and a literature review, we outline relevant factors for meaningful integration of gamification and social robots in learning environments in adult learning.
Today, knowledge base authoring for the engineering of intelligent systems is performed mainly by using tools with graphical user interfaces. An alternative human-computer interaction paradigm is the maintenance and manipulation of electronic documents, which provides several advantages with respect to the social aspects of knowledge acquisition. Until today, it has hardly found any attention as a method for knowledge engineering.
This thesis provides a comprehensive discussion of document-centered knowledge acquisition with knowledge markup languages. There, electronic documents are edited by the knowledge authors and the executable knowledge base entities are captured by markup language expressions within the documents. The analysis of this approach reveals significant advantages as well as new challenges when compared to the use of traditional GUI-based tools.
Some advantages of the approach are the low barriers for domain expert participation, the simple integration of informal descriptions, and the possibility of incremental knowledge formalization. It therefore provides good conditions for building up a knowledge acquisition process based on the mixed-initiative strategy, being a flexible combination of direct and indirect knowledge acquisition. Further, it turns out that document-centered knowledge acquisition with knowledge markup languages provides high potential for creating customized knowledge authoring environments, tailored to the needs of the current knowledge engineering project and its participants. The thesis derives a process model to optimally exploit this customization potential, evolving a project-specific authoring environment by an agile process on the meta level. This meta-engineering process continuously refines the three aspects of the document space: the employed markup languages, the scope of the informal knowledge, and the structuring and organization of the documents. The evolution of the first aspect, the markup languages, plays a key role, implying the design of project-specific markup languages that are easily understood by the knowledge authors and that are suitable to capture the required formal knowledge precisely. The goal of the meta-engineering process is to create a knowledge authoring environment where structure and presentation of the domain knowledge comply well with the users' mental model of the domain. In that way, the approach can help to ease major issues of knowledge-based system development, such as high initial development costs and long-term maintenance problems.
In practice, the application of the meta-engineering approach for document-centered knowledge acquisition poses several technical challenges that need to be coped with by appropriate tool support. This thesis presents KnowWE, an extensible document-centered knowledge acquisition environment. The system is designed to support the technical tasks implied by the meta-engineering approach, such as the design and implementation of new markup languages, content refactoring, and authoring support. It is used to evaluate the approach in several real-world case studies from different domains, such as medicine and engineering.
We conclude the thesis with a summary and point out further research questions concerning the document-centered knowledge acquisition approach.
OCR4all—An open-source tool providing a (semi-)automatic OCR workflow for historical printings
(2019)
Optical Character Recognition (OCR) on historical printings is a challenging task mainly due to the complexity of the layout and the highly variant typography. Nevertheless, in the last few years, great progress has been made in the area of historical OCR, resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, character recognition, and post-processing. A frequent drawback of these tools is their limited applicability for non-technical users such as humanist scholars, in particular when several tools have to be combined in a workflow. In this paper, we present an open-source OCR software called OCR4all, which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users. This is mostly due to the fact that the required ground truth for training stronger mixed models (for segmentation as well as text recognition) is not yet available in the desired quantity or quality. To deal with this issue in the short run, OCR4all offers a comfortable GUI that allows error corrections not only in the final output, but already in early stages, to minimize error propagation. In the long run, this constant manual correction produces large quantities of valuable, high-quality training material, which can be used to improve fully automatic approaches. Furthermore, extensive configuration capabilities are provided to set the degree of automation of the workflow and to make adaptations to the carefully selected default parameters for specific printings, if necessary. During experiments, the fully automated application on 19th-century novels showed that OCR4all can considerably outperform the commercial state-of-the-art tool ABBYY Finereader on moderate layouts if suitably pretrained mixed OCR models are available.
Furthermore, on very complex early printed books, even users with minimal or no experience were able to capture the text with manageable effort and great quality, achieving excellent Character Error Rates (CERs) below 0.5%. The architecture of OCR4all allows the easy integration (or substitution) of newly developed tools for its main components via standardized interfaces such as PageXML, thus aiming at continually higher automation for historical printings.
An Intelligent Semi-Automatic Workflow for Optical Character Recognition of Historical Printings
(2020)
Optical Character Recognition (OCR) on historical printings is a challenging task mainly due to the complexity of the layout and the highly variant typography. Nevertheless, in the last few years, great progress has been made in the area of historical OCR, resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, Automatic Text Recognition (ATR), and postcorrection. Their major drawback is their limited applicability for non-technical users such as humanist scholars, in particular when it comes to the combined use of several tools in a workflow. Furthermore, depending on the material, these tools are usually not able to fully automatically achieve sufficiently low error rates, let alone perfect results, creating a demand for an interactive postcorrection functionality which, however, is generally not incorporated.
This thesis addresses these issues by presenting an open-source OCR software called OCR4all which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users. This is mostly due to the fact that the required Ground Truth (GT) for training stronger mixed models (for segmentation as well as text recognition) is not yet available in the desired quantity or quality.
To deal with this issue in the short run, OCR4all offers better recognition capabilities in combination with a very comfortable Graphical User Interface (GUI) that allows error corrections not only in the final output, but already in early stages, to minimize error propagation. In the long run, this constant manual correction produces large quantities of valuable, high-quality training material which can be used to improve fully automatic approaches. Furthermore, extensive configuration capabilities are provided to set the degree of automation of the workflow and to make adaptations to the carefully selected default parameters for specific printings, if necessary. The architecture of OCR4all allows for the easy integration (or substitution) of newly developed tools for its main components by supporting standardized interfaces like PageXML, thus aiming at continually higher automation for historical printings.
In addition to OCR4all, several methodical extensions in the form of accuracy improving techniques for training and recognition are presented. Most notably an effective, sophisticated, and adaptable voting methodology using a single ATR engine, a pretraining procedure, and an Active Learning (AL) component are proposed. Experiments showed that combining pretraining and voting significantly improves the effectiveness of book-specific training, reducing the obtained Character Error Rates (CERs) by more than 50%.
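The voting methodology developed in the thesis aligns and combines the outputs of several variants of a single ATR engine; its details are specific to the thesis. As a loose illustration only, a naive confidence-weighted, position-wise character vote (assuming equal-length outputs, which real alignment-based voting does not require) might look like this:

```python
from collections import Counter

def vote_characters(variants, confidences=None):
    """Position-wise weighted majority vote over equal-length ATR outputs.

    variants: recognized text lines from several model variants.
    confidences: optional per-variant weights (hypothetical; real systems
    typically use per-character confidences and sequence alignment).
    """
    assert len({len(v) for v in variants}) == 1, "toy version: equal lengths only"
    weights = confidences or [1.0] * len(variants)
    result = []
    for chars in zip(*variants):          # walk the outputs column by column
        tally = Counter()
        for ch, w in zip(chars, weights):
            tally[ch] += w                # accumulate weight per candidate char
        result.append(tally.most_common(1)[0][0])
    return "".join(result)
```

For example, three variants reading "tbe cat", "the cat", and "the eat" vote out both single-variant errors and yield "the cat".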
The proposed extensions were further evaluated during two real world case studies: First, the voting and pretraining techniques are transferred to the task of constructing so-called mixed models which are trained on a variety of different fonts. This was done by using 19th century Fraktur script as an example, resulting in a considerable improvement over a variety of existing open-source and commercial engines and models. Second, the extension from ATR on raw text to the adjacent topic of typography recognition was successfully addressed by thoroughly indexing a historical lexicon that heavily relies on different font types in order to encode its complex semantic structure.
During the main experiments on very complex early printed books even users with minimal or no experience were able to not only comfortably deal with the challenges presented by the complex layout, but also to recognize the text with manageable effort and great quality, achieving excellent CERs below 0.5%. Furthermore, the fully automated application on 19th century novels showed that OCR4all (average CER of 0.85%) can considerably outperform the commercial state-of-the-art tool ABBYY Finereader (5.3%) on moderate layouts if suitably pretrained mixed ATR models are available.
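The Character Error Rates (CERs) quoted above are conventionally defined as the Levenshtein edit distance between the recognized text and the ground truth, normalized by the length of the ground truth. A minimal sketch of this metric:

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance between reference and hypothesis (dynamic programming)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance normalized by reference length."""
    return levenshtein(reference, hypothesis) / len(reference)
```

A CER below 0.5% thus means fewer than one character edit per 200 ground-truth characters.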
Lidar pose tracking of a tumbling spacecraft using the smoothed normal distribution transform
(2023)
Lidar sensors enable precise pose estimation of an uncooperative spacecraft in close range. In this context, the iterative closest point (ICP) is usually employed as a tracking method. However, when the size of the point clouds increases, the required computation time of the ICP can become a limiting factor. The normal distribution transform (NDT) is an alternative algorithm which can be more efficient than the ICP, but suffers from robustness issues. In addition, lidar sensors are also subject to motion blur effects when tracking a spacecraft tumbling with a high angular velocity, leading to a loss of precision in the relative pose estimation. This work introduces a smoothed formulation of the NDT to improve the algorithm’s robustness while maintaining its efficiency. Additionally, two strategies are investigated to mitigate the effects of motion blur. The first consists in undistorting the point cloud, while the second is a continuous-time formulation of the NDT. Hardware-in-the-loop tests at the European Proximity Operations Simulator demonstrate the capability of the proposed methods to precisely track an uncooperative spacecraft under realistic conditions within tens of milliseconds, even when the spacecraft tumbles with a significant angular rate.
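As background on the NDT itself (this sketch is not the smoothed or continuous-time formulation proposed in the paper): the standard NDT represents the reference point cloud as one Gaussian per voxel and scores a candidate alignment by the Mahalanobis distances of the transformed points to their cell Gaussians. A minimal illustration, with the cell size and minimum-points threshold chosen arbitrarily:

```python
import numpy as np

def build_ndt(points: np.ndarray, cell: float):
    """Group points into voxels and fit a Gaussian (mean, covariance) per voxel."""
    cells = {}
    for p in points:
        key = tuple(np.floor(p / cell).astype(int))
        cells.setdefault(key, []).append(p)
    model = {}
    for key, pts in cells.items():
        if len(pts) < 4:                              # too few points for a stable covariance
            continue
        pts = np.asarray(pts)
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(3)        # regularize for invertibility
        model[key] = (mu, np.linalg.inv(cov))
    return model

def ndt_score(model, points, cell):
    """Alignment cost: sum of Mahalanobis distances; lower means better alignment."""
    score = 0.0
    for p in points:
        key = tuple(np.floor(p / cell).astype(int))
        if key in model:
            mu, icov = model[key]
            d = p - mu
            score += float(d @ icov @ d)              # distance to the cell Gaussian
    return score
```

Unlike the ICP, no per-point nearest-neighbor search is needed: each point is scored against a precomputed voxel statistic, which is where the efficiency advantage comes from.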
Practical optimization problems often comprise several incomparable and conflicting objectives. When booking a trip using several means of transport, for instance, it should be fast and at the same time not too expensive. The first part of this thesis is concerned with the algorithmic solvability of such multiobjective optimization problems. Several solution notions are discussed and compared with respect to their difficulty. Interestingly, these solution notions are always equally difficult for a single-objective problem, while they already differ considerably for two objectives (unless P = NP). In this context, the difference between search and decision problems is also investigated in general. Furthermore, new and improved approximation algorithms for several variants of the traveling salesperson problem are presented. Using tools from discrepancy theory, a general technique is developed that helps to avoid an obstacle that often arises in multiobjective approximation: the problem of combining two solutions such that the new solution is balanced in all objectives and also mostly retains the structure of the original solutions.

The second part of this thesis is dedicated to several aspects of systems of equations for (formal) languages. Firstly, conjunctive and Boolean grammars are studied, which are extensions of context-free grammars by explicit intersection and complementation operations, respectively. Among other results, it is shown that one can considerably restrict the union operation on conjunctive grammars without changing the generated language. Secondly, certain circuits are investigated whose gates do not compute Boolean values but sets of natural numbers. For these circuits, the equivalence problem is studied, i.e., the problem of deciding whether two given circuits compute the same set or not. It is shown that, depending on the allowed types of gates, this problem is complete for several different complexity classes and can thus be seen as a (parametrized) representative for all those classes.
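The trip-booking example can be made concrete: with several incomparable objectives, the natural solution set is the Pareto front of non-dominated solutions. A minimal sketch for minimization in every objective:

```python
def pareto_front(solutions):
    """Keep only non-dominated solutions (minimization in every objective).

    A solution a dominates b if a is no worse in every objective and
    strictly better in at least one.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]
```

For trips scored as (travel time in hours, price in euros), (4, 90) is dominated by (3, 80), while (1, 200), (2, 100), and (3, 80) are mutually incomparable and all remain on the front.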
In the last 40 years, complexity theory has grown into a rich and powerful field of theoretical computer science. The main task of complexity theory is the classification of problems with respect to their consumption of resources (e.g., running time or required memory). To study the computational complexity (i.e., consumption of resources) of problems, similar problems are grouped into so-called complexity classes. During the systematic study of numerous problems of practical relevance, no efficient algorithm was found for a great number of the studied problems. Moreover, it was unclear whether such algorithms exist. A major breakthrough in this situation was the introduction of the complexity classes P and NP and the identification of hardest problems in NP. These hardest problems of NP are nowadays known as NP-complete problems. One prominent example of an NP-complete problem is the satisfiability problem of propositional formulas (SAT). Here, a propositional formula is given as input, and it must be decided whether there exists an assignment to the propositional variables that satisfies the given formula. The intensive study of NP led to numerous related classes, e.g., the classes of the polynomial-time hierarchy PH, P, #P, PP, NL, L and #L. During the study of these classes, problems related to propositional formulas were often identified as complete problems for these classes. Hence some questions arise: Why is SAT so hard to solve? Are there modifications of SAT which are complete for other well-known complexity classes? In the context of these questions, a result by E. Post is extremely useful. He identified and characterized all classes of Boolean functions that are closed under superposition. This result makes it possible to study problems connected to generalized propositional logic, which was done in this thesis. Many different problems connected to propositional logic were studied and classified with respect to their computational complexity, clarifying the borderline between easy and hard problems.
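To make the problem statement concrete, SAT asks, for a formula in conjunctive normal form, whether some assignment satisfies every clause. The obvious brute-force decision procedure enumerates all 2^n assignments (clauses encoded DIMACS-style, a common convention):

```python
from itertools import product

def is_satisfiable(clauses, n_vars):
    """Brute-force SAT check on a CNF formula.

    clauses: list of clauses; each clause is a list of non-zero ints,
    where k means variable k and -k its negation (DIMACS style).
    Runs in O(2^n) time, reflecting that no efficient algorithm is known.
    """
    for bits in product([False, True], repeat=n_vars):
        # a clause is satisfied if at least one of its literals is true
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return True
    return False
```

For instance, (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ ¬x2 is unsatisfiable: ¬x2 forces x2 false, the first clause then forces x1 true, and the second clause fails.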
Continued reports of unidentified aerial phenomena (UAP) over the past decades have lent high relevance to their investigation and research. Especially reports by US Navy pilots and official investigations by the US Office of the Director of National Intelligence have emphasized the value of such efforts. Due to the inherently limited scope of Earth-based observations, a satellite-based instrument for the detection of such phenomena may prove especially useful. This paper therefore investigates the viability of such an instrument on a nanosatellite mission.
The introduction of new types of frequency spectrum in 6G technology facilitates the convergence of conventional mobile communications and radar functions. Thus, the mobile network itself becomes a versatile sensor system. This enables mobile network operators to offer a sensing service in addition to conventional data and telephony services. The potential benefits are expected to accrue to various stakeholders, including individuals, the environment, and society in general. The paper discusses technological development, possible integration, and use cases, as well as future development areas.