Refine
Has Fulltext
- yes (52)
Year of publication
- 2021 (52)
Document Type
- Journal article (29)
- Doctoral Thesis (13)
- Conference Proceeding (7)
- Book (1)
- Report (1)
- Working Paper (1)
Keywords
- virtual reality (9)
- artificial intelligence (3)
- augmented reality (3)
- Quality of Experience (2)
- Virtuelle Realität (2)
- avatars (2)
- human-computer interaction (2)
- immersion (2)
- immersive technologies (2)
- performance modeling (2)
- quality of experience (2)
- 3D Laser Scanning (1)
- 3D-reconstruction methods (1)
- 3DTK toolkit (1)
- Algorithmische Geometrie (1)
- Algorithmus (1)
- Angriff (1)
- Augmented Reality (1)
- Auto-Scaling (1)
- Außerschulische Bildung (1)
- Baseline Constrained LAMBDA (1)
- Benchmarking (1)
- Berechnungskomplexität (1)
- Beweissystem (1)
- Cloud Computing (1)
- Computer Science education (1)
- Computersicherheit (1)
- Computerspiel (1)
- Conjunction analysis (1)
- Corporate Network (1)
- CubeSat (1)
- CubeSat GNSS (1)
- Daedalus-Projekt (1)
- Data Mining (1)
- Datensicherung (1)
- Didaktik der Informatik (1)
- Dienstgüte (1)
- Disjoint pair (1)
- Drohne <Flugkörper> (1)
- EPM (1)
- Educational robotics (1)
- Educational robotics competitions (1)
- Eindringerkennung (1)
- Elementaggregation über dynamische Terme (1)
- Error-State Extended Kalman Filter (1)
- Erweiterte Realität <Informatik> (1)
- FRAMEWORK <Programm> (1)
- Fachdidaktik (1)
- Feature Engineering & Extraction (1)
- Flugnavigation (1)
- Flugregelung (1)
- Forecasting (1)
- Game mechanic (1)
- Gamification (1)
- Graphenzeichnen (1)
- Gravitationsmodellunsicherheit (1)
- Gravity model uncertainty (1)
- HMD (Head-Mounted Display) (1)
- HTTP adaptive video streaming (1)
- IT-Sicherheit (1)
- Industrieroboter (1)
- Informatik (1)
- Integer Expression (1)
- Integer circuit (1)
- Intelligent Virtual Agents (1)
- Interkulturelles Lernen (1)
- International Comparative Research (1)
- Intrusion Detection (1)
- Kalman-Filter (1)
- Kerneldensity estimation (1)
- Knowledge encoding (1)
- Kombinatorik (1)
- Komplexität (1)
- Konjunktionsanalyse (1)
- Konvexe Zeichnungen (1)
- Künstliche Intelligenz (1)
- Lava (1)
- Lehrerbildung (1)
- Leistungsbewertung (1)
- Lernen (1)
- Loose Coupling (1)
- Lunar Caves (1)
- Lunar Exploration (1)
- Mapping (1)
- Markov model (1)
- Markovian and Non-Markovian systems (1)
- Mathematisches Modell (1)
- Medienkompetenz (1)
- Mensch-Maschine-Kommunikation (1)
- Mensch-Maschine-Schnittstelle (1)
- Modellgetriebene Entwicklung (1)
- Mond (1)
- NP-vollständiges Problem (1)
- Netzwerkdaten (1)
- Neuronale Netze (1)
- Nutzerschnittstellen (1)
- Onboard Software (1)
- Orakel <Informatik> (1)
- Orbit determination (1)
- Orbitbestimmung (1)
- P-optimal (1)
- Phasenmehrdeutigkeit (1)
- Planare Graphen (1)
- Poisson surface reconstruction (1)
- Polyeder (1)
- Problemlösefähigkeiten (1)
- Prognose (1)
- Projektmanagement (1)
- Propositional proof system (1)
- Referenzmodell (1)
- Roboterwettbewerbe (1)
- Robotik (1)
- Rodos (1)
- SIMOC (1)
- Satellit (1)
- Sensorfusion (1)
- Serious game (1)
- Skalierbarkeit (1)
- Software-defined networking (1)
- Source Code Generation (1)
- Space Debris (1)
- Spherical Robot (1)
- Spielmechanik (1)
- Statische Analyse (1)
- TETCs (1)
- Telemetrie (1)
- Thermospheric density uncertainty (1)
- Thermosphärische Dichteunsicherheit (1)
- Ultra-Wideband (UWB) radio ranging (1)
- Umfrage (1)
- Uncertainty realism (1)
- Unmanned Aerial Vehicle (UAV) (1)
- Unsicherheitsrealismus (1)
- Veranstaltung (1)
- Videospiel (1)
- Videoübertragung (1)
- Virtualisierung (1)
- Vorgehensmodell (1)
- Wissensencodierung (1)
- XR (1)
- XR-artificial intelligence combination (1)
- XR-artificial intelligence continuum (1)
- Zeitreihenanalyse (1)
- acrophobia (1)
- affective computing (1)
- agency (1)
- agent-based models (1)
- agents (1)
- anxiety (1)
- application design (1)
- attack-aware (1)
- avatar embodiment (1)
- biosignals (1)
- clinical data warehouse (1)
- clinical study (1)
- co-authorships (1)
- co-inventorships (1)
- collaboration (1)
- communication network (1)
- communication networks (1)
- cost-sensitive learning (1)
- crowdsensing (1)
- crowdsourced measurements (1)
- crying (1)
- decision-making (1)
- denial of service (1)
- dependable software (1)
- descriptors (1)
- design cycle (1)
- detection time simulation (1)
- dimensions of proximity (1)
- discrete-time models and analysis (1)
- education (1)
- educational games (1)
- electronic data capture (1)
- electronic health records (1)
- elevated plus-maze (1)
- embedding techniques (1)
- embodiment (1)
- emotions (1)
- event detection (1)
- foreign language learning and teaching (1)
- gait disorder (1)
- game mechanics (1)
- games (1)
- hackathons (1)
- handwriting (1)
- harness free satellite (1)
- head-mounted display (1)
- heterogeneous background (1)
- human behaviour (1)
- human computer interaction (HCI) (1)
- human-artificial intelligence interaction (1)
- human-artificial intelligence interface (1)
- human-centered design (1)
- human-centered, human-robot (1)
- imbalanced regression (1)
- immersive interfaces (1)
- immersive learning technologies (1)
- infant (1)
- insect tracking (1)
- interactive authoring system (1)
- intercultural learning and teaching (1)
- interdisciplinary education (1)
- intrusion detection (1)
- iowa gambling task (1)
- laser ranging (1)
- learning environments (1)
- locomotion (1)
- machine learning (1)
- map projections (1)
- meditation (1)
- melodic interval (1)
- melody development (1)
- mindfulness (1)
- mixed reality (1)
- mixed-cultural settings (1)
- mobile networks (1)
- motivation (1)
- movement ecology (1)
- multimodal learning (1)
- multiple sclerosis (1)
- natural environment (1)
- natural language processing (1)
- natural user interfaces (1)
- network function virtualization (1)
- octree (1)
- online survey (1)
- optimization (1)
- passive haptic feedback (1)
- performance analysis (1)
- performance evaluation (1)
- physiology (1)
- place-illusion (1)
- plausibility-illusion (1)
- point cloud compression (1)
- point cloud registration (1)
- point-to-plane measure (1)
- point-to-point measure (1)
- problem solving skills (1)
- procedural content generation (1)
- psychomotor training (1)
- psychophysiology (1)
- quality of experience prediction (1)
- queueing theory (1)
- real-time (1)
- realism (1)
- recommender system (1)
- rehabilitation (1)
- research methods (1)
- sample weighting (1)
- satellite technology (1)
- secondary data usage (1)
- self-assembly (1)
- self-aware (1)
- self-supervised learning (1)
- semantic understanding (1)
- semitone (1)
- sensor devices (1)
- serious games (1)
- sensors (1)
- sketching (1)
- software-defined networking (1)
- space–terrestrial networks (1)
- spatial presence (1)
- statistical methods (1)
- statistical validity (1)
- stochastic processes (1)
- stroke (1)
- structural battery (1)
- stylus (1)
- supervised learning (1)
- system simulation (1)
- systematic literature review (1)
- systematic review (1)
- teacher education (1)
- time perception (1)
- tools (1)
- trait anxiety (1)
- translational neuroscience (1)
- transportation (1)
- usability evaluation (1)
- use cases (1)
- user experience (1)
- user interaction (1)
- user study (1)
- user-generated content (1)
- verbal behaviour (1)
- video game QoE (1)
- video game context factors (1)
- video streaming (1)
- virtual body ownership (1)
- virtual environments (1)
- virtual humans (1)
- virtual-reality-continuum (1)
- vocal development (1)
- wireless communication (1)
- wireless-bus (1)
Institute
- Institut für Informatik (52)
Other participating institutions
EU-Project number / Contract (GA) number
- 824128 (1)
Introduction:
Perception and memorizing of melody and rhythm start about the third trimester of gestation. Infants have astonishing musical predispositions, and melody contour is most salient for them.
Objective:
To longitudinally analyse melody contour of spontaneous crying of healthy infants and to identify melodic intervals. The aim was 3-fold: (1) to answer the question whether spontaneous crying of healthy infants regularly exhibits melodic intervals across the observation period, (2) to investigate whether interval events become more complex with age and (3) to analyse interval size distribution.
Methods:
Weekly cry recordings of 12 healthy infants (6 females) over the first 4 months of life were analysed (6,130 cry utterances) using frequency spectrograms and pitch analyses (PRAAT). A preselection of utterances containing a well-identifiable, noise-free and undisturbed melodic contour was applied to identify and measure melodic intervals in the final subset of 3,114 utterances. Age-dependent frequency of occurrence of melodic intervals was statistically analysed using generalized estimating equations.
Results:
85.3% of all preselected melody contours (n = 3,114) contained either single rising or falling melodic intervals or complex events combining both. In total, 6,814 melodic intervals were measured. A significant increase in interval occurrence was found, characterized by a non-linear age effect (3 developmental phases). Complex events were found to increase significantly and linearly with age. In both calculations, no sex effect was found. Interval size distribution showed a maximum at the minor second, the prevailing musical interval in infants’ crying over the first 4 months of life.
Conclusion:
Melodic intervals seem to be a regular phenomenon of spontaneous crying in healthy infants. They are suggested as a further candidate for developing an early risk marker of vocal control in infants. Subsequent studies comparing healthy infants with infants at risk for respiratory-laryngeal dysfunction are needed to investigate the diagnostic value of the occurrence of melodic intervals and their age-dependent complexification.
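The interval measure underlying such analyses is the frequency ratio between two fundamental-frequency values expressed on the logarithmic musical scale. The sketch below shows this standard conversion; the interval names are illustrative and this is not the authors' PRAAT pipeline:

```python
import math

def interval_semitones(f1_hz: float, f2_hz: float) -> float:
    """Size of the melodic interval between two fundamental
    frequencies, in semitones (positive = rising)."""
    return 12.0 * math.log2(f2_hz / f1_hz)

def classify(semitones: float) -> str:
    """Map an interval size to the nearest named interval
    (simplified to the small range relevant for infant cries)."""
    names = {0: "unison", 1: "minor second", 2: "major second",
             3: "minor third", 4: "major third", 5: "perfect fourth"}
    n = round(abs(semitones))
    return names.get(n, f"{n} semitones")

# A rise from 400 Hz by the factor 2^(1/12) is one semitone,
# i.e. the minor second that dominates the reported distribution:
print(classify(interval_semitones(400.0, 400.0 * 2 ** (1 / 12))))
```

An octave (doubling of frequency) comes out as 12 semitones, so `interval_semitones(440, 880)` returns exactly 12.0.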
Realistic and lifelike 3D-reconstruction of virtual humans has various exciting and important use cases. Our and others’ appearances have notable effects on ourselves and our interaction partners in virtual environments, e.g., on acceptance, preference, trust, believability, behavior (the Proteus effect), and more. Today, multiple approaches for the 3D-reconstruction of virtual humans exist. They significantly vary in terms of the degree of achievable realism, the technical complexities, and finally, the overall reconstruction costs involved. This article compares two 3D-reconstruction approaches with very different hardware requirements. The high-cost solution uses a typical complex and elaborated camera rig consisting of 94 digital single-lens reflex (DSLR) cameras. The recently developed low-cost solution uses a smartphone camera to create videos that capture multiple views of a person. Both methods use photogrammetric reconstruction and template fitting with the same template model and differ in their adaptation to the method-specific input material. Each method generates high-quality virtual humans ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity. We compare the results of the two 3D-reconstruction methods in an immersive virtual environment against each other in a user study. Our results indicate that the virtual humans from the low-cost approach are perceived similarly to those from the high-cost approach regarding the perceived similarity to the original, human-likeness, beauty, and uncanniness, despite significant differences in the objectively measured quality. The perceived feeling of change of the own body was higher for the low-cost virtual humans. Quality differences were perceived more strongly for one’s own body than for other virtual humans.
These days, we are living in a digitalized world. Both our professional and private lives are pervaded by various IT services, which are typically operated using distributed computing systems (e.g., cloud environments). Due to the high level of digitalization, the operators of such systems are confronted with fast-paced and changing requirements. In particular, cloud environments have to cope with load fluctuations and respective rapid and unexpected changes in the computing resource demands. To face this challenge, so-called auto-scalers, such as the threshold-based mechanism in Amazon Web Services EC2, can be employed to enable elastic scaling of the computing resources. However, despite this opportunity, business-critical applications are still run with highly overprovisioned resources to guarantee a stable and reliable service operation. This strategy is pursued due to the lack of trust in auto-scalers and the concern that inaccurate or delayed adaptations may result in financial losses.
To adapt the resource capacity in time, the future resource demands must be "foreseen", as reacting to changes once they are observed introduces an inherent delay. In other words, accurate forecasting methods are required to adapt systems proactively. A powerful approach in this context is time series forecasting, which is also applied in many other domains. The core idea is to examine past values and predict how these values will evolve as time progresses. According to the "No-Free-Lunch Theorem", there is no algorithm that performs best for all scenarios. Therefore, selecting a suitable forecasting method for a given use case is a crucial task. Simply put, each method has its benefits and drawbacks, depending on the specific use case. The choice of the forecasting method is usually based on expert knowledge, which cannot be fully automated, or on trial-and-error. In both cases, this is expensive and prone to error.
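The "No-Free-Lunch" situation described above is easy to reproduce even with the two simplest textbook forecasters. The sketch below (illustrative, not from the thesis) shows each method winning on a different series:

```python
def naive_last(history, h):
    """Forecast: repeat the last observed value h steps ahead."""
    return [history[-1]] * h

def seasonal_naive(history, h, period):
    """Forecast: repeat the last full seasonal cycle."""
    season = history[-period:]
    return [season[i % period] for i in range(h)]

def mae(actual, forecast):
    """Mean absolute error of a forecast."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# On a strongly seasonal series the seasonal method wins ...
seasonal = [10, 20, 30, 10, 20, 30]
print(mae([10, 20], seasonal_naive(seasonal, 2, 3)),   # 0.0
      mae([10, 20], naive_last(seasonal, 2)))          # 15.0

# ... while on a trendless, noisy level the naive method wins.
level = [20, 22, 18, 21, 19, 20]
print(mae([20, 21], naive_last(level, 2)),             # 0.5
      mae([20, 21], seasonal_naive(level, 2, 3)))      # 1.5
```

Neither method is best for both series, which is exactly why method selection per use case matters.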
Although auto-scaling and time series forecasting are established research fields, existing approaches cannot fully address the mentioned challenges: (i) In our survey on time series forecasting, we found that publications on time series forecasting typically consider only a small set of (mostly related) methods and evaluate their performance on a small number of time series with only a few error measures while providing no information on the execution time of the studied methods. Therefore, such articles cannot be used to guide the choice of an appropriate method for a particular use case; (ii) Existing open-source hybrid forecasting methods that take advantage of at least two methods to tackle the "No-Free-Lunch Theorem" are computationally intensive, poorly automated, designed for a particular data set, or they lack a predictable time-to-result. Methods exhibiting a high variance in the time-to-result cannot be applied for time-critical scenarios (e.g., auto-scaling), while methods tailored to a specific data set introduce restrictions on the possible use cases (e.g., forecasting only annual time series); (iii) Auto-scalers typically scale an application either proactively or reactively. Even though some hybrid auto-scalers exist, they lack sophisticated solutions to combine reactive and proactive scaling. For instance, resources are only released proactively while resource allocation is entirely done in a reactive manner (inherently delayed); (iv) The majority of existing mechanisms do not take the provider's pricing scheme into account while scaling an application in a public cloud environment, which often results in excessive charged costs. Even though some cost-aware auto-scalers have been proposed, they only consider the current resource demands, neglecting their development over time. For example, resources are often shut down prematurely, even though they might be required again soon.
To address the mentioned challenges and the shortcomings of existing work, this thesis presents three contributions: (i) The first contribution, a forecasting benchmark, addresses the problem of limited comparability between existing forecasting methods; (ii) The second contribution, Telescope, provides an automated hybrid time series forecasting method addressing the challenge posed by the "No-Free-Lunch Theorem"; (iii) The third contribution, Chamulteon, provides a novel hybrid auto-scaler for coordinated scaling of applications comprising multiple services, leveraging Telescope to forecast the workload intensity as a basis for proactive resource provisioning. In the following, the three contributions of the thesis are summarized:
Contribution I - Forecasting Benchmark
To establish a level playing field for evaluating the performance of forecasting methods in a broad setting, we propose a novel benchmark that automatically evaluates and ranks forecasting methods based on their performance in a diverse set of evaluation scenarios. The benchmark comprises four different use cases, each covering 100 heterogeneous time series taken from different domains. The data set was assembled from publicly available time series and was designed to exhibit much higher diversity than existing forecasting competitions. Besides proposing a new data set, we introduce two new measures that describe different aspects of a forecast. We applied the developed benchmark to evaluate Telescope.
Contribution II - Telescope
To provide a generic forecasting method, we introduce a novel machine learning-based forecasting approach that automatically retrieves relevant information from a given time series. More precisely, Telescope automatically extracts intrinsic time series features and then decomposes the time series into components, building a forecasting model for each of them. Each component is forecast by applying a different method and then the final forecast is assembled from the forecast components by employing a regression-based machine learning algorithm. In more than 1300 hours of experiments benchmarking 15 competing methods (including approaches from Uber and Facebook) on 400 time series, Telescope outperformed all methods, exhibiting the best forecast accuracy coupled with a low and reliable time-to-result. Compared to the competing methods that exhibited, on average, a forecast error (more precisely, the symmetric mean absolute forecast error) of 29%, Telescope exhibited an error of 20% while being 2556 times faster. In particular, the methods from Uber and Facebook exhibited an error of 48% and 36%, and were 7334 and 19 times slower than Telescope, respectively.
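The decompose-forecast-recombine idea can be sketched under strong simplifications: linear trend extrapolation, seasonal means, a remainder forecast of zero, and plain addition instead of Telescope's learned regression-based combination (all of which are stand-ins for the real method):

```python
def linear_fit(ys):
    """Ordinary least-squares line a + b*t through (t, ys[t])."""
    n = len(ys)
    mt, my = (n - 1) / 2, sum(ys) / n
    cov = sum((t - mt) * (y - my) for t, y in enumerate(ys))
    var = sum((t - mt) ** 2 for t in range(n))
    b = cov / var
    return my - b * mt, b

def hybrid_forecast(series, period, h):
    """Forecast each component with its own method, then recombine:
    line extrapolation for the trend, cycle repetition for the
    season, and zero for the remainder."""
    a, b = linear_fit(series)
    detrended = [y - (a + b * t) for t, y in enumerate(series)]
    season = [sum(detrended[i::period]) / len(detrended[i::period])
              for i in range(period)]
    n = len(series)
    return [a + b * (n + i) + season[(n + i) % period] for i in range(h)]

# Trend of +1 per step plus a period-3 season of (+2, -4, +2):
series = [2, -3, 4, 5, 0, 7, 8, 3, 10]
print(hybrid_forecast(series, 3, 3))  # [11.0, 6.0, 13.0]
```

Because trend and season are forecast independently, each component can in principle be handled by whichever method suits it best, which is the core of the hybrid approach.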
Contribution III - Chamulteon
To enable reliable auto-scaling, we present a hybrid auto-scaler that combines proactive and reactive techniques to scale distributed cloud applications comprising multiple services in a coordinated and cost-effective manner. More precisely, proactive adaptations are planned based on forecasts of Telescope, while reactive adaptations are triggered based on actual observations of the monitored load intensity. To solve occurring conflicts between reactive and proactive adaptations, a complex conflict resolution algorithm is implemented. Moreover, when deployed in public cloud environments, Chamulteon reviews adaptations with respect to the cloud provider's pricing scheme in order to minimize the charged costs. In more than 400 hours of experiments evaluating five competing auto-scaling mechanisms in scenarios covering five different workloads, four different applications, and three different cloud environments, Chamulteon exhibited the best auto-scaling performance and reliability while at the same time reducing the charged costs. The competing methods provided insufficient resources for (on average) 31% of the experimental time; in contrast, Chamulteon cut this time to 8% and the SLO (service level objective) violations from 18% to 6% while using up to 15% less resources and reducing the charged costs by up to 45%.
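The combination of reactive and proactive scaling can be sketched as follows; the parameters and the max-based conflict resolution are illustrative simplifications, not Chamulteon's actual algorithm:

```python
import math

def plan_capacity(observed_load, forecast_load, unit_capacity,
                  reactive_headroom=0.8):
    """Hybrid scaling decision (simplified): a proactive plan derived
    from the forecast, floored by a reactive rule on the currently
    observed load so that a proactive release never undercuts
    reactive safety."""
    proactive = math.ceil(forecast_load / unit_capacity)
    reactive = math.ceil(observed_load / (unit_capacity * reactive_headroom))
    return max(proactive, reactive)

# The forecast expects a surge to 420 req/s while 250 req/s are
# observed; with 100 req/s per instance the proactive plan wins:
print(plan_capacity(250, 420, 100))  # 5
```

When the forecast underestimates the load, the reactive term dominates instead, e.g. `plan_capacity(500, 100, 100)` keeps 7 instances despite the low forecast.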
The contributions of this thesis can be seen as major milestones in the domain of time series forecasting and cloud resource management. (i) This thesis is the first to present a forecasting benchmark that covers a variety of different domains with a high diversity between the analyzed time series. Based on the provided data set and the automatic evaluation procedure, the proposed benchmark contributes to enhance the comparability of forecasting methods. The benchmarking results for different forecasting methods enable the selection of the most appropriate forecasting method for a given use case. (ii) Telescope provides the first generic and fully automated time series forecasting approach that delivers both accurate and reliable forecasts while making no assumptions about the analyzed time series. Hence, it eliminates the need for expensive, time-consuming, and error-prone procedures, such as trial-and-error searches or consulting an expert. This opens up new possibilities especially in time-critical scenarios, where Telescope can provide accurate forecasts with a short and reliable time-to-result.
Although Telescope was applied for this thesis in the field of cloud computing, there is absolutely no limitation regarding the applicability of Telescope in other domains, as demonstrated in the evaluation. Moreover, Telescope, which was made available on GitHub, is already used in a number of interdisciplinary data science projects, for instance, predictive maintenance in an Industry 4.0 context, heart failure prediction in medicine, or as a component of predictive models of beehive development. (iii) In the context of cloud resource management, Chamulteon is a major milestone for increasing the trust in cloud auto-scalers. The complex resolution algorithm enables reliable and accurate scaling behavior that reduces losses caused by excessive resource allocation or SLO violations. In other words, Chamulteon provides reliable online adaptations minimizing charged costs while at the same time maximizing user experience.
Immersive virtual environments provide users with the opportunity to escape from the real world, but scripted dialogues can disrupt the sense of presence within the very world the user is trying to escape into. Both Non-Playable Character (NPC) to Player and NPC to NPC dialogue can feel unnatural, and the reliance on pre-defined responses does not always meet the player's emotional expectations or provide responses appropriate to the given context or world state. This paper investigates the application of Artificial Intelligence (AI) and Natural Language Processing to generate dynamic human-like responses within a themed virtual world. Each thematic setting has been analysed against human-generated responses for the same seed and demonstrates invariance of rating across a range of model sizes, but shows an effect of theme and of the size of the corpus used for fine-tuning the context for the game world.
This thesis is divided into two parts.
In the first part we contribute to a research program initiated by Pudlák (2017), who lists several major complexity-theoretic conjectures relevant to proof complexity and asks for oracles that separate pairs of corresponding relativized conjectures. Among these conjectures are:
- \(\mathsf{CON}\) and \(\mathsf{SAT}\): coNP (resp., NP) does not contain complete sets that have P-optimal proof systems.
- \(\mathsf{CON}^{\mathsf{N}}\): coNP does not contain complete sets that have optimal proof systems.
- \(\mathsf{TFNP}\): there do not exist complete total polynomial search problems (also known as total NP search problems).
- \(\mathsf{DisjNP}\) and \(\mathsf{DisjCoNP}\): There do not exist complete disjoint NP pairs (coNP pairs).
- \(\mathsf{UP}\): UP does not contain complete problems.
- \(\mathsf{NP}\cap\mathsf{coNP}\): \(\mathrm{NP}\cap\mathrm{coNP}\) does not contain complete problems.
- \(\mathrm{P}\ne\mathrm{NP}\).
We construct several of the oracles that Pudlák asks for.
In the second part we investigate the computational complexity of balance problems for \(\{-,\cdot\}\)-circuits computing finite sets of natural numbers (note that \(-\) denotes the set difference). These problems naturally build on problems for integer expressions and integer circuits studied by Stockmeyer and Meyer (1973), McKenzie and Wagner (2007), and Glaßer et al. (2010).
Our work shows that the balance problem for \(\{-,\cdot\}\)-circuits is undecidable. This is the first natural problem for integer circuits or related constraint satisfaction problems that admits only one arithmetic operation and is proven to be undecidable.
Starting from this result we precisely characterize the complexity of balance problems for proper subsets of \(\{-,\cdot\}\). These problems turn out to be complete for one of the classes L, NL, and NP.
Dynamic point cloud compression based on projections, surface reconstruction and video compression
(2021)
In this paper we present a new dynamic point cloud compression scheme based on different projection types and bit depths, combined with a surface reconstruction algorithm and video compression for the obtained geometry and texture maps. Texture maps are compressed after creating Voronoi diagrams. The video compression used is format-specific: FFV1 for geometry and H.265/HEVC for texture. Decompressed point clouds are reconstructed using a Poisson surface reconstruction algorithm. Comparison with the original point clouds was performed using point-to-point and point-to-plane measures. Comprehensive experiments show better performance for some projection maps: the cylindrical, Miller, and Mercator projections.
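The two quality measures mentioned above can be sketched in a few lines; this is a toy reimplementation of the standard definitions, not the evaluation code used in the paper:

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def point_to_point(p, cloud):
    """Point-to-point: distance from p to its nearest neighbor
    in the reference cloud (brute force)."""
    return min(dist(p, q) for q in cloud)

def point_to_plane(p, q, normal):
    """Point-to-plane: the error projected onto the unit normal at
    the nearest reference point q, i.e. the distance to the local
    tangent plane."""
    return abs(sum((a - b) * n for a, b, n in zip(p, q, normal)))

# A decoded point hovering 0.1 above a flat patch (normal = z axis):
ref = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
p = (0.6, 0.0, 0.1)
print(point_to_point(p, ref))                    # ≈ 0.412, full 3D error
print(point_to_plane(p, (1, 0, 0), (0, 0, 1)))   # 0.1, off-surface part only
```

The point-to-plane measure ignores sliding along the surface, which is why it is commonly preferred when comparing reconstructed surfaces rather than raw point positions.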
Mindfulness is considered an important factor of an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, e.g., by inventing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptive and cognitive manipulations, content presentation, interaction, and agency. These facilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment. However, a successful exploitation of XR that strengthens mindfulness requires a systematic analysis of the potential interrelation and influencing mechanisms between XR technology, its properties, factors, and phenomena and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers of ACM Digital Library and National Institutes of Health's National Library of Medicine (PubMed) with and without empirical efficacy evaluation were included in our analysis. The results reveal that at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments.
While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as mindfulness support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions.
Natural walking in virtual reality games is constrained by the physical boundaries defined by the size of the player’s tracking space. Impossible spaces, a redirected walking technique, enlarge the virtual environment by creating overlapping architecture and letting multiple locations occupy the same physical space. Within certain thresholds, this is subtle to the player. In this paper, we present our approach to implement such impossible spaces and describe how we handled challenges like objects with simulated physics or precomputed global illumination.
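The core bookkeeping of an impossible space can be sketched as a small state machine; room names and the trigger mechanism are illustrative, and in practice an engine such as Unity or Unreal would drive this from trigger volumes:

```python
class ImpossibleSpace:
    """Two overlapping virtual rooms mapped onto one physical area.
    Only the room the player last entered is rendered, so both rooms
    can occupy the same tracked space without the player noticing."""

    def __init__(self):
        self.active_room = "gallery"

    def on_doorway_crossed(self, target_room):
        # Called when the player crosses the connecting corridor;
        # swapping rooms while neither is in view keeps the overlap
        # subtle to the player.
        self.active_room = target_room

    def visible(self, room):
        return room == self.active_room

space = ImpossibleSpace()
space.on_doorway_crossed("library")
print(space.visible("gallery"), space.visible("library"))  # False True
```

The paper's challenges (simulated physics, precomputed global illumination) arise precisely because objects in the inactive room must be frozen or re-lit consistently while it is swapped out.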
Corfu is a framework for satellite software, covering not only the onboard part but also the ground segment. Developing software with Corfu follows an iterative, model-driven approach. The basis of the process is an engineering model: engineers formally describe the basic structure of the onboard software in configuration files, which constitute the engineering model. In a first step, Corfu verifies the model at different levels, not only syntactically and semantically but also at higher levels such as scheduling.
Based on the model, Corfu generates a software scaffold that follows an application-centric approach. Onboard software images consist of a list of applications connected through communication channels called topics. Corfu’s generic and generated code covers this fundamental communication as well as telecommand and telemetry handling. All users have to do is inherit from a generated class and implement the behavior in overridden methods. For each application, the generator creates an abstract class with pure virtual methods. Those methods are callback functions, e.g., for handling telecommands or executing code in threads.
However, the software implementation by users cannot be foreseen from the model alone. Therefore, as an innovation compared to other frameworks, Corfu introduces feedback from the user code back to the model. In this way, we extend the engineering model with information about functions/methods, their invocations, their stack usage, and information about events and telemetry emission. Further information extraction could be added for additional use cases. We extract the information in two ways: assembly analysis and source code analysis. The assembly analysis collects information about the stack usage of functions and methods.
On the one hand, Corfu uses the gathered information to accomplish additional verification steps, e.g., checking whether stack usage exceeds the stack sizes of threads. On the other hand, we use the gathered information to improve the performance of the onboard software. In a use case, we show how the compiled binary and the bandwidth to ground can be reduced by exploiting source code information at run-time.
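The application-centric callback pattern described above can be sketched as follows. Note that this is a conceptual Python sketch with hypothetical class and topic names; Corfu's generator actually emits C++ classes with pure virtual methods:

```python
from abc import ABC, abstractmethod

class GeneratedApp(ABC):
    """Stand-in for the per-application scaffold Corfu generates:
    topic-based communication plus abstract callback hooks."""

    def publish(self, topic, value):
        # Stand-in for the generated topic-based communication;
        # returns the message instead of sending it.
        return f"topic {topic}: {value}"

    @abstractmethod
    def handle_telecommand(self, tc):
        """Callback invoked for each received telecommand."""

    @abstractmethod
    def step(self):
        """Callback executed periodically in the application's thread."""

class HousekeepingApp(GeneratedApp):
    # User code only inherits and overrides the generated callbacks.
    def handle_telecommand(self, tc):
        return self.publish("ack", tc)

    def step(self):
        return self.publish("housekeeping", "battery_v=7.4")

print(HousekeepingApp().handle_telecommand("PING"))  # topic ack: PING
```

The split between generated base class and user subclass is what lets the framework own communication, telecommand, and telemetry handling while user code stays limited to behavior.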
The capabilities of small satellites have improved significantly in recent years. Multi-satellite systems in particular are becoming increasingly popular, since they enable new applications. The development and testing of these multi-satellite systems is a new challenge for engineers and requires appropriate development and testing environments. In this paper, ESTNeT, a modular network simulation framework for space–terrestrial systems, is presented. It enables discrete event simulations for the development and testing of communication protocols, as well as mission-based analysis of other satellite system aspects, such as power supply and attitude control. ESTNeT is based on the discrete event simulator OMNeT++ and will be released under an open source license.
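The discrete-event core underlying such a simulator can be sketched in a few lines. This is a toy illustration of the event-queue principle that OMNeT++ implements, not ESTNeT code, and the scenario names are made up:

```python
import heapq

class DiscreteEventSim:
    """Minimal discrete-event kernel: a clock that jumps between
    time-stamped events held in a priority queue."""

    def __init__(self):
        self.now, self._queue, self._seq = 0.0, [], 0

    def schedule(self, delay, handler):
        # The sequence number breaks timestamp ties so handler
        # functions never need to be compared with each other.
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, handler = heapq.heappop(self._queue)
            handler(self)

# Toy satellite pass: a ground contact window opens after 10 s and
# a packet scheduled inside the window is delivered at 12.5 s.
log = []
sim = DiscreteEventSim()
sim.schedule(10.0, lambda s: log.append(("contact", s.now)))
sim.schedule(12.5, lambda s: log.append(("packet", s.now)))
sim.run()
print(log)  # [('contact', 10.0), ('packet', 12.5)]
```

Because simulated time advances only at event timestamps, long orbital timescales cost nothing between events, which is what makes this model attractive for satellite communication studies.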