Detecting anomalies in transaction data is an important task with a high potential to avoid financial loss due to irregularities carried out deliberately or inadvertently, such as credit card fraud, occupational fraud in companies, or ordering and accounting errors. With the ongoing digitization of our world, data-driven approaches, including machine learning, can benefit from data with less manual effort and feature engineering. A large variety of machine learning-based anomaly detection methods approach this by learning a precise model of normality from which anomalies can be distinguished. Modeling normality in transactional data, however, requires capturing distributions and dependencies within the data precisely, with special attention to numerical dependencies such as quantities, prices, or amounts.
To implicitly model numerical dependencies, Neural Arithmetic Logic Units have been proposed as a neural architecture. In practice, however, these have stability and precision issues.
Therefore, we first develop an improved neural network architecture, iNALU, which is designed to better model numerical dependencies as found in transaction data. We compare this architecture to the previous approach and show in several experiments of varying complexity that our novel architecture provides better precision and stability.
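For reference, the baseline NALU formulation that iNALU improves upon can be sketched as follows. This is a simplified illustration of the original architecture (Trask et al., 2018), not the iNALU design itself; the shared weight matrix and the log(|x|) term shown here are among the sources of the instability discussed above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NALUCell(nn.Module):
    """Baseline NALU cell: a gated mix of an additive path W x and a
    multiplicative path exp(W log(|x| + eps))."""
    def __init__(self, in_dim, out_dim, eps=1e-7):
        super().__init__()
        self.eps = eps
        self.W_hat = nn.Parameter(torch.empty(out_dim, in_dim))
        self.M_hat = nn.Parameter(torch.empty(out_dim, in_dim))
        self.G = nn.Parameter(torch.empty(out_dim, in_dim))
        for p in (self.W_hat, self.M_hat, self.G):
            nn.init.xavier_uniform_(p)

    def forward(self, x):
        # Weights are pushed towards {-1, 0, 1} to favour exact arithmetic.
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        add_path = F.linear(x, W)                      # addition/subtraction
        mul_path = torch.exp(F.linear(torch.log(torch.abs(x) + self.eps), W))
        gate = torch.sigmoid(F.linear(x, self.G))      # chooses between the paths
        return gate * add_path + (1.0 - gate) * mul_path
```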
We integrate this architecture into two generative neural network models adapted for transaction data and investigate how well normal behavior is modeled. We show that both architectures can successfully model normal transaction data, with our neural architecture improving generative performance for one model.
Since categorical and numerical variables are common in transaction data, but many machine learning methods only process numerical representations, we explore different representation learning techniques to transform categorical transaction data into dense numerical vectors. We extend this approach by proposing an outlier-aware discretization, thus incorporating numerical attributes into the computation of categorical embeddings, and investigate the resulting latent spaces as well as the quantitative anomaly detection performance.
Next, we evaluate different scenarios for anomaly detection on transaction data. We extend our iNALU architecture to a neural layer that can model both numerical and non-numerical dependencies and evaluate it in a supervised and one-class setting. We investigate the stability and generalizability of our approach and show that it outperforms a variety of models in the balanced supervised setting and performs comparably in the one-class setting. Finally, we evaluate three approaches to using a generative model as an anomaly detector and compare the anomaly detection performance.
Latency is an inherent problem of computing systems: each computation takes time until the result is available. Virtual reality systems use elaborate computing resources to create virtual experiences. The latency of those systems is often ignored or assumed to be small enough to provide a good experience.
This cumulative thesis comprises published, peer-reviewed research papers exploring the behaviour and effects of latency. Contrary to the common description of latency as time-invariant, latency is shown to fluctuate. Few other researchers have looked into this time-variant behaviour. This thesis explores time-variant latency with a focus on randomly occurring latency spikes. Latency spikes are observed both for small algorithms and as end-to-end latency in complete virtual reality systems. Most latency measurements cluster close to the mean latency, with potentially multiple smaller clusters of larger latency values and rare extreme outliers. The latency behaviour differs between implementations of an algorithm. Operating system schedulers and programming language environments such as garbage collectors contribute to the overall latency behaviour. The thesis demonstrates these influences using different implementations of message passing as an example.
The plethora of latency sources results in unpredictable latency behaviour. Measuring and reporting this behaviour in scientific experiments is important. This thesis describes established approaches to measuring latency and proposes an enhanced setup to gather detailed information. The thesis further proposes to dissect the measured data with a stacked z-outlier test to separate the clusters of latency measurements for better reporting.
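The exact test is defined in the corresponding publication; as a rough, hedged sketch of the idea, one can repeatedly apply a z-score outlier test and peel the detected outliers into separate clusters:

```python
import numpy as np

def stacked_z_outlier_split(latencies_ms, z_threshold=3.0, max_rounds=10):
    """Illustrative only (not necessarily the thesis' exact procedure):
    repeatedly apply a z-score outlier test and move detected outliers into
    their own cluster, so the bulk of the measurements and the spike clusters
    can be reported separately."""
    remaining = np.asarray(latencies_ms, dtype=float)
    spike_clusters = []
    for _ in range(max_rounds):
        if remaining.size < 2 or remaining.std() == 0:
            break
        z = (remaining - remaining.mean()) / remaining.std()
        outliers = remaining[np.abs(z) > z_threshold]
        if outliers.size == 0:
            break
        spike_clusters.append(outliers)
        remaining = remaining[np.abs(z) <= z_threshold]
    return remaining, spike_clusters   # bulk of measurements, list of spike clusters
```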
Latency in virtual reality applications can degrade the experience in multiple ways. The thesis focuses on cybersickness as a major detrimental effect. An approach to simulating time-variant latency is proposed to make latency available as an independent variable in experiments, so that its effects can be studied. An experiment with modified latency shows that latency spikes can contribute to cybersickness. A review of related research shows that different time-invariant latency behaviour also contributes to cybersickness.
An enduring engineering problem is the creation of unreliable software leading to unreliable systems. One reason for this is that source code is written in a complicated manner, making it too hard for humans to review and understand. Complicated code leads to other issues beyond dependability, such as expanded development efforts and ongoing difficulties with maintenance, ultimately costing developers and users more money.
There are many ideas regarding where the blame lies in the creation of buggy and unreliable systems. One prevalent idea is that the selected life cycle model is to blame. The oft-maligned “waterfall” life cycle model is a particularly popular recipient of blame. In response, many organizations changed their life cycle model in hopes of addressing these issues. Agile life cycle models have become very popular, and they promote communication between team members and end users. In theory, this communication leads to fewer misunderstandings and should lead to less complicated and more reliable code.
Changing the life cycle model can indeed address communication issues, which can resolve many problems with understanding requirements.
However, most life cycle models do not specifically address coding practices or software architecture. Since life cycle models do not address the structure of the code, they are often ineffective at addressing problems related to code complicacy.
This dissertation answers several research questions concerning software complicacy, beginning with an investigation of traditional metrics and static analysis to evaluate their usefulness as measurement tools. This dissertation also establishes a new concept in applied linguistics by creating a measurement of software complicacy based on linguistic economy. Linguistic economy describes the efficiencies of speech, and this thesis shows the applicability of linguistic economy to software. Embedded in each topic is a discussion
of the ramifications of overly complicated software, including the relationship of complicacy to software faults. Image recognition using machine learning is also investigated as a potential method of identifying problematic source code.
The central part of the work focuses on analyzing the source code of hundreds of different projects from different areas. A static analysis was performed on the source code of each project, and traditional software metrics were calculated. Programs were also analyzed using techniques developed by linguists to measure expression and statement complicacy and identifier complicacy. Professional software engineers were also directly surveyed to understand mainstream perspectives.
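The concrete linguistic-economy measures are defined in the thesis itself; as a toy illustration of the kind of raw identifier data such an analysis could start from (the function and the reported fields are purely hypothetical), consider:

```python
import io
import keyword
import tokenize

def identifier_stats(python_source: str):
    """Toy example, not the thesis' metric: collect identifier lengths and
    usage counts from a piece of Python source via the standard tokenizer."""
    lengths, counts = [], {}
    for tok in tokenize.generate_tokens(io.StringIO(python_source).readline):
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            lengths.append(len(tok.string))
            counts[tok.string] = counts.get(tok.string, 0) + 1
    avg_len = sum(lengths) / len(lengths) if lengths else 0.0
    return {"distinct_identifiers": len(counts), "avg_identifier_length": avg_len}
```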
This work shows that traditional metrics can serve as indicators of potential project bugginess. It also shows that image recognition can identify problematic pieces of source code. Finally, it shows that linguistic methods can determine which statements and expressions are least desirable and most complicated for programmers.
This work’s principal conclusion is that there are multiple ways to discover traits indicating that a project or a piece of source code has characteristics of being buggy. Traditional metrics and static analysis can be used to gain some understanding of software complicacy and bugginess potential. Linguistic economy provides a new tool for measuring software complicacy, and machine learning can predict where bugs may lie in source code. The significant implication of this work is that developers can recognize when a project is becoming buggy and take practical steps to avoid creating buggy projects.
With the miniaturization of satellites, a fundamental change took place in the space industry. Instead of single, big, monolithic satellites, more and more systems are nowadays envisaged that consist of a number of small satellites forming cooperating systems in space. The lower costs for development and launch as well as the spatial distribution of these systems enable the implementation of new scientific missions and commercial services.
With this paradigm shift new challenges constantly emerge for satellite developers, particularly in the area of wireless communication systems and network protocols.
Satellites in low Earth orbits and ground stations form dynamic space-terrestrial networks. The characteristics of these networks differ fundamentally from those of other networks.
The resulting challenges with regard to communication system design, system analysis, packet forwarding, routing and medium access control as well as challenges concerning the reliability and efficiency of wireless communication links are addressed in this thesis.
The physical modeling of space-terrestrial networks is addressed by analyzing existing satellite systems and communication devices, by evaluating measurements and by implementing a simulator for space-terrestrial networks.
The resulting system and channel models were used as a basis for the prediction of the dynamic network topologies, link properties, and channel interference. These predictions allowed for the implementation of efficient routing and medium access control schemes for space-terrestrial networks. Further, the implementation and utilization of software-defined ground stations are addressed, and a data upload scheme for the operation of small satellite formations is presented.
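As a minimal illustration of topology prediction in such networks (not the simulator developed in this thesis), contact windows between a satellite and a ground station can be derived from a sampled elevation-angle series:

```python
def contact_windows(times, elevation_deg, min_elevation_deg=10.0):
    """Toy sketch: given timestamps and the satellite's elevation angle as seen
    from a ground station, return the intervals in which the link is assumed
    to be usable (elevation above a minimum threshold)."""
    windows, start = [], None
    for t, elev in zip(times, elevation_deg):
        if elev >= min_elevation_deg and start is None:
            start = t                          # pass begins
        elif elev < min_elevation_deg and start is not None:
            windows.append((start, t))         # pass ends
            start = None
    if start is not None:
        windows.append((start, times[-1]))     # pass still ongoing at end of data
    return windows
```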
Today’s cloud data centers consume an enormous amount of energy, and their energy consumption will rise further in the future. An estimate from 2012 found that data centers consume about 30 billion watts of power, resulting in about 263 TWh of energy usage per year. Energy consumption is projected to rise to 1929 TWh by 2030. This projected rise in energy demand is fueled by a growing number of services deployed in the cloud. 50% of enterprise workloads have been migrated to the cloud in the last decade. Additionally, an increasing number of devices use the cloud to provide functionalities, causing data centers to grow. Estimates say more than 75 billion IoT devices will be in use by 2025.
The growing energy demand also increases CO2 emissions. Assuming a CO2 intensity of 200 g CO2 per kWh, this amounts to close to 227 billion tons of CO2, more than the emissions of all energy-producing power plants in Germany in 2020.
However, data centers consume energy because they respond to service requests that are fulfilled through computing resources. Hence, it is not the users and devices that consume the energy in the data center but the software that controls the hardware. While the hardware is physically consuming energy, it is not always responsible for wasting energy. The software itself plays a vital role in reducing the energy consumption and CO2 emissions of data centers. The scenario of our thesis is, therefore, focused on software development.
Nevertheless, we must first show developers that software contributes to energy consumption by providing evidence of its influence. The second step is to provide methods to assess an application’s power consumption during different phases of the development process and to support modern DevOps and agile development methods. We therefore need an automatic selection of system-level energy-consumption models that can accommodate rapid changes in the source code, as well as application-level models that allow developers to locate power-consuming software parts for continuous improvement. Afterward, we need emulation to assess the energy efficiency before the actual deployment.
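A widely used baseline for such system-level models is a linear, utilization-based power model. The sketch below uses placeholder idle and peak values and is not the model developed in this thesis:

```python
def linear_power_model(cpu_utilization, p_idle_watts=100.0, p_max_watts=250.0):
    """Baseline system-level model: power grows linearly with CPU utilization
    between an idle and a peak value (values here are placeholders)."""
    u = min(max(cpu_utilization, 0.0), 1.0)
    return p_idle_watts + (p_max_watts - p_idle_watts) * u

def energy_joules(utilization_trace, sample_interval_s=1.0):
    """Integrate the modeled power over a sampled utilization trace."""
    return sum(linear_power_model(u) * sample_interval_s for u in utilization_trace)
```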
The application of Wireless Sensor Networks (WSNs) with a large number of tiny, cost-efficient, battery-powered sensor nodes that are able to communicate directly with each other poses many challenges.
Due to the large number of communicating objects, and despite the use of a CSMA/CA MAC protocol, there may be many signal collisions.
In addition, WSNs frequently operate under harsh conditions and nodes are often prone to failure, for example, due to a depleted battery or unreliable components.
Thus, nodes or even large parts of the network can fail.
These aspects make reliable data dissemination and data storage a key issue.
Therefore, these issues are addressed herein while keeping latency low, throughput high, and energy consumption low.
Furthermore, simplicity as well as robustness to changes in conditions are essential here.
In order to achieve these aims, a certain amount of redundancy has to be included.
This can be realized, for example, by using network coding.
Existing approaches, however, often only perform well under certain conditions or for a specific scenario, have to perform a time-consuming initialization, require complex calculations, or do not provide the possibility of early decoding.
Therefore, we developed a network coding procedure called Broadcast Growth Codes (BCGC) for reliable data dissemination, which performs well under a broad range of diverse conditions.
These include, for example, a high probability of signal collisions, any degree of node mobility, a large number of nodes, or node failures.
BCGC do not require complex initialization and only use simple XOR operations for encoding and decoding.
Furthermore, decoding can be started as soon as a first packet/codeword has been received.
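To illustrate the general principle behind such XOR-based codes (this is a generic peeling decoder, not the exact BCGC procedure), each received codeword is the XOR of a set of source packets and is resolved as soon as all but one of its sources are known:

```python
def peel_decode(received, num_sources):
    """Generic peeling decoder for XOR-coded packets. Each received codeword
    is (set_of_source_ids, xor_of_their_payloads); decoding can begin with the
    very first degree-1 codeword."""
    decoded = {}                                    # source id -> payload
    pending = [[set(ids), payload] for ids, payload in received]
    progress = True
    while progress and len(decoded) < num_sources:
        progress = False
        still_pending = []
        for ids, payload in pending:
            for sid in list(ids):                   # XOR out already-known sources
                if sid in decoded:
                    ids.discard(sid)
                    payload ^= decoded[sid]
            if len(ids) == 1:                       # degree 1: source recovered
                decoded[ids.pop()] = payload
                progress = True
            elif len(ids) > 1:
                still_pending.append([ids, payload])
        pending = still_pending
    return decoded

# Example: three source payloads 0xA, 0xB, 0xC received as XOR combinations.
received = [({0}, 0xA), ({0, 1}, 0xA ^ 0xB), ({1, 2}, 0xB ^ 0xC)]
print(peel_decode(received, 3))   # {0: 10, 1: 11, 2: 12}
```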
Evaluations using an in-house network simulator as well as a real-world testbed showed that BCGC enhance reliability and enable data to be retrieved dependably despite an unreliable network.
In terms of latency, throughput, and energy consumption, depending on the conditions and the procedure being compared, BCGC achieve the same performance as or even significantly outperform existing procedures, while being robust to changes in conditions, keeping node complexity low, and allowing early decoding.
In today's world, circumstances, processes, and requirements for systems in general, and for Cyber-Physical Systems (CPS) in particular, which are the special focus of this thesis, are becoming increasingly complex and dynamic.
In order to operate properly in such dynamic environments, systems must adapt to dynamic changes, which has led to the research area of Self-Adaptive Systems (SAS).
These systems can deal with changes in their environment and the system itself.
In our daily lives, we come into contact with many different self-adaptive systems that are designed to support and improve our way of life.
In this work we focus on the two domains of Intelligent Transportation Systems (ITS) and logistics, as both domains provide complex and adaptable use cases in which the contributions of this thesis can be prototypically applied.
However, the contributions are not limited to these areas and can also be generalized to other domains, such as the broader area of CPS and the Internet of Things, including smart grids or even intelligent computer networks.
In ITS, real-time traffic control is an example of an adaptive system that monitors the environment, analyzes observations, and plans and executes adaptation actions.
Another example is platooning, which is the ability of vehicles to drive with close inter-vehicle distances.
This technology enables an increase in road throughput and safety, which directly addresses the increased infrastructure needs due to increased traffic on the roads.
In logistics, the Vehicle Routing Problem (VRP) deals with the planning of road freight transport tours.
To cope with the ever-increasing transport volume due to the rise of just-in-time production and online shopping, efficient and correct route planning for transports is important.
Further, warehouses play a central role in any company's supply chain and contribute to the logistical success.
The processes of storage assignment and order picking are the two main tasks in mezzanine warehouses highly affected by a dynamic environment.
Usually, optimization algorithms are applied to find solutions in reasonable computation time.
SASes can help address these dynamics by allowing systems to deal with changing demands and constraints.
For the application of SASes in the two areas ITS and logistics, the definition of adaptation planning strategies is the key success factor.
A wide range of adaptation planning strategies for different domains can be found in the literature, and the operator must select the most promising strategy for the problem at hand.
However, the No-Free-Lunch theorem states that the performance of one strategy is not necessarily transferable to other problems.
Accordingly, the algorithm selection problem, first defined in 1976, aims to find the best performing algorithm for the current problem.
Since then, this problem has been explored more and more, and the machine learning community, for example, considers it a learning problem.
The ideas surrounding the algorithm selection problem have been applied in various use cases, but little research has been done to generalize the approaches.
Moreover, especially in the field of SASes, the selection of the most appropriate strategy depends on the current situation of the system.
Techniques for identifying the situation of a system can be found in the literature, such as the use of rules or clustering techniques.
This knowledge can then be used to improve the algorithm selection, or in the scope of this thesis, to improve the selection of adaptation planning strategies.
In addition, knowledge about the current situation and the performance of strategies in similar previously observed situations provides another opportunity for improvements.
This ongoing learning and reasoning about the system and its environment is found in the research area Self-Aware Computing (SeAC).
In this thesis, we explore common characteristics of adaptation planning strategies in the domains of ITS and logistics and present a self-aware optimization framework for adaptation planning strategies.
We consider platooning coordination strategies from ITS and optimization techniques from logistics as adaptation planning strategies that can be exchanged during operation to better reflect the current situation.
Further, we propose to integrate fairness and uncertainty handling mechanisms directly into the adaptation planning strategies.
We then examine the complex structure of the logistics use cases VRP and mezzanine warehouses and identify their systems-of-systems structure.
We propose a two-stage approach for vertical (nested) systems and argue that the impact of intertwined horizontal (coexisting) systems must be considered.
More specifically, we summarize the six main contributions of this thesis as follows:
First, we analyze specific characteristics of adaptation planning strategies with a particular focus on ITS and logistics.
We use platooning and route planning in highly dynamic environments as representatives of ITS and we use the rich Vehicle Routing Problem (rVRP) and mezzanine warehouses as representatives of the logistics domain.
Using these case studies, we derive the need for situation-aware optimization of adaptation planning strategies and argue that fairness is an important consideration when applying these strategies in ITS.
In logistics, we discuss that these complex systems can be considered systems-of-systems and that this structure affects each subsystem.
Hence, we argue that the consideration of these characteristics is a crucial factor for the success of the system.
Second, we design a self-aware optimization framework for adaptation planning strategies.
The optimization framework is abstracted into a third layer above the application and its adaptation planning system, which allows the concept to be applied to a diverse set of use cases.
Further, the Domain Data Model (DDM) used to configure the framework enables the operator to easily apply it by defining the available adaptation planning strategies, parameters to be optimized, and performance measures.
The framework consists of four components: (i) Coordination, (ii) Situation Detection, (iii) Strategy Selection, and (iv) Parameter Optimization.
While the coordination component receives observations and triggers the other components, the situation detection applies rules or clustering techniques to identify the current situation.
The strategy selection uses this knowledge to select the most promising strategy for the current situation, and the parameter optimization applies optimization algorithms to tune the parameters of the strategy.
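A minimal sketch of how the situation detection and strategy selection components could interact is shown below. It assumes that situations are identified by clustering observation features and that strategies are chosen from per-situation performance records; this is only one possible instantiation, and all names are placeholders rather than the framework's actual interfaces.

```python
from collections import defaultdict
import numpy as np
from sklearn.cluster import KMeans

class SituationAwareSelector:
    """Sketch of situation detection (clustering) plus strategy selection based
    on previously observed per-situation performance; not the thesis' framework."""
    def __init__(self, strategies, n_situations=3):
        self.strategies = strategies
        self.clusterer = KMeans(n_clusters=n_situations, n_init=10)
        self.history = defaultdict(lambda: defaultdict(list))  # situation -> strategy -> scores

    def fit_situations(self, observations):
        """observations: feature vectors describing the monitored environment."""
        self.clusterer.fit(np.asarray(observations))

    def record(self, observation, strategy, score):
        """Store how well a strategy performed in the detected situation."""
        situation = int(self.clusterer.predict(np.asarray([observation]))[0])
        self.history[situation][strategy].append(score)

    def select(self, observation):
        """Pick the strategy with the best average score for the current situation."""
        situation = int(self.clusterer.predict(np.asarray([observation]))[0])
        scores = self.history[situation]
        if not scores:
            return self.strategies[0]          # no knowledge yet: fall back to default
        return max(scores, key=lambda s: np.mean(scores[s]))
```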
Moreover, we apply the concepts of the SeAC domain and integrate learning and reasoning processes to enable ongoing advancement of the framework.
We evaluate our framework using the platooning use case and consider platooning coordination strategies as the adaptation planning strategies to be selected and optimized.
Our evaluation shows that the framework is able to select the most appropriate adaptation strategy and learn the situational behavior of the system.
Third, we argue that fairness aspects, previously identified as an important characteristic of adaptation planning strategies, are best addressed directly as part of the strategies.
Hence, focusing on platooning as an example use case, we propose a set of fairness mechanisms to balance positive and negative effects of platooning among all participants in a platoon.
We design six vehicle sequence rotation mechanisms that continuously change the leader position among all participants, as this is the position with the least positive effects.
We analyze these strategies on roads of different sizes and with different traffic volumes, and show that these mechanisms should also be chosen wisely.
Fourth, we address the uncertainty characteristic of adaptation planning strategies and propose a methodology to account for uncertainty and also address it directly as part of the adaptation planning strategies.
We address the use case of fueling planning along a route associated with highly dynamic fuel prices and develop six utility functions that account for different aspects of route planning.
Further, we incorporate uncertainty measures for dynamic fuel prices by adding penalties for longer travel times or greater distance to the next gas station.
Through this approach, we are able to reduce the uncertainty at planning time and obtain a more robust route planning.
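The six utility functions themselves are specified in the thesis; the following hedged example merely illustrates the general shape of such a penalised utility, with purely hypothetical weights:

```python
def fueling_stop_utility(price_per_liter, liters_needed, extra_travel_time_min,
                         detour_km, time_penalty_per_min=0.05, detour_penalty_per_km=0.10):
    """Illustrative utility of a candidate fueling stop: cheaper fuel raises the
    utility, while longer travel times and larger detours are penalised because
    dynamic fuel prices become less certain the further away the stop lies."""
    fuel_cost = price_per_liter * liters_needed
    uncertainty_penalty = (time_penalty_per_min * extra_travel_time_min
                           + detour_penalty_per_km * detour_km)
    return -(fuel_cost + uncertainty_penalty)   # higher (less negative) is better
```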
Fifth, we analyze optimization of nested systems-of-systems for the use case rVRP.
Before proposing an approach to deal with the complex structure of the problem, we analyze important constraints and objectives that need to be considered when formulating a real-world rVRP.
Then, we propose a two-stage workflow to optimize both systems individually, flexibly, and interchangeably.
We apply Genetic Algorithms and Ant Colony Optimization (ACO) to both nested systems and compare the performance of our workflow with state-of-the-art optimization algorithms for this use case.
In our evaluation, we show that the proposed two-stage workflow is able to handle the complex structure of the problem and consider all real-world constraints and objectives.
Finally, we study coexisting systems-of-systems by optimizing typical processes in mezzanine warehouses.
We first define which ergonomic and economic constraints and objectives must be considered when addressing a real-world problem.
Then, we analyze the interrelatedness of the storage assignment and order picking problems; we identify opportunities to design optimization approaches that optimize all objectives and aim for a good overall system performance, taking into account the interdependence of both systems.
We use the NSGA-II for storage assignment and Ant Colony Optimization (ACO) for order picking and adapt them to the specific requirements of horizontal systems-of-systems.
In our evaluation, we compare our approaches to state-of-the-art approaches in mezzanine warehouses and show that our proposed approaches increase the system performance.
Our proposed approaches provide important contributions to both academic research and practical applications.
To the best of our knowledge, we are the first to design a self-aware optimization framework for adaptation planning strategies that integrates situation-awareness, algorithm selection, parameter tuning, as well as learning and reasoning.
Our evaluation of platooning coordination shows promising results for the application of the framework.
Moreover, our proposed strategies to compensate for negative effects of platooning represent an important milestone, which could lead to higher acceptance of this technology in society and support its future adoption in the real world.
The proposed methodology and utility functions that address uncertainty are an important step to improving the capabilities of SAS in an increasingly turbulent environment.
Similarly, our contributions to systems-of-systems optimization advance the state of the art in logistics and systems-of-systems research.
Finally, we select real-world use cases for the application of our approaches and cooperate with industrial partners, which highlights the practical relevance of our contributions.
The reduction of manual effort and required expert knowledge in our self-aware optimization framework is a milestone in bridging the gap between academia and practice.
One of our partners integrated the two-stage approach to tackling the rVRP into its software system, improving both time to solution and solution quality.
In conclusion, the contributions of this thesis have spawned several research projects such as a long-term industrial project on optimizing tours and routes in parcel delivery funded by Bayerisches Verbundforschungsprogramm (BayVFP) – Digitalisierung and further collaborations, opening up many promising avenues for future research.
In this doctoral thesis we cover the performance evaluation of next-generation data plane architectures, comprising complex software as well as programmable hardware components that allow fine-granular configuration. In the scope of the thesis we propose mechanisms to monitor the performance of individual components and model key performance indicators of software-based packet processing solutions. We present novel approaches to network abstraction that allow the integration of heterogeneous data plane technologies into a single network while maintaining total transparency between control and data plane. Finally, we investigate a full, complex system consisting of multiple software-based solutions and perform a detailed performance analysis. We employ simulative approaches to investigate overload control mechanisms that allow efficient operation under adverse conditions. The contributions of this work build the foundation for future research in the areas of network softwarization and network function virtualization.
One consequence of the recent coronavirus pandemic is increased demand and use of online services around the globe. At the same time, performance requirements for modern technologies are becoming more stringent as users become accustomed to higher standards. These increased performance and availability requirements, coupled with the unpredictable usage growth, are driving an increasing proportion of applications to run on public cloud platforms as they promise better scalability and reliability.
With data centers already responsible for about one percent of the world's power consumption, optimizing resource usage is of paramount importance. Simultaneously, meeting the increasing and changing resource and performance requirements is only possible by optimizing resource management without introducing additional overhead. This requires the research and development of new modeling approaches to understand the behavior of running applications with minimal information.
However, the emergence of modern software paradigms makes it increasingly difficult to derive such models and renders previous performance modeling techniques infeasible. Modern cloud applications are often deployed as a collection of fine-grained and interconnected components called microservices. Microservice architectures offer massive benefits but also have broad implications for the performance characteristics of the respective systems. In addition, the microservices paradigm is typically paired with a DevOps culture, resulting in frequent application and deployment changes. Such applications are often referred to as cloud-native applications. In summary, the increasing use of ever-changing cloud-hosted microservice applications introduces a number of unique challenges for modeling the performance of modern applications. These include the amount, type, and structure of monitoring data, frequent behavioral changes, or infrastructure variabilities. This violates common assumptions of the state of the art and opens a research gap for our work.
In this thesis, we present five techniques for automated learning of performance models for cloud-native software systems. We achieve this by combining machine learning with traditional performance modeling techniques. Unlike previous work, our focus is on cloud-hosted and continuously evolving microservice architectures, so-called cloud-native applications. Therefore, our contributions aim to solve the above challenges to deliver automated performance models with minimal computational overhead and no manual intervention. Depending on the cloud computing model, privacy agreements, or monitoring capabilities of each platform, we identify different scenarios where performance modeling, prediction, and optimization techniques can provide great benefits. Specifically, the contributions of this thesis are as follows:
Monitorless: Application-agnostic prediction of performance degradations.
To manage application performance with only platform-level monitoring, we propose Monitorless, the first truly application-independent approach to detecting performance degradation. We use machine learning to bridge the gap between platform-level monitoring and application-specific measurements, eliminating the need for application-level monitoring. Monitorless creates a single and holistic resource saturation model that can be used for heterogeneous and untrained applications. Results show that Monitorless infers resource-based performance degradation with 97% accuracy. Moreover, it can achieve similar performance to typical autoscaling solutions, despite using less monitoring information.
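To convey the idea (not Monitorless' actual feature set or model), a single saturation model over platform-level metrics could be trained roughly as follows, with placeholder metric names and labels derived from observed degradations:

```python
from sklearn.ensemble import RandomForestClassifier

# Placeholder platform-level metrics; the real feature set differs.
FEATURES = ["cpu_util", "mem_util", "disk_io_wait", "net_throughput"]

def train_saturation_model(metric_samples, degraded_labels):
    """metric_samples: rows of platform metrics, degraded_labels: 1 if the
    application was degraded at that time, else 0. Illustration only."""
    model = RandomForestClassifier(n_estimators=100)
    model.fit(metric_samples, degraded_labels)
    return model

def is_degraded(model, metric_sample):
    """Flag resource-based performance degradation for a new metric sample."""
    return bool(model.predict([metric_sample])[0])
```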
SuanMing: Predicting performance degradation using tracing.
We introduce SuanMing to mitigate performance issues before they impact the user experience. This contribution is applied in scenarios where tracing tools enable application-level monitoring. SuanMing predicts explainable causes of expected performance degradations and prevents performance degradations before they occur. Evaluation results show that SuanMing can predict and pinpoint future performance degradations with an accuracy of over 90%.
SARDE: Continuous and autonomous estimation of resource demands.
We present SARDE to learn application models for highly variable application deployments. This contribution focuses on the continuous estimation of application resource demands, a key parameter of performance models. SARDE represents an autonomous ensemble estimation technique. It dynamically and continuously optimizes, selects, and executes an ensemble of approaches to estimate resource demands in response to changes in the application or its environment. Through continuous online adaptation, SARDE efficiently achieves an average resource demand estimation error of 15.96% in our evaluation.
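One of the simplest estimators such an ensemble can contain is the Service Demand Law; the sketch below shows only this single estimator, not SARDE's selection and optimization logic:

```python
def service_demand_law(utilization, throughput_req_per_s):
    """Service Demand Law: the resource demand D (busy time per request) equals
    the measured utilization U divided by the throughput X. SARDE selects and
    tunes an ensemble of such estimators at runtime; this is just one of them."""
    if throughput_req_per_s <= 0:
        raise ValueError("throughput must be positive")
    return utilization / throughput_req_per_s
```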
DepIC: Learning parametric dependencies from monitoring data.
DepIC utilizes feature selection techniques in combination with an ensemble regression approach to automatically identify and characterize parametric dependencies. Although parametric dependencies can massively improve the accuracy of performance models, DepIC is the first approach to automatically learn such parametric dependencies from passive monitoring data streams. Our evaluation shows that DepIC achieves 91.7% precision in identifying dependencies and reduces the characterization prediction error by 30% compared to the best individual approach.
Baloo: Modeling the configuration space of databases.
To study the impact of different configurations within distributed DBMSs, we introduce Baloo. Our last contribution models the configuration space of databases considering measurement variabilities in the cloud. More specifically, Baloo dynamically estimates the required benchmarking measurements and automatically builds a configuration space model of a given DBMS. Our evaluation of Baloo on a dataset consisting of 900 configuration points shows that the framework achieves a prediction error of less than 11% while saving up to 80% of the measurement effort.
Although the contributions themselves are orthogonal, taken together they provide a holistic approach to performance management of modern cloud-native microservice applications.
Our contributions are a significant step forward as they specifically target novel and cloud-native software development and operation paradigms, surpassing the capabilities and limitations of previous approaches.
In addition, the research presented in this thesis also has a significant impact on industry, as the contributions were developed in collaboration with research teams from Nokia Bell Labs, Huawei, and Google.
Overall, our solutions open up new possibilities for managing and optimizing cloud applications and improve cost and energy efficiency.
Human-computer interfaces have the potential to support mental health practitioners in alleviating mental distress.
Adoption of this technology in practice is, however, slow.
We provide means to extend the design space of human-computer interfaces for mitigating mental distress.
To this end, we suggest three complementary approaches: using presentation technology, using virtual environments, and using communication technology to facilitate social interaction.
We provide new evidence that elementary aspects of presentation technology affect the emotional processing of virtual stimuli, that perception of our environment affects the way we assess our environment, and that communication technologies affect social bonding between users.
By showing how interfaces modify emotional reactions and facilitate social interaction, we provide converging evidence that human-computer interfaces can help alleviate mental distress.
These findings may advance the goal of adapting technological means to the requirements of mental health practitioners.