004 Data processing; computer science
Has Fulltext
- yes (42)
Year of publication
- 2021 (42)
Document Type
- Journal article (35)
- Conference Proceeding (5)
- Doctoral Thesis (1)
- Report (1)
Language
- English (42)
Keywords
- virtual reality (11)
- artificial intelligence (3)
- augmented reality (3)
- human-computer interaction (3)
- crowdsensing (2)
- immersion (2)
- immersive technologies (2)
- mHealth (2)
- spatial presence (2)
- 3D Laser Scanning (1)
- 3D fluoroscopy (1)
- 3D-reconstruction methods (1)
- 3DTK toolkit (1)
- Computational geometry (1)
- Algorithm (1)
- CETCH cycle (1)
- CO2-sequestration (1)
- Charged aerosol detector (CAD) (1)
- Computer game (1)
- Daedalus project (1)
- EEG (1)
- EEG frequency band analysis (1)
- EEG preprocessing (1)
- EEG processing (1)
- EPM (1)
- Fatty acids (1)
- Gradient boosted trees (GBT) (1)
- Graph drawing (1)
- HMD (Head-Mounted Display) (1)
- HTTP adaptive video streaming (1)
- High-performance liquid chromatography (HPLC) (1)
- Intelligent Virtual Agents (1)
- Kernel density estimation (1)
- Combinatorics (1)
- Complexity (1)
- Convex drawings (1)
- Artificial intelligence (1)
- Lava (1)
- Lunar Caves (1)
- Lunar Exploration (1)
- MDR (1)
- Mapping (1)
- Human-machine communication (1)
- Moon (1)
- Planar graphs (1)
- Poisson surface reconstruction (1)
- Polyhedra (1)
- Quantitative structure-property relationship modeling (QSPR) (1)
- Spherical Robot (1)
- Event (1)
- XR (1)
- XR-artificial intelligence combination (1)
- XR-artificial intelligence continuum (1)
- acrophobia (1)
- advertising effectiveness (1)
- agent-based models (1)
- agents (1)
- anamnesis tool (1)
- aneurysm (1)
- anxiety (1)
- application design (1)
- artificial intelligence education (1)
- artificial intelligence literacy (1)
- augmentation (1)
- avatar embodiment (1)
- avatars (1)
- behavior change (1)
- bibliometric analysis (1)
- biofuel (1)
- biomanufacturing (1)
- biosignals (1)
- carboxylation (1)
- cardiac magnetic resonance (1)
- co-authorships (1)
- co-inventorships (1)
- collaboration (1)
- community detection (1)
- computational (1)
- computers as social actors (1)
- condition prediction (1)
- conversational agent (1)
- conversational agents (1)
- cost-sensitive learning (1)
- crowdsourced measurements (1)
- cybersickness (1)
- decision-making (1)
- deep learning (1)
- design (1)
- design cycle (1)
- detection time simulation (1)
- dimensions of proximity (1)
- ecological momentary assessment (1)
- education (1)
- electroencephalography (1)
- elementary modes (1)
- elevated plus-maze (1)
- embedding techniques (1)
- empathy (1)
- engineering (1)
- environmental sound (1)
- enzyme (1)
- event detection (1)
- event-related potentials (ERP) (1)
- expertise framing (Min5-Max 8) (1)
- fluoroscopy (1)
- foreign language learning and teaching (1)
- framework (1)
- game mechanics (1)
- games (1)
- handwriting (1)
- human computer interaction (HCI) (1)
- human-artificial intelligence interaction (1)
- human-artificial intelligence interface (1)
- human-centered design (1)
- human-centered, human-robot (1)
- imbalanced regression (1)
- immersive advertising (1)
- immersive learning technologies (1)
- intention-behavior-gap (1)
- interaction (1)
- interactive authoring system (1)
- intercultural learning and teaching (1)
- interdisciplinary education (1)
- intervention design (1)
- intervention evaluation (1)
- intraoperative imaging (1)
- iowa gambling task (1)
- learning environments (1)
- locomotion (1)
- machine learning (1)
- map projections (1)
- measurement (1)
- media equation (1)
- medical analytics (1)
- medical device regulation (1)
- medical device software (1)
- meditation (1)
- metabolic modeling (1)
- metabolism (1)
- microbes (1)
- mindfulness (1)
- mixed reality (1)
- mixed-cultural settings (1)
- mobile application (1)
- mobile networks (1)
- multimodal learning (1)
- natural language processing (1)
- natural user interfaces (1)
- neural networks (1)
- noise measurement (1)
- nonverbal behavior (1)
- octree (1)
- passive haptic feedback (1)
- perception (1)
- performance analysis (1)
- photorespiration (1)
- place-illusion (1)
- plausibility-illusion (1)
- point cloud compression (1)
- point-to-plane measure (1)
- point-to-point measure (1)
- procedural content generation (1)
- psychomotor training (1)
- psychophysiology (1)
- quality of experience (1)
- quality of experience prediction (1)
- realism (1)
- recommender system (1)
- research methods (1)
- sample weighting (1)
- segmentation (1)
- self-assembly (1)
- semantic understanding (1)
- sensitivity analysis (1)
- sensor devices (1)
- serious games (1)
- sensors (1)
- sketching (1)
- smart speaker (1)
- statistical validity (1)
- stylus (1)
- supervised learning (1)
- synthetic pathways (1)
- system architecture design (1)
- systematic literature review (1)
- systematic review (1)
- teacher education (1)
- time perception (1)
- time series (1)
- tinnitus (1)
- tools (1)
- trait anxiety (1)
- transformations (1)
- translational neuroscience (1)
- transportation (1)
- trust (1)
- trustworthiness (1)
- usability evaluation (1)
- use cases (1)
- user experience (1)
- user interaction (1)
- user study (1)
- user-generated content (1)
- verbal behaviour (1)
- virtual agent (1)
- virtual audience (1)
- virtual environments (1)
- virtual humans (1)
- virtual-reality-continuum (1)
- visual analytics (1)
- voice assistant (1)
- voice-based artificial intelligence (1)
Institute
- Institut für Informatik (26)
- Institut Mensch - Computer - Medien (10)
- Institut für Klinische Epidemiologie und Biometrie (4)
- Institut für Psychologie (2)
- Theodor-Boveri-Institut für Biowissenschaften (2)
- Deutsches Zentrum für Herzinsuffizienz (DZHI) (1)
- Institut für Pharmazie und Lebensmittelchemie (1)
- Institut für Pädagogik (1)
- Institut für diagnostische und interventionelle Neuroradiologie (ehem. Abteilung für Neuroradiologie) (1)
- Neurochirurgische Klinik und Poliklinik (1)
Other participating institutions
- Cologne Game Lab (2)
- Birmingham City University (1)
- INAF Padova, Italy (1)
- Jacobs University Bremen, Germany (1)
- Open University of the Netherlands (1)
- Servicezentrum Medizin-Informatik (Universitätsklinikum) (1)
- TH Köln (1)
- University of Cologne (1)
- University of Padova, Italy (1)
- Universität Hamburg (1)
- VIGEA, Italy (1)
As an emerging market for voice assistants (VAs), the healthcare sector imposes increasing requirements on users’ trust in the technological system. Encouraging patients to reveal sensitive data requires them to trust the technological counterpart. In an experimental laboratory study, participants were presented with a VA that was introduced as either a “specialist” or a “generalist” tool for sexual health. In both conditions, the VA asked exactly the same health-related questions. Afterwards, participants assessed the trustworthiness of the tool and of further source layers (provider, platform provider, automatic speech recognition in general, data receiver) and reported individual characteristics (disposition to trust and to disclose sexual information). Results revealed that perceiving the VA as a specialist led to higher trustworthiness ratings for the VA as well as for the provider, the platform provider, and automatic speech recognition in general. Furthermore, the provider’s trustworthiness affected the perceived trustworthiness of the VA. Presenting both a theoretical line of reasoning and empirical data, the study highlights the importance of the users’ perspective on the assistant. In sum, this paper argues for further analyses of trustworthiness in voice-based systems, its effects on usage behavior, and its impact on the responsible design of future technology.
Artificial Intelligence (AI) covers a broad spectrum of computational problems and use cases. Many of them raise profound and sometimes intricate questions of how humans interact or should interact with AIs. Moreover, many current and future users have only abstract ideas of what AI is, depending significantly on the specific embodiment of the AI application. Human-centered design approaches would suggest evaluating the impact of different embodiments on human perception of and interaction with AI. However, such an approach is difficult to realize due to the sheer complexity of application fields and embodiments in reality. Here, XR opens new possibilities for researching human-AI interactions. The article’s contribution is twofold: First, it provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum as a framework for, and a perspective on, different approaches to XR-AI combinations. It motivates XR-AI combinations as a method to learn about the effects of prospective human-AI interfaces and shows why the combination of XR and AI fruitfully contributes to a valid and systematic investigation of human-AI interactions and interfaces. Second, the article provides two exemplary experiments investigating this approach for two distinct AI systems. The first experiment reveals an interesting gender effect in human-robot interaction, while the second reveals an Eliza effect of a recommender system. Here the article introduces two paradigmatic implementations of the proposed XR testbed for human-AI interactions and interfaces and shows how a valid and systematic investigation can be conducted. In sum, the article opens new perspectives on how XR benefits human-centered AI design and development.
Plenty of theories, models, measures, and investigations target the understanding of virtual presence, i.e., the sense of presence in immersive Virtual Reality (VR). Other varieties of the so-called eXtended Realities (XR), e.g., Augmented and Mixed Reality (AR and MR), incorporate immersive features to a lesser degree and continuously combine spatial cues from the real physical space and the simulated virtual space. This blurred separation calls into question whether the accumulated knowledge about virtual presence, and the corresponding outcomes, apply to presence occurring in other varieties of XR. The present work bridges this gap by analyzing the construct of presence in mixed realities (MR). To this end, we present (1) a short review of definitions, dimensions, and measurements of presence in VR, and (2) the state-of-the-art views on MR. Additionally, we (3) derive a working definition of MR that extends the Milgram continuum. This definition is based on entities ranging from real to virtual manifestations at one point in time. Entities possess different degrees of referential power, which determine the selection of the frame of reference. Furthermore, we (4) identify three research desiderata, including research questions about the frame of reference, the corresponding dimension of transportation, and the dimension of realism in MR. We mainly discuss the relationship between the main aspects of virtual presence in immersive VR, i.e., the place illusion and the plausibility illusion, and the referential power of MR entities with regard to the concept, measures, and design of presence in MR. Finally, we (5) suggest an experimental setup to reveal the research heuristic behind experiments investigating presence in MR. The present work contributes to the theories, the meaning, and the approaches to simulate and measure presence in MR.
We hypothesize that research about the essential underlying factors determining user experience (UX) in MR simulations and experiences is still in its infancy, and hope that this article provides an encouraging starting point for tackling related questions.
The design and evaluation of assistive technologies to support behavior change processes have become an essential topic within human-computer interaction research in general and the field of immersive intervention technologies in particular. The mechanisms and success of behavior change techniques and interventions are broadly investigated in psychology. However, it is not always easy to adapt these psychological findings to the context of immersive technologies. This lack of theoretical foundation also leaves unexplained why and how immersive interventions support behavior change processes. The Behavioral Framework for immersive Technologies (BehaveFIT) addresses this gap by 1) presenting an intelligible categorization and condensation of psychological barriers and immersive features, by 2) suggesting a mapping that shows why and how immersive technologies can help to overcome these barriers, and finally by 3) proposing a generic prediction path that enables a structured, theory-based approach to the development and evaluation of immersive interventions. These three steps explain how BehaveFIT can be used, and guiding questions are provided for each step. Further, two use cases illustrate the usage of BehaveFIT. Thus, the present paper contributes guidance for immersive intervention design and evaluation, showing how immersive interventions can support behavior change processes and explaining and predicting why and how immersive interventions can bridge the intention-behavior gap.
The concept of digital literacy has been introduced as a new cultural technique regarded as essential for successful participation in a (future) digitized world. Given the increasing importance of AI, literacy concepts need to be extended to account for AI-related specifics. The easy handling of these systems results in increased usage, contrasting with limited conceptualizations (e.g., imagination of future importance) and competencies (e.g., knowledge about functional principles). With reference to voice-based conversational agents as a concrete application of AI, the present paper aims to develop an instrument to assess conceptualizations of and competencies with conversational agents. In a first step, a theoretical framework of “AI literacy” is transferred to the context of conversational agent literacy. Second, the “conversational agent literacy scale” (short: CALS) is developed, constituting the first attempt to measure interindividual differences in the “(il)literate” usage of conversational agents. 29 items were derived and answered by 170 participants. An exploratory factor analysis identified five factors, leading to five subscales to assess CAL: storage and transfer of the smart speaker’s data input; the smart speaker’s functional principles; the smart speaker’s intelligent functions and learning abilities; the smart speaker’s reach and potential; and the smart speaker’s technological (surrounding) infrastructure. Preliminary insights into the construct validity and reliability of the CALS showed satisfying results. Third, using the newly developed instrument, a student sample’s CAL was assessed, revealing intermediate values. Remarkably, owning a smart speaker did not lead to higher CAL scores, confirming our basic assumption that usage of systems does not guarantee enlightened conceptualizations and competencies.
In sum, the paper contributes first insights into the operationalization and understanding of CAL as a specific subdomain of AI-related competencies.
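A multi-factor scale like CALS is typically scored by averaging the item responses that load on each factor. The sketch below illustrates this scoring pattern; the item-to-subscale assignment, the number of items, and the 5-point response format are illustrative assumptions, not the published CALS key.

```python
from statistics import mean

# Illustrative item-to-subscale key (NOT the published CALS assignment):
# indices refer to positions in a participant's response vector.
SUBSCALES = {
    "data_storage_and_transfer": [0, 1, 2],
    "functional_principles": [3, 4],
    "intelligent_functions": [5, 6],
}

def score_subscales(responses):
    """Mean response (e.g., on a 1-5 Likert scale) per subscale."""
    return {name: mean(responses[i] for i in items)
            for name, items in SUBSCALES.items()}

# One hypothetical participant answering 7 items.
participant = [4, 5, 3, 2, 2, 5, 4]
scores = score_subscales(participant)
```

Subscale means (rather than sums) keep scores comparable even when subscales contain different numbers of items.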
Crowdsourced network measurements (CNMs) are becoming increasingly popular as they assess the performance of a mobile network from the end user's perspective on a large scale. Here, network measurements are performed directly on the end-users' devices, thus taking advantage of the real-world conditions end-users encounter. However, this type of uncontrolled measurement raises questions about its validity and reliability. The problem lies in the nature of this type of data collection. In CNMs, mobile network subscribers are involved to a large extent in the measurement process, and collect data themselves for the operator. The collection of data on user devices in arbitrary locations and at uncontrolled times requires means to ensure validity and reliability. To address this issue, our paper defines concepts and guidelines for analyzing the precision of CNMs; specifically, the number of measurements required to make valid statements. In addition to the formal definition of the aspect, we illustrate the problem and use an extensive sample data set to show possible assessment approaches. This data set consists of more than 20.4 million crowdsourced mobile measurements from across France, measured by a commercial data provider.
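The question of how many measurements are needed for valid statements can be illustrated with the standard normal-approximation bound: given the standard deviation s of a pilot sample, roughly n = (z·s/e)² measurements keep the confidence-interval half-width for the mean below a target margin e. A minimal sketch, in which the pilot values and the 1 Mbit/s margin are made-up illustrations rather than values from the paper's data set:

```python
import math
import random
import statistics

def required_sample_size(samples, margin, z=1.96):
    """Estimate how many measurements are needed so that the half-width
    of the (normal-approximation) confidence interval for the mean stays
    below `margin`, using the sample std of a pilot sample."""
    s = statistics.stdev(samples)
    return math.ceil((z * s / margin) ** 2)

# Hypothetical pilot: 50 crowdsourced download-rate samples (Mbit/s).
random.seed(0)
pilot = [random.gauss(40, 12) for _ in range(50)]

# Measurements needed for a +/- 1 Mbit/s margin at 95% confidence.
n = required_sample_size(pilot, margin=1.0)
```

Tightening the margin or increasing the confidence level (larger z) grows the required number of measurements quadratically, which is why per-location sample counts matter in crowdsourced data sets.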
Psychopathological conditions, such as depression or schizophrenia, are often accompanied by a distorted perception of time. People suffering from these conditions often report that the passage of time slows down considerably and that they are “stuck in time.” Virtual Reality (VR) could potentially help to diagnose and maybe treat such mental conditions. However, the conditions under which a VR simulation could correctly diagnose a deviation in time perception are still unknown. In this paper, we present an experiment investigating the difference in time experience with and without a virtual body in VR, also known as an avatar. The process of substituting a person’s body with a virtual body is called avatar embodiment. Numerous studies have demonstrated interesting perceptual, emotional, behavioral, and psychological effects caused by avatar embodiment. However, the relations between time perception and avatar embodiment are still unclear. Whether the presence or absence of an avatar already influences time perception is still an open question. Therefore, we conducted a between-subjects experiment with and without avatar embodiment as well as a real condition (avatar vs. no-avatar vs. real). A group of 105 healthy subjects had to wait for seven and a half minutes in a room without any distractors (e.g., no window, magazine, people, or decoration) or time indicators (e.g., clocks, sunlight). The virtual environment replicated the real physical environment. Participants were unaware that they would later be asked to estimate the duration of their waiting time and to describe their experience of the passage of time. Our main finding is that the presence of an avatar leads to a significantly faster perceived passage of time. It therefore seems promising to integrate avatar embodiment into future VR time-based therapy applications, as it could potentially modulate a user’s perception of the passage of time.
We also found no significant difference in time perception between the real and the VR conditions (avatar, no-avatar), but further research is needed to better understand this outcome.
In this paper, we bridge the gap between procedural content generation (PCG) and user-generated content (UGC) by proposing and demonstrating an interactive agent-based model of self-assembling ensembles that can be directed through user input. We motivate these efforts by considering the opportunities technology provides to pursue game designs based on corresponding game design frameworks. We present three different use cases of the proposed model that emphasize its potential to (1) self-assemble into predefined 3D graphical assets, (2) define new structures in the context of virtual environments by self-assembling layers on the surfaces of arbitrary 3D objects, and (3) allow novel structures to self-assemble considering only the model’s configuration and no external dependencies. To address the performance restrictions in computer games, we realized the prototypical model implementation by means of an efficient entity component system (ECS). We conclude the paper with an outlook on future steps to further explore novel interactive, dynamic PCG mechanics and to ensure their efficiency.
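The core loop of an agent-based self-assembly model of this kind can be sketched in a few lines: each agent is repeatedly steered toward its assigned position in a target shape until the whole ensemble has assembled. The 2D unit-square target, the fixed per-step gain, and the tolerance below are illustrative choices, not the paper's ECS implementation.

```python
import math

def step(agents, targets, speed=0.1):
    """Move each agent a fixed fraction of the way toward its target."""
    return [(x + speed * (tx - x), y + speed * (ty - y))
            for (x, y), (tx, ty) in zip(agents, targets)]

def assembled(agents, targets, tol=0.05):
    """The ensemble counts as assembled when every agent is within tol."""
    return all(math.hypot(x - tx, y - ty) < tol
               for (x, y), (tx, ty) in zip(agents, targets))

# Hypothetical target shape: the four corners of a unit square.
targets = [(0, 0), (0, 1), (1, 0), (1, 1)]
agents = [(0.5, 0.5)] * 4   # all agents start at the center

steps = 0
while not assembled(agents, targets):
    agents = step(agents, targets)
    steps += 1
```

Directing the ensemble through user input then amounts to swapping the `targets` list at runtime, e.g., for the vertices of a user-selected 3D asset.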
In many real-world settings, imbalanced data impedes the performance of learning algorithms such as neural networks, mostly for rare cases. This is especially problematic for tasks focusing on these rare occurrences. For example, when estimating precipitation, extreme rainfall events are scarce but important considering their potential consequences. While there are numerous well-studied solutions for classification settings, most of them cannot easily be applied to regression. Of the few solutions for regression tasks, barely any have explored cost-sensitive learning, which is known to have advantages over sampling-based methods in classification tasks. In this work, we propose DenseWeight, a sample weighting approach for imbalanced regression datasets, and DenseLoss, a cost-sensitive learning approach for neural network regression with imbalanced data based on our weighting scheme. DenseWeight weights data points according to the rarity of their target values, estimated through kernel density estimation (KDE). DenseLoss adjusts each data point’s influence on the loss according to DenseWeight, giving rare data points more influence on model training than common data points. We show on multiple differently distributed datasets that DenseLoss significantly improves model performance for rare data points through its density-based weighting scheme. Additionally, we compare DenseLoss to the state-of-the-art method SMOGN and find that our method mostly yields better performance. Our approach provides more control over model training, as a single hyperparameter lets us actively decide on the trade-off between focusing on common or rare cases, allowing the training of better models for rare data points.
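The density-based weighting idea described above can be sketched in plain Python: estimate the density of the target values with a Gaussian KDE, min-max normalize it, and assign each point the weight max(1 − α·p′(y), ε), so that rare targets approach weight 1 while common targets are down-weighted. The bandwidth, α, ε, and the toy targets below are illustrative choices, not the published DenseWeight defaults.

```python
import math

def gaussian_kde(values, bandwidth=1.0):
    """Return a Gaussian kernel density estimate p(x) over `values`."""
    n = len(values)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def p(x):
        return norm * sum(math.exp(-0.5 * ((x - v) / bandwidth) ** 2)
                          for v in values)
    return p

def dense_weights(targets, alpha=1.0, eps=1e-6, bandwidth=1.0):
    """DenseWeight-style weights: w = max(1 - alpha * p_norm, eps),
    where p_norm is the min-max normalized target density."""
    p = gaussian_kde(targets, bandwidth)
    dens = [p(t) for t in targets]
    lo, hi = min(dens), max(dens)
    normed = [(d - lo) / (hi - lo) if hi > lo else 0.0 for d in dens]
    return [max(1.0 - alpha * q, eps) for q in normed]

# Toy regression targets: a dense cluster plus one rare extreme value.
targets = [0.1, 0.2, 0.15, 0.18, 5.0]
w = dense_weights(targets, alpha=1.0)
```

Multiplying each data point's loss term by its weight then yields a cost-sensitive loss in the spirit of DenseLoss; raising α shifts the focus further toward rare targets.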
To deliver the best user experience (UX), the human-centered design cycle (HCDC) serves as a well-established guideline for application developers. However, it does not yet cover network-specific requirements, which become increasingly crucial as most applications deliver their experience over the Internet. This missing network-centric view is provided by Quality of Experience (QoE), which can team up with UX towards an improved overall experience. By considering QoE aspects during the development process, applications can become network-aware by design. In this paper, the Quality of Experience Centered Design Cycle (QoE-CDC) is proposed, which provides guidelines on how to design applications with respect to network-specific requirements and QoE. Its practical value is showcased for popular application types and validated by outlining the design of a new smartphone application. We show that combining the HCDC and the QoE-CDC results in an application design that reaches high UX and avoids QoE degradation.