Robotic process automation is a disruptive technology for rapidly automating tasks and subprocesses that are already digital yet still manual, as well as whole business processes. In contrast to other process automation technologies, robotic process automation is lightweight and accesses only the presentation layer of IT systems to mimic human behavior. Owing to the novelty of robotic process automation and the varying approaches taken when implementing the technology, there are reports that up to 50% of robotic process automation projects fail. To tackle this issue, we use a design science research approach to develop a framework for the implementation of robotic process automation projects. We analyzed 35 reports on real-life projects to derive a preliminary sequential model. We then conducted multiple expert interviews and workshops to validate and refine the model. The result is a framework with variable stages that offers guidelines flexible enough to be applicable in complex, heterogeneous corporate environments as well as in small and medium-sized companies. It is structured along the three phases of initialization, implementation, and scaling, which comprise eleven stages that are relevant both during a single project and as a continuous cycle spanning individual projects. Together, they structure how to manage knowledge and support processes for the execution of robotic process automation implementation projects.
Ever-growing data availability, combined with rapid progress in analytics, has laid the foundation for the emergence of business process analytics. Organizations strive to leverage predictive process analytics to obtain insights. However, current implementations are designed to deal with homogeneous data, which limits their practical use in organizations with heterogeneous data sources. This paper proposes a method for predictive end-to-end enterprise process network monitoring that leverages multi-headed deep neural networks to overcome this limitation. A case study conducted with a medium-sized German manufacturing company highlights the method's utility for organizations.
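The multi-headed architecture can be illustrated with a short sketch: each heterogeneous data source gets its own encoder head, and a shared trunk fuses the resulting representations for a prediction task such as next-activity prediction. The following PyTorch snippet is a minimal illustration under assumed dimensions and source names (ERP and MES); it is not the paper's actual implementation.

```python
# Minimal sketch of a multi-headed network for heterogeneous process data.
# All dimensions, feature names, and the two-source setup are illustrative
# assumptions; the paper's actual architecture may differ.
import torch
import torch.nn as nn

class MultiHeadProcessNet(nn.Module):
    def __init__(self, n_features_erp=12, n_features_mes=8, n_activities=20):
        super().__init__()
        # One encoder head per data source, so each source can be
        # embedded separately despite differing feature spaces.
        self.head_erp = nn.Sequential(nn.Linear(n_features_erp, 32), nn.ReLU())
        self.head_mes = nn.Sequential(nn.Linear(n_features_mes, 32), nn.ReLU())
        # Shared trunk that fuses both representations.
        self.trunk = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_activities),  # e.g. next-activity prediction
        )

    def forward(self, x_erp, x_mes):
        fused = torch.cat([self.head_erp(x_erp), self.head_mes(x_mes)], dim=-1)
        return self.trunk(fused)

model = MultiHeadProcessNet()
logits = model(torch.randn(4, 12), torch.randn(4, 8))  # batch of 4 cases
```

Keeping one head per source means each system's feature space can change independently without retraining the other encoders, which is what makes the approach attractive for end-to-end monitoring across an enterprise process network.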
Bioimages frequently exhibit low signal-to-noise ratios due to experimental conditions, specimen characteristics, and imaging trade-offs. Reliable segmentation of such ambiguous images is difficult and laborious. Here we introduce deepflash2, a deep learning-enabled segmentation tool for bioimage analysis. The tool addresses typical challenges that may arise during the training, evaluation, and application of deep learning models on ambiguous data. The tool's training and evaluation pipeline uses multiple expert annotations and deep model ensembles to achieve accurate results. The application pipeline supports various use cases for expert annotations and includes a quality assurance mechanism in the form of uncertainty measures. Benchmarked against other tools, deepflash2 offers both high predictive accuracy and efficient computational resource usage. The tool is built upon established deep learning libraries and enables sharing of trained model ensembles with the research community. deepflash2 aims to simplify the integration of deep learning into bioimage analysis projects while improving accuracy and reliability.
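The core idea of combining deep model ensembles with uncertainty measures can be sketched generically: the per-pixel probabilities of several models are averaged into a consensus segmentation, while their disagreement serves as an uncertainty map. The snippet below is a minimal NumPy illustration of this principle, not deepflash2's actual API.

```python
# Generic sketch of ensemble segmentation with an uncertainty measure;
# this illustrates the idea, not deepflash2's actual API.
import numpy as np

def ensemble_predict(models, image):
    """Average per-pixel foreground probabilities over an ensemble and
    derive an uncertainty map from inter-model disagreement."""
    # Each model maps an image to a per-pixel probability map in [0, 1].
    probs = np.stack([m(image) for m in models])  # (n_models, H, W)
    mean = probs.mean(axis=0)                     # consensus prediction
    uncertainty = probs.std(axis=0)               # disagreement per pixel
    segmentation = mean > 0.5
    return segmentation, uncertainty

# Toy stand-ins for trained models (assumption: probabilistic outputs).
rng = np.random.default_rng(0)
models = [lambda img, b=b: 1 / (1 + np.exp(-(img + b)))
          for b in rng.normal(0, 0.1, 3)]
seg, unc = ensemble_predict(models, rng.normal(size=(64, 64)))
```

Regions where `unc` is high are exactly those where the ensemble members disagree, which is what makes such a map usable as a quality assurance signal on ambiguous data.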
This paper shows that labor demand plays an important role in the labor market reactions to a pension reform in Germany. Employers with a high inflow of older workers relative to their inflow of younger workers, employers in sectors with little investment in research and development, and employers in sectors with a high share of collective bargaining agreements allow their employees to stay employed longer after the reform. These employers offer their older employees partial retirement instead of forcing them into unemployment before early retirement, because their older employees entail low substitution costs and high dismissal costs.
Interorganizational collaboration in production networks can counter challenges posed by high market dynamics, increasingly demanding customer needs, and rising cost pressure. Besides the classical vertical shifting of capacities toward suitable suppliers, manufacturing capacities can also be traded through horizontal collaboration between manufacturing companies. In the spirit of the sharing economy, digital platforms offer a suitable infrastructure for connecting and coordinating the market actors of a production network. Manufacturing companies can thus flexibly counteract production outages and utilize idle machine capacities. An essential prerequisite for the success of such digital platforms for production networks is the definition of goals, which so far have been examined only insufficiently in the literature and not with respect to this specific platform type. This paper develops a comprehensive conceptual goal model for this specific platform type. Specific goals of digital platforms for production networks include, besides economic and technical goals, production-related market performance goals such as ensuring production flexibility. Building on this, the paper shows how the design of the described platforms influences the achievement of particular goals and how specific mechanisms contribute to goal achievement.
Contemporary decision support systems increasingly rely on artificial intelligence technology, such as machine learning algorithms, to form intelligent systems. For selected applications, these systems have human-like decision capacity based on a decision rationale that cannot be conveniently inspected and thus constitutes a black box. As a consequence, acceptance by end users remains hesitant. While a lack of transparency has been said to hinder trust and foster aversion towards these systems, studies that connect user trust to transparency and, subsequently, acceptance are scarce. In response, our research is concerned with the development of a theoretical model that explains end-user acceptance of intelligent systems. We draw on the unified theory of acceptance and use of technology as well as explanation theory and related theories on initial trust and user trust in information systems. The proposed model is tested in an industrial maintenance workplace scenario with maintenance experts as participants representing the user group. Results show that, at first sight, acceptance is performance-driven. However, transparency plays an important indirect role in regulating trust and the perception of performance.
Due to computational advances in the past decades, so-called intelligent systems can learn from increasingly complex data, analyze situations, and support users' decision-making in addressing them. In practice, however, the complexity of these intelligent systems leaves users hardly able to comprehend the inherent decision logic of the underlying machine learning model. As a result, the adoption of this technology, especially for high-stakes scenarios, is hampered. In this context, explainable artificial intelligence offers numerous starting points for making the inherent logic explainable to people. While research demonstrates the necessity of incorporating explainable artificial intelligence into intelligent systems, there is still a lack of knowledge about how to socio-technically design these systems to address acceptance barriers among different user groups. In response, we have derived and evaluated a nascent design theory for explainable intelligent systems based on a structured literature review, two qualitative expert studies, a real-world use case application, and quantitative research. Our design theory comprises design requirements, design principles, and design features covering global explainability, local explainability, personalized interface design, and psychological/emotional factors.
Artificial intelligence (AI) is increasingly penetrating sensitive areas of everyday human life. Intelligent systems no longer make only simple decisions, but increasingly complex ones as well. For example, intelligent systems decide whether or not applicants should be hired by a company. Often the underlying decision-making can hardly be traced, and unjustified decisions can therefore go undetected, which is why the implementation of such an AI is frequently referred to as a black box. Consequently, the threat of being disadvantaged by unfair and discriminatory AI decisions grows. If these distortions result from human actions and thought patterns, one speaks of a cognitive bias. Given the novelty of this topic, however, it is not yet evident which different cognitive biases can occur within an AI project. The aim of this contribution is to provide a holistic overview by means of a structured literature review. The insights gained are organized and classified along the Cross-Industry Standard Process for Data Mining (CRISP-DM) model, which is widely used in practice. This examination shows that human influence on an AI is present in every development phase of the model, and that it is therefore important to explicitly investigate "human-like" biases in an AI.
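The classification idea can be made concrete with a small sketch that assigns cognitive biases to the six standard CRISP-DM phases. The phase names below follow the standard model; the example biases are illustrative assumptions rather than the paper's actual findings.

```python
# Illustrative mapping of cognitive biases to CRISP-DM phases.
# The phase names follow the standard CRISP-DM model; the example biases
# are assumptions for illustration, not the paper's actual classification.
crisp_dm_biases = {
    "Business Understanding": ["confirmation bias"],
    "Data Understanding":     ["availability bias"],
    "Data Preparation":       ["survivorship bias"],
    "Modeling":               ["anchoring bias"],
    "Evaluation":             ["outcome bias"],
    "Deployment":             ["automation bias"],
}

for phase, biases in crisp_dm_biases.items():
    print(f"{phase}: {', '.join(biases)}")
```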
The study considers the application of text mining techniques to the analysis of curricula for study programs offered by institutions of higher education. It presents a novel procedure for efficient and scalable quantitative content analysis of module handbooks using topic modeling. The proposed approach allows for collecting, analyzing, evaluating, and comparing curricula from arbitrary academic disciplines as a partially automated, scalable alternative to qualitative content analysis, which is traditionally conducted manually. The procedure is illustrated by the example of IS study programs in Germany, based on a data set of more than 90 programs and 3700 distinct modules. The contributions made by the study address the needs of several different stakeholders and provide insights into the differences and similarities among the study programs examined. For example, the results may aid academic management in updating the IS curricula and can be incorporated into the curricular design process. With regard to employers, the results provide insights into the fulfillment of their employee skill expectations by various universities and degrees. Prospective students can incorporate the results into their decision concerning where and what to study, while university sponsors can utilize the results in their grant processes.
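To illustrate the kind of pipeline described, the following sketch applies LDA topic modeling to a toy corpus of module descriptions using scikit-learn; the corpus and parameter choices are illustrative assumptions, not the study's data or settings.

```python
# Minimal sketch of topic modeling on module descriptions with LDA;
# the toy corpus and parameter choices are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

modules = [
    "database systems relational modeling sql transactions",
    "machine learning neural networks classification regression",
    "business process management workflow modeling bpmn",
    "data mining clustering classification pattern discovery",
]

# Bag-of-words representation of the module descriptions.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(modules)

# Fit an LDA model with a small, illustrative number of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top terms per topic.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```

At scale, the same steps (collect module texts, vectorize, fit a topic model, inspect topic-term distributions) make curricula from many programs comparable without manual coding, which is the partially automated alternative to qualitative content analysis the study describes.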
Today, intelligent systems that offer artificial intelligence capabilities often rely on machine learning. Machine learning describes the capacity of systems to learn from problem-specific training data to automate the process of analytical model building and solve associated tasks. Deep learning is a machine learning concept based on artificial neural networks. For many applications, deep learning models outperform shallow machine learning models and traditional data analysis approaches. In this article, we summarize the fundamentals of machine learning and deep learning to generate a broader understanding of the methodical underpinning of current intelligent systems. In particular, we provide a conceptual distinction between relevant terms and concepts, explain the process of automated analytical model building through machine learning and deep learning, and discuss the challenges that arise when implementing such intelligent systems in the field of electronic markets and networked business. These naturally go beyond technological aspects and highlight issues in human-machine interaction and artificial intelligence servitization.
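The process of automated analytical model building can be illustrated with a minimal training loop: a shallow neural network learns a mapping purely from example data via iterative optimization. The PyTorch sketch below uses toy data and an assumed architecture for illustration.

```python
# Minimal sketch of automated analytical model building: a small neural
# network learns a mapping purely from training data. Architecture and
# data are toy assumptions for illustration.
import torch
import torch.nn as nn

# Toy training data: learn y = 2*x1 - x2 from examples.
X = torch.randn(256, 2)
y = 2 * X[:, :1] - X[:, 1:]

# A shallow feed-forward network (one hidden layer).
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# The "analytical model" is built automatically by iterative optimization,
# not by hand-crafted rules.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```

Deep learning follows the same pattern with deeper architectures and larger data; the conceptual point is that the model's parameters, not its rules, encode what was learned.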