Companies are expected to act as international players and to use their capabilities to provide customized products and services quickly and efficiently. Today, consumers expect their requirements to be met within a short time and at a favorable price. Order-to-delivery lead time has steadily gained in importance for consumers. Furthermore, governments can use various emission policies to force companies and customers to reduce their greenhouse gas emissions. This thesis investigates the influence of order-to-delivery lead time and different emission policies on the design of a supply chain. Within this work, different supply chain design models are developed to examine these influences. The first model incorporates lead times and total costs, and various emission policies are implemented to illustrate the trade-off between the different measures. The second model reflects the influence of consumers who are sensitive to order-to-delivery lead time, and different emission policies are implemented to study their impacts. The analysis shows that the share of lead-time-sensitive consumers has a significant impact on the design of a supply chain. Demand uncertainty and uncertainty in the design of different emission policies are investigated by developing an appropriate robust mathematical optimization model. Results show that uncertainty in the design of an emission policy, in particular, can significantly impact the total cost of a supply chain. The effects of differently designed emission policies in various countries are investigated in the fourth model. The analyses highlight that both lead times and emission policies can strongly influence companies' offshoring and nearshoring strategies.
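To illustrate the kind of trade-off these models capture, the following is a minimal sketch, not one of the thesis models: a single-period allocation between a nearshore and an offshore site under a carbon-price emission policy, where a share of lead-time-sensitive demand must be served nearshore. All parameter values are illustrative assumptions.

```python
# Minimal sketch (not one of the thesis models): allocation between a nearshore and
# an offshore site, trading off production cost, a carbon-price emission policy,
# and lead-time-sensitive demand. All numbers are illustrative assumptions.
from scipy.optimize import linprog

demand = 1000.0
sensitive_share = 0.4          # share of demand requiring the short lead time (nearshore)
unit_cost = {"near": 12.0, "offshore": 8.0}
unit_emissions = {"near": 1.0, "offshore": 3.0}   # e.g. kg CO2 per unit, incl. transport
carbon_price = 1.5             # emission policy: cost per kg CO2

# decision variables: x = [x_near, x_offshore]
c = [unit_cost["near"] + carbon_price * unit_emissions["near"],
     unit_cost["offshore"] + carbon_price * unit_emissions["offshore"]]
A_eq = [[1.0, 1.0]]            # meet total demand
b_eq = [demand]
A_ub = [[-1.0, 0.0]]           # nearshore site must cover the lead-time-sensitive demand
b_ub = [-sensitive_share * demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2)
print("nearshore / offshore units:", res.x.round(1), "total cost:", round(res.fun, 1))
```

With the assumed numbers, the offshore site remains cheaper even after the carbon price, so the solver allocates only the lead-time-sensitive share nearshore; raising the carbon price shifts volume back onshore, which is the nearshoring effect discussed above.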
Recent computing advances are driving the integration of artificial intelligence (AI)-based systems into nearly every facet of our daily lives. To this end, AI is becoming a frontier for algorithmic decision-making, mimicking or even surpassing human intelligence. These AI-based systems can thus function as decision support systems (DSSs) that assist experts in high-stakes use cases where human lives are at risk. However, all that glitters is not gold: the underlying machine learning (ML) models, which apply mathematical and statistical algorithms to autonomously derive nonlinear decision knowledge, are correspondingly complex. One particular subclass of ML models, called deep learning models, achieves unsurpassed performance, with the drawback that these models are no longer explainable to humans. This divergence may result in an end-user’s unwillingness to utilize this type of AI-based DSS, thus diminishing the end-user’s system acceptance.
Hence, the explainable AI (XAI) research stream has gained momentum, as it develops techniques to unravel this black box while maintaining system performance. Unsurprisingly, these XAI techniques are becoming necessary for justifying, evaluating, improving, or managing the utilization of AI-based DSSs. This yields a plethora of explanation techniques, creating an XAI jungle from which end-users must choose. In turn, these techniques are primarily engineered by developers for developers without ensuring an actual end-user fit. Thus, it remains unknown how an end-user’s mental model behaves when encountering such explanation techniques.
This cumulative thesis therefore seeks to address this research deficiency by investigating end-user perceptions when encountering intrinsic ML and post-hoc XAI explanations. Drawing on this, the findings are synthesized into design knowledge to enable the deployment of XAI-based DSSs in practice. To this end, this thesis comprises six research contributions that follow the iterative and alternating interplay between behavioral science and design science research employed in information systems (IS) research and thus contribute to the overall research objectives as follows: First, an in-depth study of the impact of transparency and (initial) trust on end-user acceptance is conducted by extending and validating the unified theory of acceptance and use of technology model. This study indicates both factors’ strong but indirect effects on system acceptance, validating further research incentives. In particular, this thesis focuses on the overarching concept of transparency. Herein, a systematization in the form of a taxonomy and a pattern analysis of existing user-centered XAI studies is derived to structure and guide future research endeavors. This enables the empirical investigation of the theoretical trade-off between performance and explainability in intrinsic ML algorithms, yielding a less gradual trade-off, fragmented into three explainability groups. This is followed by an empirical investigation of end-users’ perceived explainability of post-hoc explanation types, with local explanation types performing best. Furthermore, an empirical investigation emphasizes the correlation between comprehensibility and explainability, indicating almost significant (with outliers) results for the assumed correlation. The final empirical investigation examines the effect of XAI explanation types on end-user cognitive load and the effect of cognitive load on end-user task performance and task time; it also positions local explanation types as best and demonstrates the correlations between cognitive load and task performance and, moreover, between cognitive load and task time. Finally, the last research paper utilizes, among other things, the obtained knowledge and derives a nascent design theory for XAI-based DSSs. This design theory encompasses (meta-)design requirements, design principles, and design features in a domain-independent and interdisciplinary fashion, including end-users and developers as potential user groups. This design theory is ultimately tested through a real-world instantiation in a high-stakes maintenance scenario.
From an IS research perspective, this cumulative thesis addresses the lack of research on perception and design knowledge for the ensured utilization of XAI-based DSSs. This lays the foundation for future research to obtain a holistic understanding of end-users’ heuristic behaviors during decision-making and to facilitate the acceptance of XAI-based DSSs in operational practice.
The collection at hand is concerned with learning curve effects in hospitals as highly specialized expert organizations and comprises four papers, each focusing on a different aspect of the topic. Three papers are concerned with surgeons, and one with emergency room staff in a conservative treatment setting.
The preface compactly addresses the steadily increasing health care costs and economic pressure, as well as the hospital landscape in Germany and its development. Furthermore, the DRG lump-sum compensation and the characteristics of the health sector, which is strongly regulated by the state and in which ethical aspects must be omnipresent, are outlined. In addition, the benefit of knowing about learning curve effects in order to cut costs while keeping quality stable or even improving it is addressed.
The first paper of the collection investigates the learning effects in a hospital that has specialized in endoprosthetics (total hip and knee replacement). In doing so, both the specialized and the non-specialized interventions are studied. Costs are not investigated directly, but rather cost indicators. The indicator of short-term costs is operating room time; the indicator of medium- to long-term costs is quality, operationalized by complications in the post-anesthesia care unit. The study estimates regression models (OLS and logit). The results indicate that the specialization comes along with advantages due to learning effects in terms of shorter operating room times and lower complication rates in endoprosthetic interventions. For the non-specialized interventions, the results are the same: there are no potentially negative effects of specialization on non-specialized surgeries, but rather advantageous spillover effects. Altogether, the specialization can be regarded as reasonable, as it cuts costs of all surgeries in the short, medium, and long term. The authors are Carsten Bauer, Nele Möbs, Oliver Unger, Andrea Szczesny, and Christian Ernst.
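The regression setup described here (and reused in the following two papers) follows a standard pattern; a minimal sketch, assuming hypothetical column names and a hypothetical data file rather than the authors' data:

```python
# Illustrative sketch (not the authors' code): learning-curve regressions of the kind
# described above, using hypothetical column names on a DataFrame with one row per surgery.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("surgeries.csv")  # hypothetical file: or_time_min, complication, experience, specialized

# OLS: operating room time as a short-term cost indicator vs. cumulative surgeon experience
ols_model = smf.ols("or_time_min ~ experience + specialized", data=df).fit()
print(ols_model.summary())

# Logit: probability of a post-anesthesia care unit complication (0/1) vs. experience
logit_model = smf.logit("complication ~ experience + specialized", data=df).fit()
print(logit_model.summary())
```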
The second paper focuses on surgeons’ learning curve effects in a teamwork versus an individual work setting. Thus, the study combines learning curve effects with teamwork in health care, an issue increasingly discussed in recent literature. The investigated interventions are tonsillectomies (surgical excision of the palatine tonsils), a standard intervention. The indicators of short-term and medium- to long-term costs are again operating room times and, as a proxy for quality, complications, respectively. Complications are secondary bleedings, which usually occur a few days after surgery. The study estimates regression models (OLS and logit). The results show that operating room times decrease with increasing surgeon experience. Surgeons who also operate in teams learn faster than those who always operate on their own. Thus, operating room times are shorter for surgeons who also take part in team interventions. As a special feature, the data set contains the costs per case. This makes it possible to verify that the assumed cost indicators are valid. The findings recommend team surgeries especially for resident physicians. The authors are Carsten Bauer, Oliver Unger, and Martin Holderried.
The third paper is dedicated to stapes surgery, a therapy for conductive hearing loss caused by otosclerosis (excessive bone growth). The procedure is conceptually simple but technically difficult and is therefore regarded as an ideal setting for studying learning curve effects in surgery. The paper seeks a comprehensive investigation. Thus, operating room times are employed as the short-term cost indicator and quality as the medium- to long-term one. To measure quality, the postoperative difference between the air and bone conduction thresholds as well as a combination of this difference and the absence of complications are used. This paper also estimates different regression models (OLS and logit). Besides investigating the effects at the department level, the study also considers the individual level; that is, operating room times and quality are investigated for individual surgeons. This improves the comparison of learning curves, as the surgeons worked under widely identical conditions. It becomes apparent that the operating room times initially decrease with increasing experience. The marginal effect of additional experience becomes smaller until the direction of the effect changes and the operating room times increase with increasing experience, probably caused by the allocation of difficult cases to the most experienced surgeons. Regarding quality, no learning curve effects are observed. The authors are Carsten Bauer, Johannes Taeger, and Kristen Rak.
The fourth paper is a systematic literature review on learning effects in the treatment of ischemic strokes. In case of stroke, every minute counts. Therefore, there is the inherent need to reduce the time from symptom onset to treatment. The article is concerned with the reduction of the time from arrival at the hospital to thrombolysis treatment, the so-called “door-to-needle time”. In the literature, there are studies on learning in a broader sense caused by a quality improvement program as well as learning in a narrower sense, in which learning curve effects are evaluated. Besides, studies on the time differences between low-volume and high-volume hospitals are considered, as the differences are probably the result of learning and economies of scale. Virtually all the 165 evaluated articles report improvements regarding the time to treatment. Furthermore, the clinical results substantiate the common association of shorter times from arrival to treatment with improved clinical outcomes. The review additionally discusses the economic implications of the results. The author is Carsten Bauer.
The preface points out that, after the measurement of learning curve effects, further efforts are necessary to use them to increase efficiency, as the issue does not admit of easy, standardized solutions. Furthermore, the postface emphasizes the importance of multiperspectivity in research for patient outcomes, the health care system, and society.
Robotic process automation is a disruptive technology for rapidly automating tasks and subprocesses that are already digital yet still manual, as well as entire business processes. In contrast to other process automation technologies, robotic process automation is lightweight and only accesses the presentation layer of IT systems to mimic human behavior. Due to the novelty of robotic process automation and the varying approaches when implementing the technology, there are reports that up to 50% of robotic process automation projects fail. To tackle this issue, we use a design science research approach to develop a framework for the implementation of robotic process automation projects. We analyzed 35 reports on real-life projects to derive a preliminary sequential model. Then, we performed multiple expert interviews and workshops to validate and refine our model. The result is a framework with variable stages that offers guidelines with enough flexibility to be applicable in complex and heterogeneous corporate environments as well as for small and medium-sized companies. It is structured by the three phases of initialization, implementation, and scaling, which comprise eleven stages relevant both during a project and as a continuous cycle spanning individual projects. Together, they structure how to manage knowledge and support processes for the execution of robotic process automation implementation projects.
Ever-growing data availability combined with rapid progress in analytics has laid the foundation for the emergence of business process analytics. Organizations strive to leverage predictive process analytics to obtain insights. However, current implementations are designed to deal with homogeneous data; consequently, their practical use in organizations with heterogeneous data sources is limited. The paper proposes a method for predictive end-to-end enterprise process network monitoring that leverages multi-headed deep neural networks to overcome this limitation. A case study performed with a medium-sized German manufacturing company highlights the method’s utility for organizations.
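A minimal sketch of what such a multi-headed architecture can look like in Keras, not the paper's implementation: one head embeds an event sequence, a second head processes numerical case attributes, and both are concatenated for a next-activity prediction. All layer sizes, feature names, and dimensions are assumptions.

```python
# Illustrative sketch (not the paper's implementation): a multi-headed neural network
# that combines two heterogeneous data sources, e.g. a sequence of event codes and a
# vector of numerical case attributes, into a single next-step prediction.
import tensorflow as tf
from tensorflow.keras import layers, Model

n_activities, seq_len, n_numeric = 20, 10, 5

# Head 1: embedded event sequence processed by an LSTM
seq_in = layers.Input(shape=(seq_len,), name="event_sequence")
x1 = layers.Embedding(input_dim=n_activities, output_dim=16)(seq_in)
x1 = layers.LSTM(32)(x1)

# Head 2: numerical case attributes processed by a dense layer
num_in = layers.Input(shape=(n_numeric,), name="case_attributes")
x2 = layers.Dense(16, activation="relu")(num_in)

# Concatenate both heads and predict the next activity
merged = layers.concatenate([x1, x2])
out = layers.Dense(n_activities, activation="softmax", name="next_activity")(merged)

model = Model(inputs=[seq_in, num_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```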
Bioimages frequently exhibit low signal-to-noise ratios due to experimental conditions, specimen characteristics, and imaging trade-offs. Reliable segmentation of such ambiguous images is difficult and laborious. Here we introduce deepflash2, a deep learning-enabled segmentation tool for bioimage analysis. The tool addresses typical challenges that may arise during the training, evaluation, and application of deep learning models on ambiguous data. The tool’s training and evaluation pipeline uses multiple expert annotations and deep model ensembles to achieve accurate results. The application pipeline supports various use-cases for expert annotations and includes a quality assurance mechanism in the form of uncertainty measures. Benchmarked against other tools, deepflash2 offers both high predictive accuracy and efficient computational resource usage. The tool is built upon established deep learning libraries and enables sharing of trained model ensembles with the research community. deepflash2 aims to simplify the integration of deep learning into bioimage analysis projects while improving accuracy and reliability.
The global selection of production sites is a very complex task of great strategic importance for Original Equipment Manufacturers (OEMs), not only to ensure their sustained competitiveness, but also due to the sizeable long-term investment associated with a production site. With this in mind, this work develops a process model with which OEMs can select the most appropriate production site for their specific production activity in practice. Based on a literature analysis, the process model is developed by determining all necessary preparations, defining the properties of the selection process model, providing all necessary instructions for choosing and evaluating location factors, and laying out the procedure of the selection process model. Moreover, the selection process model includes a discussion of location factors that are potentially relevant for OEMs when selecting a production site. This discussion contains a description and, if relevant, a macroeconomic analysis of each location factor, an explanation of its relevance for constructing and operating a production site, additional information for choosing relevant location factors, and information and instructions on evaluating them in the selection process model. To be successfully applicable, the selection process model is developed on the assumption that the production site must not be selected in isolation, but as part of the global production network and supply chain of the OEM and, additionally, in a way that advances the OEM’s related strategic goals. Furthermore, the selection process model is developed on the premise that a purely quantitative model cannot realistically solve an OEM’s complex selection of a production site, that the realistic analysis of the conditions at potential production sites requires evaluating how these conditions change over the planning horizon of the production site, and that the future development of many of these conditions can only be assessed under uncertainty.
Innovative possibilities for data collection, networking, and evaluation are unleashing previously untapped potential for industrial production. However, harnessing this potential also requires a change in the way we work. In addition to expanded automation, human-machine cooperation is becoming more important: through artificial intelligence, the machine achieves a reduction in complexity for the human. In fractions of a second, large amounts of data are analyzed and suggestions of high decision quality are offered. The human, for their part, usually makes the ultimate decision, validating the machine’s suggestions and, if necessary, (physically) executing them.
Both entities are highly dependent on each other to accomplish the task in the best possible way. Therefore, it seems particularly important to understand to what extent such cooperation can be effective. Current developments in the field of artificial intelligence show that research in this area is particularly focused on neural network approaches. These are considered to be highly powerful but have the disadvantage of lacking transparency: their internal computational processes and the reasoning behind their results remain opaque to humans. Some researchers assume that human users might therefore reject the system’s suggestions. The research domain of explainable artificial intelligence (XAI) addresses this problem and tries to develop methods that make systems both highly efficient and explainable.
This work is intended to provide further insights relevant to the defined goal of XAI. For this purpose, artifacts are developed that represent research achievements regarding the systematization, perception, and adoption of artificially intelligent decision support systems from a user perspective. The focus is on socio-technical insights with the aim of better understanding which factors are important for effective human-machine cooperation. The elaborations predominantly represent extended grounded research. Thus, the artifacts constitute an extension of knowledge on whose basis effective XAI methods and techniques can be developed and/or tested. Industry 4.0, with a focus on maintenance, is used as the context for this development.
Novel deep learning (DL) architectures, better data availability, and a significant increase in computing power have enabled scientists to solve problems that were considered intractable for many years. A case in point is the “protein folding problem”, a 50-year-old grand challenge in biology that was recently solved by the DL system AlphaFold. Other examples comprise the development of large DL-based language models that, for instance, generate newspaper articles that hardly differ from those written by humans. However, developing unbiased, reliable, and accurate DL models for various practical applications remains a major challenge, and many promising DL projects get stuck in the piloting stage, never to be completed. In light of these observations, this thesis investigates the practical challenges encountered throughout the life cycle of DL projects and proposes solutions to develop and deploy rigorous DL models.
The first part of the thesis is concerned with prototyping DL solutions in different domains. First, we conceptualize guidelines for applied image recognition and showcase their application in a biomedical research project. Next, we illustrate the bottom-up development of a DL backend for an augmented intelligence system in the manufacturing sector. We then turn to the fashion domain and present an artificial curation system for individual fashion outfit recommendations that leverages DL techniques and unstructured data from social media and fashion blogs. After that, we showcase how DL solutions can assist fashion designers in the creative process. Finally, we present our award-winning DL solution for the segmentation of glomeruli in human kidney tissue images that was developed for the Kaggle data science competition HuBMAP - Hacking the Kidney.
The second part continues the development path of the biomedical research project beyond the prototyping stage. Using data from five laboratories, we show that ground truth estimation from multiple human annotators and training of DL model ensembles help to establish objectivity, reliability, and validity in DL-based bioimage analyses.
In the third part, we present deepflash2, a DL solution that addresses the typical challenges encountered during training, evaluation, and application of DL models in bioimaging. The tool facilitates the objective and reliable segmentation of ambiguous bioimages through multi-expert annotations and integrated quality assurance. It is embedded in an easy-to-use graphical user interface and offers best-in-class predictive performance for semantic and instance segmentation under economical usage of computational resources.
The digital transformation facilitates new forms of collaboration between companies along the supply chain and between companies and consumers. Besides sharing information on centralized platforms, blockchain technology is often regarded as a potential basis for this kind of collaboration. However, there is much hype surrounding the technology due to the rising popularity of cryptocurrencies, decentralized finance (DeFi), and non-fungible tokens (NFTs), which leads to potential issues being overlooked. Therefore, this thesis aims to investigate, highlight, and address the current weaknesses of blockchain technology: inefficient consensus, privacy, smart contract security, and scalability.
First, to provide a foundation, the four key challenges are introduced, and the research objectives are defined, followed by a brief presentation of the preliminary work for this thesis.
The following four parts highlight the four main problem areas of blockchain. Using big data analytics, we extracted and analyzed the blockchain data of six major blockchains to identify potential weaknesses in their consensus algorithms. To improve smart contract security, we classified smart contract functionalities to identify similarities in structure and design. The resulting taxonomy serves as a basis for future standardization efforts for security-relevant features, such as safe math functions and oracle services. To challenge privacy assumptions, we researched consortium blockchains from an adversary’s perspective: we chose four blockchains with misconfigured nodes and extracted as much information from those nodes as possible. Finally, we compared scalability solutions for blockchain applications and developed a decision process that serves as a guideline for improving the scalability of blockchain applications.
Building on the scalability framework, we showcase three potential applications for blockchain technology. First, we develop a token-based approach for inter-company value stream mapping. By relying only on simple tokens instead of complex smart contracts, the computational load on the network is expected to be much lower compared to other solutions. The following two solutions offload transactions and computations from the main blockchain. The first approach uses secure multiparty computation to offload the matching of supply and demand for manufacturing capacities to a trustless network; the transaction is written to the main blockchain only after the match is made. The second approach uses the concept of payment channel networks to enable high-frequency bidirectional micropayments for WiFi sharing. The host gets paid for every second of data usage through an off-chain channel, and the full payment is written to the blockchain only after the connection to the client is terminated.
Finally, the thesis concludes by briefly summarizing and discussing the results and providing avenues for further research.
Increasing global competition forces organizations to improve their processes to gain a competitive advantage. In the manufacturing sector, this is facilitated through tremendous digital transformation. Fundamental components in such digitalized environments are process-aware information systems that record the execution of business processes, assist in process automation, and unlock the potential to analyze processes. However, most enterprise information systems focus on informational aspects, process automation, or data collection but do not tap into predictive or prescriptive analytics to foster data-driven decision-making. Therefore, this dissertation sets out to investigate the design of analytics-enabled information systems in five independent parts, which step by step introduce analytics capabilities and assess potential opportunities for process improvement in real-world scenarios.
An essential prerequisite for setting up and extending analytics-enabled information systems is identifying success factors, which we do in the context of process mining as a descriptive analytics technique. We combine an established process mining framework and a success model to provide a structured approach for assessing success factors and identifying challenges, motivations, and the perceived business value of process mining from employees across organizations as well as process mining experts and consultants. We extend the existing success model and provide lessons for business value generation through process mining based on the derived findings. To assist the realization of process-mining-enabled business value, we design an artifact for context-aware process mining. The artifact combines standard process logs with additional context information to assist the automated identification of process realization paths associated with specific context events. Yet, realizing business value is a challenging task, as transforming processes based on informational insights is time-consuming.
To overcome this, we showcase the development of a predictive process monitoring system for disruption handling in a production environment. The system leverages state-of-the-art machine learning algorithms for disruption type classification and duration prediction. It combines the algorithms with additional organizational data sources and a simple assignment procedure to assist the disruption handling process. The design of such a system and analytics models is a challenging task, which we address by engineering a five-phase method for predictive end-to-end enterprise process network monitoring leveraging multi-headed deep neural networks. The method facilitates the integration of heterogeneous data sources through dedicated neural network input heads, which are concatenated for a prediction. An evaluation based on a real-world use-case highlights the superior performance of the resulting multi-headed network.
Even with improved model performance, results are not perfect, and thus decisions about assigning agents to solve disruptions have to be made under uncertainty. Mathematical models can assist here, but due to complex real-world conditions, the number of potential scenarios increases massively and limits the solution of assignment models. To overcome this and tap into the potential of prescriptive process monitoring systems, we set out a data-driven approximate dynamic stochastic programming approach, which incorporates multiple uncertainties into an assignment decision. The resulting model yields a significant performance improvement and ultimately highlights the particular importance of analytics-enabled information systems for organizational process improvement.
This paper shows that labor demand plays an important role in the labor market reactions to a pension reform in Germany. Employers with a high share of older worker inflow compared with their younger worker inflow, employers in sectors with few investments in research and development, and employers in sectors with a high share of collective bargaining agreements allow their employees to stay employed longer after the reform. These employers offer their older employees partial retirement instead of forcing them into unemployment before early retirement because the older employees incur low substitution costs and high dismissal costs.
Contemporary decision support systems are increasingly relying on artificial intelligence technology such as machine learning algorithms to form intelligent systems. These systems have human-like decision capacity for selected applications based on a decision rationale which cannot be looked up conveniently and constitutes a black box. As a consequence, acceptance by end-users remains somewhat hesitant. While a lack of transparency has been said to hinder trust and enforce aversion towards these systems, studies that connect user trust to transparency and subsequently to acceptance are scarce. In response, our research is concerned with the development of a theoretical model that explains end-user acceptance of intelligent systems. We utilize the unified theory of acceptance and use of technology as well as explanation theory and related theories on initial trust and user trust in information systems. The proposed model is tested in an industrial maintenance workplace scenario using maintenance experts as participants to represent the user group. Results show that acceptance is performance-driven at first sight. However, transparency plays an important indirect role in regulating trust and the perception of performance.
Due to computational advances in the past decades, so-called intelligent systems can learn from increasingly complex data, analyze situations, and support users in their decision-making to address them. However, in practice, the complexity of these intelligent systems leaves the user hardly able to comprehend the inherent decision logic of the underlying machine learning model. As a result, the adoption of this technology, especially for high-stakes scenarios, is hampered. In this context, explainable artificial intelligence offers numerous starting points for making the inherent logic explainable to people. While research demonstrates the necessity of incorporating explainable artificial intelligence into intelligent systems, there is still a lack of knowledge about how to socio-technically design these systems to address acceptance barriers among different user groups. In response, we have derived and evaluated a nascent design theory for explainable intelligent systems based on a structured literature review, two qualitative expert studies, a real-world use case application, and quantitative research. Our design theory includes design requirements, design principles, and design features covering the topics of global explainability, local explainability, personalized interface design, as well as psychological/emotional factors.
The study considers the application of text mining techniques to the analysis of curricula for study programs offered by institutions of higher education. It presents a novel procedure for efficient and scalable quantitative content analysis of module handbooks using topic modeling. The proposed approach allows for collecting, analyzing, evaluating, and comparing curricula from arbitrary academic disciplines as a partially automated, scalable alternative to qualitative content analysis, which is traditionally conducted manually. The procedure is illustrated by the example of IS study programs in Germany, based on a data set of more than 90 programs and 3700 distinct modules. The contributions made by the study address the needs of several different stakeholders and provide insights into the differences and similarities among the study programs examined. For example, the results may aid academic management in updating the IS curricula and can be incorporated into the curricular design process. With regard to employers, the results provide insights into the fulfillment of their employee skill expectations by various universities and degrees. Prospective students can incorporate the results into their decision concerning where and what to study, while university sponsors can utilize the results in their grant processes.
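A minimal sketch of the topic-modeling step, not the study's pipeline: LDA over a document-term matrix of module descriptions using scikit-learn, with toy texts and an assumed number of topics.

```python
# Illustrative sketch (not the study's pipeline): topic modeling of module handbook
# texts with LDA. The texts and the number of topics are assumptions; the study
# analyzed more than 3700 distinct modules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

module_texts = [
    "introduction to programming algorithms data structures",
    "enterprise resource planning business process management",
    "statistics regression hypothesis testing inference",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(module_texts)          # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(dtm)                    # per-document topic mixtures

# Print the top terms per topic
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```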
The strategic planning of Emergency Medical Service systems is directly related to the survival probability of the affected patients. Academic research has contributed to the evaluation of these systems by defining a variety of key performance metrics; the average response time, the workload of the system, several waiting time parameters, as well as the fraction of demand that cannot be served immediately are among the most important examples. The Hypercube Queueing Model is one of the most applied models in this field. Due to its theoretical background and the implied high computational times, the Hypercube Queueing Model has only recently been used for the optimization of Emergency Medical Service systems. Likewise, only a few system performance metrics have been calculated with the help of the model, and its full potential has therefore not yet been reached. Most of the existing optimization studies employing a Hypercube Queueing Model use the expected response time of the system as their objective function. While this often leads to balanced system configurations, other influencing factors were identified. Embedding the Hypercube Queueing Model in Robust Optimization as well as Robust Goal Programming is intended to offer a more holistic view through the use of different times of day. It was shown that the behavior of Emergency Medical Service systems as well as the corresponding parameters are highly sensitive to them. The analysis and optimization of such systems should therefore consider the different distributions of demand, with regard to quantity and location, in order to derive a holistic basis for decision-making.
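A minimal sketch of the underlying idea, not the thesis model: a two-server hypercube in the spirit of Larson's model, with a fixed dispatch preference list per demand node and lost calls when all servers are busy, solved for its steady-state probabilities. All rates and the dispatch policy are assumptions.

```python
# Illustrative sketch (not the thesis model): a tiny two-server hypercube queueing
# model. States are binary vectors of busy servers; calls are dispatched to the
# first free server on each demand node's preference list and lost if all are busy.
import itertools
import numpy as np

n_servers = 2
states = list(itertools.product([0, 1], repeat=n_servers))  # 0 = free, 1 = busy
idx = {s: i for i, s in enumerate(states)}

mu = 1.0                       # service rate per server (assumption)
lam = {0: 0.4, 1: 0.6}         # arrival rate per demand node (assumption)
pref = {0: [0, 1], 1: [1, 0]}  # dispatch preference list per demand node (assumption)

Q = np.zeros((len(states), len(states)))
for s in states:
    # arrivals: dispatch the first free server on the node's preference list
    for node, rate in lam.items():
        for srv in pref[node]:
            if s[srv] == 0:
                t = list(s); t[srv] = 1
                Q[idx[s], idx[tuple(t)]] += rate
                break  # if no server is free, the call is lost
    # service completions: each busy server becomes free at rate mu
    for srv in range(n_servers):
        if s[srv] == 1:
            t = list(s); t[srv] = 0
            Q[idx[s], idx[tuple(t)]] += mu
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve pi Q = 0 with sum(pi) = 1 for the steady-state probabilities
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

workloads = [sum(p for s, p in zip(states, pi) if s[srv]) for srv in range(n_servers)]
print("state probabilities:", dict(zip(states, pi.round(3))))
print("server workloads:", np.round(workloads, 3))
print("loss probability (all busy):", round(pi[idx[(1, 1)]], 3))
```

From the steady-state probabilities one can read off server workloads and the probability that no unit is available, which are examples of the performance metrics discussed above.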
Today, intelligent systems that offer artificial intelligence capabilities often rely on machine learning. Machine learning describes the capacity of systems to learn from problem-specific training data to automate the process of analytical model building and solve associated tasks. Deep learning is a machine learning concept based on artificial neural networks. For many applications, deep learning models outperform shallow machine learning models and traditional data analysis approaches. In this article, we summarize the fundamentals of machine learning and deep learning to generate a broader understanding of the methodical underpinning of current intelligent systems. In particular, we provide a conceptual distinction between relevant terms and concepts, explain the process of automated analytical model building through machine learning and deep learning, and discuss the challenges that arise when implementing such intelligent systems in the field of electronic markets and networked business. These naturally go beyond technological aspects and highlight issues in human-machine interaction and artificial intelligence servitization.
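A minimal sketch of the automated analytical model building described above, with dataset and model choices as assumptions: a shallow ML model and a small neural network are learned from the same problem-specific training data.

```python
# Illustrative sketch: analytical model building from training data, comparing a
# shallow ML model with a small artificial neural network. Dataset and model
# choices are assumptions for demonstration purposes.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

shallow = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
neural = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("shallow model accuracy:", round(shallow.score(X_te, y_te), 3))
print("neural network accuracy:", round(neural.score(X_te, y_te), 3))
```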
The first problem is that of optimal volume allocation in procurement. The choice of this problem was motivated by a study whose objective was to support decision-making at two procurement organizations for the procurement of Depot Medroxyprogesterone Acetate (DMPA), an injectable contraceptive. At the time of this study, only one supplier that had undergone the costly and lengthy process of WHO pre-qualification was available to these organizations. However, a new entrant supplier was expected to receive WHO qualification within the next year, thus becoming a viable second source for DMPA procurement. When deciding how to allocate the procurement volume between the two suppliers, the buyers had to consider the impact on price as well as risk. Higher allocations to one supplier yield lower prices but expose a buyer to higher supply risks, while an even allocation will result in lower supply risk but also reduce competitive pressure, resulting in higher prices. Our research investigates this single- versus dual-sourcing problem and quantifies in one model the impact of the procurement volume on competition and risk. To support decision-makers, we develop a mathematical framework that accounts for the characteristics of donor-funded global health markets and models the effects of an entrant on purchasing costs and supply risks. Our in-depth analysis provides insights into how the optimal allocation decision is affected by various parameters and explores the trade-off between competition and supply risk. For example, we find that, even if the entrant supplier introduces longer lead times and a higher default risk, the buyer still benefits from dual sourcing. However, these risk-diversification benefits depend heavily on the entrant’s in-country registration: if the buyer can ship the entrant’s product to only a selected number of countries, the buyer does not benefit from dual sourcing as much as it would if the entrant’s product could be shipped to all supplied countries. We show that the buyer should be interested in qualifying the entrant’s product in countries with high demand first.
In the second problem we explore a new tendering mechanism called the postponement tender, which can be useful when buyers in the global health industry want to contract new generics suppliers with uncertain product quality. The mechanism allows a buyer to postpone part of the procurement volume’s allocation so the buyer can learn about the unknown quality before allocating the remaining volume to the best supplier in terms of both price and quality. We develop a mathematical model to capture the decision-maker’s trade-offs in setting the right split between the initial volume and the postponed volume. Our analysis shows that a buyer can benefit from this mechanism more than it can from a single-sourcing format, as it can decrease the risk of receiving poor quality (in terms of product quality and logistics performance) and even increase competitive pressure between the suppliers, thereby lowering the purchasing costs. By considering market parameters like the buyer’s size, the suppliers’ value (difference between quality and cost), quality uncertainty, and minimum order volumes, we derive optimal sourcing strategies for various market structures and explore how competition is affected by the buyer’s learning about the suppliers’ quality through the initial volume.
The third problem considers the repeated procurement problem of pharmacies in Kenya that have multi-product inventories. Coordinating orders allows pharmacies to achieve lower procurement prices by using the quantity discounts manufacturers offer and sharing fixed ordering costs, such as logistics costs. However, coordinating and optimizing orders for multiple products is complex and costly. To solve the coordinated procurement problem, also known as the Joint Replenishment Problem (JRP) with quantity discounts, a novel, data-driven inventory policy using sample-average approximation is proposed. The inventory policy is developed based on renewal theory and is evaluated using real-world sales data from Kenyan pharmacies. Multiple benchmarks are used to evaluate the performance of the approach. First, it is compared to the theoretically optimal policy, that is, a dynamic-programming policy, in the single-product setting without quantity discounts to show that the proposed policy results in comparable inventory costs. Second, the policy is evaluated for the original multi-product setting with quantity discounts and compared to ex-post optimal costs. The evaluation shows that the policy’s performance in the multi-product setting is similar to its performance in the single-product setting (with respect to ex-post optimal costs), suggesting that the proposed policy offers a promising, data-driven solution to these types of multi-product inventory problems.
Digitization and artificial intelligence are radically changing virtually all areas across business and society. These developments are mainly driven by the technology of machine learning (ML), which is enabled by the confluence of large amounts of training data, statistical learning theory, and sufficient computational power. This technology forms the basis for the development of new approaches to solving classical planning problems of Operations Research (OR): prescriptive analytics approaches integrate ML prediction and OR optimization into a single prescription step, so they learn from historical observations of demand and a set of features (covariates) and provide a model that directly prescribes future decisions. These novel approaches provide enormous potential to improve planning decisions, as first case reports have shown, and consequently constitute a new field of research in Operations Management (OM).
First works in this new field of research have studied approaches to solving comparatively simple planning problems in the area of inventory management. However, common OM planning problems often have a more complex structure, and many of these complex planning problems are within the domain of capacity planning. Therefore, this dissertation focuses on developing new prescriptive analytics approaches for complex capacity management problems. It consists of three independent articles that develop new prescriptive approaches and use them to solve realistic capacity planning problems.
The first article, “Prescriptive Analytics for Flexible Capacity Management”, develops two prescriptive analytics approaches, weighted sample average approximation (wSAA) and kernelized empirical risk minimization (kERM), to solve a complex two-stage capacity planning problem that has been studied extensively in the literature: a logistics service provider sorts daily incoming mail items on three service lines that must be staffed on a weekly basis. This article is the first to develop a kERM approach to solve a complex two-stage stochastic capacity planning problem with matrix-valued observations of demand and vector-valued decisions. The article develops out-of-sample performance guarantees for kERM and various kernels, and shows the universal approximation property when using a universal kernel. The results of the numerical study suggest that prescriptive analytics approaches may lead to significant improvements in performance compared to traditional two-step approaches or SAA and that their performance is more robust to variations in the exogenous cost parameters.
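A minimal sketch of the wSAA idea, not the article's implementation: the capacity decision for a new feature vector is obtained by minimizing a sample average of newsvendor-style costs in which the historical demand observations are weighted by a k-nearest-neighbor function over feature space. Cost parameters, the data-generating process, and k are assumptions.

```python
# Illustrative sketch (not the article's implementation): weighted sample average
# approximation (wSAA) for a newsvendor-style capacity decision with kNN weights.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n, p = 500, 3
X = rng.normal(size=(n, p))                                  # historical features (covariates)
demand = 100 + 30 * X[:, 0] + rng.normal(scale=10, size=n)   # historical demand observations

underage, overage = 4.0, 1.0                                 # cost of missing vs. excess capacity

def wsaa_decision(x_new, k=50):
    """Prescribe capacity for features x_new by minimizing the kNN-weighted SAA cost."""
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, neighbor_idx = nn.kneighbors(x_new.reshape(1, -1))
    scenarios = demand[neighbor_idx[0]]
    weights = np.full(k, 1.0 / k)                             # uniform kNN weights
    candidates = np.sort(scenarios)
    costs = [np.sum(weights * (underage * np.maximum(scenarios - q, 0)
                               + overage * np.maximum(q - scenarios, 0)))
             for q in candidates]
    return candidates[int(np.argmin(costs))]

print("prescribed capacity:", round(wsaa_decision(np.array([1.0, 0.0, 0.0])), 1))
```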
The second article, “Prescriptive Analytics for a Multi-Shift Staffing Problem”, uses prescriptive analytics approaches to solve the (queuing-type) multi-shift staffing problem (MSSP) of an aviation maintenance provider that receives customer requests of uncertain number and at uncertain arrival times throughout each day and plans staff capacity for two shifts. This planning problem is particularly complex because the order inflow and processing are modelled as a queuing system, and the demand in each day is non-stationary. The article addresses this complexity by deriving an approximation of the MSSP that enables the planning problem to be solved using wSAA, kERM, and a novel Optimization Prediction approach. A numerical evaluation shows that wSAA leads to the best performance in this particular case. The solution method developed in this article builds a foundation for solving queuing-type planning problems using prescriptive analytics approaches, so it bridges the “worlds” of queuing theory and prescriptive analytics.
The third article, “Explainable Subgradient Tree Boosting for Prescriptive Analytics in Operations Management”, proposes a novel prescriptive analytics approach, Subgradient Tree Boosting (STB), to solve the two capacity planning problems studied in the first and second articles while allowing decision-makers to derive explanations for prescribed decisions. STB combines the machine learning method Gradient Boosting with SAA and relies on subgradients because the cost function of OR planning problems often cannot be differentiated. A comprehensive numerical analysis suggests that STB can lead to a prescription performance that is comparable to that of wSAA and kERM. The explainability of STB prescriptions is demonstrated by breaking exemplary decisions down into the impacts of individual features. The novel STB approach is an attractive choice not only because of its prescription performance, but also because of the explainability that helps decision-makers understand the causality behind the prescriptions.
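A minimal sketch of the boosting-with-subgradients idea, not the article's STB algorithm: regression trees are fit to negative subgradients of a newsvendor-style cost, and the prescription is updated additively. Cost parameters, learning rate, and tree depth are assumptions.

```python
# Illustrative sketch (not the article's STB algorithm): boosting with subgradients
# of a newsvendor cost. Each round fits a regression tree to the negative subgradient
# of the cost with respect to the current prescription and updates it additively.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.normal(size=(n, p))
demand = 100 + 30 * X[:, 0] + rng.normal(scale=10, size=n)

underage, overage = 4.0, 1.0

def neg_subgradient(q, d):
    # subgradient of underage*(d-q)+ + overage*(q-d)+ w.r.t. q is -underage where
    # q < d and +overage where q > d; the negative subgradient points downhill
    return np.where(q < d, underage, -overage)

q = np.full(n, demand.mean())      # initial prescription
trees, lr, rounds = [], 0.1, 100
for _ in range(rounds):
    tree = DecisionTreeRegressor(max_depth=3).fit(X, neg_subgradient(q, demand))
    trees.append(tree)
    q = q + lr * tree.predict(X)

def prescribe(x_new):
    """Prescribe capacity for new features by summing the boosted tree ensemble."""
    return demand.mean() + lr * sum(t.predict(x_new.reshape(1, -1))[0] for t in trees)

print("prescribed capacity:", round(prescribe(np.array([1.0, 0.0, 0.0])), 1))
```

Because the prescription is a sum of tree contributions, an exemplary decision can be broken down into the impacts of individual features, which is the kind of explainability described above.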
The results presented in these three articles demonstrate that using prescriptive analytics approaches, such as wSAA, kERM, and STB, to solve complex planning problems can lead to significantly better decisions compared to traditional approaches that neglect feature data or rely on a parametric distribution estimation.
Accounting plays an essential role in solving the principal-agent problem between managers and shareholders of capital market-oriented companies through the provision of information by the manager. However, this can succeed only if the accounting information is of high quality. In this context, the perceptions of shareholders regarding earnings quality are of particular importance.
The present dissertation intends to contribute to a deeper understanding of earnings quality from the perspective of shareholders of capital market-oriented companies. In particular, the thesis deals with indicators of shareholders’ perceptions of earnings quality, the influence of the auditor’s independence on these perceptions, and the shareholders’ assessment of the importance of earnings quality in general. To this end, this dissertation examines market reactions to earnings announcements, measures of earnings quality and the auditor’s independence, as well as shareholders’ voting behavior at annual general meetings.
Following the introduction and a theoretical part consisting of two chapters, which deal with the purposes of accounting and auditing as well as the relevance of shareholder voting at the annual general meeting in the context of the principal-agent theory, the dissertation presents three empirical studies.
The empirical study presented in chapter 4 investigates auditor ratification votes in a U.S. setting. The study addresses the question of whether the results of auditor ratification votes are informative regarding shareholders’ perceptions of earnings quality. Using a returns-earnings design, the study demonstrates that the results of auditor ratification votes are associated with market reactions to unexpected earnings at the earnings announcement date. Furthermore, there are indications that this association seems to be positively related to higher levels of information asymmetry between managers and shareholders. Thus, there is empirical support for the notion that the results of auditor ratification votes are earnings-related information that might help shareholders to make informed investment decisions.
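A minimal sketch of a returns-earnings specification of this kind, not the study's exact model: announcement-window abnormal returns are regressed on unexpected earnings and their interaction with auditor-ratification vote dissent. Variable names and the data file are assumptions.

```python
# Illustrative sketch (not the study's specification): a returns-earnings design in
# which announcement-window abnormal returns are regressed on unexpected earnings
# and their interaction with auditor-ratification vote dissent.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("earnings_announcements.csv")
# assumed columns: car (cumulative abnormal return), ue (unexpected earnings, scaled
# by price), vote_dissent (share of votes against auditor ratification)

model = smf.ols("car ~ ue * vote_dissent", data=df).fit(cov_type="HC1")
print(model.summary())  # the coefficient on ue:vote_dissent captures whether vote
                        # results moderate the market reaction to unexpected earnings
```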
Chapter 5 investigates the relation between the economic importance of the client and perceived earnings quality. In particular, it is examined whether and when shareholders have a negative perception of an auditor’s economic dependence on the client. The results from a Big 4 client sample in the U.S. (fiscal years 2010 through 2014) indicate a negative association between the economic importance of the client and shareholders’ perceptions of earnings quality. The results are interpreted to mean that shareholders are still concerned about auditor independence even ten years after the implementation of the Sarbanes-Oxley Act. Furthermore, the association between the economic importance of the client and shareholders’ perceptions of earnings quality applies predominantly to the subsample of clients that are more likely to be financially distressed. Therefore, the empirical results reveal that shareholders’ perceptions of auditor independence are conditional on the client’s circumstances.
The study presented in chapter 6 sheds light on the question of whether earnings quality influences shareholders’ satisfaction with the members of the company’s board. Using data from 1,237 annual general meetings of German listed companies from 2010 through 2015, the study provides evidence that earnings quality – measured by the absolute value of discretionary accruals – is related to shareholders’ satisfaction with the company’s board. Moreover, the findings imply that shareholders predominantly blame the management board for inferior earnings quality. Overall, the evidence that earnings quality positively influences shareholders’ satisfaction emphasizes the relevance of earnings quality.