Betriebswirtschaftliches Institut
The study considers the application of text mining techniques to the analysis of curricula for study programs offered by institutions of higher education. It presents a novel procedure for efficient and scalable quantitative content analysis of module handbooks using topic modeling. The proposed approach allows for collecting, analyzing, evaluating, and comparing curricula from arbitrary academic disciplines as a partially automated, scalable alternative to qualitative content analysis, which is traditionally conducted manually. The procedure is illustrated by the example of IS study programs in Germany, based on a data set of more than 90 programs and 3700 distinct modules. The contributions made by the study address the needs of several different stakeholders and provide insights into the differences and similarities among the study programs examined. For example, the results may aid academic management in updating the IS curricula and can be incorporated into the curricular design process. With regard to employers, the results provide insights into the fulfillment of their employee skill expectations by various universities and degrees. Prospective students can incorporate the results into their decision concerning where and what to study, while university sponsors can utilize the results in their grant processes.
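The quantitative content analysis described above rests on topic modeling. A minimal sketch of the idea, assuming scikit-learn's LDA implementation and a handful of invented module descriptions standing in for parsed module handbooks (this is not the study's actual pipeline or data):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical module descriptions standing in for parsed module handbooks.
modules = [
    "database systems sql data modelling relational queries",
    "machine learning statistics regression classification data",
    "corporate finance accounting balance sheets valuation",
    "deep learning neural networks data training models",
    "financial markets investment accounting risk valuation",
]

# Bag-of-words representation of each module description.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(modules)

# Fit an LDA model with two latent topics (e.g. "data/IS" vs. "finance").
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # each row is a topic mixture summing to 1

for desc, dist in zip(modules, doc_topics):
    print(f"{desc[:30]:30s} -> dominant topic {dist.argmax()}")
```

Comparing curricula then reduces to comparing the topic-mixture vectors of their modules, which is what makes the procedure scalable across disciplines.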
Today, intelligent systems that offer artificial intelligence capabilities often rely on machine learning. Machine learning describes the capacity of systems to learn from problem-specific training data to automate the process of analytical model building and solve associated tasks. Deep learning is a machine learning concept based on artificial neural networks. For many applications, deep learning models outperform shallow machine learning models and traditional data analysis approaches. In this article, we summarize the fundamentals of machine learning and deep learning to generate a broader understanding of the methodical underpinning of current intelligent systems. In particular, we provide a conceptual distinction between relevant terms and concepts, explain the process of automated analytical model building through machine learning and deep learning, and discuss the challenges that arise when implementing such intelligent systems in the field of electronic markets and networked business. These naturally go beyond technological aspects and highlight issues in human-machine interaction and artificial intelligence servitization.
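The process of automated analytical model building can be illustrated with a minimal sketch: training data in, fitted model out, with a shallow learner and a small neural network trained on the same task. Data and hyperparameters are invented for illustration and are not taken from the article:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Problem-specific training data (synthetic here).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Shallow model: logistic regression learns a linear decision boundary.
shallow = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deeper model: a small feed-forward neural network with two hidden layers.
deep = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X_train, y_train)

print("shallow accuracy:", shallow.score(X_test, y_test))
print("deep accuracy:   ", deep.score(X_test, y_test))
```

On such a small linear task the two perform similarly; the advantage of deep models cited in the article materializes on large, high-dimensional data such as images or text.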
The strategic planning of Emergency Medical Service systems directly affects the survival probability of the patients involved. Academic research has contributed to the evaluation of these systems by defining a variety of key performance metrics: the average response time, the workload of the system, several waiting-time parameters, and the fraction of demand that cannot be served immediately are among the most important examples. The Hypercube Queueing Model is one of the most widely applied models in this field. Because of its theoretical complexity and the resulting high computational times, the Hypercube Queueing Model has only recently been used for the optimization of Emergency Medical Service systems. Likewise, only a few system performance metrics have so far been calculated with the model, so its full potential has not yet been exploited. Most existing optimization studies based on a Hypercube Queueing Model use the expected response time of the system as their objective function. While this often leads to balanced system configurations, other influencing factors have been identified. Embedding the Hypercube Queueing Model in Robust Optimization as well as Robust Goal Programming was intended to provide a more holistic view by distinguishing different times of day. It was shown that the behavior of Emergency Medical Service systems, as well as the corresponding parameters, is highly sensitive to the time of day. The analysis and optimization of such systems should therefore consider the different distributions of demand, with regard to both quantity and location, in order to derive a holistic basis for decision-making.
Digitization and artificial intelligence are radically changing virtually all areas across business and society. These developments are driven mainly by machine learning (ML), a technology enabled by the convergence of large amounts of training data, statistical learning theory, and sufficient computational power. This technology forms the basis for new approaches to classical planning problems of Operations Research (OR): prescriptive analytics approaches integrate ML prediction and OR optimization into a single prescription step; they learn from historical observations of demand and a set of features (covariates) and provide a model that directly prescribes future decisions. These novel approaches offer enormous potential to improve planning decisions, as first case reports have shown, and consequently constitute a new field of research in Operations Management (OM).
Early works in this new field of research have studied approaches to solving comparatively simple planning problems in the area of inventory management. However, common OM planning problems often have a more complex structure, and many of these complex planning problems belong to the domain of capacity planning. This dissertation therefore focuses on developing new prescriptive analytics approaches for complex capacity management problems. It consists of three independent articles, each of which develops new prescriptive approaches and uses them to solve realistic capacity planning problems.
The first article, “Prescriptive Analytics for Flexible Capacity Management”, develops two prescriptive analytics approaches, weighted sample average approximation (wSAA) and kernelized empirical risk minimization (kERM), to solve a complex two-stage capacity planning problem that has been studied extensively in the literature: a logistics service provider sorts daily incoming mail items on three service lines that must be staffed on a weekly basis. This article is the first to develop a kERM approach to solve a complex two-stage stochastic capacity planning problem with matrix-valued observations of demand and vector-valued decisions. The article develops out-of-sample performance guarantees for kERM and various kernels, and shows the universal approximation property when using a universal kernel. The results of the numerical study suggest that prescriptive analytics approaches may lead to significant improvements in performance compared to traditional two-step approaches or SAA and that their performance is more robust to variations in the exogenous cost parameters.
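The core idea behind wSAA can be sketched for a stylized single-resource, newsvendor-type capacity problem: an ML weight function assigns each historical demand observation a relevance weight given the new feature value, and the capacity decision minimizes the weighted sample-average cost. Here k-nearest-neighbour weights and synthetic data are used as illustrative assumptions; the article's setting (matrix-valued demand, random-forest/kernel weights) is far richer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical observations: a feature x (e.g. a seasonality index) and demand d.
n = 200
x_hist = rng.uniform(0, 1, size=(n, 1))
d_hist = 50 + 40 * x_hist[:, 0] + rng.normal(0, 5, size=n)

cu, co = 4.0, 1.0                      # underage / overage cost (assumed)
service_level = cu / (cu + co)         # critical ratio of the newsvendor

def wsaa_capacity(x_new, k=20):
    """Weighted SAA with k-nearest-neighbour weights (a simple ML weight
    function standing in for the article's learned weights)."""
    dist = np.abs(x_hist[:, 0] - x_new)
    nn = np.argsort(dist)[:k]          # indices of the k nearest samples
    w = np.zeros(n); w[nn] = 1.0 / k   # uniform weights on the neighbours
    # The weighted newsvendor optimum is the weighted service-level quantile.
    order = np.argsort(d_hist)
    cum = np.cumsum(w[order])
    return d_hist[order][np.searchsorted(cum, service_level)]

print("prescribed capacity at x=0.9:", wsaa_capacity(0.9))
```

The prescription adapts to the feature value without ever fitting a demand distribution, which is exactly the one-step logic that distinguishes prescriptive approaches from traditional predict-then-optimize pipelines.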
The second article, “Prescriptive Analytics for a Multi-Shift Staffing Problem”, uses prescriptive analytics approaches to solve the (queuing-type) multi-shift staffing problem (MSSP) of an aviation maintenance provider that receives customer requests of uncertain number and at uncertain arrival times throughout each day and plans staff capacity for two shifts. This planning problem is particularly complex because the order inflow and processing are modelled as a queuing system, and the demand in each day is non-stationary. The article addresses this complexity by deriving an approximation of the MSSP that enables the planning problem to be solved using wSAA, kERM, and a novel Optimization Prediction approach. A numerical evaluation shows that wSAA leads to the best performance in this particular case. The solution method developed in this article builds a foundation for solving queuing-type planning problems using prescriptive analytics approaches, so it bridges the “worlds” of queuing theory and prescriptive analytics.
The third article, “Explainable Subgradient Tree Boosting for Prescriptive Analytics in Operations Management”, proposes Subgradient Tree Boosting (STB), a novel prescriptive analytics approach that solves the two capacity planning problems studied in the first and second articles while allowing decision-makers to derive explanations for the prescribed decisions. STB combines the machine learning method Gradient Boosting with SAA and relies on subgradients because the cost functions of OR planning problems often cannot be differentiated. A comprehensive numerical analysis suggests that STB can achieve a prescription performance comparable to that of wSAA and kERM. The explainability of STB prescriptions is demonstrated by breaking exemplary decisions down into the impacts of individual features. The novel STB approach is an attractive choice not only because of its prescription performance, but also because of its explainability, which helps decision-makers understand the causality behind the prescriptions.
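The boosting-on-subgradients idea can be sketched for a stylized newsvendor cost: starting from a constant prescription, each round fits a shallow regression tree to the negative subgradients of the cost and takes a small step along its predictions. Data, cost parameters, and hyperparameters below are illustrative assumptions, not the article's configuration:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

# Synthetic feature-demand history (the articles use real planning data).
n = 300
X = rng.uniform(0, 1, size=(n, 1))
d = 50 + 40 * X[:, 0] + rng.normal(0, 5, size=n)

cu, co = 4.0, 1.0                       # underage / overage cost (assumed)

def neg_subgradient(z, d):
    # Subgradient of cu*(d-z)^+ + co*(z-d)^+ in z is -cu (z < d) or +co
    # (z > d); boosting descends along its negative.
    return np.where(z < d, cu, -co)

# Subgradient tree boosting: start from the mean prescription and repeatedly
# fit a shallow tree to the negative subgradients of the cost function.
eta, n_rounds = 0.5, 100
z = np.full(n, d.mean())
trees = []
for _ in range(n_rounds):
    g = neg_subgradient(z, d)
    tree = DecisionTreeRegressor(max_depth=2).fit(X, g)
    trees.append(tree)
    z += eta * tree.predict(X)

def prescribe(x_new):
    x_new = np.asarray(x_new, dtype=float).reshape(-1, 1)
    out = np.full(len(x_new), d.mean())
    for tree in trees:
        out += eta * tree.predict(x_new)
    return out

print("prescriptions at x=0.1 and x=0.9:", prescribe([0.1, 0.9]))
```

Because the ensemble is a sum of small trees, a prescription can be decomposed into per-round, per-feature contributions, which is the hook for the explainability the article demonstrates.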
The results presented in these three articles demonstrate that using prescriptive analytics approaches, such as wSAA, kERM, and STB, to solve complex planning problems can lead to significantly better decisions compared to traditional approaches that neglect feature data or rely on a parametric distribution estimation.
Owing to the well-known problems of the pay-as-you-go statutory pension insurance, the German legislator has for some time been trying to promote self-reliant retirement provision, frequently focusing on occupational pension schemes (betriebliche Altersversorgung, bAV). Using expert and employee interviews, this thesis works out in detail where the central obstacles to the wider adoption of occupational pensions lie and how they can be addressed by adjusting the tax and social insurance framework. Essential elements of these reform considerations were incorporated into the Betriebsrentenstärkungsgesetz (Occupational Pension Strengthening Act), which entered into force on 1 January 2018.
In addition, the thesis uses an experimental economic analysis to show how different types of taxation can influence individual savings decisions. The results make clear that individuals often do not correctly perceive the effect of deferred taxation.
The first problem is that of the optimal volume allocation in procurement. The choice of this problem was motivated by a study whose objective was to support decision-making at two procurement organizations for the procurement of Depot Medroxyprogesterone Acetate (DMPA), an injectable contraceptive. At the time of this study, only one supplier that had undergone the costly and lengthy process of WHO pre-qualification was available to these organizations. However, a new entrant supplier was expected to receive WHO qualification within the next year, thus becoming a viable second source for DMPA procurement. When deciding how to allocate the procurement volume between the two suppliers, the buyers had to consider the impact on price as well as risk. Higher allocations to one supplier yield lower prices but expose the buyer to higher supply risk, while an even allocation lowers supply risk but also reduces competitive pressure, leading to higher prices. Our research investigates this single- versus dual-sourcing problem and quantifies in one model the impact of the procurement volume on competition and risk. To support decision-makers, we develop a mathematical framework that accounts for the characteristics of donor-funded global health markets and models the effects of an entrant on purchasing costs and supply risks. Our in-depth analysis provides insights into how the optimal allocation decision is affected by various parameters and explores the trade-off between competition and supply risk. For example, we find that, even if the entrant supplier introduces longer lead times and a higher default risk, the buyer still benefits from dual sourcing.
However, these risk-diversification benefits depend heavily on the entrant’s in-country registration: if the buyer can ship the entrant’s product to only a selected number of countries, the buyer does not benefit from dual sourcing as much as it would if the entrant’s product could be shipped to all supplied countries. We show that the buyer should be interested in qualifying the entrant’s product in countries with high demand first.
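The competition-versus-risk trade-off behind the allocation decision can be sketched with a stylized mean-variance model: volume discounts reward concentrating volume on one supplier, while independent default risks reward splitting it. All parameters, the linear discount form, and the risk-aversion term are illustrative assumptions of this sketch, not the thesis's framework:

```python
import numpy as np

# Stylized single- vs dual-sourcing trade-off (parameters assumed, not
# calibrated to the DMPA study).
a1, a2 = 1.00, 0.90           # base unit prices: incumbent, entrant
b = 0.10                      # volume discount: price falls with a supplier's share
theta1, theta2 = 0.05, 0.15   # default probabilities: incumbent, entrant
P = 2.0                       # emergency unit cost if an allocated supplier defaults
rho = 2.0                     # buyer's risk aversion (weight on cost variance)

def objective(q):
    """q = share allocated to the incumbent; 1 - q goes to the entrant."""
    purchase = q * (a1 - b * q) + (1 - q) * (a2 - b * (1 - q))
    expected_penalty = P * (theta1 * q + theta2 * (1 - q))
    # Variance of the disruption cost: defaults are independent Bernoullis.
    variance = P**2 * (theta1 * (1 - theta1) * q**2
                       + theta2 * (1 - theta2) * (1 - q)**2)
    return purchase + expected_penalty + rho * variance

grid = np.linspace(0, 1, 1001)
costs = np.array([objective(q) for q in grid])
q_star = grid[costs.argmin()]
print(f"optimal incumbent share: {q_star:.3f}")
```

Under these parameters the optimum is interior: the buyer keeps most of the volume with the cheaper-to-risk incumbent but still dual-sources, mirroring the qualitative finding that dual sourcing pays off even when the entrant is riskier.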
In the second problem we explore a new tendering mechanism called the postponement tender, which can be useful when buyers in the global health industry want to contract new generics suppliers with uncertain product quality. The mechanism allows a buyer to postpone part of the procurement volume’s allocation so the buyer can learn about the unknown quality before allocating the remaining volume to the best supplier in terms of both price and quality. We develop a mathematical model to capture the decision-maker’s trade-offs in setting the right split between the initial volume and the postponed volume. Our analysis shows that a buyer can benefit from this mechanism more than it can from a single-sourcing format, as it can decrease the risk of receiving poor quality (in terms of product quality and logistics performance) and even increase competitive pressure between the suppliers, thereby lowering the purchasing costs. By considering market parameters like the buyer’s size, the suppliers’ value (difference between quality and cost), quality uncertainty, and minimum order volumes, we derive optimal sourcing strategies for various market structures and explore how competition is affected by the buyer’s learning about the suppliers’ quality through the initial volume.
The third problem considers the repeated procurement problem of pharmacies in Kenya that have multi-product inventories. Coordinating orders allows pharmacies to achieve lower procurement prices by using the quantity discounts manufacturers offer and by sharing fixed ordering costs, such as logistics costs. However, coordinating and optimizing orders for multiple products is complex and costly. To solve the coordinated procurement problem, also known as the Joint Replenishment Problem (JRP) with quantity discounts, a novel, data-driven inventory policy using sample-average approximation is proposed. The inventory policy is developed based on renewal theory and is evaluated using real-world sales data from Kenyan pharmacies. Multiple benchmarks are used to evaluate the performance of the approach. First, it is compared to the theoretically optimal policy (a dynamic-programming policy) in the single-product setting without quantity discounts, showing that the proposed policy results in comparable inventory costs. Second, the policy is evaluated in the original multi-product setting with quantity discounts and compared to the ex-post optimal costs. The evaluation shows that the policy’s performance in the multi-product setting is similar to its performance in the single-product setting (with respect to ex-post optimal costs), suggesting that the proposed policy offers a promising, data-driven solution to these types of multi-product inventory problems.
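The renewal-theoretic, sample-average flavour of such a policy can be sketched for a single product: each candidate order quantity is scored by its sample-average long-run cost per day (cycle cost divided by cycle length, averaged over historical demand samples), with an all-units quantity discount. Demand data, costs, and the discount break are invented for illustration; the thesis's policy handles the joint multi-product case:

```python
import numpy as np

rng = np.random.default_rng(2)

# Historical daily demand samples for one product (synthetic stand-in for
# the Kenyan pharmacy sales data).
demand = rng.poisson(8, size=250).astype(float)

K = 15.0          # fixed ordering (logistics) cost per order
h = 0.02          # holding cost per unit per day

def unit_price(Q):
    # All-units quantity discount: cheaper per unit from 200 units upward.
    return 0.90 if Q >= 200 else 1.00

def avg_cost_rate(Q):
    """Sample-average long-run cost per day for order quantity Q, using the
    renewal-reward form: expected cycle cost over expected cycle length."""
    rates = []
    for d in demand:
        if d == 0:
            continue                      # no depletion, skip degenerate cycle
        cycle = Q / d                     # days until Q units are sold
        cycle_cost = K + unit_price(Q) * Q + h * (Q / 2) * cycle
        rates.append(cycle_cost / cycle)
    return float(np.mean(rates))

grid = np.arange(50, 501, 10)
costs = np.array([avg_cost_rate(Q) for Q in grid])
Q_star = grid[costs.argmin()]
print(f"data-driven order quantity: {Q_star}, cost/day: {costs.min():.3f}")
```

With these parameters the data pull the order quantity up to the discount break rather than to the undiscounted EOQ, illustrating why quantity discounts make the replenishment decision non-trivial and worth coordinating across products.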