Betriebswirtschaftliches Institut
Innovative possibilities for data collection, networking, and evaluation are unleashing previously untapped potential for industrial production. However, harnessing this potential also requires a change in the way we work. In addition to expanded automation, human-machine cooperation is becoming more important: using artificial intelligence, the machine reduces complexity for the human by analyzing large amounts of data in fractions of a second and offering suggestions of high decision quality. The human, for their part, usually makes the final decision, validating the machine's suggestions and, if necessary, (physically) executing them.
Both entities are highly dependent on each other to accomplish the task in the best possible way. Therefore, it seems particularly important to understand to what extent such cooperation can be effective. Current developments in the field of artificial intelligence show that research in this area is particularly focused on neural network approaches. These are considered to be highly powerful but have the disadvantage of lacking transparency. Their inherent computational processes and the respective result reasoning remain opaque to humans. Some researchers assume that human users might therefore reject the system’s suggestions. The research domain of explainable artificial intelligence (XAI) addresses this problem and tries to develop methods to realize systems that are highly efficient and explainable.
This work is intended to provide further insights relevant to the defined goal of XAI. For this purpose, artifacts are developed that represent research achievements regarding the systematization, perception, and adoption of artificially intelligent decision support systems from a user perspective. The focus is on socio-technical insights, with the aim of better understanding which factors are important for effective human-machine cooperation. The elaborations predominantly represent extended grounded research; the artifacts thus extend the knowledge needed to develop and/or test effective XAI methods and techniques. Industry 4.0, with a focus on maintenance, serves as the context for this development.
The strategic planning of Emergency Medical Service systems directly affects the survival probability of the people concerned. Academic research has contributed to the evaluation of these systems by defining a variety of key performance metrics; the average response time, the workload of the system, several waiting-time parameters, and the fraction of demand that cannot be served immediately are among the most important examples. The Hypercube Queueing Model is one of the most widely applied models in this field. Due to its theoretical background and the implied high computational times, the Hypercube Queueing Model has only recently been used for the optimization of Emergency Medical Service systems. Likewise, only a few system performance metrics have been calculated with the help of the model, so its full potential has not yet been reached. Most existing optimization studies based on a Hypercube Queueing Model use the expected response time of the system as their objective function. While this often leads to balanced system configurations, other influencing factors were identified. Embedding the Hypercube Queueing Model in Robust Optimization as well as Robust Goal Programming was intended to offer a more holistic view by distinguishing different times of day. It was shown that the behavior of Emergency Medical Service systems, as well as the corresponding parameters, depends strongly on the time of day. The analysis and optimization of such systems should therefore consider the different distributions of demand, with regard to both quantity and location, in order to derive a holistic basis for decision-making.
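To make the model concrete, the following is a minimal sketch of a two-server Hypercube Queueing Model in its zero-line (lost-calls) variant; the demand rates, the service rate, and the dispatch preference lists are invented for illustration and are not taken from the dissertation.

```python
import itertools
import numpy as np

# Illustrative 2-server hypercube queueing model (zero-line / loss variant).
# All rates and preference lists below are assumptions for demonstration.
mu = 1.0                         # service rate per ambulance
lam = {0: 0.4, 1: 0.3}           # call rate per demand atom
pref = {0: [0, 1], 1: [1, 0]}    # dispatch preference list per atom

N = 2
states = list(itertools.product([0, 1], repeat=N))  # 0 = free, 1 = busy
idx = {s: k for k, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

for s in states:
    # Service completions: each busy server frees up at rate mu.
    for i in range(N):
        if s[i] == 1:
            t = list(s); t[i] = 0
            Q[idx[s], idx[tuple(t)]] += mu
    # Arrivals: dispatch the first free server on the atom's preference list.
    for j, rate in lam.items():
        free = [i for i in pref[j] if s[i] == 0]
        if free:                              # otherwise the call is lost
            t = list(s); t[free[0]] = 1
            Q[idx[s], idx[tuple(t)]] += rate

np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady-state probabilities: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.append(np.zeros(len(states)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

workload = {i: sum(p for s, p in zip(states, pi) if s[i]) for i in range(N)}
loss = pi[idx[(1, 1)]]           # fraction of demand not immediately served
print(workload, loss)
```

The steady-state probabilities deliver exactly the metrics named above, such as server workloads and the fraction of demand that cannot be served immediately.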
Digitization and artificial intelligence are radically changing virtually all areas of business and society. These developments are mainly driven by machine learning (ML), enabled by the convergence of large amounts of training data, statistical learning theory, and sufficient computational power. This technology forms the basis for new approaches to classical planning problems of Operations Research (OR): prescriptive analytics approaches integrate ML prediction and OR optimization into a single prescription step, learning from historical observations of demand and a set of features (covariates) and providing a model that directly prescribes future decisions. These novel approaches offer enormous potential to improve planning decisions, as first case reports have shown, and consequently constitute a new field of research in Operations Management (OM).
First works in this new field of research have studied approaches to solving comparatively simple planning problems in the area of inventory management. However, common OM planning problems often have a more complex structure, and many of these complex planning problems are within the domain of capacity planning. Therefore, this dissertation focuses on developing new prescriptive analytics approaches for complex capacity management problems. This dissertation consists of three independent articles that develop new prescriptive approaches and use these to solve realistic capacity planning problems.
The first article, “Prescriptive Analytics for Flexible Capacity Management”, develops two prescriptive analytics approaches, weighted sample average approximation (wSAA) and kernelized empirical risk minimization (kERM), to solve a complex two-stage capacity planning problem that has been studied extensively in the literature: a logistics service provider sorts daily incoming mail items on three service lines that must be staffed on a weekly basis. This article is the first to develop a kERM approach to solve a complex two-stage stochastic capacity planning problem with matrix-valued observations of demand and vector-valued decisions. The article develops out-of-sample performance guarantees for kERM and various kernels, and shows the universal approximation property when using a universal kernel. The results of the numerical study suggest that prescriptive analytics approaches may lead to significant improvements in performance compared to traditional two-step approaches or SAA and that their performance is more robust to variations in the exogenous cost parameters.
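As an illustration of the wSAA idea only (not the article's implementation, which handles matrix-valued demand observations and vector-valued decisions), the following sketch prescribes a newsvendor-style capacity level as a weighted quantile of historical demand with kNN-based sample weights; all data and parameters are simulated.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Minimal weighted-SAA (wSAA) sketch for a newsvendor-style capacity decision
# with kNN weights; feature and demand data are simulated, and the scalar
# setting is far simpler than the article's matrix-valued problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # historical features
d = 100 + 30 * X[:, 0] + rng.normal(0, 10, 500)    # historical demand

cu, co = 9.0, 1.0                                  # underage / overage cost
tau = cu / (cu + co)                               # critical fractile

nn = NearestNeighbors(n_neighbors=50).fit(X)

def wsaa_capacity(x_new):
    # Equal weights on the k nearest historical samples; the weighted
    # tau-quantile of their demands minimizes the weighted newsvendor cost.
    _, ind = nn.kneighbors(x_new.reshape(1, -1))
    return np.quantile(d[ind[0]], tau)

print(wsaa_capacity(np.array([1.0, 0.0, 0.0])))
```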
The second article, “Prescriptive Analytics for a Multi-Shift Staffing Problem”, uses prescriptive analytics approaches to solve the (queuing-type) multi-shift staffing problem (MSSP) of an aviation maintenance provider that receives customer requests of uncertain number and at uncertain arrival times throughout each day and plans staff capacity for two shifts. This planning problem is particularly complex because the order inflow and processing are modelled as a queuing system, and the demand in each day is non-stationary. The article addresses this complexity by deriving an approximation of the MSSP that enables the planning problem to be solved using wSAA, kERM, and a novel Optimization Prediction approach. A numerical evaluation shows that wSAA leads to the best performance in this particular case. The solution method developed in this article builds a foundation for solving queuing-type planning problems using prescriptive analytics approaches, so it bridges the “worlds” of queuing theory and prescriptive analytics.
The third article, “Explainable Subgradient Tree Boosting for Prescriptive Analytics in Operations Management”, proposes a novel prescriptive analytics approach, Subgradient Tree Boosting (STB), to solve the two capacity planning problems studied in the first and second articles while allowing decision-makers to derive explanations for prescribed decisions. STB combines the machine learning method gradient boosting with SAA and relies on subgradients because the cost functions of OR planning problems often cannot be differentiated. A comprehensive numerical analysis suggests that STB can achieve a prescription performance comparable to that of wSAA and kERM. The explainability of STB prescriptions is demonstrated by breaking exemplary decisions down into the impacts of individual features. The novel STB approach is an attractive choice not only because of its prescription performance but also because of the explainability that helps decision-makers understand the causality behind the prescriptions.
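A minimal sketch of the STB idea for a newsvendor-type cost follows, with simulated data and invented hyperparameters; the article's actual algorithm and problem structure may differ.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Sketch of Subgradient Tree Boosting (STB) for the newsvendor problem:
# gradient boosting in which the usual gradients are replaced by subgradients
# of the non-differentiable newsvendor cost. Parameters are assumptions.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
d = 100 + 30 * X[:, 0] + rng.normal(0, 10, 500)
cu, co = 9.0, 1.0

def subgrad(q, dem):
    # Subgradient of C(q, d) = cu*(d-q)^+ + co*(q-d)^+ with respect to q.
    return np.where(q < dem, -cu, co)

eta, M = 0.1, 200
base = d.mean()                        # initial constant prescription
pred = np.full_like(d, base)
trees = []
for _ in range(M):
    g = subgrad(pred, d)
    tree = DecisionTreeRegressor(max_depth=3).fit(X, -g)  # fit neg. subgradient
    pred += eta * tree.predict(X)
    trees.append(tree)

def prescribe(x_new):
    x = x_new.reshape(1, -1)
    return base + eta * sum(t.predict(x)[0] for t in trees)

print(prescribe(np.array([1.0, 0.0, 0.0])))
```

Because the final prescription is an additive sum of tree outputs, an exemplary decision can be decomposed into feature-level contributions, which is the sense in which STB prescriptions are explainable.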
The results presented in these three articles demonstrate that using prescriptive analytics approaches, such as wSAA, kERM, and STB, to solve complex planning problems can lead to significantly better decisions compared to traditional approaches that neglect feature data or rely on a parametric distribution estimation.
Owing to the well-known problems of the pay-as-you-go statutory pension insurance, the German legislator has for some time been trying to promote personal responsibility for retirement provision. These efforts frequently focus on occupational pensions (betriebliche Altersversorgung, bAV). Using expert and employee interviews, this thesis works out in detail where the central obstacles to the diffusion of occupational pensions lie and how they can be addressed by adjusting the tax and social-security framework. Essential elements of these reform considerations were incorporated into the Occupational Pension Strengthening Act (Betriebsrentenstärkungsgesetz), which came into force on January 1, 2018.
In addition, this thesis uses an experimental-economics analysis to show how different types of taxation can influence individual savings decisions. The results make clear that individuals often do not correctly perceive the effect of deferred taxation.
As a response to growing public awareness of the importance of organisational contributions to sustainable development, there is an increased incentive for corporations to report on their sustainability activities. In parallel, 'Sustainable HRM' has developed, embracing a growing body of practitioner and academic literature connecting the notions of corporate sustainability to HRM. The aim of this article is to analyse corporate sustainability reporting amongst the world's largest companies, to assess the HRM aspects of sustainability within these reports in comparison to environmental aspects of sustainable management, and to examine whether organisational attributes, principally country-of-origin, influence the reporting of such practices. A focus of this article is the extent to which the reporting of various aspects of sustainability may reflect dominant models of corporate governance in the country in which a company is headquartered. The findings suggest, first and against expectations, that the overall disclosure on HRM-related performance is not lower than that on environmental performance. Second, companies report more on their internal workforce than on their external workforce. Finally, international differences, in particular those between companies headquartered in liberal market economies and coordinated market economies, are not as apparent as expected.
The first problem is that of the optimal volume allocation in procurement. The choice of this problem was motivated by a study whose objective was to support decision-making at two procurement organizations for the procurement of Depot Medroxyprogesterone Acetate (DMPA), an injectable contraceptive. At the time of this study, only one supplier that had undergone the costly and lengthy process of WHO pre-qualification was available to these organizations. However, a new entrant supplier was expected to receive WHO qualification within the next year, thus becoming a viable second source for DMPA procurement. When deciding how to allocate the procurement volume between the two suppliers, the buyers had to consider the impact on price as well as risk. Higher allocations to one supplier yield lower prices but expose a buyer to higher supply risks, while an even allocation results in lower supply risk but also reduces competitive pressure, resulting in higher prices. Our research investigates this single- versus dual-sourcing problem and quantifies in one model the impact of the procurement volume on competition and risk. To support decision-makers, we develop a mathematical framework that accounts for the characteristics of donor-funded global health markets and models the effects of an entrant on purchasing costs and supply risks. Our in-depth analysis provides insights into how the optimal allocation decision is affected by various parameters and explores the trade-off between competition and supply risk. For example, we find that, even if the entrant supplier introduces longer lead times and a higher default risk, the buyer still benefits from dual sourcing. However, these risk-diversification benefits depend heavily on the entrant’s in-country registration: if the buyer can ship the entrant’s product to only a selected number of countries, the buyer does not benefit from dual sourcing as much as it would if the entrant’s product could be shipped to all supplied countries. We show that the buyer should be interested in qualifying the entrant’s product in countries with high demand first.
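The competition-versus-risk trade-off can be illustrated with a deliberately simple Monte Carlo comparison of sourcing splits; the price function, default probabilities, and shortage penalty below are assumptions made for illustration, not the calibrated model from the study.

```python
import numpy as np

# Simple Monte Carlo comparison of single- vs dual-sourcing splits.
# All parameters are illustrative assumptions.
rng = np.random.default_rng(2)
D = 1_000_000                                   # total procurement volume
p_default = {"incumbent": 0.02, "entrant": 0.05}
shortage_penalty = 15.0                         # cost per undelivered unit

def unit_price(share):
    # Larger awarded share -> lower unit price (assumed competition effect).
    return 10.0 - 2.0 * share

def expected_cost(split, n=50_000):
    """split = share awarded to the incumbent; the rest goes to the entrant."""
    total = np.zeros(n)
    for supplier, share in (("incumbent", split), ("entrant", 1.0 - split)):
        if share == 0.0:
            continue
        delivered = rng.random(n) > p_default[supplier]
        qty = share * D
        total += np.where(delivered,
                          unit_price(share) * qty,   # paid if delivered
                          shortage_penalty * qty)    # penalty on default
    return total.mean()

for split in (1.0, 0.8, 0.6, 0.5):
    print(f"incumbent share {split}: {expected_cost(split):,.0f}")
```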
In the second problem we explore a new tendering mechanism called the postponement tender, which can be useful when buyers in the global health industry want to contract new generics suppliers with uncertain product quality. The mechanism allows a buyer to postpone part of the procurement volume’s allocation so the buyer can learn about the unknown quality before allocating the remaining volume to the best supplier in terms of both price and quality. We develop a mathematical model to capture the decision-maker’s trade-offs in setting the right split between the initial volume and the postponed volume. Our analysis shows that a buyer can benefit from this mechanism more than it can from a single-sourcing format, as it can decrease the risk of receiving poor quality (in terms of product quality and logistics performance) and even increase competitive pressure between the suppliers, thereby lowering the purchasing costs. By considering market parameters like the buyer’s size, the suppliers’ value (difference between quality and cost), quality uncertainty, and minimum order volumes, we derive optimal sourcing strategies for various market structures and explore how competition is affected by the buyer’s learning about the suppliers’ quality through the initial volume.
The third problem considers the repeated procurement problem of pharmacies in Kenya that have multi-product inventories. Coordinating orders allows pharmacies to achieve lower procurement prices by using the quantity discounts manufacturers offer and sharing fixed ordering costs, such as logistics costs. However, coordinating and optimizing orders for multiple products is complex and costly. To solve the coordinated procurement problem, also known as the Joint Replenishment Problem (JRP) with quantity discounts, a novel, data-driven inventory policy using sample-average approximation is proposed. The inventory policy is developed based on renewal theory and is evaluated using real-world sales data from Kenyan pharmacies. Multiple benchmarks are used to evaluate the performance of the approach. First, it is compared to the theoretically optimal policy (a dynamic-programming policy) in the single-product setting without quantity discounts to show that the proposed policy results in comparable inventory costs. Second, the policy is evaluated for the original multi-product setting with quantity discounts and compared to ex-post optimal costs. The evaluation shows that the policy’s performance in the multi-product setting is similar to its performance in the single-product setting (with respect to ex-post optimal costs), suggesting that the proposed policy offers a promising, data-driven solution to these types of multi-product inventory problems.
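As a rough illustration of the sample-average-approximation idea only (not the renewal-theory policy developed in the thesis), the following sketch searches for a joint replenishment interval by bootstrapping historical demand; all costs and the discount schedule are invented.

```python
import numpy as np

# SAA-style search for a joint replenishment interval T: order all products
# every T days and share the fixed ordering cost. Parameters are assumptions.
rng = np.random.default_rng(3)
daily = rng.poisson(lam=[4, 2, 1], size=(365, 3)).astype(float)  # sales history

K = 50.0                              # shared fixed cost per joint order
h = np.array([0.02, 0.03, 0.05])      # holding cost per unit per day

def unit_price(qty):
    # All-units quantity discount with an assumed breakpoint at 200 units.
    return np.where(qty >= 200, 0.9, 1.0)

def saa_cost(T, samples=200):
    costs = []
    for _ in range(samples):
        path = daily[rng.integers(0, 365, T)]    # bootstrap T days of demand
        q = path.sum(axis=0)                     # cycle demand per product
        purchase = (unit_price(q) * q).sum()
        holding = (h * q / 2 * T).sum()          # average cycle stock cost
        costs.append((K + purchase + holding) / T)
    return np.mean(costs)                        # expected cost per day

best_T = min(range(3, 60), key=saa_cost)
print(best_T, round(saa_cost(best_T), 2))
```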
Allocation planning describes the process of allocating scarce supply to individual customers in order to prioritize demands from more important customers, e.g., those with a higher service-level target. A common assumption across publications is that allocation planning is performed by a single planner with the ability to decide on the allocations to all customers simultaneously. In many companies, however, no such central planner exists; instead, allocation planning is a decentral, iterative process aligned with the company's multi-level hierarchical sales organization.
This thesis provides a rigorous analytical and numerical analysis of allocation planning in such hierarchical settings. It studies allocation methods currently used in practice and shows that these approaches typically lead to suboptimal allocations associated with significant performance losses. The thesis therefore proposes several new allocation approaches that perform considerably better yet remain simple enough for practical application. The findings can guide decision-makers in choosing among allocation approaches and identify the factors that are decisive for their performance. In general, our research suggests that, with a suitable hierarchical allocation approach, decision-makers can expect a performance similar to that of centralized planning.
Traditional fashion retailers are increasingly hard-pressed to keep up with their digital competitors. In this context, the re-invention of brick-and-mortar stores as smart retail environments is being touted as a crucial step towards regaining a competitive edge. This thesis describes a design-oriented research project that deals with automated product tracking on the sales floor and presents three smart fashion store applications that are tied to such localization information: (i) an electronic article surveillance (EAS) system that distinguishes between theft and non-theft events, (ii) an automated checkout system that detects customers’ purchases when they are leaving the store and associates them with individual shopping baskets to automatically initiate payment processes, and (iii) a smart fitting room that detects the items customers bring into individual cabins and identifies the items they are currently most interested in to offer additional customer services (e.g., product recommendations or omnichannel services). The implementation of such cyberphysical systems in established retail environments is challenging, as architectural constraints, well-established customer processes, and customer expectations regarding privacy and convenience pose challenges to system design. To overcome these challenges, this thesis leverages Radio Frequency Identification (RFID) technology and machine learning techniques to address the different detection tasks. To optimally configure the systems and draw robust conclusions regarding their economic value contribution, beyond technological performance criteria, this thesis furthermore introduces a service operations model that allows mapping the systems’ technical detection characteristics to business relevant metrics such as service quality and profitability. This analytical model reveals that the same system component for the detection of object transitions is well suited for the EAS application but does not have the necessary high detection accuracy to be used as a component of an automated checkout system.
Occupational pension coverage is below average especially among low earners. With the Occupational Pension Strengthening Act (Betriebsrentenstärkungsgesetz), which came into force on January 1, 2018, and in particular the so-called BAV subsidy (BAV-Förderbetrag, Section 100 EStG), the legislator is therefore trying to make this form of retirement provision more attractive and thus extend its coverage among low earners. The results of this study show that, at least from a model-theoretic perspective, this goal can be achieved. Using a deterministic computational model, the financial advantages and disadvantages of various retirement provision alternatives are revealed and precisely quantified. In addition, the thesis addresses the tax, social-security, and labor law rules governing occupational pensions before and after the Act came into force and also discusses alternative reform measures.
We investigate how the demographic composition of the workforce along the sex, nationality, education, age, and tenure dimensions affects job switches. Fitting duration models for workers’ job-to-job turnover rate that control for workplace fixed effects in a representative sample of large manufacturing plants in Germany during 1975–2016, we find that larger co-worker similarity in all five dimensions substantially depresses job-to-job moves, whereas workplace diversity is of limited importance. In line with conventional wisdom, which holds that birds of a feather flock together, our interpretation of the results is that workers prefer having co-workers of their kind and place less value on diverse workplaces.
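For readers who want to see the general shape of such a duration analysis, here is a hedged sketch using the lifelines package with a Cox proportional hazards model on simulated data; the study itself fits duration models with workplace fixed effects on administrative data, which this toy example does not replicate.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative duration (hazard) model for job-to-job turnover with
# co-worker similarity covariates; all variable names and data are invented.
rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "tenure_months": np.ceil(rng.exponential(36, n)),  # observed job duration
    "switched": rng.integers(0, 2, n),                 # 1 = job-to-job move
    "sim_sex": rng.random(n),          # share of co-workers with same sex
    "sim_nationality": rng.random(n),  # ... same nationality
    "sim_education": rng.random(n),    # ... same education level
    "sim_age": rng.random(n),          # ... similar age
    "sim_tenure": rng.random(n),       # ... similar tenure
})

# Cox proportional hazards: negative coefficients on the similarity shares
# would indicate that similarity depresses the turnover hazard. lifelines'
# `strata` argument would be one (approximate) way to absorb workplace
# effects if a workplace identifier were available.
cph = CoxPHFitter()
cph.fit(df, duration_col="tenure_months", event_col="switched")
cph.print_summary()
```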
Accounting plays an essential role in solving the principal-agent problem between managers and shareholders of capital market-oriented companies through the provision of information by the manager. However, this can succeed only if the accounting information is of high quality. In this context, the perceptions of shareholders regarding earnings quality are of particular importance.
The present dissertation intends to contribute to a deeper understanding regarding earnings quality from the perspective of shareholders of capital market-oriented companies. In particular, the thesis deals with indicators of shareholders’ perceptions of earnings quality, the influence of the auditor’s independence on these perceptions, and the shareholders’ assessment of the importance of earnings quality in general. Therefore, this dissertation examines market reactions to earnings announcements, measures of earnings quality and the auditor’s independence, as well as shareholders’ voting behavior at annual general meetings.
Following the introduction and a theoretical part consisting of two chapters, which deal with the purposes of accounting and auditing as well as the relevance of shareholder voting at the annual general meeting in the context of the principal-agent theory, the dissertation presents three empirical studies.
The empirical study presented in chapter 4 investigates auditor ratification votes in a U.S. setting. The study addresses the question of whether the results of auditor ratification votes are informative regarding shareholders’ perceptions of earnings quality. Using a returns-earnings design, the study demonstrates that the results of auditor ratification votes are associated with market reactions to unexpected earnings at the earnings announcement date. Furthermore, there are indications that this association seems to be positively related to higher levels of information asymmetry between managers and shareholders. Thus, there is empirical support for the notion that the results of auditor ratification votes are earnings-related information that might help shareholders to make informed investment decisions.
Chapter 5 investigates the relation between the economic importance of the client and perceived earnings quality. In particular, it is examined whether and when shareholders have a negative perception of an auditor’s economic dependence on the client. The results from a Big 4 client sample in the U.S. (fiscal years 2010 through 2014) indicate a negative association between the economic importance of the client and shareholders’ perceptions of earnings quality. The results are interpreted to mean that shareholders are still concerned about auditor independence even ten years after the implementation of the Sarbanes-Oxley Act. Furthermore, the association between the economic importance of the client and shareholders’ perceptions of earnings quality applies predominantly to the subsample of clients that are more likely to be financially distressed. Therefore, the empirical results reveal that shareholders’ perceptions of auditor independence are conditional on the client’s circumstances.
The study presented in chapter 6 sheds light on the question of whether earnings quality influences shareholders’ satisfaction with the members of the company’s board. Using data from 1,237 annual general meetings of German listed companies from 2010 through 2015, the study provides evidence that earnings quality – measured by the absolute value of discretionary accruals – is related to shareholders’ satisfaction with the company’s board. Moreover, the findings imply that shareholders predominantly blame the management board for inferior earnings quality. Overall, the evidence that earnings quality positively influences shareholders’ satisfaction emphasizes the relevance of earnings quality.
This dissertation consists of three independent, self-contained research papers that investigate how state-of-the-art machine learning algorithms can be used in combination with operations management models to consider high-dimensional data for improved planning decisions. More specifically, the thesis focuses on the question of how the underlying decision support models change structurally and how those changes affect the resulting decision quality.
Over the past years, the volume of globally stored data has experienced tremendous growth. Rising market penetration of sensor-equipped production machinery, advanced ways to track user behavior, and the ongoing use of social media lead to large amounts of data on production processes, user behavior, and interactions, as well as condition information about technical gear, all of which can provide valuable information to companies in planning their operations. In the past, two generic concepts have emerged to accomplish this. The first concept, separated estimation and optimization (SEO), uses data to forecast the central inputs (i.e., the demand) of a decision support model. The forecast and a distribution of forecast errors are then used in a subsequent stochastic optimization model to determine optimal decisions. In contrast to this sequential approach, the second generic concept, joint estimation-optimization (JEO), combines the forecasting and optimization step into a single optimization problem. Following this approach, powerful machine learning techniques are employed to approximate highly complex functional relationships and hence relate feature data directly to optimal decisions.
The first article, “Machine learning for inventory management: Analyzing two concepts to get from data to decisions”, chapter 2, examines performance differences between implementations of these concepts in a single-period newsvendor setting. The paper first proposes a novel JEO implementation based on the random forest algorithm that learns optimal decision rules directly from a data set containing historical sales and auxiliary data. We then analyze the structural properties that drive the performance differences between the two concepts. Our results show that the JEO implementation achieves significant cost improvements over the SEO approach and that these differences are strongly driven by the decision problem’s cost structure and by the amount and structure of the remaining forecast uncertainty.
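The following sketch shows one way a random forest can implement the JEO idea for the newsvendor: the forest's leaf co-occurrence defines sample weights, and the prescription is the weighted critical-fractile quantile of historical demand. Data, parameters, and the weighting scheme are illustrative assumptions, not necessarily the paper's exact rule.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Random-forest-based JEO decision rule for the newsvendor (illustrative).
rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 4))                   # historical features
d = 80 + 25 * X[:, 0] + rng.normal(0, 8, 1000)   # historical demand
cu, co = 4.0, 1.0
tau = cu / (cu + co)                             # critical fractile

rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=20).fit(X, d)
train_leaves = rf.apply(X)                       # leaf index per sample, tree

def jeo_order_quantity(x_new):
    leaves = rf.apply(x_new.reshape(1, -1))[0]
    # Weight each historical sample by how often it shares a leaf with x_new.
    w = (train_leaves == leaves).mean(axis=1)
    w /= w.sum()
    order = np.argsort(d)
    cum = np.cumsum(w[order])
    return d[order][np.searchsorted(cum, tau)]   # weighted tau-quantile

print(jeo_order_quantity(np.array([1.0, 0.0, 0.0, 0.0])))
```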
The second article, “Prescriptive call center staffing”, chapter 3, applies the logic of integrating data analysis and optimization to a more complex problem class, an employee staffing problem in a call center. We introduce a novel approach to applying the JEO concept that augments historical call volume data with features like the day of the week, the beginning of the month, and national holiday periods. We employ a regression tree to learn the ex-post optimal staffing levels based on similarity structures in the data and then generalize these insights to determine future staffing levels. This approach, which relies on only a few modeling assumptions, significantly outperforms a state-of-the-art benchmark that uses considerably more model structure and assumptions.
The third article, “Data-driven sales force scheduling”, chapter 4, is motivated by the problem of how a company should allocate limited sales resources. We propose a novel approach based on the SEO concept that involves a machine learning model to predict the probability of winning a specific project. We develop a methodology that uses this prediction model to estimate the “uplift”, that is, the incremental value of an additional visit to a particular customer location. To account for the remaining uncertainty at the subsequent optimization stage, we adapt the decision support model in such a way that it can control for the level of trust in the predicted uplifts. This novel policy dominates both a benchmark that relies completely on the uplift information and a robust benchmark that optimizes the sum of potential profits while neglecting any uplift information.
The results of this thesis show that decision support models in operations management can be transformed fundamentally by considering additional data, benefiting through better decision quality and, correspondingly, lower mismatch costs. How machine learning algorithms can be integrated into these decision support models depends on the complexity and the context of the underlying decision problem. In summary, this dissertation provides an analysis based on three different, specific application scenarios that serves as a foundation for further analyses of employing machine learning for decision support in operations management.
Autonomous cars and artificial intelligence that beats humans in Jeopardy or Go are glamorous examples of the so-called Second Machine Age that involves the automation of cognitive tasks [Brynjolfsson and McAfee, 2014]. However, the larger impact in terms of increasing the efficiency of industry and the productivity of society might come from computers that improve or take over business decisions by using large amounts of available data. This impact may even exceed that of the First Machine Age, the industrial revolution that started with James Watt’s invention of an efficient steam engine in the late eighteenth century. Indeed, the prevalent phrase that calls data “the new oil” indicates the growing awareness of data’s importance. However, many companies, especially those in the manufacturing and traditional service industries, still struggle to increase productivity using the vast amounts of data [Organisation for Economic Co-operation and Development, 2018].
One reason for this struggle is that companies stick with a traditional way of using data for decision support in operations management that is not well suited to automated decision-making. In traditional inventory and capacity management, some data – typically just historical demand data – is used to estimate a model that makes predictions about uncertain planning parameters, such as customer demand. The planner then has two tasks: to adjust the prediction with respect to additional information that was not part of the data but still might influence demand and to take the remaining uncertainty into account and determine a safety buffer based on the underage and overage costs. In the best case, the planner determines the safety buffer based on an optimization model that takes the costs and the distribution of historical forecast errors into account; however, these decisions are usually based on a planner’s experience and intuition, rather than on solid data analysis.
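Under textbook newsvendor assumptions, that optimization-based safety buffer has a closed form; the notation below is introduced here for illustration and is not necessarily the thesis's:

```latex
% Critical-fractile solution of the newsvendor problem:
% c_u = underage cost, c_o = overage cost, F = demand distribution,
% F_eps = distribution of historical forecast errors, \hat{d} = point forecast.
q^{*} = F^{-1}\!\left(\frac{c_u}{c_u + c_o}\right),
\qquad
q^{*} - \hat{d} = F_{\varepsilon}^{-1}\!\left(\frac{c_u}{c_u + c_o}\right).
```

The safety buffer is thus the critical-fractile quantile of the forecast-error distribution, which is exactly what the optimization model in the best case described above computes.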
This two-step approach is referred to as separated estimation and optimization (SEO). With SEO, using more data and better models for making the predictions would improve only the first step; this would still improve decisions but would not automate (and, hence, revolutionize) decision-making. Using SEO is like using a stronger horse to pull the plow: one still has to walk behind.
The real potential for increasing productivity lies in moving from predictive to prescriptive approaches, that is, from the two-step SEO approach, which uses predictive models in the estimation step, to a prescriptive approach, which integrates the optimization problem with the estimation of a model that then provides a direct functional relationship between the data and the decision. Following Akcay et al. [2011], we refer to this integrated approach as joint estimation-optimization (JEO). JEO approaches prescribe decisions, so they can automate the decision-making process. Just as the steam engine replaced manual work, JEO approaches replace cognitive work.
The overarching objective of this dissertation is to analyze, develop, and evaluate new ways for how data can be used in making planning decisions in operations management to unlock the potential for increasing productivity. In doing so, the thesis comprises five self-contained research articles that forge the bridge from predictive to prescriptive approaches. While the first article focuses on how sensitive data like condition data from machinery can be used to make predictions of spare-parts demand, the remaining articles introduce, analyze, and discuss prescriptive approaches to inventory and capacity management.
All five articles consider approaches that use machine learning and data in innovative ways to improve current approaches to solving inventory or capacity management problems. The articles show that, by moving from predictive to prescriptive approaches, we can improve data-driven operations management in two ways: by making decisions more accurate and by automating decision-making. Thus, this dissertation provides examples of how digitization and the Second Machine Age can change decision-making in companies to increase efficiency and productivity.
The independence of the statutory auditor is of continuing relevance but is called into question time and again. Regulators and researchers focus on capital-market-oriented companies. Independence can be particularly at risk when safeguards, such as liability or the risk of reputational loss, are especially weak. It can be inferred that for private companies the risk of reputational loss is lower than for capital-market-oriented companies. Moreover, the auditor's liability risk in Germany is lower than in Anglo-Saxon countries.
The thesis therefore examines auditor independence in an environment in which it is particularly at risk. The probability of a going-concern modification (GCM) serves as the surrogate. GCMs can be a particularly suitable indicator of audit quality because they are a direct outcome of the auditor's work and are formulated by, and the responsibility of, the auditor. For Germany, no study using the GCM surrogate for private companies has been published to date.
This paper provides a critical analysis of the subadditivity axiom, which is the key condition for coherent risk measures. Contrary to the subadditivity assumption, bank mergers can create extra risk. We begin with an analysis of how a merger affects depositors, junior and senior bank creditors, and bank owners. Next, we show that bank mergers can result in higher payouts having to be made by the deposit insurance scheme. Finally, we demonstrate that if banks are interconnected via interbank loans, a bank merger could lead to additional contagion risks. We conclude that the subadditivity assumption should be rejected, since a subadditive risk measure, by definition, cannot account for such increased risks.
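For reference, the axiom under analysis states, in standard notation:

```latex
% Subadditivity axiom for a risk measure \rho and positions X, Y:
\rho(X + Y) \;\le\; \rho(X) + \rho(Y).
```

Read as a statement about mergers, the merged bank's risk must not exceed the sum of the stand-alone risks; the deposit-insurance and contagion arguments in the paper are cases in which this inequality fails.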
Advanced Analytics in Operations Management and Information Systems: Methods and Applications (2019)
The digital transformation of business and society presents enormous potential for companies across all sectors. Fueled by massive advances in data generation, computing power, and connectivity, modern organizations have access to gigantic amounts of data. Companies seek to establish data-driven decision cultures to leverage competitive advantages in terms of efficiency and effectiveness. While most companies focus on descriptive tools such as reporting, dashboards, and advanced visualization, only a small fraction today already leverages advanced analytics (i.e., predictive and prescriptive analytics) to foster data-driven decision-making. This thesis therefore investigates opportunities to leverage prescriptive analytics in four independent parts.
As predictive models are an essential prerequisite for prescriptive analytics, the first two parts of this work focus on predictive analytics. Building on state-of-the-art machine learning techniques, we showcase the development of a predictive model in the context of capacity planning and staffing at an IT consulting company. Subsequently, we focus on predictive analytics applications in the manufacturing sector. More specifically, we present a data science toolbox providing guidelines and best practices for modeling, feature engineering, and model interpretation to manufacturing decision-makers. We showcase the application of this toolbox on a large data set from a German manufacturing company.
Merely using the improved forecasts provided by powerful predictive models enables decision-makers to generate additional business value in some situations. However, many complex tasks require elaborate operational planning procedures, and transforming additional information into valuable actions requires new planning algorithms. Therefore, the latter two parts of this thesis focus on prescriptive analytics. To this end, we analyze how prescriptive analytics can be utilized to determine policies for an optimal searcher path problem based on predictive models. While rapid advances in artificial intelligence research boost the predictive power of machine learning models, model uncertainty remains in most settings. The last part of this work proposes a prescriptive approach that accounts for the fact that predictions are imperfect and that the arising uncertainty needs to be considered. More specifically, it presents a data-driven approach to sales-force scheduling. Based on a large data set, a model to predict the benefit of additional sales effort is trained. Subsequently, the predictions, as well as the prediction quality, are embedded into the underlying team orienteering problem to determine optimized schedules.
The present dissertation comprises three research papers dealing with the following banking topics: (dis-)incentives and risk-taking, earnings management, and the regulation of supervisory boards.
„Do cooperative banks suffer from moral hazard behaviour? Evidence in the context of efficiency and risk“:
We use Granger-causality techniques to evaluate the intertemporal relationships among risk, efficiency, and capital. We use two different measures of bank efficiency, cost and profit efficiency, since these reflect different managerial abilities: one is the ability to manage costs, the other the ability to maximize profits. We find that lower cost and profit efficiency Granger-cause increases in liquidity risk. We also identify that credit risk negatively Granger-causes cost and profit efficiency. Most importantly, our results show a positive relationship between capital and credit risk, indicating that moral hazard (due to limited liability and deposit insurance) does not apply to our sample of cooperative banks. On the contrary, we find evidence that banks with low capital are able to improve their loan quality in subsequent periods. These findings may be important to regulators, who should consider banks’ business models when introducing new regulatory capital constraints.
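To illustrate the kind of test involved (on simulated single-series data, not the paper's panel setup with bank-level controls), here is a minimal sketch using statsmodels:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Minimal Granger-causality sketch for the efficiency/risk question.
# Data are simulated so that lower cost efficiency raises liquidity risk.
rng = np.random.default_rng(6)
T = 120
cost_eff = rng.normal(0.8, 0.05, T)           # cost efficiency scores
liq_risk = np.empty(T)
liq_risk[0] = 0.1
for t in range(1, T):                         # lower efficiency -> higher risk
    liq_risk[t] = (0.3 + 0.5 * liq_risk[t - 1]
                   - 0.3 * cost_eff[t - 1] + rng.normal(0, 0.01))

df = pd.DataFrame({"liq_risk": liq_risk, "cost_eff": cost_eff})
# Tests H0: "cost_eff does not Granger-cause liq_risk" for lags 1..2.
grangercausalitytests(df[["liq_risk", "cost_eff"]], maxlag=2)
```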
„Earnings Management Modelling in the Banking Industry – Evaluating valuable approaches“:
Accounting research has studied the field of Earnings Management (EM) separately for non-financial and financial industries. Since EM cannot be observed directly, it is important for every research question in any setting to find a verifiable proxy for EM. However, we still lack a thorough understanding of which regressors can add value to the estimation of EM in banks. This study aims to close this gap and analyses existing model specifications of discretionary loan loss provisions (LLP) in the banking sector to identify common pattern groups and the specific patterns used. We then use a U.S. dataset from 2005 to 2015 and apply prevalent test procedures to examine the extent of measurement errors, extreme-performance and omitted-variable biases, and the predictive power of the discretionary proxies of each of the models. Our results indicate that a thorough understanding of the methodological modelling process of EM in the banking industry is important. The currently established models to estimate EM are appropriate yet optimizable. In particular, we identify non-performing-asset patterns as the most important group, while loan loss allowances and net charge-offs can add some value, though they do not seem to be indispensable. In addition, our results show that non-linearity of certain regressors can be an issue that should be addressed in future research, and we identify some omitted and possibly correlated variables that might add value to specifications in identifying non-discretionary LLP. Results also indicate that a dynamic model and an endogeneity-robust estimation approach are not necessarily linked to better predictive power.
„Board Regulation and its Impact on Composition and Effects – Evidence from German Cooperative Banks“:
This study employs a system GMM framework to examine the impact of potential regulatory intervention regarding the occupations of supervisory board members in cooperative banks. To gain insights, the study proceeds in two ways. First, the author investigates the changes in board structure prior to and following the German Act to Strengthen Financial Market and Insurance Supervision (FinVAG). Second, the author estimates the influence of Ph.D. degree holders and occupational concentration on changes in bank risk in light of the implementation of FinVAG. The sample consists of 246 German cooperative banks from 2006 to 2011. Regarding bank risk, the author applies four different measures: credit, equity, and liquidity risk as well as the Z-score, with the former three also being addressed in FinVAG. Results indicate that the implementation of FinVAG leads to structural changes in board composition, especially at the expense of farmers. In addition, the implementation affects all risk measures and the relations between risk measures and supervisory board characteristics in a risk-reducing and therefore intended way.
To disentangle the complex relationship between board characteristics and risk measures, the study utilizes a two-step system GMM estimator that accounts for unobserved heterogeneity and simultaneity and thereby reduces endogeneity problems. The findings may be especially relevant for stakeholders, regulators, supervisors, and managers.
In our globalized world, companies operate in an international market. To concentrate on their core competencies and remain competitive, they integrate into supply chain networks. However, this integration also bears many risks. The international market also creates competitive pressure, forcing companies to collaborate with new and unknown partners in dynamic supply chain networks. In many cases, this can cause a lack of trust, as illegal practices and broken agreements pose a threat in complex, nontransparent supply chain networks.
Blockchain technology provides a transparent, decentralized, and distributed way of chaining stored data and thus enables trust in tamper-proof storage even when there is no trust among the cooperation partners. Using a blockchain also provides the opportunity to digitize, automate, and monitor processes within supply chain networks in real time.
The research project "Plattform für das integrierte Management von Kollaborationen in Wertschöpfungsnetzwerken" (PIMKoWe) addresses this issue. The aim of this report is to define requirements for such a collaboration platform. We define requirements based on a literature review and expert interviews, which allow for an objective consideration of scientific and practical aspects. An additional survey validates and further classifies these requirements as “essential”, “optional”, or “irrelevant”. In total, we have derived a collection of 45 requirements from different dimensions for the collaboration platform.
Employing these requirements, we illustrate a conceptual architecture of the platform and introduce a realistic application scenario. The presentation of the platform concept and the application scenario can provide the foundation for implementing and introducing a blockchain-based collaboration platform into existing supply chain networks in the context of the research project PIMKoWe.
This article focuses on the development of technology clusters and is based on two research questions: What are the prerequisites for the development of technology clusters according to cluster research? And does the Mainfranken region meet the prerequisites for forming a technology cluster? To this end, a qualitative study is conducted drawing on various theoretical concepts of cluster formation. From this analysis, the following determinants of cluster development can be derived: a transport and infrastructure component, a cluster-environment component, a university component, a government component, and an industry component. The analysis of the parameter values of the individual cluster components shows that the core requirements for technology cluster development are met in the Mainfranken region. Nevertheless, the infrastructure, the commercial and industrial availability of land, and the availability of capital need to be improved to form a successful technology cluster. In addition, this study analyzes the potential for technology cluster development in the field of artificial intelligence.
This thesis deals with the carry trade strategy. The strategy exploits interest rate differentials between two currency areas together with an exchange rate adjustment that does not fully compensate for these differentials. If, for example, an investor invests in a foreign currency with a higher interest rate level, then according to interest rate parity the exchange rate should subsequently adjust such that the higher interest income is fully offset when the currency is converted back. The aim of this thesis was an empirical study of the G10 currencies on a weekly trading basis as well as the construction and use of ex-ante Sharpe ratios as a trading indicator.
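A hedged sketch of how an ex-ante Sharpe ratio can serve as a weekly carry-trade indicator follows, using simulated data; the threshold, the volatility window, and all parameter values are assumptions, not the thesis's calibration.

```python
import numpy as np
import pandas as pd

# Ex-ante Sharpe-ratio signal for weekly carry trades (illustrative):
# expected excess return ~ interest differential; risk ~ rolling FX volatility.
rng = np.random.default_rng(7)
weeks = 520
rate_diff = 0.02 / 52 + rng.normal(0, 0.0005, weeks)  # weekly interest diff.
fx_ret = rng.normal(0, 0.012, weeks)                  # weekly spot returns

s = pd.DataFrame({"rate_diff": rate_diff, "fx_ret": fx_ret})
s["vol"] = s["fx_ret"].rolling(26).std()              # 26-week rolling vol
# Ex-ante Sharpe ratio of the carry position before entering the trade:
s["ex_ante_sharpe"] = s["rate_diff"] / s["vol"]

# Trade only when the lagged ex-ante Sharpe ratio clears a threshold (assumed).
s["position"] = (s["ex_ante_sharpe"].shift(1) > 0.05).astype(float)
s["pnl"] = s["position"] * (s["rate_diff"] + s["fx_ret"])
realized_sharpe = s["pnl"].mean() / s["pnl"].std() * np.sqrt(52)
print(round(realized_sharpe, 2))
```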