Betriebswirtschaftliches Institut
This paper examines professional associations’ local responses to global demands of accounting standardisation. Our longitudinal study from 1998 to 2018 analyses how professional associations of the German audit profession engaged in an intense framing contest over the adoption of external quality controls. Drawing on the concept of strategic action fields and the literature on framing, we unpack how the gap between large audit firms and small audit firms increasingly undermined the capacity of the professional associations to fulfil their dual role of governance and representation. We unveil how their failed attempt to maintain the image of a unified profession ultimately led to the creation of a new professional association representing ‘small auditor’ professionals, which successfully, albeit temporarily, took control over the field of German auditing. Our findings suggest that the passivity of small audit firms in the process of translating global regulatory regimes should not be presumed. Rather, we provide insight into how small audit firms can rebuild their own identity by actively responding to waves of global regulation. In doing so, and contrary to prior research, our case highlights that governance units within strategic action fields are not necessarily aligned with the interests of the most powerful field actors.
We develop a model of oligopoly competition involving innovation effort, market entry and production flexibility under demand uncertainty. Several heterogeneous firms make efforts to develop new prototypes; if they succeed, they hold a shared option to enter a new market under stochastic demand. We derive analytic results for the Markov perfect equilibrium accounting for development effort, market entry and production decisions and complement these by numerical analyses. Firm value—which embeds real options—is not convex increasing in demand but exhibits “competitive waves” due to market entries by rivals. A firm with a development advantage (“innovator”) exerts greater innovation effort if the market is a niche, whereas another benefiting from economies of scale (“incumbent”) invests more if the market is larger. Positive externalities benefit the incumbent in the development stage, whereas the innovator is better off in counteracting negative externalities. Demand volatility raises firm incentives to innovate as it enhances the value of firm market‐entry and production flexibility.
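The equilibrium model itself is not reproduced here; the following sketch only illustrates the final point, that demand volatility raises the value of market-entry flexibility, for a single firm holding a European-style entry option under geometric Brownian demand. All parameter values are invented for illustration.

```python
import numpy as np

def entry_option_value(sigma, d0=100.0, drift=0.02, r=0.05, T=2.0,
                       entry_cost=900.0, margin=10.0, n=100_000):
    """Monte Carlo value of a single-firm option to enter a market at time T when demand
    follows a geometric Brownian motion. A drastic simplification of the paper's dynamic
    oligopoly, meant only to show why volatility raises the value of entry flexibility."""
    rng = np.random.default_rng(0)
    z = rng.standard_normal(n)
    d_T = d0 * np.exp((drift - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)  # demand at decision date
    payoff = np.maximum(margin * d_T - entry_cost, 0.0)                        # enter only if profitable
    return np.exp(-r * T) * payoff.mean()

for sigma in (0.1, 0.3, 0.5):
    print(f"demand volatility {sigma:.1f}: entry option value {entry_option_value(sigma):.1f}")
```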
The statutory audit of financial statements aims to confirm the reliability of financial reporting. It can therefore make a substantial contribution to a high level of information in the markets. In view of this great economic importance, the German legislator undertakes numerous efforts to ensure high audit quality.
A review of the German Public Accountant Act (Wirtschaftsprüferordnung) shows that regulatory measures are taken that target the core of the statutory audit, namely the members of the profession themselves. For instance, access to the profession of sworn auditors (vereidigte Buchprüfer) has been closed and reopened several times. Furthermore, marked adjustments of the level of the German public auditor exam (Wirtschaftsprüfungsexamen) can be observed over time. In addition, special professional duties must be fulfilled in the audit of public-interest entities. On the one hand, these severe interventions in the freedom to choose and exercise a profession have in common that they all address the qualification of the auditor. On the other hand, the corresponding legislative changes are mostly justified by a strengthening of audit quality.
It is questionable to what extent these facets of auditor qualification actually influence audit quality. Given the lack of evidence, it is necessary to conduct an empirical study on the German audit market and thereby take a first step toward closing the identified research gap.
The aim of this dissertation is therefore to examine the relationship between auditor qualification and audit quality by means of regression analyses. To this end, a unique data set of German private limited companies subject to mandatory audits, containing unconsolidated financial and auditor information for the period 2006-2018 with a total of 217,585 underlying observations, was collected, cleaned, and prepared. Since audit quality is not directly observable, a distinction is made between perceived audit quality and actual audit quality. In this dissertation, perceived audit quality is approximated by the cost of debt and actual audit quality by absolute discretionary accruals.
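To make the setup concrete, here is a minimal sketch of such a regression with synthetic data and hypothetical variable names (not the dissertation's data set): absolute discretionary accruals, as a proxy for actual audit quality, regressed on an auditor-qualification indicator plus controls.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic firm-year data; variable names and effect sizes are purely illustrative.
rng = np.random.default_rng(0)
n = 1_000
data = pd.DataFrame({
    "wp_auditor": rng.integers(0, 2, n),      # 1 = Wirtschaftsprüfer, 0 = vereidigter Buchprüfer
    "size": rng.normal(15.0, 2.0, n),         # log total assets
    "leverage": rng.uniform(0.0, 1.0, n),
    "roa": rng.normal(0.05, 0.10, n),
})
# Absolute discretionary accruals as the audit-quality proxy (unrelated to the
# qualification indicator by construction in this synthetic example).
data["abs_da"] = np.abs(0.04 + 0.01 * data["leverage"] - 0.02 * data["roa"]
                        + rng.normal(0.0, 0.03, n))

X = sm.add_constant(data[["wp_auditor", "size", "leverage", "roa"]])
fit = sm.OLS(data["abs_da"], X).fit(cov_type="HC1")   # heteroskedasticity-robust errors
print(fit.summary())
```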
The results of the main regressions predominantly show no association between the measures of auditor qualification and perceived or actual audit quality. The additional and sensitivity analyses support this finding. With regard to the rules on access to the profession, no quality differences can be demonstrated between the professions of public auditors (Wirtschaftsprüfer) and sworn auditors (vereidigte Buchprüfer). Within the profession of public auditors, there is likewise no indication of a quality gap between groups of auditors who faced different examination requirements. With respect to the rules on professional practice, the additional requirements for the audit of public-interest entities are not associated with a different audit quality for private companies. In the light of improved audit quality, the legislator's regulatory steps regarding auditor qualification described above therefore do not appear necessarily justified.
In a world of constant change, uncertainty has become a daily challenge for businesses. Rapidly shifting market conditions highlight the need for flexible responses to unforeseen events. Operations Management (OM) is crucial for optimizing business processes, including site planning, production control, and inventory management. Traditionally, companies have relied on theoretical models from microeconomics, game theory, optimization, and simulation. However, advancements in machine learning and mathematical optimization have led to a new research field: data-driven OM.
Data-driven OM uses real data, especially time series data, to create more realistic models that better capture decision-making complexities. Despite the promise of this new research area, a significant challenge remains: the availability of extensive historical training data. Synthetic data, which mimics real data, has been used to address this issue in other machine learning applications.
Therefore, this dissertation explores how synthetic data can be leveraged to improve decisions for data-driven inventory management, focusing on the single-period newsvendor problem, a classic stochastic optimization problem in inventory management.
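For orientation, a minimal sketch of the purely data-driven variant of the newsvendor decision: order at the empirical demand quantile implied by the critical ratio. Cost parameters and demand data below are invented.

```python
import numpy as np

# Illustrative cost parameters: underage (lost margin) and overage (excess stock) cost per unit.
cu, co = 7.0, 3.0
critical_ratio = cu / (cu + co)          # optimal service level of the newsvendor model

# Historical demand observations (synthetic here; real sales data in a data-driven setting).
rng = np.random.default_rng(42)
demand_history = rng.gamma(shape=4.0, scale=25.0, size=200)

# Data-driven order quantity: empirical quantile of demand at the critical ratio.
q_data_driven = np.quantile(demand_history, critical_ratio)
print(f"critical ratio = {critical_ratio:.2f}, order quantity = {q_data_driven:.1f}")
```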
The first article, "A Meta Analysis of Data-Driven Newsvendor Approaches", presents a standardized evaluation framework for data-driven prescriptive approaches, tested through a numerical study. Findings suggest model performance is not robust, emphasizing the need for a standardized evaluation process.
The second article, "Application of Generative Adversarial Networks in Inventory Management", examines using synthetic data generated by Generative Adversarial Networks (GANs) for the newsvendor problem. This study shows GANs can model complex demand relationships, offering a promising alternative to traditional methods.
The third article, "Combining Synthetic Data and Transfer Learning for Deep Reinforcement Learning in Inventory Management", proposes a method using Deep Reinforcement Learning (DRL) with synthetic and real data through transfer learning. This approach trains a generative model to learn demand distributions, generates synthetic data, and fine-tunes a DRL agent on a smaller real dataset. This method outperforms traditional approaches in controlled and practical settings, though further research is needed to generalize these findings.
Bioimages frequently exhibit low signal-to-noise ratios due to experimental conditions, specimen characteristics, and imaging trade-offs. Reliable segmentation of such ambiguous images is difficult and laborious. Here we introduce deepflash2, a deep learning-enabled segmentation tool for bioimage analysis. The tool addresses typical challenges that may arise during the training, evaluation, and application of deep learning models on ambiguous data. The tool’s training and evaluation pipeline uses multiple expert annotations and deep model ensembles to achieve accurate results. The application pipeline supports various use-cases for expert annotations and includes a quality assurance mechanism in the form of uncertainty measures. Benchmarked against other tools, deepflash2 offers both high predictive accuracy and efficient computational resource usage. The tool is built upon established deep learning libraries and enables sharing of trained model ensembles with the research community. deepflash2 aims to simplify the integration of deep learning into bioimage analysis projects while improving accuracy and reliability.
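deepflash2's own API is not reproduced here; the following generic sketch only illustrates the underlying idea of combining a deep model ensemble with a per-pixel uncertainty measure (predictive entropy), using random arrays in place of real model outputs.

```python
import numpy as np

def ensemble_predict(prob_maps):
    """Average per-pixel foreground probabilities of an ensemble and derive an uncertainty map.

    prob_maps: array of shape (n_models, H, W) with values in [0, 1].
    """
    mean_prob = prob_maps.mean(axis=0)                      # ensemble prediction
    eps = 1e-7
    entropy = -(mean_prob * np.log(mean_prob + eps)
                + (1 - mean_prob) * np.log(1 - mean_prob + eps))  # predictive entropy as uncertainty
    segmentation = mean_prob > 0.5
    return segmentation, entropy

# Stand-in for the outputs of five trained models on one image.
rng = np.random.default_rng(0)
fake_outputs = rng.uniform(size=(5, 64, 64))
seg, unc = ensemble_predict(fake_outputs)
print(seg.mean(), unc.max())
```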
Due to computational advances in the past decades, so-called intelligent systems can learn from increasingly complex data, analyze situations, and support users in making decisions to address them. However, in practice, the complexity of these intelligent systems leaves users hardly able to comprehend the inherent decision logic of the underlying machine learning model. As a result, the adoption of this technology, especially for high-stakes scenarios, is hampered. In this context, explainable artificial intelligence offers numerous starting points for making the inherent logic explainable to people. While research demonstrates the necessity of incorporating explainable artificial intelligence into intelligent systems, there is still a lack of knowledge about how to socio-technically design these systems to address acceptance barriers among different user groups. In response, we have derived and evaluated a nascent design theory for explainable intelligent systems based on a structured literature review, two qualitative expert studies, a real-world use case application, and quantitative research. Our design theory includes design requirements, design principles, and design features covering the topics of global explainability, local explainability, personalized interface design, as well as psychological/emotional factors.
Contemporary decision support systems increasingly rely on artificial intelligence technology such as machine learning algorithms to form intelligent systems. These systems have human-like decision capacity for selected applications, based on a decision rationale that cannot be conveniently inspected and thus constitutes a black box. As a consequence, acceptance by end-users remains somewhat hesitant. While a lack of transparency has been said to hinder trust and foster aversion towards these systems, studies that connect user trust to transparency and subsequently to acceptance are scarce. In response, our research is concerned with the development of a theoretical model that explains end-user acceptance of intelligent systems. We utilize the unified theory of acceptance and use of technology as well as explanation theory and related theories on initial trust and user trust in information systems. The proposed model is tested in an industrial maintenance workplace scenario using maintenance experts as participants to represent the user group. Results show that acceptance is performance-driven at first sight. However, transparency plays an important indirect role in regulating trust and the perception of performance.
Ever-growing data availability combined with rapid progress in analytics has laid the foundation for the emergence of business process analytics. Organizations strive to leverage predictive process analytics to obtain insights. However, current implementations are designed to deal with homogeneous data. Consequently, there is limited practical use in an organization with heterogeneous data sources. The paper proposes a method for predictive end-to-end enterprise process network monitoring leveraging multi-headed deep neural networks to overcome this limitation. A case study performed with a medium-sized German manufacturing company highlights the method’s utility for organizations.
Robotic process automation is a disruptive technology to automate already digital yet manual tasks and subprocesses as well as whole business processes rapidly. In contrast to other process automation technologies, robotic process automation is lightweight and only accesses the presentation layer of IT systems to mimic human behavior. Due to the novelty of robotic process automation and the varying approaches when implementing the technology, there are reports that up to 50% of robotic process automation projects fail. To tackle this issue, we use a design science research approach to develop a framework for the implementation of robotic process automation projects. We analyzed 35 reports on real-life projects to derive a preliminary sequential model. Then, we performed multiple expert interviews and workshops to validate and refine our model. The result is a framework with variable stages that offers guidelines with enough flexibility to be applicable in complex and heterogeneous corporate environments as well as for small and medium-sized companies. It is structured by the three phases of initialization, implementation, and scaling. They comprise eleven stages relevant during a project and as a continuous cycle spanning individual projects. Together they structure how to manage knowledge and support processes for the execution of robotic process automation implementation projects.
Artificial intelligence (AI) is increasingly entering sensitive areas of everyday human life. Intelligent systems no longer make only simple decisions, but increasingly complex ones as well. For example, intelligent systems decide whether or not applicants should be hired by a company. The underlying decision-making is often difficult to comprehend, and unjustified decisions can therefore remain undetected, which is why such an AI implementation is frequently referred to as a black box. Consequently, the threat of being treated unfairly by discriminatory AI decisions grows. If these distortions result from human actions and patterns of thought, they are referred to as cognitive biases. Because the topic is so new, however, it is not yet clear which different cognitive biases can occur within an AI project. The aim of this paper is to provide a comprehensive overview by means of a structured literature review. The findings are organized and classified along the Cross-Industry Standard Process for Data Mining (CRISP-DM) model, which is widely used in practice. This analysis shows that human influence on an AI is present in every development phase of the model and that it is therefore important to explicitly examine "human-like" biases in an AI.
Inter-organizational collaboration in production networks can counteract challenges arising from high market dynamics, ever more demanding customer needs, and increasing cost pressure. In addition to the classic vertical shifting of capacities toward suitable suppliers, manufacturing capacities can also be traded through horizontal collaboration between manufacturing companies. In the spirit of the sharing economy, digital platforms offer a suitable infrastructure for connecting and coordinating the market actors of a production network. Manufacturing companies can thus flexibly counteract production outages and utilize free machine capacities. An essential prerequisite for the success of such digital platforms for production networks is the definition of goals, which so far have been studied only insufficiently in the literature and not with respect to this specific type of platform. This thesis develops a comprehensive conceptual goal model for this specific type of platform. Specific goals of digital platforms for production networks include, besides economic or technical goals, production-related market performance goals such as ensuring production flexibility. Building on this, it is shown how the design of the described platforms influences the achievement of certain goals and how specific mechanisms contribute to goal attainment.
The collection at hand is concerned with learning curve effects in hospitals as highly specialized expert organizations and comprises four papers, each focusing on a different aspect of the topic. Three papers are concerned with surgeons, and one with the staff of the emergency room in a conservative (non-surgical) treatment setting.
The preface compactly addresses the steadily increasing health care costs and economic pressure, the hospital landscape in Germany, and its development. Furthermore, the DRG lump-sum compensation and the characteristics of the health sector, which is strongly regulated by the state and in which ethical aspects must be omnipresent, are outlined. It also addresses the benefit of knowing about learning curve effects in order to cut costs while keeping quality stable or even improving it.
The first paper of the collection investigates learning effects in a hospital that has specialized in endoprosthetics (total hip and knee replacement). Both the specialized and the non-specialized interventions are studied. Costs are not investigated directly but via cost indicators: operating room times serve as the short-term cost indicator, and quality, operationalized by complications in the post-anesthesia care unit, as the medium- to long-term one. The study estimates regression models (OLS and logit). The results indicate that specialization comes with advantages due to learning effects in terms of shorter operating room times and lower complication rates in endoprosthetic interventions. The results are the same for the non-specialized interventions: there are no potentially negative effects of specialization on non-specialized surgeries, but advantageous spillover effects. Altogether, specialization can be regarded as reasonable, as it cuts the costs of all surgeries in the short, medium, and long term. The authors are Carsten Bauer, Nele Möbs, Oliver Unger, Andrea Szczesny, and Christian Ernst.
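A minimal sketch, on synthetic data only, of the two model types used throughout the collection: an OLS regression of operating room time on (log) cumulative experience and a logit model for the occurrence of a complication.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
experience = rng.integers(1, 200, n)                               # cumulative prior interventions
or_time = 120 - 15 * np.log(experience) + rng.normal(0, 10, n)     # synthetic learning curve
complication = rng.binomial(1, 1 / (1 + np.exp(0.5 + 0.01 * experience)))  # rarer with experience

X = sm.add_constant(pd.DataFrame({"log_experience": np.log(experience)}))
ols_fit = sm.OLS(or_time, X).fit()                 # short-term cost indicator: OR time
logit_fit = sm.Logit(complication, X).fit(disp=False)  # quality indicator: complication yes/no
print(ols_fit.params, logit_fit.params, sep="\n")
```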
The second paper focuses on surgeons' learning curve effects in a teamwork versus an individual work setting. The study thus combines learning curve effects with teamwork in health care, an issue increasingly discussed in recent literature. The investigated interventions are tonsillectomies (surgical excision of the palatine tonsils), a standard intervention. The short-term and the medium- to long-term cost indicators are again operating room times and complications as a proxy for quality, respectively. Complications are secondary bleedings, which usually occur a few days after surgery. The study estimates regression models (OLS and logit). The results show that operating room times decrease with increasing surgeon experience. Surgeons who also operate in teams learn faster than those who always operate on their own; accordingly, operating room times are shorter for surgeons who also take part in team interventions. As a special feature, the data set contains the costs per case, which makes it possible to verify that the assumed cost indicators are valid. The findings recommend team surgeries especially for resident physicians. The authors are Carsten Bauer, Oliver Unger, and Martin Holderried.
The third paper is dedicated to stapes surgery, a therapy for conductive hearing loss caused by otosclerosis (excessive bone growth). It is conceptually simple but technically difficult and is therefore regarded as ideal for studying learning curve effects in surgery. The paper seeks a comprehensive investigation: operating room times are employed as the short-term cost indicator and quality as the medium- to long-term one. Quality is measured by the postoperative difference between the air and bone conduction thresholds as well as by a combination of this difference and the absence of complications. This paper also estimates different regression models (OLS and logit). Besides investigating the effects at the department level, the study also considers the individual level, meaning that operating room times and quality are investigated for individual surgeons. This improves the comparison of learning curves, as the surgeons worked under largely identical conditions. It becomes apparent that operating room times initially decrease with increasing experience. The marginal effect of additional experience gets smaller until the direction of the effect changes and operating room times increase with increasing experience, probably caused by the allocation of difficult cases to the most experienced surgeons. Regarding quality, no learning curve effects are observed. The authors are Carsten Bauer, Johannes Taeger, and Kristen Rak.
The fourth paper is a systematic literature review on learning effects in the treatment of ischemic strokes. In case of stroke, every minute counts. Therefore, there is the inherent need to reduce the time from symptom onset to treatment. The article is concerned with the reduction of the time from arrival at the hospital to thrombolysis treatment, the so-called “door-to-needle time”. In the literature, there are studies on learning in a broader sense caused by a quality improvement program as well as learning in a narrower sense, in which learning curve effects are evaluated. Besides, studies on the time differences between low-volume and high-volume hospitals are considered, as the differences are probably the result of learning and economies of scale. Virtually all the 165 evaluated articles report improvements regarding the time to treatment. Furthermore, the clinical results substantiate the common association of shorter times from arrival to treatment with improved clinical outcomes. The review additionally discusses the economic implications of the results. The author is Carsten Bauer.
The postface points out that, after learning curve effects have been measured, further efforts are necessary to use them for increasing efficiency, as the issue does not admit of easy, standardized solutions. Furthermore, it emphasizes the importance of multiperspectivity in research for patient outcomes, the health care system, and society.
Recent computing advances are driving the integration of artificial intelligence (AI)-based systems into nearly every facet of our daily lives. AI is becoming a frontier for enabling algorithmic decision-making by mimicking or even surpassing human intelligence. These AI-based systems can function as decision support systems (DSSs) that assist experts in high-stakes use cases where human lives are at risk. However, all that glitters is not gold: the underlying machine learning (ML) models apply mathematical and statistical algorithms to autonomously derive nonlinear decision knowledge and are correspondingly complex. One particular subclass of ML models, deep learning models, achieves unsurpassed performance, with the drawback that these models are no longer explainable to humans. This divergence may result in end-users' unwillingness to utilize this type of AI-based DSS, thus diminishing their system acceptance.
Hence, the explainable AI (XAI) research stream has gained momentum, as it develops techniques to unravel this black box while maintaining system performance. Unsurprisingly, these XAI techniques become necessary for justifying, evaluating, improving, or managing the utilization of AI-based DSSs. This yields a plethora of explanation techniques, creating an XAI jungle from which end-users must choose. These techniques, however, are primarily engineered by developers for developers without ensuring an actual end-user fit. It therefore remains unknown how an end-user's mental model behaves when encountering such explanation techniques.
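As one concrete example of a post-hoc, model-agnostic explanation technique of the kind this literature studies (not a method specific to the thesis), here is a sketch of permutation feature importance for a black-box classifier.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: how much does shuffling each feature hurt held-out performance?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```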
For this purpose, this cumulative thesis seeks to address this research deficiency by investigating end-user perceptions when encountering intrinsic ML and post-hoc XAI explanations. Drawing on this, the findings are synthesized into design knowledge to enable the deployment of XAI-based DSSs in practice. To this end, this thesis comprises six research contributions that follow the iterative and alternating interplay between behavioral science and design science research employed in information systems (IS) research and thus contribute to the overall research objectives as follows: First, an in-depth study of the impact of transparency and (initial) trust on end-user acceptance is conducted by extending and validating the unified theory of acceptance and use of technology model. This study indicates both factors' strong but indirect effects on system acceptance, validating further research incentives. In particular, this thesis focuses on the overarching concept of transparency. Herein, a systematization in the form of a taxonomy and pattern analysis of existing user-centered XAI studies is derived to structure and guide future research endeavors. This enables the empirical investigation of the theoretical trade-off between performance and explainability in intrinsic ML algorithms, yielding a less gradual trade-off fragmented into three explainability groups. This is followed by an empirical investigation of end-users' perceived explainability of post-hoc explanation types, with local explanation types performing best. Furthermore, an empirical investigation emphasizes the correlation between comprehensibility and explainability, indicating almost significant (with outliers) results for the assumed correlation. The final empirical investigation examines the effect of XAI explanation types on end-user cognitive load and the effect of cognitive load on end-user task performance and task time; it also positions local explanation types as best and demonstrates the correlations between cognitive load and task performance and, moreover, between cognitive load and task time. Finally, the last research paper utilizes, among other things, the obtained knowledge and derives a nascent design theory for XAI-based DSSs. This design theory encompasses (meta-)design requirements, design principles, and design features in a domain-independent and interdisciplinary fashion, including end-users and developers as potential user groups. This design theory is ultimately tested through a real-world instantiation in a high-stakes maintenance scenario.
From an IS research perspective, this cumulative thesis addresses the lack of research on perception and design knowledge for the assured utilization of XAI-based DSSs. This lays the foundation for future research to obtain a holistic understanding of end-users' heuristic behaviors during decision-making to facilitate the acceptance of XAI-based DSSs in operational practice.
This dissertation examines selected aspects of tax avoidance and cross-border taxation. Part B focuses on empirical evidence on tax avoidance and profit shifting by multinational companies, comprising three individual essays. Part C examines the differing taxation of human and physical capital on the basis of the two fundamental taxation principles, the benefit principle and the ability-to-pay principle. The final essay (Part D) analyzes the postulate of freedom from value judgments in the stakeholder approach and uses a case example to show how corporate taxation can be integrated into different stakeholder approaches. A concluding overall appraisal addresses remaining research questions (Part E).
This dissertation thus examines cross-border taxation from business-economic, taxation-principle-based or dogmatic, and epistemological perspectives.
Companies are expected to act as international players and to use their capabilities to provide customized products and services quickly and efficiently. Today, consumers expect their requirements to be met within a short time and at a favorable price. Order-to-delivery lead time has steadily gained in importance for consumers. Furthermore, governments can use various emission policies to force companies and customers to reduce their greenhouse gas emissions. This thesis investigates the influence of order-to-delivery lead time and different emission policies on the design of a supply chain. Within this work, different supply chain design models are developed to examine these influences. The first model incorporates lead times and total costs, and various emission policies are implemented to illustrate the trade-off between the different measures. The second model reflects the influence of consumers who are sensitive to order-to-delivery lead time, and different emission policies are implemented to study their impacts. The analysis shows that the share of lead-time-sensitive consumers has a significant impact on the design of a supply chain. Demand uncertainty and uncertainty in the design of different emission policies are investigated by developing an appropriate robust mathematical optimization model. Results show that especially uncertainties in the design of an emission policy can significantly impact the total cost of a supply chain. The effects of differently designed emission policies in various countries are investigated in the fourth model. The analyses highlight that both lead times and emission policies can strongly influence companies' offshoring and nearshoring strategies.
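The thesis's models are not reproduced here; the following deliberately tiny sketch only conveys the core idea of a supply chain design model with an emission-policy constraint (a cap), formulated with PuLP and using invented data.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

sites, markets = ["near", "far"], ["m1", "m2"]
open_cost = {"near": 900, "far": 500}
ship_cost = {("near", "m1"): 4, ("near", "m2"): 5, ("far", "m1"): 9, ("far", "m2"): 8}
ship_emis = {("near", "m1"): 1, ("near", "m2"): 1, ("far", "m1"): 4, ("far", "m2"): 4}
demand = {"m1": 100, "m2": 80}
emission_cap = 400                       # illustrative cap imposed by an emission policy

prob = LpProblem("supply_chain_design", LpMinimize)
y = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}
x = {(s, m): LpVariable(f"ship_{s}_{m}", lowBound=0) for s in sites for m in markets}

prob += lpSum(open_cost[s] * y[s] for s in sites) + lpSum(ship_cost[k] * x[k] for k in x)
for m in markets:
    prob += lpSum(x[s, m] for s in sites) == demand[m]          # meet demand
for s in sites:
    prob += lpSum(x[s, m] for m in markets) <= 1000 * y[s]      # ship only from open sites
prob += lpSum(ship_emis[k] * x[k] for k in x) <= emission_cap   # emission policy (cap)

prob.solve()
print({s: y[s].value() for s in sites}, value(prob.objective))
```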
Structural equation modeling using partial least squares (PLS-SEM) has become a mainstream modeling approach in various disciplines. Nevertheless, prior literature still lacks practical guidance on how to properly test for differences between parameter estimates. Whereas existing techniques such as parametric and non-parametric approaches in PLS multi-group analysis only allow assessing differences between parameters that are estimated for different subpopulations, the study at hand introduces a technique that also allows assessing whether two parameter estimates derived from the same sample are statistically different. To illustrate this advancement to PLS-SEM, we particularly refer to a reduced version of the well-established technology acceptance model.
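The study's PLS-specific procedure is not reproduced; the sketch below only conveys the general bootstrap logic of testing whether two coefficients estimated from the same sample differ, using an ordinary linear model in place of a PLS path model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 2))
y = 0.5 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=1.0, size=n)

def coef_difference(Xs, ys):
    beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(ys)), Xs]), ys, rcond=None)
    return beta[1] - beta[2]             # difference of the two slope estimates

# Bootstrap the difference by resampling cases from the same sample.
diffs = np.array([
    coef_difference(X[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])
print(f"95% CI for the difference: [{ci_low:.3f}, {ci_high:.3f}]")  # excludes 0 -> significant
```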
Increasing global competition forces organizations to improve their processes to gain a competitive advantage. In the manufacturing sector, this is facilitated through tremendous digital transformation. Fundamental components in such digitalized environments are process-aware information systems that record the execution of business processes, assist in process automation, and unlock the potential to analyze processes. However, most enterprise information systems focus on informational aspects, process automation, or data collection but do not tap into predictive or prescriptive analytics to foster data-driven decision-making. Therefore, this dissertation sets out to investigate the design of analytics-enabled information systems in five independent parts, which step-wise introduce analytics capabilities and assess potential opportunities for process improvement in real-world scenarios.
To set up and extend analytics-enabled information systems, an essential prerequisite is identifying success factors, which we identify in the context of process mining as a descriptive analytics technique. We combine an established process mining framework and a success model to provide a structured approach for assessing success factors and identifying challenges, motivations, and perceived business value of process mining from employees across organizations as well as process mining experts and consultants. We extend the existing success model and provide lessons for business value generation through process mining based on the derived findings. To assist the realization of process mining enabled business value, we design an artifact for context-aware process mining. The artifact combines standard process logs with additional context information to assist the automated identification of process realization paths associated with specific context events. Yet, realizing business value is a challenging task, as transforming processes based on informational insights is time-consuming.
To overcome this, we showcase the development of a predictive process monitoring system for disruption handling in a production environment. The system leverages state-of-the-art machine learning algorithms for disruption type classification and duration prediction. It combines the algorithms with additional organizational data sources and a simple assignment procedure to assist the disruption handling process. The design of such a system and analytics models is a challenging task, which we address by engineering a five-phase method for predictive end-to-end enterprise process network monitoring leveraging multi-headed deep neural networks. The method facilitates the integration of heterogeneous data sources through dedicated neural network input heads, which are concatenated for a prediction. An evaluation based on a real-world use-case highlights the superior performance of the resulting multi-headed network.
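A minimal sketch of the multi-headed idea with the Keras functional API: two heterogeneous input sources (the feature names and dimensions below are invented) are encoded by separate input heads and concatenated before the prediction layer.

```python
import numpy as np
from tensorflow.keras import layers, Model

# Two illustrative input sources with different dimensionality.
event_in = layers.Input(shape=(20,), name="event_features")
context_in = layers.Input(shape=(5,), name="context_features")

event_head = layers.Dense(32, activation="relu")(event_in)      # head 1
context_head = layers.Dense(8, activation="relu")(context_in)   # head 2

merged = layers.Concatenate()([event_head, context_head])       # fuse the heads
hidden = layers.Dense(16, activation="relu")(merged)
output = layers.Dense(1, activation="sigmoid", name="disruption")(hidden)

model = Model(inputs=[event_in, context_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Train on random stand-in data.
x1, x2 = np.random.rand(256, 20), np.random.rand(256, 5)
y = np.random.randint(0, 2, size=(256, 1))
model.fit([x1, x2], y, epochs=2, batch_size=32, verbose=0)
```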
Even with the improved model performance, results are not perfect, and thus decisions about assigning agents to resolve disruptions have to be made under uncertainty. Mathematical models can assist here, but due to complex real-world conditions, the number of potential scenarios increases massively and limits the solvability of assignment models. To overcome this and tap into the potential of prescriptive process monitoring systems, we set out a data-driven approximate dynamic stochastic programming approach, which incorporates multiple uncertainties into an assignment decision. The resulting model yields a significant performance improvement and ultimately highlights the particular importance of analytics-enabled information systems for organizational process improvement.
Purpose: The purpose of this paper is to enhance consistent partial least squares (PLSc) to yield consistent parameter estimates for population models whose indicator blocks contain a subset of correlated measurement errors.
Design/methodology/approach: Correction for attenuation as originally applied by PLSc is modified to include a priori assumptions on the structure of the measurement error correlations within blocks of indicators. To assess the efficacy of the modification, a Monte Carlo simulation is conducted.
Findings: In the presence of population measurement error correlation, estimated parameter bias is generally small for original and modified PLSc, with the latter outperforming the former for large sample sizes. In terms of the root mean squared error, the results are virtually identical for both original and modified PLSc. Only for relatively large sample sizes, high population measurement error correlation, and low population composite reliability are the increased standard errors associated with the modification outweighed by a smaller bias. These findings are regarded as initial evidence that original PLSc is comparatively robust with respect to misspecification of the structure of measurement error correlations within blocks of indicators.
Originality/value: Introducing and investigating a new approach to address measurement error correlation within blocks of indicators in PLSc, this paper contributes to the ongoing development and assessment of recent advancements in partial least squares path modeling.
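For context, the classical correction for attenuation that PLSc builds on divides an observed correlation by the square root of the product of the two reliabilities; the paper's modification for correlated measurement errors is not shown here.

```python
def disattenuated_correlation(r_xy, rel_x, rel_y):
    """Classical correction for attenuation: observed correlation divided by the
    square root of the product of the two reliabilities."""
    return r_xy / (rel_x * rel_y) ** 0.5

print(disattenuated_correlation(0.42, 0.8, 0.7))  # ~0.56
```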
Plattform für das integrierte Management von Kollaborationen in Wertschöpfungsnetzwerken (PIMKoWe)
(2022)
The joint project "Plattform für das integrierte Management von Kollaborationen in Wertschöpfungsnetzwerken" (PIMKoWe, funding code 02P17D160) is a research project within the research program "Innovationen für die Produktion, Dienstleistung und Arbeit von morgen" under the call "Industrie 4.0 – Intelligente Kollaborationen in dynamischen Wertschöpfungsnetzwerken" (InKoWe). The project was funded by the German Federal Ministry of Education and Research (BMBF) and supervised by the project management agency at the Karlsruhe Institute of Technology (PTKA).
The goal of the PIMKoWe research project is to develop and provide a platform solution that makes collaborations in value-creation networks of the industrial sector more flexible, automated, and secure.
The world is undergoing a profound transformation from an industrial to a knowledge society. The automation of both physical and cognitive work is increasingly shifting labor-market demand toward highly qualified employees, so-called high potentials. Besides their intelligence, these individuals are characterized by diverse abilities such as empathy, creativity, and problem-solving skills. Human capital is regarded as the competitive factor of the future, yet companies were already complaining about a shortage of specialists and executives at the end of the 20th century, a shortage further aggravated by the pandemic. For this reason, concepts for recruiting and retaining employees are moving into the focus of companies.
As ethical and ecological awareness gains importance among the population, it can be assumed that applicants will increasingly prefer responsible employers. Sustainability, or corporate responsibility, thus becomes a competitive factor in attracting and retaining talent. Using the identity-based approach to brand management, an understanding is developed of how companies succeed in building a strong employer brand. Based on a conceptual, practical, and empirical investigation using Unilever as a corporate example, the effects of comprehensive ecological and social engagement on employer attractiveness are analyzed.
It turns out that sustainability, made concrete through the 17 Sustainable Development Goals (SDGs) and anchored in the core of the brand, enables the successful management of an employer brand. This result follows from both the theoretical and the empirical part of this thesis. In the latter, three general positive relationships were confirmed using a structural equation model: applicants feel attracted to responsible companies, which is why they perceive a person-organization fit (P-O fit). This perceived fit with the company increases employer attractiveness from the perspective of potential applicants, which in turn increases the probability of an intention to apply and of accepting a job offer. This confirms the assumption that the challenges of recruiting can be met through a consistently sustainable orientation of business activities and its credible communication via the employer brand.
Innovative software can secure a company's competitive position. Introducing innovative software, however, is anything but easy: although the technical aspects are more obvious, organizational aspects dominate. Too many software projects fail because the introduction does not succeed, despite the technical requirements being met. Against this background, the research goal of this master's thesis is to identify risks and success factors for introducing innovative software in companies, to formulate a strategy, and in doing so to determine the role of key individuals.
The digital transformation facilitates new forms of collaboration between companies along the supply chain and between companies and consumers. Besides sharing information on centralized platforms, blockchain technology is often regarded as a potential basis for this kind of collaboration. However, there is much hype surrounding the technology due to the rising popularity of cryptocurrencies, decentralized finance (DeFi), and non-fungible tokens (NFTs). This leads to potential issues being overlooked. Therefore, this thesis aims to investigate, highlight, and address the current weaknesses of blockchain technology: inefficient consensus, privacy, smart contract security, and scalability.
First, to provide a foundation, the four key challenges are introduced, and the research objectives are defined, followed by a brief presentation of the preliminary work for this thesis.
The following four parts highlight the four main problem areas of blockchain. Using big data analytics, we extracted and analyzed the blockchain data of six major blockchains to identify potential weaknesses in their consensus algorithms. To improve smart contract security, we classified smart contract functionalities to identify similarities in structure and design. The resulting taxonomy serves as a basis for future standardization efforts for security-relevant features, such as safe math functions and oracle services. To challenge privacy assumptions, we researched consortium blockchains from an adversary's perspective. We chose four blockchains with misconfigured nodes and extracted as much information from those nodes as possible. Finally, we compared scalability solutions for blockchain applications and developed a decision process that serves as a guideline for improving the scalability of blockchain applications.
Building on the scalability framework, we showcase three potential applications for blockchain technology. First, we develop a token-based approach for inter-company value stream mapping. By relying only on simple tokens instead of complex smart contracts, the computational load on the network is expected to be much lower compared to other solutions. The following two solutions offload transactions and computations from the main blockchain. The first uses secure multiparty computation to offload the matching of supply and demand for manufacturing capacities to a trustless network; the transaction is written to the main blockchain only after the match is made. The second uses the concept of payment channel networks to enable high-frequency bidirectional micropayments for WiFi sharing: the host gets paid for every second of data usage through an off-chain channel, and the full payment is written to the blockchain only after the connection to the client is terminated.
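A toy illustration of the payment-channel idea for the WiFi-sharing case; there is no real cryptography, signing, or blockchain interaction here, and all numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class PaymentChannel:
    """Off-chain channel: only the final state is settled on the blockchain."""
    client_deposit: float
    host_balance: float = 0.0
    nonce: int = 0                       # would be part of the signed channel state

    def pay_per_second(self, price: float, seconds: int) -> None:
        for _ in range(seconds):         # one channel-state update per second of WiFi usage
            self.client_deposit -= price
            self.host_balance += price
            self.nonce += 1

    def settle(self) -> dict:
        # In a real payment channel network, this single transaction is written on-chain.
        return {"client_refund": self.client_deposit,
                "host_payout": self.host_balance,
                "final_nonce": self.nonce}

channel = PaymentChannel(client_deposit=1.0)
channel.pay_per_second(price=0.0001, seconds=600)   # 10 minutes of WiFi sharing
print(channel.settle())
```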
Finally, the thesis concludes by briefly summarizing and discussing the results and providing avenues for further research.
This dissertation examines three selected reforms, or needs for reform, in the German three-pillar system of old-age provision:
In the pillar of statutory pension provision, options are developed for reinstating the catch-up factor (Nachholfaktor) of the statutory pension insurance, which was suspended in 2018. Depending on whether increases in the current pension value caused by the level protection clause (Niveauschutzklausel) are to be offset in future years or not, two different procedures are presented, the separate procedure and the integrated procedure, into which the catch-up factor fits consistently while the protection clause and the level protection clause are active.
In the pillar of occupational pensions, options for reforming the 6% discount rate prescribed by tax law for pension provisions are analyzed. The analysis considers the consequences for employers if the discount rate were given a new value at the legislator's discretion, if it followed a reference rate in a rule-based manner, if tax law followed the valuation under commercial law, and if an innovative tranching procedure were introduced. It is then discussed to what extent there is any legislative need for adjustment at all. The impression emerges that the tax-law discount rate typifies a return on total capital, and the hypothesis that 6% is quite realistic for German companies cannot be rejected.
In the pillar of private pension provision, the dissertation determines when, in the case of a Riester-subsidized purchase of a home, the optimal time arrives during the homeowner's retirement phase to exercise the option of ending deferred taxation early. Upon early termination, all outstanding amounts are taxed at once, but only at 70%. When this 30% discount becomes advantageous is demonstrated by varying the balance of the home subsidy account (Wohnförderkonto), pension income, the market interest rate, the start of retirement, survival probabilities, and the taxable share.
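A strongly simplified numerical sketch of the underlying trade-off; the flat tax rate, the remaining distribution period, and the assumption that the outstanding balance would otherwise be taxed in equal annual instalments are illustrative assumptions, not the dissertation's calculation.

```python
# Option A: end deferred taxation early -> tax 70% of the outstanding balance at once.
# Option B: keep deferred taxation -> tax the balance in equal annual instalments.
balance = 60_000          # outstanding Wohnförderkonto balance (illustrative)
tax_rate = 0.25           # assumed flat marginal tax rate in retirement
market_rate = 0.03        # discount rate
years_remaining = 15      # assumed remaining distribution period (e.g., until age 85)

tax_now = 0.70 * balance * tax_rate

annual_taxable = balance / years_remaining
pv_tax_deferred = sum(
    annual_taxable * tax_rate / (1 + market_rate) ** t
    for t in range(1, years_remaining + 1)
)

print(f"one-off taxation (70% rule): {tax_now:,.0f} EUR")
print(f"present value of deferred taxation: {pv_tax_deferred:,.0f} EUR")
# Early termination is advantageous when the first value is below the second.
```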
Novel deep learning (DL) architectures, better data availability, and a significant increase in computing power have enabled scientists to solve problems that were considered unassailable for many years. A case in point is the "protein folding problem", a 50-year-old grand challenge in biology that was recently solved by the DL system AlphaFold. Other examples comprise the development of large DL-based language models that, for instance, generate newspaper articles that hardly differ from those written by humans. However, developing unbiased, reliable, and accurate DL models for various practical applications remains a major challenge, and many promising DL projects get stuck in the piloting stage, never to be completed. In light of these observations, this thesis investigates the practical challenges encountered throughout the life cycle of DL projects and proposes solutions to develop and deploy rigorous DL models.
The first part of the thesis is concerned with prototyping DL solutions in different domains. First, we conceptualize guidelines for applied image recognition and showcase their application in a biomedical research project. Next, we illustrate the bottom-up development of a DL backend for an augmented intelligence system in the manufacturing sector. We then turn to the fashion domain and present an artificial curation system for individual fashion outfit recommendations that leverages DL techniques and unstructured data from social media and fashion blogs. After that, we showcase how DL solutions can assist fashion designers in the creative process. Finally, we present our award-winning DL solution for the segmentation of glomeruli in human kidney tissue images that was developed for the Kaggle data science competition HuBMAP - Hacking the Kidney.
The second part continues the development path of the biomedical research project beyond the prototyping stage. Using data from five laboratories, we show that ground truth estimation from multiple human annotators and training of DL model ensembles help to establish objectivity, reliability, and validity in DL-based bioimage analyses.
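The project's actual ground-truth estimation procedure is not reproduced here; the following minimal sketch shows only its simplest variant, a per-pixel majority vote over several binary expert annotations.

```python
import numpy as np

def majority_vote(annotations):
    """Estimate a ground-truth mask from several binary expert annotations.

    annotations: array of shape (n_annotators, H, W) with values in {0, 1}.
    """
    votes = annotations.sum(axis=0)
    return (votes * 2 > annotations.shape[0]).astype(np.uint8)   # strict majority

# Three synthetic 4x4 expert masks that disagree on a few pixels.
rng = np.random.default_rng(3)
masks = (rng.uniform(size=(3, 4, 4)) > 0.4).astype(np.uint8)
print(majority_vote(masks))
```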
In the third part, we present deepflash2, a DL solution that addresses the typical challenges encountered during training, evaluation, and application of DL models in bioimaging. The tool facilitates the objective and reliable segmentation of ambiguous bioimages through multi-expert annotations and integrated quality assurance. It is embedded in an easy-to-use graphical user interface and offers best-in-class predictive performance for semantic and instance segmentation under economical usage of computational resources.
The global selection of production sites is a very complex task of great strategic importance for Original Equipment Manufacturers (OEMs), not only to ensure their sustained competitiveness, but also due to the sizeable long-term investment associated with a production site. With this in mind, this work develops a process model with which OEMs can select the most appropriate production site for their specific production activity in practice. Based on a literature analysis, the process model is developed by determining all necessary preparations, defining the properties of the selection process model, providing all necessary instructions for choosing and evaluating location factors, and laying out the procedure of the selection process model. Moreover, the selection process model includes a discussion of location factors that are possibly relevant for OEMs when selecting a production site. This discussion contains a description and, where relevant, a macroeconomic analysis of each location factor, an explanation of its relevance for constructing and operating a production site, additional information for choosing relevant location factors, and information and instructions on evaluating them in the selection process model. To be successfully applicable, the selection process model is developed on the assumption that the production site must not be selected in isolation, but as part of the global production network and supply chain of the OEM and, additionally, so as to advance the OEM's related strategic goals. Furthermore, the selection process model is developed on the premise that a purely quantitative model cannot realistically solve an OEM's complex selection of a production site, that a realistic analysis of the conditions at potential production sites requires evaluating how these conditions change over the planning horizon of the production site, and that the future development of many of these conditions can only be assessed with uncertainty.
The study considers the application of text mining techniques to the analysis of curricula for study programs offered by institutions of higher education. It presents a novel procedure for efficient and scalable quantitative content analysis of module handbooks using topic modeling. The proposed approach allows for collecting, analyzing, evaluating, and comparing curricula from arbitrary academic disciplines as a partially automated, scalable alternative to qualitative content analysis, which is traditionally conducted manually. The procedure is illustrated by the example of IS study programs in Germany, based on a data set of more than 90 programs and 3700 distinct modules. The contributions made by the study address the needs of several different stakeholders and provide insights into the differences and similarities among the study programs examined. For example, the results may aid academic management in updating the IS curricula and can be incorporated into the curricular design process. With regard to employers, the results provide insights into the fulfillment of their employee skill expectations by various universities and degrees. Prospective students can incorporate the results into their decision concerning where and what to study, while university sponsors can utilize the results in their grant processes.
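A minimal sketch of the topic-modeling step on a handful of invented module descriptions; in the study, the corpus consists of the parsed module handbooks.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

modules = [
    "introduction to programming algorithms data structures",
    "database systems sql data modeling transactions",
    "marketing strategy consumer behavior branding",
    "corporate finance accounting financial statements",
    "machine learning statistics data mining python",
    "supply chain management logistics operations planning",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(modules)                     # document-term matrix

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(dtm)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]    # top terms per topic
    print(f"topic {k}: {', '.join(top)}")
```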
Today, intelligent systems that offer artificial intelligence capabilities often rely on machine learning. Machine learning describes the capacity of systems to learn from problem-specific training data to automate the process of analytical model building and solve associated tasks. Deep learning is a machine learning concept based on artificial neural networks. For many applications, deep learning models outperform shallow machine learning models and traditional data analysis approaches. In this article, we summarize the fundamentals of machine learning and deep learning to generate a broader understanding of the methodical underpinning of current intelligent systems. In particular, we provide a conceptual distinction between relevant terms and concepts, explain the process of automated analytical model building through machine learning and deep learning, and discuss the challenges that arise when implementing such intelligent systems in the field of electronic markets and networked business. These naturally go beyond technological aspects and highlight issues in human-machine interaction and artificial intelligence servitization.
This paper shows that labor demand plays an important role in the labor market reactions to a pension reform in Germany. Employers with a high share of older worker inflow compared with their younger worker inflow, employers in sectors with few investments in research and development, and employers in sectors with a high share of collective bargaining agreements allow their employees to stay employed longer after the reform. These employers offer their older employees partial retirement instead of forcing them into unemployment before early retirement because the older employees incur low substitution costs and high dismissal costs.
De exemplis deterrentibus
(2022)
On the basis of a collection of real cases formulated as exercises, this book deals with the unfortunately often troubled relationship between theory and practice in legally shaped business valuation.
Like "ordinary" case collections, it presents the respective exercises together with their solutions. The actual questions in the exercise texts are framed by brief explanations so that anyone reasonably familiar with valuation issues can understand each case fairly easily and place it in context. This approach in turn resembles textbooks that convey content through cases, except that here it is not hypothetical cases that demonstrate the ideal correct procedure, but practical cases that illustrate striking violations contra legem artis.
Innovative possibilities for data collection, networking, and evaluation are unleashing previously untapped potential for industrial production. However, harnessing this potential also requires a change in the way we work. In addition to expanded automation, human-machine cooperation is becoming more important: the machine uses artificial intelligence to reduce complexity for the human. In fractions of a second, it analyzes large amounts of data and offers suggestions of high decision quality. The human, for their part, usually makes the final decision, validating the machine's suggestions and, if necessary, (physically) executing them.
Both entities are highly dependent on each other to accomplish the task in the best possible way. Therefore, it seems particularly important to understand to what extent such cooperation can be effective. Current developments in the field of artificial intelligence show that research in this area is particularly focused on neural network approaches. These are considered to be highly powerful but have the disadvantage of lacking transparency. Their inherent computational processes and the respective result reasoning remain opaque to humans. Some researchers assume that human users might therefore reject the system’s suggestions. The research domain of explainable artificial intelligence (XAI) addresses this problem and tries to develop methods to realize systems that are highly efficient and explainable.
This work is intended to provide further insights relevant to the stated goal of XAI. For this purpose, artifacts are developed that represent research contributions regarding the systematization, perception, and adoption of artificially intelligent decision support systems from a user perspective. The focus is on socio-technical insights, with the aim of better understanding which factors are important for effective human-machine cooperation. The elaborations predominantly represent extended grounded research; the artifacts thus extend the body of knowledge on which effective XAI methods and techniques can be developed and/or tested. Industry 4.0, with a focus on maintenance, serves as the context for this development.
The strategic planning of Emergency Medical Service systems is directly related to the survival probability of the people affected. Academic research has contributed to the evaluation of these systems by defining a variety of key performance metrics: the average response time, the workload of the system, several waiting-time parameters, and the fraction of demand that cannot be served immediately are among the most important examples. The Hypercube Queueing Model is one of the most widely applied models in this field. Owing to its theoretical background and the resulting high computational times, the Hypercube Queueing Model has only recently been used for the optimization of Emergency Medical Service systems. Likewise, only a few system performance metrics have been calculated with the help of the model, so its full potential has not yet been reached. Most existing optimization studies based on a Hypercube Queueing Model use the expected response time of the system as their objective function. While this often leads to balanced system configurations, other influencing factors have been identified. Embedding the Hypercube Queueing Model in Robust Optimization as well as Robust Goal Programming was intended to offer a more holistic view by distinguishing different times of day. It was shown that the behavior of Emergency Medical Service systems as well as the corresponding parameters depend strongly on the time of day. The analysis and optimization of such systems should therefore consider the different distributions of demand, with regard to both quantity and location, in order to derive a holistic basis for decision-making.
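To give an impression of the model's mechanics, the following minimal sketch solves a two-server Hypercube Queueing Model with zero-line capacity (blocked demand is lost) and a fixed dispatch preference; the rates and the dispatch rule are assumptions made purely for illustration, not taken from the thesis:

```python
# Minimal sketch of a two-server Hypercube Queueing Model with zero-line
# capacity (blocked calls are lost) and a fixed dispatch preference
# (server 1 preferred). All rates are hypothetical.
import numpy as np

lam, mu1, mu2 = 1.0, 0.8, 0.6            # arrival and service rates (assumed)

# States: (b1, b2) with b = 1 if the corresponding server is busy.
states = [(0, 0), (1, 0), (0, 1), (1, 1)]
idx = {s: i for i, s in enumerate(states)}

Q = np.zeros((4, 4))                      # generator matrix of the Markov chain
Q[idx[(0, 0)], idx[(1, 0)]] = lam         # arrival -> preferred server 1
Q[idx[(1, 0)], idx[(1, 1)]] = lam         # server 1 busy -> dispatch server 2
Q[idx[(0, 1)], idx[(1, 1)]] = lam         # server 2 busy -> dispatch server 1
Q[idx[(1, 0)], idx[(0, 0)]] = mu1         # service completions
Q[idx[(0, 1)], idx[(0, 0)]] = mu2
Q[idx[(1, 1)], idx[(0, 1)]] = mu1
Q[idx[(1, 1)], idx[(1, 0)]] = mu2
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

workload_1 = pi[idx[(1, 0)]] + pi[idx[(1, 1)]]   # fraction of time server 1 is busy
workload_2 = pi[idx[(0, 1)]] + pi[idx[(1, 1)]]
loss_fraction = pi[idx[(1, 1)]]                  # demand arriving when both are busy
print(workload_1, workload_2, loss_fraction)
```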
Digitization and artificial intelligence are radically changing virtually all areas across business and society. These developments are mainly driven by the technology of machine learning (ML), which is enabled by the coming together of large amounts of training data, statistical learning theory, and sufficient computational power. This technology forms the basis for the development of new approaches to solve classical planning problems of Operations Research (OR): prescriptive analytics approaches integrate ML prediction and OR optimization into a single prescription step, so they learn from historical observations of demand and a set of features (co-variates) and provide a model that directly prescribes future decisions. These novel approaches provide enormous potential to improve planning decisions, as first case reports showed, and, consequently, constitute a new field of research in Operations Management (OM).
First works in this new field of research have studied approaches to solving comparatively simple planning problems in the area of inventory management. However, common OM planning problems often have a more complex structure, and many of these complex planning problems are within the domain of capacity planning. Therefore, this dissertation focuses on developing new prescriptive analytics approaches for complex capacity management problems. This dissertation consists of three independent articles that develop new prescriptive approaches and use these to solve realistic capacity planning problems.
The first article, “Prescriptive Analytics for Flexible Capacity Management”, develops two prescriptive analytics approaches, weighted sample average approximation (wSAA) and kernelized empirical risk minimization (kERM), to solve a complex two-stage capacity planning problem that has been studied extensively in the literature: a logistics service provider sorts daily incoming mail items on three service lines that must be staffed on a weekly basis. This article is the first to develop a kERM approach to solve a complex two-stage stochastic capacity planning problem with matrix-valued observations of demand and vector-valued decisions. The article develops out-of-sample performance guarantees for kERM and various kernels, and shows the universal approximation property when using a universal kernel. The results of the numerical study suggest that prescriptive analytics approaches may lead to significant improvements in performance compared to traditional two-step approaches or SAA and that their performance is more robust to variations in the exogenous cost parameters.
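The core idea of wSAA can be illustrated on a deliberately simplified, one-dimensional capacity (newsvendor-type) decision; the k-nearest-neighbour weighting, the cost parameters, and the synthetic data below are illustrative assumptions, not the article's actual implementation:

```python
# Minimal sketch of weighted sample average approximation (wSAA) for a
# one-dimensional, newsvendor-type capacity decision. Weights come from
# k-nearest neighbours in feature space; data and costs are hypothetical.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0.0, 1.0, size=(n, 3))                    # historical features
demand = 100 * X[:, 0] + 20 * rng.standard_normal(n)      # historical demand

c_under, c_over = 9.0, 1.0                                 # underage / overage cost

def wsaa_decision(x_new, k=50):
    """Capacity that minimises the weighted empirical newsvendor cost."""
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, neigh = nn.kneighbors(x_new.reshape(1, -1))
    w = np.zeros(n)
    w[neigh[0]] = 1.0 / k                                  # uniform kNN weights
    # For this piecewise-linear cost, wSAA reduces to a weighted demand
    # quantile at the critical ratio.
    order = np.argsort(demand)
    cum_w = np.cumsum(w[order])
    tau = c_under / (c_under + c_over)
    i = min(np.searchsorted(cum_w, tau), n - 1)
    return demand[order][i]

print(round(wsaa_decision(np.array([0.7, 0.2, 0.5])), 1))
```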
The second article, “Prescriptive Analytics for a Multi-Shift Staffing Problem”, uses prescriptive analytics approaches to solve the (queuing-type) multi-shift staffing problem (MSSP) of an aviation maintenance provider that receives customer requests of uncertain number and at uncertain arrival times throughout each day and plans staff capacity for two shifts. This planning problem is particularly complex because the order inflow and processing are modelled as a queuing system, and the demand in each day is non-stationary. The article addresses this complexity by deriving an approximation of the MSSP that enables the planning problem to be solved using wSAA, kERM, and a novel Optimization Prediction approach. A numerical evaluation shows that wSAA leads to the best performance in this particular case. The solution method developed in this article builds a foundation for solving queuing-type planning problems using prescriptive analytics approaches, so it bridges the “worlds” of queuing theory and prescriptive analytics.
The third article, “Explainable Subgradient Tree Boosting for Prescriptive Analytics in Operations Management”, proposes a novel prescriptive analytics approach, Subgradient Tree Boosting (STB), to solve the two capacity planning problems studied in the first and second articles while allowing decision-makers to derive explanations for prescribed decisions. STB combines the machine learning method Gradient Boosting with SAA and relies on subgradients because the cost function of OR planning problems often cannot be differentiated. A comprehensive numerical analysis suggests that STB can lead to a prescription performance that is comparable to that of wSAA and kERM. The explainability of STB prescriptions is demonstrated by breaking exemplary decisions down into the impacts of individual features. The novel STB approach is an attractive choice not only because of its prescription performance, but also because of the explainability that helps decision-makers understand the causality behind the prescriptions.
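The boosting-on-subgradients idea can be sketched for a simple newsvendor-type cost as follows (a stylized illustration under assumed data, costs, and hyperparameters, not the article's STB implementation):

```python
# Simplified sketch of boosting with subgradients of a newsvendor-type
# cost (illustrative; data, costs, and hyperparameters are assumptions).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 1000
X = rng.uniform(0.0, 1.0, size=(n, 4))
d = 50 + 80 * X[:, 0] + 10 * rng.standard_normal(n)    # historical demand

c_u, c_o = 8.0, 2.0          # underage and overage cost per unit
eta, n_rounds = 0.1, 200     # learning rate, boosting rounds

pred = np.full(n, d.mean())  # start from a constant prescription
trees = []
for _ in range(n_rounds):
    # Subgradient of C(q, d) = c_u*(d-q)+ + c_o*(q-d)+ with respect to q.
    g = np.where(pred < d, -c_u, c_o)
    tree = DecisionTreeRegressor(max_depth=3).fit(X, -g)   # fit negative subgradient
    trees.append(tree)
    pred += eta * tree.predict(X)

def prescribe(x_new):
    """Prescribed capacity for a new feature vector."""
    q = d.mean()
    for tree in trees:
        q += eta * tree.predict(x_new.reshape(1, -1))[0]
    return q

print(round(prescribe(np.array([0.8, 0.3, 0.5, 0.1])), 1))
```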
The results presented in these three articles demonstrate that using prescriptive analytics approaches, such as wSAA, kERM, and STB, to solve complex planning problems can lead to significantly better decisions compared to traditional approaches that neglect feature data or rely on a parametric distribution estimation.
Owing to the well-known problems of the pay-as-you-go statutory pension insurance, the German legislator has for some time been trying to promote individual retirement provision, frequently with a focus on occupational pensions (betriebliche Altersversorgung, bAV). Based on interviews with experts and employees, this thesis works out in detail where the central obstacles to the diffusion of occupational pensions lie and how they can be addressed by adjusting the tax and social insurance framework. Essential elements of these reform considerations were incorporated into the Betriebsrentenstärkungsgesetz, which came into force on 1 January 2018.
In addition, this thesis uses an experimental economic analysis to show how different forms of taxation can influence individual savings decisions. It becomes apparent that individuals frequently do not perceive the effect of deferred taxation correctly.
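To illustrate the point in a stylized way (a simplification not taken from the thesis, assuming a constant tax rate t, a constant pre-tax return r, and n periods): under deferred taxation the gross contribution A accrues untaxed and only the payout is taxed, which yields the same terminal wealth as taxing the contribution upfront and letting the net amount accrue tax-free,

\[ \underbrace{A\,(1+r)^n\,(1-t)}_{\text{deferred taxation}} \;=\; \underbrace{A\,(1-t)\,(1+r)^n}_{\text{upfront taxation, tax-free accrual}}, \]

whereas ordinary saving out of taxed income with returns taxed each period accumulates only \( A\,(1-t)\,\bigl(1+r(1-t)\bigr)^n \), which is strictly smaller for positive r and t. Misjudging this difference is one way in which the effect of deferred taxation can be misperceived.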
In response to growing public awareness of the importance of organisational contributions to sustainable development, corporations face an increased incentive to report on their sustainability activities. In parallel, the notion of 'Sustainable HRM' has developed, embracing a growing body of practitioner and academic literature connecting corporate sustainability to HRM. The aim of this article is to analyse corporate sustainability reporting amongst the world's largest companies, to assess the HRM aspects of sustainability within these reports in comparison to environmental aspects of sustainable management, and to examine whether organisational attributes - principally country-of-origin - influence the reporting of such practices. A focus of this article is the extent to which the reporting of various aspects of sustainability may reflect dominant models of corporate governance in the country in which a company is headquartered. The findings suggest, first and against expectations, that the overall disclosure on HRM-related performance is not lower than that on environmental performance. Second, companies report more on their internal workforce than on their external workforce. Finally, international differences, in particular those between companies headquartered in liberal market economies and coordinated market economies, are not as apparent as expected.
The first problem is that of optimal volume allocation in procurement. The choice of this problem was motivated by a study whose objective was to support decision-making at two procurement organizations for the procurement of Depot Medroxyprogesterone Acetate (DMPA), an injectable contraceptive. At the time of this study, only one supplier that had undergone the costly and lengthy process of WHO pre-qualification was available to these organizations. However, a new entrant supplier was expected to receive WHO qualification within the next year, thus becoming a viable second source for DMPA procurement. When deciding how to allocate the procurement volume between the two suppliers, the buyers had to consider the impact on price as well as risk. Higher allocations to one supplier yield lower prices but expose a buyer to higher supply risks, while an even allocation results in lower supply risk but also reduces competitive pressure, resulting in higher prices. Our research investigates this single- versus dual-sourcing problem and quantifies in one model the impact of the procurement volume on competition and risk. To support decision-makers, we develop a mathematical framework that accounts for the characteristics of donor-funded global health markets and models the effects of an entrant on purchasing costs and supply risks. Our in-depth analysis provides insights into how the optimal allocation decision is affected by various parameters and explores the trade-off between competition and supply risk. For example, we find that, even if the entrant supplier introduces longer lead times and a higher default risk, the buyer still benefits from dual sourcing. However, these risk-diversification benefits depend heavily on the entrant's in-country registration: if the buyer can ship the entrant's product to only a selected number of countries, the buyer does not benefit from dual sourcing as much as it would if the entrant's product could be shipped to all supplied countries. We show that the buyer should be interested in qualifying the entrant's product in countries with high demand first.
In the second problem we explore a new tendering mechanism called the postponement tender, which can be useful when buyers in the global health industry want to contract new generics suppliers with uncertain product quality. The mechanism allows a buyer to postpone part of the procurement volume’s allocation so the buyer can learn about the unknown quality before allocating the remaining volume to the best supplier in terms of both price and quality. We develop a mathematical model to capture the decision-maker’s trade-offs in setting the right split between the initial volume and the postponed volume. Our analysis shows that a buyer can benefit from this mechanism more than it can from a single-sourcing format, as it can decrease the risk of receiving poor quality (in terms of product quality and logistics performance) and even increase competitive pressure between the suppliers, thereby lowering the purchasing costs. By considering market parameters like the buyer’s size, the suppliers’ value (difference between quality and cost), quality uncertainty, and minimum order volumes, we derive optimal sourcing strategies for various market structures and explore how competition is affected by the buyer’s learning about the suppliers’ quality through the initial volume.
The third problem considers the repeated procurement problem of pharmacies in Kenya that have multi-product inventories. Coordinating orders allows pharmacies to achieve lower procurement prices by using the quantity discounts manufacturers offer and sharing fixed ordering costs, such as logistics costs. However, coordinating and optimizing orders for multiple products is complex and costly. To solve the coordinated procurement problem, also known as the Joint Replenishment Problem (JRP) with quantity discounts, a novel, data-driven inventory policy using sample-average approximation is proposed. The inventory policy is developed based on renewal theory and is evaluated using real-world sales data from Kenyan pharmacies. Multiple benchmarks are used to evaluate the performance of the approach. First, it is compared to the theoretically optimal policy --- that is, a dynamic-programming policy --- in the single-product setting without quantity discounts to show that the proposed policy results in comparable inventory costs. Second, the policy is evaluated for the original multi-product setting with quantity discounts and compared to ex-post optimal costs. The evaluation shows that the policy’s performance in the multi-product setting is similar to its performance in the single-product setting (with respect to ex-post optimal costs), suggesting that the proposed policy offers a promising, data-driven solution to these types of multi-product inventory problems.
Allocation planning describes the process of allocating scarce supply to individual customers in order to prioritize demands from more important customers, i.e., customers who request a higher service-level target. A common assumption across publications is that allocation planning is performed by a single planner with the ability to decide on the allocations to all customers simultaneously. In many companies, however, no such central planner exists; instead, allocation planning is a decentral and iterative process aligned with the company's multi-level hierarchical sales organization.
This thesis provides a rigorous analytical and numerical analysis of allocation planning in such hierarchical settings. It studies allocation methods currently used in practice and shows that these approaches typically lead to suboptimal allocations associated with significant performance losses. Therefore, this thesis proposes multiple new allocation approaches that perform considerably better yet remain simple enough to lend themselves to practical application. The findings in this thesis can guide decision makers in choosing an allocation approach and identify the factors that are decisive for its performance. In general, our research suggests that, with a suitable hierarchical allocation approach, decision makers can expect a performance similar to that under centralized planning.
Traditional fashion retailers are increasingly hard-pressed to keep up with their digital competitors. In this context, the re-invention of brick-and-mortar stores as smart retail environments is being touted as a crucial step towards regaining a competitive edge. This thesis describes a design-oriented research project that deals with automated product tracking on the sales floor and presents three smart fashion store applications that are tied to such localization information: (i) an electronic article surveillance (EAS) system that distinguishes between theft and non-theft events, (ii) an automated checkout system that detects customers’ purchases when they are leaving the store and associates them with individual shopping baskets to automatically initiate payment processes, and (iii) a smart fitting room that detects the items customers bring into individual cabins and identifies the items they are currently most interested in to offer additional customer services (e.g., product recommendations or omnichannel services). The implementation of such cyberphysical systems in established retail environments is challenging, as architectural constraints, well-established customer processes, and customer expectations regarding privacy and convenience pose challenges to system design. To overcome these challenges, this thesis leverages Radio Frequency Identification (RFID) technology and machine learning techniques to address the different detection tasks. To optimally configure the systems and draw robust conclusions regarding their economic value contribution, beyond technological performance criteria, this thesis furthermore introduces a service operations model that allows mapping the systems’ technical detection characteristics to business relevant metrics such as service quality and profitability. This analytical model reveals that the same system component for the detection of object transitions is well suited for the EAS application but does not have the necessary high detection accuracy to be used as a component of an automated checkout system.
Occupational pension plans are particularly underrepresented among low-income earners. With the Betriebsrentenstärkungsgesetz, which came into force on 1 January 2018, and in particular the so-called BAV-Förderbetrag (Section 100 EStG), the legislator is therefore attempting to make this form of retirement provision more attractive and thus to widen its diffusion among low-income earners. The results of this study show that this goal can be achieved, at least from a model-theoretic perspective. Using a deterministic computational model, the financial advantages and disadvantages of various retirement provision alternatives are identified and precisely quantified. In addition, the thesis examines the tax, social insurance, and labour law rules governing occupational pensions before and after the Betriebsrentenstärkungsgesetz came into force and discusses alternative reform measures.
We investigate how the demographic composition of the workforce along the sex, nationality, education, age and tenure dimensions affects job switches. Fitting duration models for workers’ job‐to‐job turnover rate that control for workplace fixed effects in a representative sample of large manufacturing plants in Germany during 1975–2016, we find that larger co‐worker similarity in all five dimensions substantially depresses job‐to‐job moves, whereas workplace diversity is of limited importance. In line with conventional wisdom, which holds that birds of a feather flock together, our interpretation of the results is that workers prefer having co‐workers of their kind and place less value on diverse workplaces.
Accounting plays an essential role in solving the principal-agent problem between managers and shareholders of capital market-oriented companies through the provision of information by the manager. However, this can succeed only if the accounting information is of high quality. In this context, the perceptions of shareholders regarding earnings quality are of particular importance.
The present dissertation intends to contribute to a deeper understanding regarding earnings quality from the perspective of shareholders of capital market-oriented companies. In particular, the thesis deals with indicators of shareholders’ perceptions of earnings quality, the influence of the auditor’s independence on these perceptions, and the shareholders’ assessment of the importance of earnings quality in general. Therefore, this dissertation examines market reactions to earnings announcements, measures of earnings quality and the auditor’s independence, as well as shareholders’ voting behavior at annual general meetings.
Following the introduction and a theoretical part consisting of two chapters, which deal with the purposes of accounting and auditing as well as the relevance of shareholder voting at the annual general meeting in the context of the principal-agent theory, the dissertation presents three empirical studies.
The empirical study presented in chapter 4 investigates auditor ratification votes in a U.S. setting. The study addresses the question of whether the results of auditor ratification votes are informative regarding shareholders’ perceptions of earnings quality. Using a returns-earnings design, the study demonstrates that the results of auditor ratification votes are associated with market reactions to unexpected earnings at the earnings announcement date. Furthermore, there are indications that this association seems to be positively related to higher levels of information asymmetry between managers and shareholders. Thus, there is empirical support for the notion that the results of auditor ratification votes are earnings-related information that might help shareholders to make informed investment decisions.
Chapter 5 investigates the relation between the economic importance of the client and perceived earnings quality. In particular, it is examined whether and when shareholders have a negative perception of an auditor’s economic dependence on the client. The results from a Big 4 client sample in the U.S. (fiscal years 2010 through 2014) indicate a negative association between the economic importance of the client and shareholders’ perceptions of earnings quality. The results are interpreted to mean that shareholders are still concerned about auditor independence even ten years after the implementation of the Sarbanes-Oxley Act. Furthermore, the association between the economic importance of the client and shareholders’ perceptions of earnings quality applies predominantly to the subsample of clients that are more likely to be financially distressed. Therefore, the empirical results reveal that shareholders’ perceptions of auditor independence are conditional on the client’s circumstances.
The study presented in chapter 6 sheds light on the question of whether earnings quality influences shareholders’ satisfaction with the members of the company’s board. Using data from 1,237 annual general meetings of German listed companies from 2010 through 2015, the study provides evidence that earnings quality – measured by the absolute value of discretionary accruals – is related to shareholders’ satisfaction with the company’s board. Moreover, the findings imply that shareholders predominantly blame the management board for inferior earnings quality. Overall, the evidence that earnings quality positively influences shareholders’ satisfaction emphasizes the relevance of earnings quality.
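The earnings-quality proxy used here, the absolute value of discretionary accruals, is typically obtained as the residual of an accrual model; as one widely used specification (shown only for illustration, since the abstract does not name the exact model), the modified Jones model estimates

\[ \frac{TA_{it}}{A_{i,t-1}} \;=\; \alpha_1 \frac{1}{A_{i,t-1}} \;+\; \alpha_2 \frac{\Delta REV_{it} - \Delta REC_{it}}{A_{i,t-1}} \;+\; \alpha_3 \frac{PPE_{it}}{A_{i,t-1}} \;+\; \varepsilon_{it}, \]

where total accruals \(TA\) are scaled by lagged total assets \(A\), \(\Delta REV\) and \(\Delta REC\) denote changes in revenues and receivables, and \(PPE\) is gross property, plant, and equipment; the estimated residual \(\hat{\varepsilon}_{it}\) is the discretionary accrual, and \(|\hat{\varepsilon}_{it}|\) serves as the (inverse) earnings-quality measure.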
This dissertation consists of three independent, self-contained research papers that investigate how state-of-the-art machine learning algorithms can be used in combination with operations management models to consider high dimensional data for improved planning decisions. More specifically, the thesis focuses on the question concerning how the underlying decision support models change structurally and how those changes affect the resulting decision quality.
Over the past years, the volume of globally stored data has experienced tremendous growth. Rising market penetration of sensor-equipped production machinery, advanced ways to track user behavior, and the ongoing use of social media lead to large amounts of data on production processes, user behavior, and interactions, as well as condition information about technical gear, all of which can provide valuable information to companies in planning their operations. In the past, two generic concepts have emerged to accomplish this. The first concept, separated estimation and optimization (SEO), uses data to forecast the central inputs (i.e., the demand) of a decision support model. The forecast and a distribution of forecast errors are then used in a subsequent stochastic optimization model to determine optimal decisions. In contrast to this sequential approach, the second generic concept, joint estimation-optimization (JEO), combines the forecasting and optimization step into a single optimization problem. Following this approach, powerful machine learning techniques are employed to approximate highly complex functional relationships and hence relate feature data directly to optimal decisions.
The first article, “Machine learning for inventory management: Analyzing two concepts to get from data to decisions”, chapter 2, examines performance differences between implementations of these concepts in a single-period newsvendor setting. The paper first proposes a novel JEO implementation based on the random forest algorithm to learn optimal decision rules directly from a data set that contains historical sales and auxiliary data. We then analyze the structural properties that lead to the observed performance differences. Our results show that the JEO implementation achieves significant cost improvements over the SEO approach and that these differences are strongly driven by the decision problem’s cost structure and the amount and structure of the remaining forecast uncertainty.
The second article, “Prescriptive call center staffing”, chapter 3, applies the logic of integrating data analysis and optimization to a more complex problem class, an employee staffing problem in a call center. We introduce a novel approach to applying the JEO concept that augments historical call volume data with features like the day of the week, the beginning of the month, and national holiday periods. We employ a regression tree to learn the ex-post optimal staffing levels based on similarity structures in the data and then generalize these insights to determine future staffing levels. This approach, relying on only a few modeling assumptions, significantly outperforms a state-of-the-art benchmark that uses considerably more model structure and assumptions.
The third article, “Data-driven sales force scheduling”, chapter 4, is motivated by the problem of how a company should allocate limited sales resources. We propose a novel approach based on the SEO concept that involves a machine learning model to predict the probability of winning a specific project. We develop a methodology that uses this prediction model to estimate the “uplift”, that is, the incremental value of an additional visit to a particular customer location. To account for the remaining uncertainty at the subsequent optimization stage, we adapt the decision support model in such a way that it can control for the level of trust in the predicted uplifts. This novel policy dominates both a benchmark that relies completely on the uplift information and a robust benchmark that optimizes the sum of potential profits while neglecting any uplift information.
The results of this thesis show that decision support models in operations management can be transformed fundamentally by considering additional data, benefiting from better decision quality and, correspondingly, lower mismatch costs. How machine learning algorithms can be integrated into these decision support models depends on the complexity and the context of the underlying decision problem. In summary, this dissertation provides an analysis based on three different, specific application scenarios that serve as a foundation for further analyses of employing machine learning for decision support in operations management.
Autonomous cars and artificial intelligence that beats humans in Jeopardy or Go are glamorous examples of the so-called Second Machine Age that involves the automation of cognitive tasks [Brynjolfsson and McAfee, 2014]. However, the larger impact in terms of increasing the efficiency of industry and the productivity of society might come from computers that improve or take over business decisions by using large amounts of available data. This impact may even exceed that of the First Machine Age, the industrial revolution that started with James Watt’s invention of an efficient steam engine in the late eighteenth century. Indeed, the prevalent phrase that calls data “the new oil” indicates the growing awareness of data’s importance. However, many companies, especially those in the manufacturing and traditional service industries, still struggle to increase productivity using the vast amounts of data [Organisation for Economic Co-operation and Development, 2018].
One reason for this struggle is that companies stick with a traditional way of using data for decision support in operations management that is not well suited to automated decision-making. In traditional inventory and capacity management, some data – typically just historical demand data – is used to estimate a model that makes predictions about uncertain planning parameters, such as customer demand. The planner then has two tasks: to adjust the prediction with respect to additional information that was not part of the data but still might influence demand and to take the remaining uncertainty into account and determine a safety buffer based on the underage and overage costs. In the best case, the planner determines the safety buffer based on an optimization model that takes the costs and the distribution of historical forecast errors into account; however, these decisions are usually based on a planner’s experience and intuition, rather than on solid data analysis.
This two-step approach is referred to as separated estimation and optimization (SEO). With SEO, using more data and better models for making the predictions would improve only the first step, which would still improve decisions but would not automate (and, hence, revolutionize) decision-making. Using SEO is like using a stronger horse to pull the plow: one still has to walk behind.
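A stylized sketch of this two-step logic for a single product (all figures, the mean forecast, and the empirical error quantile are illustrative assumptions, not taken from the thesis):

```python
# Stylized two-step SEO sketch: (1) point forecast of demand,
# (2) safety buffer from the empirical distribution of forecast errors
# and the critical ratio. All data and costs are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
history = 200 + 30 * rng.standard_normal(104)      # weekly demand history

# Step 1: estimation - here simply the historical mean as point forecast.
forecast = history.mean()
errors = history - forecast                         # historical forecast errors

# Step 2: optimization - newsvendor safety buffer from the error quantile.
c_under, c_over = 6.0, 2.0
critical_ratio = c_under / (c_under + c_over)
safety_buffer = np.quantile(errors, critical_ratio)

order_quantity = forecast + safety_buffer
print(round(order_quantity, 1))
```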
The real potential for increasing productivity lies in moving from predictive to prescriptive approaches, that is, from the two-step SEO approach, which uses predictive models in the estimation step, to a prescriptive approach, which integrates the optimization problem with the estimation of a model that then provides a direct functional relationship between the data and the decision. Following Akcay et al. [2011], we refer to this integrated approach as joint estimation-optimization (JEO). JEO approaches prescribe decisions, so they can automate the decision-making process. Just as the steam engine replaced manual work, JEO approaches replace cognitive work.
The overarching objective of this dissertation is to analyze, develop, and evaluate new ways for how data can be used in making planning decisions in operations management to unlock the potential for increasing productivity. In doing so, the thesis comprises five self-contained research articles that forge the bridge from predictive to prescriptive approaches. While the first article focuses on how sensitive data like condition data from machinery can be used to make predictions of spare-parts demand, the remaining articles introduce, analyze, and discuss prescriptive approaches to inventory and capacity management.
All five articles consider approaches that use machine learning and data in innovative ways to improve on current approaches to solving inventory or capacity management problems. The articles show that, by moving from predictive to prescriptive approaches, we can improve data-driven operations management in two ways: by making decisions more accurate and by automating decision-making. Thus, this dissertation provides examples of how digitization and the Second Machine Age can change decision-making in companies to increase efficiency and productivity.
The independence of the statutory auditor is of continuing relevance but is repeatedly called into question. Regulators and researchers focus primarily on capital-market-oriented companies. Independence can be particularly at risk when safeguards such as liability or the risk of reputational loss are especially weak. It can be inferred that for private companies the risk of reputational loss is lower than for capital-market-oriented companies. Moreover, the auditor's liability risk in Germany is lower than in Anglo-Saxon countries.
The thesis therefore examines auditor independence in an environment in which it is particularly at risk. The probability of a going-concern modification ("GCM") serves as a surrogate. GCMs can be a particularly suitable indicator of audit quality because they are a direct result of the auditor's work and are formulated and accounted for by the auditor. For private companies in Germany, no prior study using the GCM surrogate is known.
This paper provides a critical analysis of the subadditivity axiom, which is the key condition for coherent risk measures. Contrary to the subadditivity assumption, bank mergers can create extra risk. We begin with an analysis of how a merger affects depositors, junior or senior bank creditors, and bank owners. Next, we show that bank mergers can result in higher payouts having to be made by the deposit insurance scheme. Finally, we demonstrate that if banks are interconnected via interbank loans, a bank merger could lead to additional contagion risks. We conclude that the subadditivity assumption should be rejected, since a subadditive risk measure, by definition, cannot account for such increased risks.
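For reference, the subadditivity axiom at issue requires, for a risk measure \(\rho\) and any two positions \(X\) and \(Y\),

\[ \rho(X + Y) \;\le\; \rho(X) + \rho(Y), \]

so that a merged position can never be assessed as riskier than the sum of its stand-alone risks. The paper's argument is precisely that bank mergers can violate this economic intuition, which a subadditive measure is by construction unable to reflect.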
Advanced Analytics in Operations Management and Information Systems: Methods and Applications
(2019)
The digital transformation of business and society presents enormous potentials for companies across all sectors. Fueled by massive advances in data generation, computing power, and connectivity, modern organizations have access to gigantic amounts of data. Companies seek to establish data-driven decision cultures to leverage competitive advantages in terms of efficiency and effectiveness. While most companies focus on descriptive tools such as reporting, dashboards, and advanced visualization, only a small fraction already leverages advanced analytics (i.e., predictive and prescriptive analytics) to foster data-driven decision-making today. Therefore, this thesis set out to investigate potential opportunities to leverage prescriptive analytics in four different independent parts.
As predictive models are an essential prerequisite for prescriptive analytics, the first two parts of this work focus on predictive analytics. Building on state-of-the-art machine learning techniques, we showcase the development of a predictive model in the context of capacity planning and staffing at an IT consulting company. Subsequently, we focus on predictive analytics applications in the manufacturing sector. More specifically, we present a data science toolbox providing guidelines and best practices for modeling, feature engineering, and model interpretation to manufacturing decision-makers. We showcase the application of this toolbox on a large data-set from a German manufacturing company.
Merely using the improved forecasts provided by powerful predictive models enables decision-makers to generate additional business value in some situations. However, many complex tasks require elaborate operational planning procedures. Here, transforming additional information into valuable actions requires new planning algorithms. Therefore, the latter two parts of this thesis focus on prescriptive analytics. To this end, we analyze how prescriptive analytics can be utilized to determine policies for an optimal searcher path problem based on predictive models. While rapid advances in artificial intelligence research boost the predictive power of machine learning models, model uncertainty remains in most settings. The last part of this work proposes a prescriptive approach that accounts for the fact that predictions are imperfect and that the arising uncertainty needs to be considered. More specifically, it presents a data-driven approach to sales-force scheduling. Based on a large data set, a model to predict the benefit of additional sales effort is trained. Subsequently, the predictions, as well as the prediction quality, are embedded into the underlying team orienteering problem to determine optimized schedules.
The present dissertation includes three research papers dealing with the following banking topics: (dis-) incentives and risk taking, earnings management and the regulation of supervisory boards.
„Do cooperative banks suffer from moral hazard behaviour? Evidence in the context of efficiency and risk“:
We use Granger-causality techniques to evaluate the intertemporal relationships among risk, efficiency and capital. We use two different measures of bank efficiency, i.e., cost and profit efficiency, since these measures reflect different managerial abilities. One is the ability to manage costs, and the other is the ability to maximize profits. We find that lower cost and profit efficiency Granger-cause increases in liquidity risk. We also identify that credit risk negatively Granger-causes cost and profit efficiency. Most importantly, our results show a positive relationship between capital and credit risk, thus displaying that moral hazard (due to limited liability and deposit insurance) does not apply to our sample of cooperative banks. On the contrary, we find evidence that banks with low capital are able to improve their loan quality in subsequent periods. These findings may be important to regulators, who should consider banks’ business models when introducing new regulatory capital constraints.
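As a minimal illustration of the Granger-causality logic (reduced to a single pair of simulated series; the study itself works with bank-level panel data and two efficiency measures), one could run the following sketch, in which all series and lags are hypothetical:

```python
# Minimal sketch of a pairwise Granger-causality test (illustrative only;
# the study itself uses panel techniques on bank-level data).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
T = 200
efficiency = np.zeros(T)
risk = np.zeros(T)
for t in range(1, T):
    efficiency[t] = 0.6 * efficiency[t - 1] + rng.standard_normal()
    # Risk responds to lagged efficiency -> efficiency "Granger-causes" risk.
    risk[t] = 0.5 * risk[t - 1] - 0.4 * efficiency[t - 1] + rng.standard_normal()

# Test: does the second column (efficiency) Granger-cause the first (risk)?
data = np.column_stack([risk, efficiency])
results = grangercausalitytests(data, maxlag=2)
p_value = results[1][0]["ssr_ftest"][1]   # p-value of the F-test at lag 1
print(f"p-value (lag 1): {p_value:.4f}")
```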
„Earnings Management Modelling in the Banking Industry – Evaluating valuable approaches“:
Accounting research has studied the field of earnings management (EM) separately for non-financial and financial industries. Since EM cannot be observed directly, it is important for every research question in any setting to find a verifiable proxy for EM. However, we still lack a thorough understanding of which regressors can add value to the estimation process of EM in banks. This study tries to close this gap and analyses existing model specifications of discretionary loan loss provisions (LLP) in the banking sector to identify common pattern groups and specific patterns used. We then use a US dataset from 2005-2015 and apply prevalent test procedures to examine the extent of measurement errors, extreme-performance and omitted-variable biases, and the predictive power of the discretionary proxies of each of the models. Our results indicate that a thorough understanding of the methodological modelling process of EM in the banking industry is important. The currently established models to estimate EM are appropriate yet optimizable. In particular, we identify non-performing-asset patterns as the most important group, while loan loss allowances and net charge-offs can add some value but do not seem to be indispensable. In addition, our results show that non-linearity of certain regressors can be an issue, which should be addressed in future research, and we identify some omitted and possibly correlated variables that might add value to specifications in identifying non-discretionary LLP. Results also indicate that a dynamic model and an endogeneity-robust estimation approach are not necessarily linked to better predictive power.
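The class of models surveyed can be summarized in stylized form (the variable set below is illustrative, not a specification taken from the paper) as regressing loan loss provisions on fundamentals and treating the residual as the discretionary component:

\[ LLP_{it} \;=\; \beta_0 \;+\; \beta_1\, NPA_{i,t-1} \;+\; \beta_2\, \Delta NPA_{it} \;+\; \beta_3\, NCO_{it} \;+\; \beta_4\, LLA_{i,t-1} \;+\; \beta_5\, \Delta LOANS_{it} \;+\; \varepsilon_{it}, \]

where \(NPA\) denotes non-performing assets, \(NCO\) net charge-offs, \(LLA\) the loan loss allowance, and \(\Delta LOANS\) loan growth; the estimated residual \(\hat{\varepsilon}_{it}\) then serves as the proxy for discretionary provisions.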
„Board Regulation and its Impact on Composition and Effects – Evidence from German Cooperative Banks“:
This study employs a system GMM framework to examine the impact of potential regulatory intervention regarding the occupations of supervisory board members in cooperative banks. To gain insights, the study proceeds in two ways. First, the author investigates the changes in board structure before and after the German Act to Strengthen Financial Market and Insurance Supervision (FinVAG). Second, the author estimates the influence of Ph.D. degree holders and occupational concentration on bank-risk changes in light of the implementation of FinVAG. The sample consists of 246 German cooperative banks from 2006-2011. Regarding bank risk, the author applies four different measures: credit risk, equity risk, liquidity risk, and the Z-score, with the former three also being addressed in FinVAG. Results indicate that the implementation of FinVAG led to structural changes in board composition, especially at the expense of farmers. In addition, the implementation affects all risk measures and the relations between risk measures and supervisory board characteristics in a risk-reducing and therefore intended way.
To disentangle the complex relationship between board characteristics and risk measures, the study utilizes a two-step system GMM estimator to account for unobserved heterogeneity and simultaneity in order to reduce endogeneity problems. The findings may be especially relevant for stakeholders, regulators, supervisors, and managers.
In our globalized world, companies operate in an international market. To concentrate on their core competencies and remain competitive, they integrate into supply chain networks. However, these potentials also bear many risks. The emergence of an international market also creates competitive pressure, forcing companies to collaborate with new and unknown companies in dynamic supply chain networks. In many cases, this can cause a lack of trust, as illegal practices and broken agreements within complex and non-transparent supply chain networks pose a threat.
Blockchain technology provides a transparent, decentralized, and distributed form of chained data storage and thus enables trust in tamper-proof records, even where there is no trust between the cooperation partners. The use of a blockchain also provides the opportunity to digitize, automate, and monitor processes within supply chain networks in real time.
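The tamper-evidence property rests on chaining each record to the hash of its predecessor; a minimal, purely illustrative sketch (no consensus or distribution, hypothetical supply chain events) looks like this:

```python
# Minimal sketch of tamper-evident chained storage (illustrative only;
# a real blockchain additionally involves consensus and distribution).
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
for i, payload in enumerate(["order #1 shipped", "goods received", "invoice paid"]):
    block = {
        "index": i,
        "data": payload,                                   # hypothetical supply chain event
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    }
    chain.append(block)

# Tampering with an earlier block breaks every later prev_hash link.
chain[0]["data"] = "order #1 cancelled"
print(block_hash(chain[0]) == chain[1]["prev_hash"])       # -> False
```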
The research project "Plattform für das integrierte Management von Kollaborationen in Wertschöpfungsnetzwerken" (PIMKoWe) addresses this issue. The aim of this report is to define requirements for such a collaboration platform. We define requirements based on a literature review and expert interviews, which allow for an objective consideration of scientific and practical aspects. An additional survey validates and further classifies these requirements as “essential”, “optional”, or “irrelevant”. In total, we have derived a collection of 45 requirements from different dimensions for the collaboration platform.
Employing these requirements, we illustrate a conceptual architecture of the platform and introduce a realistic application scenario. The presentation of the platform concept and the application scenario can provide the foundation for implementing and introducing a blockchain-based collaboration platform into existing supply chain networks in the context of the research project PIMKoWe.
This paper focuses on the development of technology clusters and is based on two research questions: What are the prerequisites for the development of technology clusters according to cluster research? And does the Mainfranken region meet the prerequisites for technology cluster formation? To this end, a qualitative study is conducted with reference to various theoretical concepts of cluster formation. The following determinants of cluster development can be derived: the transport and infrastructure component, the cluster environment component, the university component, the state component, and the industry component. The analysis of the parameter values of the individual cluster components shows that the core requirements for technology cluster development are met in the Mainfranken region. Nevertheless, the infrastructure, the commercial and industrial availability of land, and the availability of capital need to be improved in order to form a successful technology cluster. In addition, the present work analyses the potential for technology cluster development in the field of artificial intelligence.