Betriebswirtschaftliches Institut
Predicting next events in predictive process monitoring enables companies to manage and control processes at an early stage and reduce their action distance. In recent years, approaches have steadily moved from classical statistical methods towards the application of deep neural network architectures, which outperform the former and enable analysis without explicit knowledge of the underlying process model. While the focus of prior research was on the long short-term memory network architecture, further deep learning architectures offer promising extensions that have proven useful for other applications involving sequential data. In our work, we introduce a gated convolutional neural network and a key-value-predict attention network to the task of next event prediction. In a comprehensive evaluation study on 11 real-life benchmark datasets, we show that these two novel architectures surpass prior work in 34 out of 44 metric-dataset combinations. For our evaluation, we consider the effects of process data properties, such as sparsity, variation, and repetitiveness, and discuss their impact on the prediction quality of the different deep learning architectures. Similarly, we evaluate their classification properties in terms of generalization and the handling of class imbalance. Our results provide guidance for researchers and practitioners alike on how to select, validate, and comprehensively benchmark (novel) predictive process monitoring models. In particular, we highlight the importance of sufficiently diverse process data properties in event logs and the comprehensive reporting of multiple performance indicators to achieve meaningful results.
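To illustrate the general idea behind one of the two architectures, the following is a minimal sketch of a gated convolutional model for next-activity prediction in PyTorch; the layer sizes and the causal-padding setup are illustrative assumptions, not the exact configuration evaluated in the study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvNextEvent(nn.Module):
    """Minimal gated convolutional model for next-event prediction:
    activity ids of a case prefix are embedded, a causal 1D convolution with a
    gating branch (GLU) summarizes the prefix, and a linear head scores the
    next activity."""
    def __init__(self, n_activities, emb_dim=32, channels=64, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(n_activities, emb_dim)
        self.pad = kernel_size - 1                 # left padding keeps the convolution causal
        self.conv = nn.Conv1d(emb_dim, 2 * channels, kernel_size)
        self.head = nn.Linear(channels, n_activities)

    def forward(self, prefixes):                   # prefixes: (batch, seq_len) activity ids
        x = self.embed(prefixes).transpose(1, 2)   # (batch, emb_dim, seq_len)
        x = F.pad(x, (self.pad, 0))
        h = F.glu(self.conv(x), dim=1)             # gated linear unit over the channel dimension
        return self.head(h[:, :, -1])              # logits for the next activity

# logits = GatedConvNextEvent(n_activities=20)(torch.randint(0, 20, (8, 12)))
```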
On the composition of the long tail of business processes: Implications from a process mining study
(2021)
Digital transformation forces companies to rethink their processes to meet current customer needs. Business Process Management (BPM) can provide the means to structure and tackle this change. However, most approaches to BPM face restrictions on the number of processes they can optimize at a time due to complexity and resource restrictions. Investigating this shortcoming, the concept of the long tail of business processes suggests a hybrid approach that entails managing important processes centrally, while incrementally improving the majority of processes at their place of execution. This study scrutinizes this observation as well as corresponding implications. First, we define a system of indicators to automatically prioritize processes based on execution data. Second, we use process mining to analyze processes from multiple companies to investigate the distribution of process value in terms of their process variants. Third, we examine the characteristics of the process variants contained in the short head and the long tail to derive and justify recommendations for their management. Our results suggest that the assumption of a long-tailed distribution holds across companies and indicators and also applies to the overall improvement potential of processes and their variants. Across all cases, process variants in the long tail were characterized by fewer customer contacts, lower execution frequencies, and a larger number of involved stakeholders, making them suitable candidates for distributed improvement.
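As a rough illustration of how such a variant-level view can be derived from execution data, the following pandas sketch counts process variants in an event log; the column names are assumptions, and the study's full indicator system is not reproduced here.

```python
import pandas as pd

def variant_distribution(event_log: pd.DataFrame) -> pd.Series:
    """Order events per case, concatenate activities into a variant string,
    and return variant frequencies sorted from the short head to the long tail.
    Assumes columns 'case_id', 'activity', and 'timestamp' (hypothetical names)."""
    ordered = event_log.sort_values(["case_id", "timestamp"])
    variants = ordered.groupby("case_id")["activity"].agg(" -> ".join)
    return variants.value_counts()

# freq = variant_distribution(log_df)
# head = freq.head(10)                              # few high-frequency variants
# tail_share = freq.iloc[10:].sum() / freq.sum()    # share of cases carried by the long tail
```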
This paper examines professional associations’ local responses to global demands of accounting standardisation. Our longitudinal study from 1998 to 2018 analyses how professional associations of the German audit profession engaged in an intense framing contest over the adoption of external quality controls. Drawing on the concept of the strategic action field and the literature on framing, we unpack how the gap between large audit firms and small audit firms increasingly undermined the capacity of the professional associations to fulfil their dual role of governance and representation. We unveil how their failed attempt to maintain the image of a unified profession ultimately led to the creation of a new professional association representing ‘small auditor’ professionals, which successfully, albeit temporarily, took control over the field of German auditing. Our findings suggest that the passivity of small audit firms in the process of translating global regulatory regimes should not be presumed. Rather, we provide insight into how small audit firms can rebuild their own identity by actively responding to waves of global regulation. In doing so, and contrary to prior research, our case highlights that governance units within strategic action fields are not necessarily aligned with the interests of the most powerful field actors.
We develop a model of oligopoly competition involving innovation effort, market entry and production flexibility under demand uncertainty. Several heterogeneous firms make efforts to develop new prototypes; if they succeed, they hold a shared option to enter a new market under stochastic demand. We derive analytic results for the Markov perfect equilibrium accounting for development effort, market entry and production decisions and complement these by numerical analyses. Firm value—which embeds real options—is not convex increasing in demand but exhibits “competitive waves” due to market entries by rivals. A firm with a development advantage (“innovator”) exerts greater innovation effort if the market is a niche, whereas another benefiting from economies of scale (“incumbent”) invests more if the market is larger. Positive externalities benefit the incumbent in the development stage, whereas the innovator is better off in counteracting negative externalities. Demand volatility raises firm incentives to innovate as it enhances the value of firm market‐entry and production flexibility.
The statutory audit of financial statements aims to confirm the reliability of financial reporting. It can therefore make a substantial contribution to a high level of information in the markets. In view of this great economic importance, the German legislator undertakes numerous efforts to ensure high audit quality.
A review of the German Public Auditor Act (Wirtschaftsprüferordnung) shows that regulatory measures are taken that target the core of the statutory audit, namely the members of the profession themselves. For instance, access to the profession of sworn auditors (vereidigte Buchprüfer) was closed and reopened several times. Furthermore, marked adjustments to the level of the public auditor examination (Wirtschaftsprüfungsexamen) can be observed over time. In addition, special professional duties must be fulfilled in the audit of public-interest entities. On the one hand, these severe interventions in the freedom to choose and exercise a profession have in common that they all address the qualification of the auditor. On the other hand, the corresponding legislative changes are mostly justified by a strengthening of audit quality.
It is questionable to what extent these facets of auditor qualification actually influence audit quality. Given the lack of evidence, there is a need to conduct an empirical study of the German audit market and thus begin to close the identified research gap.
The aim of this dissertation is therefore to examine the relationship between auditor qualification and audit quality by means of regression analyses. For this purpose, a unique data set on German private limited companies subject to mandatory audits, containing unconsolidated financial and auditor information for the period 2006-2018 with a total of 217,585 base observations, was collected, cleaned, and prepared. Since audit quality is not directly observable, a distinction is made between perceived audit quality and actual audit quality. In this dissertation, perceived audit quality is proxied by the cost of debt and actual audit quality by absolute discretionary accruals.
The results of the main regressions predominantly show that there is no relationship between the measures of auditor qualification and perceived or actual audit quality. The additional and sensitivity analyses support this finding. With respect to the rules on access to the profession, no quality differences can be demonstrated between the professions of public auditors (Wirtschaftsprüfer) and sworn auditors (vereidigte Buchprüfer). Within the profession of public auditors, there is likewise no indication of a quality gap between groups of auditors who faced different examination requirements. With regard to the rules on professional practice, the additional requirements for the audit of public-interest entities are not associated with a different level of audit quality at private companies. In the light of improved audit quality, the legislator's regulatory steps in the area of auditor qualification described above therefore do not appear necessarily justified.
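As a rough illustration of the type of regression involved (with hypothetical variable names, not the dissertation's exact specification), the following Python sketch regresses absolute discretionary accruals on an auditor-qualification indicator with firm-level controls, year fixed effects, and firm-clustered standard errors.

```python
import pandas as pd
import statsmodels.formula.api as smf

def run_audit_quality_regression(df: pd.DataFrame):
    """OLS sketch: actual audit quality proxied by absolute discretionary
    accruals, explained by an auditor-qualification indicator plus standard
    firm-level controls and year fixed effects (all names hypothetical)."""
    model = smf.ols(
        "abs_discretionary_accruals ~ qualified_auditor + firm_size"
        " + leverage + return_on_assets + C(year)",
        data=df,
    )
    # cluster standard errors by firm to account for repeated observations
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

# results = run_audit_quality_regression(panel_df)
# print(results.summary())
```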
In a world of constant change, uncertainty has become a daily challenge for businesses. Rapidly shifting market conditions highlight the need for flexible responses to unforeseen events. Operations Management (OM) is crucial for optimizing business processes, including site planning, production control, and inventory management. Traditionally, companies have relied on theoretical models from microeconomics, game theory, optimization, and simulation. However, advancements in machine learning and mathematical optimization have led to a new research field: data-driven OM.
Data-driven OM uses real data, especially time series data, to create more realistic models that better capture decision-making complexities. Despite the promise of this new research area, a significant challenge remains: the availability of extensive historical training data. Synthetic data, which mimics real data, has been used to address this issue in other machine learning applications.
Therefore, this dissertation explores how synthetic data can be leveraged to improve decisions for data-driven inventory management, focusing on the single-period newsvendor problem, a classic stochastic optimization problem in inventory management.
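For context, the classic data-driven answer to the single-period newsvendor is the empirical demand quantile at the critical ratio. The following minimal sketch shows this sample-average-approximation baseline under assumed price, cost, and salvage values.

```python
import numpy as np

def saa_newsvendor(demand_samples, price, cost, salvage=0.0):
    """Sample-average-approximation order quantity for the single-period
    newsvendor: the empirical demand quantile at the critical ratio
    cu / (cu + co), with underage cost cu and overage cost co."""
    cu = price - cost          # margin lost per unit of unmet demand
    co = cost - salvage        # loss per unsold unit
    return np.quantile(demand_samples, cu / (cu + co))

# q = saa_newsvendor(np.array([80, 95, 102, 110, 87, 99]), price=10.0, cost=6.0)
```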
The first article, "A Meta Analysis of Data-Driven Newsvendor Approaches", presents a standardized evaluation framework for data-driven prescriptive approaches, tested through a numerical study. Findings suggest model performance is not robust, emphasizing the need for a standardized evaluation process.
The second article, "Application of Generative Adversarial Networks in Inventory Management", examines using synthetic data generated by Generative Adversarial Networks (GANs) for the newsvendor problem. This study shows GANs can model complex demand relationships, offering a promising alternative to traditional methods.
The third article, "Combining Synthetic Data and Transfer Learning for Deep Reinforcement Learning in Inventory Management", proposes a method using Deep Reinforcement Learning (DRL) with synthetic and real data through transfer learning. This approach trains a generative model to learn demand distributions, generates synthetic data, and fine-tunes a DRL agent on a smaller real dataset. This method outperforms traditional approaches in controlled and practical settings, though further research is needed to generalize these findings.
Bioimages frequently exhibit low signal-to-noise ratios due to experimental conditions, specimen characteristics, and imaging trade-offs. Reliable segmentation of such ambiguous images is difficult and laborious. Here we introduce deepflash2, a deep learning-enabled segmentation tool for bioimage analysis. The tool addresses typical challenges that may arise during the training, evaluation, and application of deep learning models on ambiguous data. The tool’s training and evaluation pipeline uses multiple expert annotations and deep model ensembles to achieve accurate results. The application pipeline supports various use-cases for expert annotations and includes a quality assurance mechanism in the form of uncertainty measures. Benchmarked against other tools, deepflash2 offers both high predictive accuracy and efficient computational resource usage. The tool is built upon established deep learning libraries and enables sharing of trained model ensembles with the research community. deepflash2 aims to simplify the integration of deep learning into bioimage analysis projects while improving accuracy and reliability.
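The deepflash2 API itself is not reproduced here, but the following generic sketch illustrates how a model ensemble can yield both a segmentation and a per-pixel uncertainty map of the kind described.

```python
import numpy as np

def ensemble_segmentation(prob_maps: np.ndarray):
    """Generic illustration (not the deepflash2 API): prob_maps has shape
    (n_models, H, W, n_classes) and holds each ensemble member's softmax
    output for one image."""
    mean_probs = prob_maps.mean(axis=0)            # ensemble average
    segmentation = mean_probs.argmax(axis=-1)      # per-pixel class decision
    # predictive entropy as a per-pixel uncertainty measure for quality assurance
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
    return segmentation, entropy
```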
Based on a collection of real cases formulated as exercises, this book deals with the unfortunately often troubled relationship between theory and practice in legally influenced business valuation.
Like "ordinary" case collections, it presents the respective exercises and the corresponding solutions. The actual questions in the exercise texts are framed by brief explanations so that anyone reasonably familiar with valuation issues can understand each case fairly easily and place it in context. This approach in turn resembles textbooks that convey content through cases, except that here it is not hypothetical cases that illustrate the ideal correct procedure, but practical cases that illustrate blatant violations contra legem artis.
Thanks to computational advances over the past decades, so-called intelligent systems can learn from increasingly complex data, analyze situations, and support users in their decision-making. In practice, however, the complexity of these intelligent systems makes it hard for users to comprehend the decision logic of the underlying machine learning model. As a result, the adoption of this technology, especially in high-stakes scenarios, is hampered. In this context, explainable artificial intelligence offers numerous starting points for making this logic understandable to people. While research demonstrates the necessity of incorporating explainable artificial intelligence into intelligent systems, there is still a lack of knowledge about how to design these systems socio-technically so as to address acceptance barriers among different user groups. In response, we have derived and evaluated a nascent design theory for explainable intelligent systems based on a structured literature review, two qualitative expert studies, a real-world use case application, and quantitative research. Our design theory comprises design requirements, design principles, and design features covering global explainability, local explainability, personalized interface design, and psychological/emotional factors.