Tourism in Würzburg: Suggestions on how to enhance the travel experience for Chinese tourists
(2017)
This report provides suggestions on how to enhance the travel experience for Chinese tourists in the German city of Würzburg. Based on a user experience survey and market research, this work includes a quantitative and competitive analysis. It further provides concrete, hands-on measures the city council can take to improve the experience of Chinese visitors coming to Würzburg.
The strategic planning of Emergency Medical Service systems directly affects the survival probability of the people concerned. Academic research has contributed to the evaluation of these systems by defining a variety of key performance metrics; the average response time, the workload of the system, several waiting-time parameters, and the fraction of demand that cannot be served immediately are among the most important examples. The Hypercube Queueing Model is one of the most widely applied models in this field. Because of its theoretical complexity and the resulting high computational times, the Hypercube Queueing Model has only recently been used for the optimization of Emergency Medical Service systems. Likewise, only a few system performance metrics have been calculated with the help of the model, so its full potential has not yet been realized. Most existing optimization studies based on a Hypercube Queueing Model use the expected response time of the system as their objective function. While this often leads to balanced system configurations, other influencing factors were identified. Embedding the Hypercube Queueing Model in robust optimization as well as robust goal programming was intended to offer a more holistic view by distinguishing different times of day. It was shown that the behavior of Emergency Medical Service systems, as well as the corresponding parameters, depends strongly on the time of day. The analysis and optimization of such systems should therefore consider the different distributions of demand, with regard to both quantity and location, in order to derive a holistic basis for decision-making.
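To illustrate the kind of computation the Hypercube Queueing Model performs, the following sketch solves the steady-state balance equations of a minimal two-unit system (a zero-line variant in which calls arriving while both units are busy are lost). The dispatch policy, rates, and function names are illustrative assumptions, not the thesis's implementation:

```python
def solve_linear(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def hypercube_2unit(lam1, lam2, mu):
    # lam1, lam2: Poisson call rates of demand atoms 1 and 2; mu: service rate
    lam = lam1 + lam2
    # states: 0 = (idle, idle), 1 = (busy, idle), 2 = (idle, busy), 3 = (busy, busy)
    Q = [[0.0] * 4 for _ in range(4)]
    Q[0][1] = lam1; Q[0][2] = lam2   # the preferred unit is dispatched
    Q[1][3] = lam;  Q[2][3] = lam    # preferred unit busy: the backup takes the call
    Q[1][0] = mu;   Q[2][0] = mu     # service completions free the units
    Q[3][1] = mu;   Q[3][2] = mu
    # generator G (diagonal = minus row sum); solve pi G = 0 with sum(pi) = 1
    G = [[Q[i][j] if i != j else -sum(Q[i]) for j in range(4)] for i in range(4)]
    A = [[G[j][i] for j in range(4)] for i in range(4)]  # transpose: G^T pi = 0
    A[3] = [1.0] * 4                                     # replace one equation by normalization
    pi = solve_linear(A, [0.0, 0.0, 0.0, 1.0])
    workload1 = pi[1] + pi[3]        # fraction of time unit 1 is busy
    workload2 = pi[2] + pi[3]
    loss = pi[3]                     # fraction of demand not served immediately
    return pi, workload1, workload2, loss
```

From the state probabilities, the performance metrics named in the abstract (workloads, fraction of unserved demand) follow directly; the full model generalizes this to 2^N states for N units.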
The growing importance of retail agglomerations is one of the central spatial elements of structural change in retailing. Both planned shopping centers and location partnerships between store formats that actually compete with one another increasingly shape the locational structure of the retail sector. The present study deals with consumers' spatial shopping behavior in the context of such phenomena. First, those positive agglomeration effects in retailing that are based on customer behavior are derived from different theoretical perspectives (microeconomics, spatial economic theory, behavioral marketing research); various types of multi-purpose and comparison shopping can be identified as the relevant shopping strategies. The assumed (positive) effect of retail agglomerations is then tested with an econometric market area model, the Multiplicative Competitive Interaction (MCI) Model, on the basis of empirically surveyed market areas. The results of the analysis show predominantly positive influences of the potential for multi-purpose and comparison shopping on the customer inflows of individual stores, although these effects differ in intensity and form. The study demonstrates the relevance of agglomeration effects in retailing and formulates a quantitative model, based on the widely used Huff model, with which these effects can be analyzed. Concrete applications can be found in company location analysis and in the impact assessment of planned retail developments.
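As a minimal sketch of the model family involved (the MCI model generalizes the Huff model to several attraction variables), the following hypothetical function computes Huff patronage probabilities; the function name and parameter values are illustrative assumptions, not the study's calibration:

```python
def huff_shares(attraction, distance, alpha=1.0, beta=2.0):
    # Huff model: the probability that a consumer patronizes store j is
    #   P_j = A_j^alpha * d_j^(-beta) / sum_k A_k^alpha * d_k^(-beta)
    # where A_j is the store's attraction (e.g. sales area) and d_j the travel distance.
    u = [a ** alpha * d ** (-beta) for a, d in zip(attraction, distance)]
    total = sum(u)
    return [x / total for x in u]

# Two stores at equal distance; the one with twice the sales area draws 2/3 of demand.
shares = huff_shares([2000.0, 1000.0], [1.0, 1.0])
```

In the MCI variant, the single attraction term is replaced by a product of several attribute terms (and the model is log-linearized for estimation), which is how agglomeration variables such as coupling potential can enter.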
Digitization and artificial intelligence are radically changing virtually all areas of business and society. These developments are mainly driven by machine learning (ML), enabled by the confluence of large amounts of training data, statistical learning theory, and sufficient computational power. This technology forms the basis for new approaches to classical planning problems of Operations Research (OR): prescriptive analytics approaches integrate ML prediction and OR optimization into a single prescription step; they learn from historical observations of demand and a set of features (covariates) and provide a model that directly prescribes future decisions. These novel approaches offer enormous potential to improve planning decisions, as first case reports have shown, and consequently constitute a new field of research in Operations Management (OM).
First works in this new field of research have studied approaches to solving comparatively simple planning problems in the area of inventory management. However, common OM planning problems often have a more complex structure, and many of these complex planning problems are within the domain of capacity planning. Therefore, this dissertation focuses on developing new prescriptive analytics approaches for complex capacity management problems. This dissertation consists of three independent articles that develop new prescriptive approaches and use these to solve realistic capacity planning problems.
The first article, “Prescriptive Analytics for Flexible Capacity Management”, develops two prescriptive analytics approaches, weighted sample average approximation (wSAA) and kernelized empirical risk minimization (kERM), to solve a complex two-stage capacity planning problem that has been studied extensively in the literature: a logistics service provider sorts daily incoming mail items on three service lines that must be staffed on a weekly basis. This article is the first to develop a kERM approach to solve a complex two-stage stochastic capacity planning problem with matrix-valued observations of demand and vector-valued decisions. The article develops out-of-sample performance guarantees for kERM and various kernels, and shows the universal approximation property when using a universal kernel. The results of the numerical study suggest that prescriptive analytics approaches may lead to significant improvements in performance compared to traditional two-step approaches or SAA and that their performance is more robust to variations in the exogenous cost parameters.
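The wSAA idea can be made concrete with a schematic, single-product toy (assumed here for illustration; the article's problem is two-stage with matrix-valued demand observations): historical samples are weighted by feature similarity, and minimizing the weighted empirical newsvendor-style cost reduces to taking a weighted quantile of demand.

```python
def knn_weights(x_hist, x_new, k=3):
    # sample weights from a k-nearest-neighbour function of a scalar feature
    order = sorted(range(len(x_hist)), key=lambda i: abs(x_hist[i] - x_new))
    w = [0.0] * len(x_hist)
    for i in order[:k]:
        w[i] = 1.0 / k
    return w

def wsaa_capacity(demand_hist, weights, cu=4.0, co=1.0):
    # minimizing the weighted empirical cost with underage cost cu and
    # overage cost co yields the cu/(cu+co) quantile of the weighted
    # demand distribution
    q = cu / (cu + co)
    acc = 0.0
    for d, w in sorted(zip(demand_hist, weights)):
        acc += w
        if acc >= q - 1e-12:
            return d
    return max(demand_hist)
```

The weight function is the design choice that distinguishes wSAA variants (kNN, kernel, or tree-based weights); with uniform weights the prescription collapses to plain SAA, which ignores the features.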
The second article, “Prescriptive Analytics for a Multi-Shift Staffing Problem”, uses prescriptive analytics approaches to solve the (queuing-type) multi-shift staffing problem (MSSP) of an aviation maintenance provider, which receives an uncertain number of customer requests at uncertain arrival times throughout each day and plans staff capacity for two shifts. This planning problem is particularly complex because order inflow and processing are modeled as a queuing system and the demand within each day is non-stationary. The article addresses this complexity by deriving an approximation of the MSSP that enables the planning problem to be solved using wSAA, kERM, and a novel Optimization Prediction approach. A numerical evaluation shows that wSAA leads to the best performance in this particular case. The solution method developed in this article builds a foundation for solving queuing-type planning problems using prescriptive analytics approaches and thus bridges the “worlds” of queuing theory and prescriptive analytics.
The third article, “Explainable Subgradient Tree Boosting for Prescriptive Analytics in Operations Management”, proposes Subgradient Tree Boosting (STB), a novel prescriptive analytics approach that solves the two capacity planning problems studied in the first and second articles while allowing decision-makers to derive explanations for prescribed decisions. STB combines the machine learning method Gradient Boosting with SAA and relies on subgradients because the cost functions of OR planning problems often cannot be differentiated. A comprehensive numerical analysis suggests that STB can achieve a prescription performance comparable to that of wSAA and kERM. The explainability of STB prescriptions is demonstrated by breaking exemplary decisions down into the impacts of individual features. The novel STB approach is an attractive choice not only because of its prescription performance, but also because of the explainability that helps decision-makers understand the causality behind the prescriptions.
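The core mechanics can be sketched as follows: boosting iterations follow subgradients of the non-differentiable planning cost instead of gradients. This toy version replaces the regression-tree base learners with a constant learner and uses a single newsvendor-type cost, so it is only a schematic, featureless analogue of STB, not the article's method:

```python
def newsvendor_subgrad(q, d, cu=4.0, co=1.0):
    # subgradient w.r.t. q of the cost cu*max(d - q, 0) + co*max(q - d, 0)
    return co if q >= d else -cu

def boosted_quantity(demands, rounds=2000, eta=0.05, cu=4.0, co=1.0):
    # boosting skeleton: start from the mean prescription and repeatedly
    # step against the average subgradient of the empirical (SAA) cost;
    # STB fits a regression tree to these subgradients instead of a constant
    q = sum(demands) / len(demands)
    for _ in range(rounds):
        g = sum(newsvendor_subgrad(q, d, cu, co) for d in demands) / len(demands)
        q -= eta * g
    return q
```

The iterates drift toward the cu/(cu + co) service-level quantile of the demand sample, mirroring how STB's tree ensemble accumulates subgradient steps; because each step is attributable to a base learner over features, individual prescriptions can be decomposed into feature impacts.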
The results presented in these three articles demonstrate that using prescriptive analytics approaches, such as wSAA, kERM, and STB, to solve complex planning problems can lead to significantly better decisions compared to traditional approaches that neglect feature data or rely on a parametric distribution estimation.
The present thesis analyzes whether and, if so, under which conditions mergers result in merger-specific efficiency gains. The analysis concentrates on manufacturing firms in Europe that participated in horizontal mergers as either buyer or target in the years 2005 to 2014.
The result of the present study is that mergers are idiosyncratic processes. Thus, the scope for defining general conditions that predict merger-specific efficiency gains is limited.
However, the results of the present study indicate that efficiency gains are possible as a direct consequence of a merger. Efficiency changes can be measured by a Total Factor Productivity (TFP) approach. Significant merger-specific efficiency gains are more likely for targets than for buyers. Moreover, mergers of firms that mainly operate in the same segment are likely to generate efficiency losses. Efficiency gains most likely result from reductions in material and labor costs, especially in the short and medium term. The analysis of conditions that predict efficiency gains indicates that firms that announce the merger themselves are capable of generating efficiency gains in the short and medium term. Furthermore, mid-sized buyers are more likely to generate efficiency gains than small or large buyers. The results also indicate that capital-intensive firms are likely to generate efficiency gains after a merger.
The present study is structured as follows.
Chapter 1 motivates the analysis of merger-specific efficiency gains. Defining conditions that reliably predict when and to what extent mergers will result in merger-specific efficiency gains would improve the merger approval or denial process.
Chapter 2 reviews relevant empirical studies that have analyzed merger-specific efficiency gains. None of these studies analyzed horizontal mergers of European manufacturing firms in the years 2005 to 2014; the present study thus contributes to the existing literature by analyzing efficiency gains from those mergers.
Chapter 3 focuses on the identification of mergers. The merger term is defined according to the EC Merger Regulation and the Horizontal Merger Guidelines. The definition and the requirements of mergers according to legislation provide the framework for merger identification.
Chapter 4 concentrates on the efficiency measurement methodology. Most empirical studies apply a Total Factor Productivity (TFP) approach to estimate efficiency. The TFP approach uses linear regression in combination with a control function approach. The coefficients are estimated with a Generalized Method of Moments (GMM) approach.
The resulting efficiency estimates are used in the analysis of merger-specific efficiency gains in chapter 5. This analysis is done separately for buyers and targets by applying a Difference-In-Difference (DID) approach.
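The mechanics of the DID comparison can be sketched with a hypothetical helper (illustrative, not the thesis's code): the efficiency change of merging firms from before to after the merger is compared against the change of non-merging control firms over the same period, which nets out common trends.

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    # Difference-in-differences: (treated post-pre change) minus (control
    # post-pre change). Inputs are lists of efficiency estimates (e.g. TFP)
    # for each group and period.
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))
```

In regression form the same estimate is the coefficient on the interaction of a treatment dummy with a post-merger dummy, which also allows adding firm-level controls.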
Chapter 6 concentrates on an alternative approach to estimating efficiency: Stochastic Frontier Analysis (SFA). Like the TFP approach, SFA is a stochastic efficiency estimation methodology. In contrast to TFP, SFA estimates the production function as a frontier function instead of an average function. The frontier function allows efficiency to be estimated in percent.
Chapter 7 analyzes the impact of different merger- and firm-specific characteristics on the efficiency changes of buyers and targets. The analysis is based on a multiple regression, which is applied to short-, mid-, and long-term efficiency changes of buyers and targets.
Chapter 8 concludes.
Location (quality) is the most important factor for the success of a retail site! This has been true at least since the emergence of the first pedestrian zones in the 1950s and the establishment of prime ("1A") locations as sought-after inner-city business sites.
It is surprising, however, that despite the widespread familiarity of the concept of location quality, or of 1A, B, and C locations, theory and practice currently employ not only a multitude of terms to describe and classify inner-city retail locations but also a wide range of criteria and methods for determining their quality.
In view of currently scarce municipal budgets, increasing competitive pressure in retailing, and the growing crisis-proneness of the economic, financial, and real-estate sectors, and the resulting rise in importance of well-founded location analyses, the question arises which criteria are scientifically suitable for determining location qualities and how an instrument built from them should be designed.
Moreover, against the background of growing efforts in recent years to revitalize urban centers, it must also be examined whether such a location quality instrument could be used to create a solid data basis serving as an essential foundation for evaluating various inner-city revitalization measures.
The present work pursues these and other questions arising in the context of current inner-city and retail development.
Companies are expected to act as international players and to use their capabilities to provide customized products and services quickly and efficiently. Today, consumers expect their requirements to be met within a short time and at a favorable price, and order-to-delivery lead time has steadily gained importance for them. Furthermore, governments can use various emission policies to push companies and customers to reduce their greenhouse gas emissions. This thesis investigates the influence of order-to-delivery lead time and of different emission policies on the design of a supply chain. Within this work, several supply chain design models are developed to examine these influences. The first model incorporates lead times and total costs, and various emission policies are implemented to illustrate the trade-off between the different measures. The second model reflects the influence of consumers who are sensitive to order-to-delivery lead time, again under different emission policies. The analysis shows that the share of lead-time-sensitive consumers has a significant impact on the design of a supply chain. Demand uncertainty and uncertainty in the design of emission policies are investigated by developing an appropriate robust mathematical optimization model; the results show that uncertainty about the design of an emission policy in particular can significantly affect the total cost of a supply chain. The fourth model investigates the effects of differently designed emission policies in various countries. The analyses highlight that both lead times and emission policies can strongly influence companies' offshoring and nearshoring strategies.
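A stylized way to see the robustness logic: when the design of an emission policy is uncertain (reduced here to an unknown carbon price), a robust design minimizes the worst-case total cost across policy scenarios. All numbers and names below are illustrative assumptions, not the thesis's model:

```python
def robust_design(designs, scenarios, cost):
    # min-max: pick the supply chain design with the best worst-case cost
    return min(designs, key=lambda x: max(cost(x, s) for s in scenarios))

# toy data: offshoring is cheap but emission-intensive, nearshoring the reverse
base = {"offshore": 100.0, "nearshore": 160.0}   # production + logistics cost
emis = {"offshore": 2.0, "nearshore": 0.5}       # emissions per unit sold
carbon_prices = [0.0, 50.0, 100.0]               # uncertain policy scenarios

def total_cost(design, price):
    return base[design] + price * emis[design]

choice = robust_design(["offshore", "nearshore"], carbon_prices, total_cost)
```

With a deterministic price of zero the offshore design wins (100 vs. 160), but its worst case (100 + 100 * 2 = 300) exceeds the nearshore worst case (160 + 100 * 0.5 = 210), so the robust choice flips to nearshoring; this mirrors the abstract's observation that policy uncertainty can reshape offshoring and nearshoring strategies.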
We propose that false beliefs about one's own current economic status are an important factor in explaining populist attitudes. Eliciting subjects' receptiveness to right-wing populism and their perceived relative income positions in a representative survey of German households, we find that people with pessimistic beliefs about their income position are more attuned to populist statements. Key to understanding the misperception-populism relationship are strong gender differences in the mechanism: men are much more likely to channel their discontent into affection for populist ideas. A simple information provision neither sustainably reduces misperception nor curbs populism.
How energy conversion drives economic growth far from the equilibrium of neoclassical economics
(2014)
Energy conversion in the machines and information processors of the capital stock drives the growth of modern economies. This is exemplified for Germany, Japan, and the USA during the second half of the 20th century: econometric analyses reveal that the output elasticity, i.e. the economic weight, of energy is much larger than energy's share in total factor cost, while for labor just the opposite is true. This is at variance with mainstream economic theory, according to which an economy should operate in the neoclassical equilibrium, where output elasticities equal factor cost shares. The standard derivation of the neoclassical equilibrium from the maximization of profit or of time-integrated utility disregards technological constraints. We show that including these constraints in our nonlinear optimization calculus results in equilibrium conditions in which generalized shadow prices destroy the equality of output elasticities and cost shares. Consequently, at the prices of capital, labor, and energy we have known so far, industrial economies have evolved far from the neoclassical equilibrium. This is illustrated by the example of the German industrial sector evolving on the mountain of factor costs before and during the first and second oil price explosions. It indicates the influence of the 'virtually binding' technological constraints on entrepreneurial decisions, and the existence of 'soft constraints' as well. Implications for employment and future economic growth are discussed.
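In symbols, the contrast the paper draws can be sketched as follows (schematic notation assumed here, not quoted from the paper). For output $Y$ produced from the factors $X_i \in \{K, L, E\}$ (capital, labor, energy) with market prices $p_i$, define

```latex
\epsilon_i \;\equiv\; \frac{\partial \ln Y}{\partial \ln X_i},
\qquad
s_i \;\equiv\; \frac{p_i X_i}{\sum_{j} p_j X_j}.
```

Unconstrained profit maximization yields the neoclassical equilibrium condition $\epsilon_i = s_i$. With binding technological constraints and associated shadow prices $\mu_i$, the first-order conditions instead take the schematic form

```latex
\epsilon_i \;=\; \frac{(p_i + \mu_i)\, X_i}{\sum_{j} (p_j + \mu_j)\, X_j},
```

so that output elasticities and observed cost shares need no longer coincide, which is precisely the disequilibrium the econometric estimates indicate for energy and labor.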
Increasing global competition forces organizations to improve their processes to gain a competitive advantage. In the manufacturing sector, this is facilitated through tremendous digital transformation. Fundamental components in such digitalized environments are process-aware information systems that record the execution of business processes, assist in process automation, and unlock the potential to analyze processes. However, most enterprise information systems focus on informational aspects, process automation, or data collection but do not tap into predictive or prescriptive analytics to foster data-driven decision-making. Therefore, this dissertation sets out to investigate the design of analytics-enabled information systems in five independent parts, which step-wise introduce analytics capabilities and assess potential opportunities for process improvement in real-world scenarios.
To set up and extend analytics-enabled information systems, an essential prerequisite is identifying success factors, which we do in the context of process mining as a descriptive analytics technique. We combine an established process mining framework with a success model to provide a structured approach for assessing success factors and identifying challenges, motivations, and the perceived business value of process mining, drawing on employees across organizations as well as process mining experts and consultants. We extend the existing success model and, based on the derived findings, provide lessons for business value generation through process mining. To assist the realization of process-mining-enabled business value, we design an artifact for context-aware process mining. The artifact combines standard process logs with additional context information to support the automated identification of process realization paths associated with specific context events. Yet, realizing business value remains challenging, as transforming processes based on informational insights is time-consuming.
To overcome this, we showcase the development of a predictive process monitoring system for disruption handling in a production environment. The system leverages state-of-the-art machine learning algorithms for disruption type classification and duration prediction, and combines them with additional organizational data sources and a simple assignment procedure to assist the disruption handling process. Designing such a system and its analytics models is a challenging task, which we address by engineering a five-phase method for predictive end-to-end enterprise process network monitoring that leverages multi-headed deep neural networks. The method facilitates the integration of heterogeneous data sources through dedicated neural network input heads, which are concatenated for a prediction. An evaluation based on a real-world use case highlights the superior performance of the resulting multi-headed network.
Even with improved model performance, predictions are not perfect, so decisions about assigning agents to resolve disruptions must still be made under uncertainty. Mathematical models can assist here, but under complex real-world conditions the number of potential scenarios increases massively and limits the solvability of assignment models. To overcome this and tap into the potential of prescriptive process monitoring systems, we set out a data-driven approximate dynamic stochastic programming approach that incorporates multiple uncertainties into the assignment decision. The resulting model yields a significant performance improvement and ultimately highlights the particular importance of analytics-enabled information systems for organizational process improvement.