A comprehensive approach for currency crises theories stressing the role of the anchor country
(2008)
The approach is based on the finding that new generations of currency crises theories have always developed ex post, after prominent currency crises. A discussion of the main theories reveals their disparity: the First Generation of currency crises models rests on the assumption of a chronic budget deficit that is monetized by the domestic central bank. The result is a trade-off between an expansionary monetary policy focused on the internal economic balance and a fixed exchange rate that depends on the rules of interest parity and purchasing power parity. This imbalance inevitably results in a currency crisis; the theory thus argues from a disrupted external balance on the foreign exchange market. Second Generation currency crises models, on the other hand, focus on the internal macroeconomic balance: the stability of a fixed exchange rate depends on the economic benefit of the exchange rate system relative to the social costs of maintaining it. As soon as those social costs rise and show up in deteriorating fundamentals, a speculative attack on the fixed exchange rate system follows. The term Third Generation of currency crises, finally, summarizes a variety of theories, some of which argue psychologically, invoking phenomena such as contagion and spill-over effects to rationalize crises detached from the fundamental situation. Apart from the apparent inconsistency of these main theories, a further observation is that they focus on the crisis country alone, leaving international monetary transmission effects out of consideration, even though these are a central parameter for the stability of fixed exchange rate systems, in exchange rate theory as well as in empirical observation.
Altogether, these findings motivate a theoretical approach that integrates the main elements of the different generations of currency crises theories and incorporates international monetary transmission. To this end, a macroeconomic approach is chosen that applies the concept of the Monetary Conditions Index (MCI), a linear combination of the real interest rate and the real exchange rate. This index is first extended to capture international monetary influences and is called MCIfix; it illustrates the monetary conditions required for the stability of a fixed exchange rate system. The central assumption of this concept is that uncovered interest parity holds, and the main conclusion is that the MCIfix depends only on exogenous parameters. In a second step, the analysis integrates the monetary policy requirements for achieving internal macroeconomic stability: by minimizing a social welfare loss function, an MCI is derived that represents the economically optimal monetary policy, MCIopt. Instability in a fixed exchange rate system occurs as soon as the monetary conditions for internal and external balance deviate from each other. To discuss macroeconomic imbalances, the central parameters determining the MCIfix (and therefore the relation of MCIfix to MCIopt) are examined: the real interest rate of the anchor country, the real effective exchange rate, and a risk premium. Applying this framework, four constellations in which MCIfix and MCIopt diverge are discussed in order to show the central bank's options for reacting and the consequences of each. The discussion shows that the integrative approach incorporates the central elements of traditional currency crises theories and includes international monetary transmission instead of reducing the discussion to an inconsistent domestic monetary policy.
The theory framework for fixed exchange rates is finally applied in four case studies: the currency crises in Argentina, the crisis in the Czech Republic, the Asian currency crisis, and the crisis of the European Monetary System. The case studies show that the monetary framework developed here integrates the different generations of crises theories and that the monetary policy of the anchor country plays a decisive role in destabilising fixed exchange rate systems.
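Since the MCI is just a weighted sum of the real interest rate and the real exchange rate, the stability condition can be sketched numerically. The weights and input values below are hypothetical, chosen only to illustrate how a gap between MCIfix and MCIopt signals pressure on a peg; they are not the thesis's calibration.

```python
# Sketch of the Monetary Conditions Index (MCI) comparison described above.
# All weights and inputs are hypothetical illustration values.

def mci(real_rate, real_exchange_rate, w_r=0.7, w_e=0.3):
    """Linear combination of the real interest rate and the real exchange rate."""
    return w_r * real_rate + w_e * real_exchange_rate

# MCIfix: conditions required by the peg, driven by exogenous parameters
# (anchor-country real rate plus a risk premium, via uncovered interest parity).
anchor_real_rate = 0.04
risk_premium = 0.02
real_eff_exchange_rate = 0.01
mci_fix = mci(anchor_real_rate + risk_premium, real_eff_exchange_rate)

# MCIopt: conditions that would minimize the domestic welfare loss function.
mci_opt = mci(0.02, 0.01)

# A positive gap means the peg demands tighter conditions than the domestic
# optimum, i.e. the kind of constellation associated with crisis pressure.
gap = mci_fix - mci_opt
print(round(gap, 4))
```

Each of the four crisis constellations discussed in the thesis corresponds to a different source of such a gap (anchor-country rate, exchange rate, or risk premium).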
This dissertation contributes to the empirical analysis of economic development. The continuing poverty in many Sub-Saharan African countries, as well as the declining growth trend in the advanced economies that set in around the turn of the millennium, raises a number of new questions that have received little attention in recent empirical studies. Is culture a decisive factor for economic development? Do larger financial markets trigger positive stimuli with regard to incomes, or is the recent increase in their size in advanced economies detrimental to economic growth? What causes secular stagnation, i.e. the reduction in the growth rates of the advanced economies observable over the past 20 years? What is the role of inequality in the growth process, and how do governmental attempts to equalize the income distribution affect economic development? And finally: is the process of democratization accompanied by an increase in living standards? These are the central questions of this doctoral thesis.
To facilitate the empirical analysis of the determinants of economic growth, this dissertation introduces a new method for computing classifications in the social sciences. The approach is based on machine learning and pattern recognition algorithms. Whereas the construction of indices typically relies on arbitrary assumptions about how to aggregate the underlying attributes, the use of Support Vector Machines turns the question of how to aggregate the individual components into a non-linear optimization problem.
Following a brief overview of theoretical models of economic growth in the first chapter, the second chapter illustrates the importance of culture in explaining income differences across the globe. In particular, countries whose inhabitants have a lower average degree of risk aversion implement new technology much faster than more risk-averse countries. However, this effect depends on the legal and political framework of the countries, their average level of education, and their stage of development.
The initial wealth of individuals is often not sufficient to cover the cost of investments in both education and new technologies. By providing loans, a developed financial sector may help to overcome this shortage. However, the investigations in the third chapter show that this mechanism depends on the development level of the economies. In poor countries, growth of the financial sector leads to better education and higher investment levels. This effect diminishes along the development process, as intermediary activity is increasingly replaced by speculative transactions. Particularly in times of low technological innovation, a growing financial sector has a negative impact on economic development. In fact, the world economy is currently in such a phase. Since the turn of the millennium, growth rates in the advanced economies have declined across many countries, leading to an intense debate about "secular stagnation" that began in early 2015. The fourth chapter deals with this phenomenon and shows that the growth potential of new technologies has been gradually declining since the beginning of the 2000s.
If incomes are unequally distributed, some individuals can invest less in education and technological innovation, which is why the fifth chapter identifies an overall negative effect of inequality on growth. This influence, however, depends on the development level of countries. While the negative effect is strongly pronounced in poor economies with a low degree of equality of opportunity, it disappears during the development process. Accordingly, redistributive policies of governments exert a growth-promoting effect in developing countries, while in advanced economies the fostering of equal opportunities is much more decisive.
The sixth chapter analyzes the growth effect of the political environment and shows that the ambiguity of earlier studies is mainly due to unsophisticated measurement of the degree of democratization. To solve this problem, the chapter introduces a new method based on machine learning and pattern recognition algorithms. While the approach can be used for various classification problems in the social sciences, in this dissertation it is applied to the problem of democracy measurement. Based on various country examples, the chapter shows that the resulting Support Vector Machines Democracy Index (SVMDI) is superior to other indices in modeling the level of democracy. The subsequent empirical analysis reveals a significantly positive growth effect of democracy as measured by the SVMDI.
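The aggregation idea behind an SVM-based index can be illustrated with a toy linear support vector machine: instead of fixing attribute weights a priori, the weights fall out of an optimization problem. The sketch below trains a minimal hinge-loss classifier by sub-gradient descent on fabricated two-attribute data; all numbers are invented for illustration and have nothing to do with the SVMDI's actual inputs.

```python
# Minimal linear SVM via hinge-loss sub-gradient descent (Pegasos-style).
# Fabricated data: two attributes per observation, label +1 / -1.
data = [
    ((0.0, 0.0), -1), ((0.0, 1.0), -1), ((1.0, 0.0), -1),
    ((3.0, 3.0), 1), ((4.0, 3.0), 1), ((3.0, 4.0), 1),
]

w, b = [0.0, 0.0], 0.0
eta, lam = 0.05, 0.001
for _ in range(1000):
    for (x1, x2), y in data:
        margin = y * (w[0] * x1 + w[1] * x2 + b)
        if margin < 1:  # point violates the margin: take a hinge-loss step
            w[0] += eta * (y * x1 - lam * w[0])
            w[1] += eta * (y * x2 - lam * w[1])
            b += eta * y
        else:           # otherwise only the regularizer shrinks the weights
            w[0] -= eta * lam * w[0]
            w[1] -= eta * lam * w[1]

def classify(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

# The learned weights, not a hand-picked aggregation rule, separate the groups.
print(all(classify(*x) == y for x, y in data))
```

The point of the sketch is only the division of labor: the researcher supplies labeled examples, and the optimization determines how the attributes are combined.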
A theory of managed floating
(2003)
After the experience with the currency crises of the 1990s, a broad consensus has emerged among economists that such shocks can only be avoided if countries that decide to maintain unrestricted capital mobility adopt either independently floating exchange rates or very hard pegs (currency boards, dollarisation). As a consequence of this view, which has been enshrined in the so-called impossible trinity, all intermediate currency regimes are regarded as inherently unstable. As far as economic theory is concerned, this view has the attractive feature that it not only fits the logic of traditional open economy macro models, but also that solid theoretical frameworks have been developed for both corner solutions (independently floating exchange rates with a domestically oriented interest rate policy; hard pegs with a completely exchange rate oriented monetary policy). Above all, the IMF statistics seem to confirm that intermediate regimes have indeed become less and less fashionable among both industrial countries and emerging market economies. In the last few years, however, an anomaly has been detected that seriously challenges this paradigm on exchange rate regimes. In their influential cross-country study, Calvo and Reinhart (2000) have shown that many of the countries which had declared themselves ‘independent floaters’ in the IMF statistics were characterised by a pronounced ‘fear of floating’ and were actually reacting heavily to exchange rate movements, either in the form of an interest rate response or by intervening in foreign exchange markets. The present analysis can be understood as an approach to develop a theoretical framework for this managed floating behaviour, which, even though it is widely used in practice, has not attracted much attention in monetary economics.
In particular, we would like to fill the gap that has recently been criticised by one of the few ‘middle-ground’ economists, John Williamson, who argued that “managed floating is not a regime with well-defined rules” (Williamson, 2000, p. 47). Our approach is based on a standard open economy macro model of the kind typically employed for the analysis of monetary policy strategies. The consequences of independently floating, market-determined exchange rates are evaluated in terms of a social welfare function or, to be more precise, an intertemporal loss function containing a central bank’s final targets, output and inflation. We explicitly model the source of the observable fear of floating by questioning the assumption, underlying most open economy macro models, that the foreign exchange market is an efficient asset market with rational agents. We show that both policy reactions to the fear of floating (an interest rate response to exchange rate movements, which we call indirect managed floating, and sterilised interventions in the foreign exchange markets, which we call direct managed floating) can be rationalised if we allow for deviations from the assumption of perfectly functioning foreign exchange markets and if we assume a central bank that takes these deviations into account and acts to reach its final targets. In such a scenario, with a high degree of uncertainty about the true model determining the exchange rate, the rationale for indirect managed floating is the policy maker’s quest for a robust interest rate rule that performs comparatively well across a range of alternative exchange rate models. We show, however, that the strategy of indirect managed floating still bears the risk that the central bank’s final targets might be negatively affected by the unpredictability of the true exchange rate behaviour. This is where the second policy measure comes into play.
The use of sterilised foreign exchange market interventions to counter movements of market-determined exchange rates can be rationalised by a central bank’s effort to lower the risk of missing its final targets when it has only a single instrument at its disposal. We provide a theoretical, model-based foundation of a strategy of direct managed floating in which the central bank targets, in addition to a short-term interest rate, the nominal exchange rate. In particular, we develop a rule for the instrument of intervening in the foreign exchange market that is based on the failure of the foreign exchange market to guarantee a reliable relationship between the exchange rate and other fundamental variables.
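The indirect managed floating described above can be sketched as a standard interest rate reaction function augmented with an exchange rate term. The functional form and all coefficients below are illustrative assumptions, not the paper's model or calibration.

```python
# Sketch of an interest rate rule with an exchange rate response
# ("indirect managed floating"). All coefficients are hypothetical.

def policy_rate(inflation, output_gap, exch_rate_dev,
                neutral_rate=0.02, target_inflation=0.02,
                a_pi=1.5, a_y=0.5, a_e=0.25):
    """Taylor-type rule; the a_e term is the managed-floating element,
    leaning against deviations of the exchange rate from a reference level."""
    return (neutral_rate + inflation
            + a_pi * (inflation - target_inflation)
            + a_y * output_gap
            + a_e * exch_rate_dev)

# A 4% depreciation relative to the reference path, with inflation on target
# and a closed output gap: the rule tightens purely because of the
# exchange rate term.
baseline = policy_rate(0.02, 0.0, 0.0)
with_depreciation = policy_rate(0.02, 0.0, 0.04)
print(round(with_depreciation - baseline, 4))
```

Direct managed floating would add a second instrument, an intervention rule for the exchange rate itself, alongside this interest rate rule.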
The subject of the present study is the agent-based computer simulation of Agent Island, a macroeconomic model belonging to the field of monetary theory. Agent-based modeling is an innovative tool that has made much progress in other scientific fields such as medicine and logistics. In economics this tool is quite new, and in monetary theory virtually no agent-based simulation model has been developed to date. It is therefore the aim of this study to close this gap to some extent. The model integrates, in a straightforward way, the common private sectors (i.e. households, consumer goods firms and capital goods firms) and, as an innovation, a banking system, a central bank and a monetary circuit. The central bank controls the business cycle via an interest rate policy; the corresponding mechanism builds on the seminal idea of Knut Wicksell (natural rate of interest vs. money rate of interest). In addition, the model contains many Keynesian features and a flow-of-funds accounting system in the tradition of Wolfgang Stützel. Importantly, one objective of the study is the validation of Agent Island, meaning that the individual agents (i.e. their rules, variables and parameters) are adjusted in such a way that certain phenomena emerge on the aggregate level. The crucial aspect of the modeling and the validation is therefore the relation between the micro and the macro level: every phenomenon on the aggregate level (e.g. some stylized facts of the business cycle, the monetary transmission mechanism, the Phillips curve relationship, the Keynesian paradox of thrift, or the course of the business cycle) emerges out of the individual actions and interactions of the many thousands of agents on Agent Island. In contrast to models with a representative agent, we do not model on the aggregate level; and in contrast to orthodox GE models, true interaction between heterogeneous agents takes place (e.g. through face-to-face trading).
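The Wicksellian control mechanism (natural rate vs. money rate) at the core of the model can be caricatured in a few lines: heterogeneous firms raise prices faster when the money rate sits below the natural rate, and the central bank leans against inflation. This is a drastic simplification with invented parameters, meant only to show the stabilizing feedback loop, not the Agent Island calibration.

```python
# Toy Wicksellian feedback loop: many price-setting firms, one central bank.
# All parameters are invented for illustration.
natural_rate = 0.03
target_inflation = 0.02
money_rate = 0.01       # starts below the natural rate: stimulative
inflation = 0.05

# Heterogeneous price-setting sensitivities across firms (mean 0.5).
firm_sensitivity = [0.3 + 0.05 * i for i in range(9)]

for period in range(300):
    gap = natural_rate - money_rate
    # Each firm adjusts its own price growth; aggregate inflation is the mean.
    price_changes = [inflation + k * gap for k in firm_sensitivity]
    inflation = sum(price_changes) / len(price_changes)
    # Central bank: Taylor-principle response to the inflation gap.
    money_rate = natural_rate + 1.5 * (inflation - target_inflation)

print(round(inflation, 6))
```

In the full model this loop emerges from thousands of interacting agents rather than from one aggregate equation; the sketch only shows why the interest rate policy can stabilize the cycle.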
This paper examines the potential reinforcement of motivated beliefs when individuals with identical biases communicate. We propose a controlled online experiment that allows us to manipulate belief biases and the communication environment. We find that communication, even among like-minded individuals, diminishes motivated beliefs if it takes place in an environment without previously declared external opinions. In the presence of plural external opinions, however, communication does not reduce but rather aggravates motivated beliefs. Our results indicate a potential drawback of the plurality of opinions: it may create communication environments in which motivated beliefs not only persist but also become contagious within social networks.
This thesis examines differences in the monetary policy transmission process within the manufacturing sector of the Federal Republic of Germany. To this end, the sector is divided into ten industries according to the classification of the European Commission's BACH database. A brief examination of the industry from a macro- and microeconomic perspective is followed by the first question: do the industries react differently to monetary policy impulses? Monetary innovations are represented by increases in short-term money market rates, so the analysis concentrates on the effects of restrictive measures. Production and producer prices were chosen as reference variables. The analysis of the effects on production shows that, as expected, most industries respond to interest rate increases by cutting output. The strongest production losses occur in the manufacture of electrical equipment, in basic metal processing, and in the metal products and mechanical engineering industry. By contrast, the short-run price increases found in many industries are at first glance a puzzle, since the central bank adopts a restrictive stance precisely in pursuit of its objective, namely the stabilization of consumer prices, when prices threaten to rise faster than is consistent with the target. The present results therefore suggest that in the short run additional price pressure is nevertheless generated at the upstream stage. How can the different effects across industries be explained? The second main part of the thesis is devoted to this question. In a first step, the relevant transmission theories are discussed. The empirical examination of selected transmission theories using industry data has brought to light some fundamental insights.
First, the strength of the output response correlates clearly with the interest rate sensitivity of the demand for the goods produced by the industry. Second, the observed price increases can in some cases be explained by monetary policy acting predominantly as a supply shock; to a large extent, however, the identified price reaction remains a puzzle. Third, the balance sheet channel, at least under the identification strategy chosen here, does not appear to be fundamentally suited to explaining the adjustment processes in the industries examined. This is presumably because this transmission channel regards creditworthiness characteristics, and changes in them, at the firm level as the vehicle of transmission.
This study investigates the credit channel in the transmission of monetary policy in Germany by means of a structural analysis of aggregate bank loan data. We base our analysis on a stylized model of the banking firm, which specifies the loan supply decisions of banks in the light of expectations about the future course of monetary policy. Using the model as a guide, we apply a vector error correction model (VECM) in which we identify long-run cointegration relationships that can be interpreted as loan supply and loan demand equations. In this way, the identification problem inherent in reduced-form approaches based on aggregate data is explicitly addressed. The short-run dynamics are explored by means of innovation analysis, which displays the reaction of the variables in the system to a monetary policy shock. The main implication of our results is that the credit channel in Germany appears to be effective: we find that loan supply effects, in addition to loan demand effects, contribute to the propagation of monetary policy measures.
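The error-correction logic underlying a VECM can be sketched deterministically: a short-run variable is pulled back toward a long-run cointegrating relation. The relation and all parameters below are a generic illustration, not the study's estimated loan supply and demand system.

```python
# Generic error-correction sketch: loans adjust toward a hypothetical
# long-run relation loans* = beta * deposits. Parameters are illustrative.
beta = 0.8          # long-run cointegrating coefficient (invented)
alpha = 0.2         # speed of adjustment back to equilibrium (invented)
deposits = 100.0
loans = 60.0        # start away from the long-run level of 80.0

for t in range(60):
    disequilibrium = loans - beta * deposits
    loans += -alpha * disequilibrium   # the error-correction term

print(round(loans, 3))
```

In the study's full VECM, short-run dynamics and identified shocks are layered on top of such long-run relations; the sketch isolates only the mean-reverting mechanism that makes the cointegration interpretation possible.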
The aim of this thesis is to examine the importance of model specification for rating models used to forecast credit default probabilities. Starting from the logit model established in banking practice, various model extensions are discussed and examined, both empirically and by simulation, with respect to their properties as rating models. The interpretability and the predictive accuracy of the models are given equal consideration. Particular attention is paid to mixed logit models for capturing individual heterogeneity. The results show that the specification has an important influence on the properties of rating models and that mixed logit approaches in particular can yield rating models that are both meaningfully interpretable and have good predictive properties.
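The baseline logit rating model maps a borrower's characteristics into a default probability through the logistic function. The features and coefficients below are invented for illustration; in practice the coefficients are estimated, and mixed logit extensions additionally let them vary across borrowers.

```python
import math

# Baseline logit PD model: PD = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
# Coefficients and features are hypothetical illustration values.
def default_probability(features, coefficients, intercept):
    score = intercept + sum(b * x for b, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-score))

# Two toy borrower features: leverage ratio and interest coverage.
coefficients = [2.0, -1.5]
intercept = -2.0

low_risk = default_probability([0.2, 2.0], coefficients, intercept)
high_risk = default_probability([0.9, 0.3], coefficients, intercept)
print(low_risk < high_risk)
```

A mixed logit version would replace the fixed `coefficients` with draws from a distribution across borrowers, which is what captures the individual heterogeneity the thesis focuses on.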
This dissertation focuses on the drivers of international capital flows to emerging markets, as well as the determinants of crises in emerging markets. Particular emphasis is devoted to the role of U.S. monetary policy. The dissertation consists of three independent chapters.
Chapter 1 is a survey of the voluminous empirical literature on the drivers of capital flows to emerging markets. The contribution of the survey is to provide a comprehensive assessment of what we can say with relative confidence about the empirical drivers of EM capital flows. The evidence is structured based on the recognition that the drivers of capital flows vary over time and across different types of capital flows. The drivers are classified using the traditional framework for external and domestic factors (often referred to as “push versus pull” drivers), which is augmented by a distinction between cyclical and structural factors. Push factors are found to matter most for portfolio flows, somewhat less for banking flows, and least for foreign direct investment (FDI). Pull factors matter for all three components, but most for banking flows. A historical perspective suggests that the recent literature may have overemphasized the importance of cyclical factors at the expense of longer-term structural trends.
Chapter 2 undertakes an empirical analysis of the drivers of portfolio flows to emerging markets, focusing on the role of Fed policy. A time series model is estimated for two different measures of high-frequency portfolio flows: monthly data on flows into investment funds and a novel dataset of monthly portfolio flows compiled from individual national sources. The evidence presented in this chapter suggests a more nuanced interpretation of the role of U.S. monetary policy. The existing literature traditionally argues that Fed policy tightening is unambiguously negative for capital flows to emerging markets. By contrast, the findings presented in this dissertation suggest that it is the surprise element of monetary policy that affects EM portfolio inflows: a shift in market expectations towards easier future U.S. monetary policy leads to greater foreign portfolio inflows, and vice versa. Given current market expectations of sustained increases in the federal funds rate in the coming years, EM portfolio flows could be boosted by a slower pace of Fed tightening than currently expected or reduced by a faster pace.
Chapter 3 examines the role of U.S. monetary policy in determining the incidence of emerging market crises. A negative binomial count model and a panel logit model are estimated to analyze the determinants of currency crises, banking crises, and sovereign defaults in a group of 27 emerging economies. The estimation results suggest that the probability of crises is substantially higher (1) when the federal funds rate is above its natural level, (2) during Fed policy tightening cycles, and (3) when market participants are surprised by signals that the Fed will tighten policy faster than previously expected. These findings contrast with the existing literature, which generally views domestic factors as the dominant determinants of emerging market crises. The findings also point to a heightened risk of emerging market crises in the coming years if the Fed continues to tighten monetary policy.
The aim of this thesis is to examine the competition patterns between originators and generics, focusing on the interplay between regulation and incentives to innovate.
The first chapter reviews the characteristics of regulation in pharmaceutical markets and analyses some current challenges related to cost-containment measures and innovation. The second chapter then presents an empirical study of substitution patterns. Based on the EC's merger decisions in the pharmaceutical sector from 1989 to 2011, this study identifies the key criteria for defining the scope of the relevant product market on the basis of substitution patterns and documents a trend towards narrower market definitions over time.
Chapters three and four analyse in depth two widespread measures: internal reference pricing in off-patent markets and risk-sharing schemes in patent-protected markets. Taking into account the informational advantages of originators over generics, the third chapter shows the extent to which the implementation of a reference price in off-patent markets can contribute to promoting innovation.
Finally, in the fourth chapter, a model of risk-sharing schemes explains how such schemes can help solve moral hazard and adverse selection problems by continuously giving pharmaceutical companies incentives to innovate and to supply medicinal products of higher quality.
This dissertation deals with composite-based methods for structural equation models with latent variables and their enhancement. It comprises five chapters: a brief introduction in the first chapter, followed by four essays covering the results of my PhD studies. Two of the essays have already been published in an international journal.
The first essay considers an alternative way of modeling constructs in structural equation modeling. While theoretical constructs are typically modeled as common factors in the social and behavioral sciences, in other sciences the common factor model is an inadequate way of modeling constructs because of its assumptions. This essay introduces confirmatory composite analysis (CCA), analogous to confirmatory factor analysis (CFA). In contrast to CFA, CCA models theoretical constructs as composites instead of common factors. Besides the theoretical presentation of CCA and its assumptions, a Monte Carlo simulation is conducted which demonstrates that misspecifications of the composite model can be detected by the introduced test for overall model fit.
The second essay raises the question of how parameter differences can be assessed in the framework of partial least squares path modeling. Since the standard errors of the estimated parameters have no analytical closed form, the t- and F-tests known from regression analysis cannot be used directly to test for parameter differences. However, bootstrapping provides a solution: it can be employed to construct confidence intervals for the estimated parameter differences, which can then be used for inference about the parameter difference in the population. To guide practitioners, guidelines are developed and demonstrated by means of empirical examples.
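The bootstrap procedure for parameter differences can be sketched generically: resample, re-estimate, and take percentiles of the resampled differences. For simplicity the "parameter" below is a group mean rather than a PLS path coefficient, and the data are fabricated; only the resampling logic carries over.

```python
import random

# Percentile bootstrap CI for a parameter difference between two groups.
# Stand-in parameter: the group mean (fabricated data, fixed seed).
random.seed(42)
group_a = [2.1, 2.4, 1.9, 2.6, 2.2, 2.8, 2.0, 2.5]
group_b = [1.2, 1.6, 1.1, 1.8, 1.4, 1.3, 1.7, 1.5]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(group_a) - mean(group_b)

boot_diffs = []
for _ in range(2000):
    resample_a = [random.choice(group_a) for _ in group_a]
    resample_b = [random.choice(group_b) for _ in group_b]
    boot_diffs.append(mean(resample_a) - mean(resample_b))
boot_diffs.sort()

# 95% percentile interval for the difference.
lower = boot_diffs[int(0.025 * len(boot_diffs))]
upper = boot_diffs[int(0.975 * len(boot_diffs))]
print(lower <= observed_diff <= upper)
```

In the PLS setting, each bootstrap draw would re-estimate the full path model per group and record the difference in the path coefficient of interest, but the percentile step is identical.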
The third essay answers the question of how ordinal categorical indicators can be dealt with in partial least squares path modeling. A new consistent estimator is developed which combines the polychoric correlation and partial least squares path modeling to appropriately deal with the qualitative character of ordinal categorical indicators. The new estimator named ordinal consistent partial least squares combines consistent partial least squares with ordinal partial least squares. Besides its derivation, a Monte Carlo simulation is conducted which shows that the new estimator performs well in finite samples. Moreover, for illustration, an empirical example is estimated by ordinal consistent partial least squares.
The last essay introduces a new consistent estimator for polynomial factor models. Similarly to consistent partial least squares, weights are determined to build stand-ins for the latent variables; however, a non-iterative approach is used. A Monte Carlo simulation shows that the new estimator behaves well in finite samples.
Structural equation modeling (SEM) has been used and developed for decades across various domains and research fields such as, among others, psychology, sociology, and business research. Although no unique definition exists, SEM is best understood as the entirety of a set of related theories, mathematical models, methods, algorithms, and terminologies related to analyzing the relationships between theoretical entities -- so-called concepts --, their statistical representations -- referred to as constructs --, and observables -- usually called indicators, items or manifest variables.
This thesis is concerned with aspects of a particular strand of research within SEM -- namely, composite-based SEM. Composite-based SEM is defined as SEM involving linear compounds, i.e., linear combinations of observables, when estimating parameters of interest.
The content of the thesis is based on a working paper (Chapter 2), a published refereed journal article (Chapter 3), a working paper that is, at the time of submission of this thesis, under review for publication (Chapter 4), and a steadily growing documentation that I am writing for the R package cSEM (Chapter 5). The cSEM package -- written by myself and my former colleague at the University of Wuerzburg, Florian Schuberth -- provides functions to estimate, analyze, assess, and test nonlinear, hierarchical and multigroup structural equation models using composite-based approaches and procedures.
In Chapter 1, I briefly discuss some of the key SEM terminology.
Chapter 2 is based on a working paper to be submitted to the Journal of Business Research titled “Assessing overall model fit of composite models in structural equation modeling”. The article is concerned with the topic of overall model fit assessment of the composite model. Three main contributions to the literature are made. First, we discuss the concept of model fit in SEM in general and composite-based SEM in particular. Second, we review common fit indices and explain if and how they can be applied to assess composite models. Third, we show that, if used for overall model fit assessment, the root mean square outer residual covariance (RMS_theta) is identical to another well-known index called the standardized root mean square residual (SRMR).
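The fit statistics compared in that chapter reduce to a simple quantity: the root of the mean squared residual between the sample correlation matrix and the model-implied one. A toy computation, with made-up 3x3 correlation matrices and averaging over the p(p+1)/2 unique elements (one common SRMR convention), looks like this:

```python
import math

# Toy SRMR computation over fabricated sample (S) and model-implied (Sigma)
# correlation matrices. Convention: average over the p*(p+1)/2 unique elements.
S = [[1.0, 0.5, 0.3],
     [0.5, 1.0, 0.4],
     [0.3, 0.4, 1.0]]
Sigma = [[1.0, 0.4, 0.4],
         [0.4, 1.0, 0.4],
         [0.4, 0.4, 1.0]]

p = len(S)
squared_residuals = [
    (S[i][j] - Sigma[i][j]) ** 2
    for i in range(p) for j in range(i + 1)  # lower triangle incl. diagonal
]
srmr = math.sqrt(sum(squared_residuals) / (p * (p + 1) / 2))
print(round(srmr, 4))
```

For standardized variables the diagonal residuals vanish, which is why the choice of residuals entering the average is exactly where indices such as RMS_theta and the SRMR can coincide.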
Chapter 3 is based on a journal article published in Internet Research called “Measurement error correlation within blocks of indicators in consistent partial least squares: Issues and remedies”. The article enhances consistent partial least squares (PLSc) to yield consistent parameter estimates for population models whose indicator blocks contain a subset of correlated measurement errors. This is achieved by modifying the correction for attenuation as originally applied by PLSc to include a priori assumptions on the structure of the measurement error correlations within blocks of indicators. To assess the efficacy of the modification, a Monte Carlo simulation is conducted. The paper is joint work with Florian Schuberth and Theo Dijkstra.
Chapter 4 is based on a journal article under review for publication in Industrial Management & Data Systems called “Estimating and testing second-order constructs using PLS-PM: the case of composites of composites”. The purpose of this article is threefold: (i) evaluate and compare common approaches to estimating models containing second-order constructs modeled as composites of composites, (ii) provide and statistically assess a two-step testing procedure to test the overall model fit of such models, and (iii) formulate recommendations for practitioners based on our findings. Moreover, a Monte Carlo simulation is conducted to compare the approaches in terms of Fisher consistency, estimated bias, and RMSE. The paper is joint work with Florian Schuberth and Jörg Henseler.
In recent decades, international migration has increased worldwide. The influx of people from different cultures and ethnic groups poses new challenges to the labor market and the welfare state of the host countries and causes changes in the social fabric. In general, immigration benefits the economy of the host country. However, these gains from immigration are unevenly distributed among the native population. Natives who are in direct competition with the new workers expect wage losses and a higher probability of becoming unemployed, whereas the remaining natives expect either no effects or even wage gains. Moreover, the tax and transfer system benefits disproportionately from an influx of highly skilled immigrants. Examinations of 20 European countries in 2010 show that a higher proportion of low-skilled immigrants in the immediate neighborhood of natives increases the difference in the demand for redistribution between high-skilled and low-skilled natives. Thus, high-skilled natives are more opposed to an expansion of governmental redistribution. On the one hand, a higher proportion of low-skilled immigrants generates a higher fiscal burden on the welfare state. On the other hand, high-skilled natives' wages increase due to an influx of low-skilled immigrants, since the relative supply of high-skilled labor decreases relative to low-skilled labor.
In addition to the economic impact of immigration, the inflow of new citizens is accompanied by natives' fear of changes in the social environment as well as in symbolic values, such as cultural identity or natives' set of values. The latter may generate negative attitudes towards immigrants and increase the demand for a more restrictive immigration policy. On the other hand, more interethnic contact due to higher ethnic diversity could reduce natives' information gaps, prejudices, and stereotypes. This, in turn, could foster greater tolerance and solidarity towards immigrants among natives. Examinations of 18 European countries in 2014 show that more interethnic contact in everyday life reduces both natives' social distance from immigrants and their fear of social upheaval caused by the presence of immigrants. However, natives' social distance from immigrants has no effect on their preference for redistribution, whereas their perceived threat to the national culture and social life from the presence of immigrants has a significantly negative impact on their demand for redistribution. Thus, natives' concern about the preservation of symbolic norms and values affects the solidarity channel of their redistribution preference.
An individual's upward mobility over time, or relative to his or her parents, shapes his or her attitude towards the welfare state as well as the transmission of these opinions to his or her own children. With regard to intergenerational income mobility, Germany ranks in the international midfield: higher than the United States (lower mobility) and lower than the Scandinavian countries (higher mobility). For example, if a father's lifetime income increases by 10 percent, his son's lifetime income increases by 4.9 percent in the United States and by 3.1 percent in Germany. Additionally, in Germany, fathers' lifetime incomes tend to have a stronger impact on their sons' incomes at higher income levels. In the United States, fathers' lifetime incomes have a stronger influence on their sons' incomes at the lower and upper ends of the income distribution than in the middle.
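The figures above follow the usual arithmetic of the intergenerational income elasticity: to a first-order approximation, the son's percentage income change is the elasticity times the father's percentage change. A minimal sketch, with the elasticities implied by the text (roughly 0.49 for the United States and 0.31 for Germany):

```python
def son_income_change(father_pct_change, beta):
    """First-order approximation: %change(son) = beta * %change(father),
    where beta is the intergenerational income elasticity."""
    return beta * father_pct_change

# Elasticities implied by the abstract's example (father +10%):
for country, beta in [("United States", 0.49), ("Germany", 0.31)]:
    change = son_income_change(0.10, beta)
    print(f"{country}: father +10% -> son {change:+.1%}")
```

This reproduces the 4.9 percent (US) and 3.1 percent (Germany) responses cited in the text.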
Taking a closer look at intragenerational wage mobility and wage inequality in Germany, the recent development is rather sobering. Since 2000 there has been a steady decline in wage mobility. Furthermore, wage mobility in the services sector has been significantly lower than in the manufacturing sector since the beginning of the 2000s, a result mainly driven by the decrease in wage mobility in the health care and social services sector. Moreover, a worker's unemployment spells and occupation have become more important over time. Since 2006 the increase in German wage inequality has markedly slowed, and wage growth between 2006 and 2013 was even polarized, i.e., wages at the lower and upper ends of the wage distribution increased more than wages in the middle. This development can be partly attributed to the computerization and automation of production processes. Although manual routine tasks were substituted between 2001 and 2013, cognitive routine tasks remain more prevalent in the middle and at the upper end of the wage distribution, and the latter occupations have experienced an increase in wage mobility since 2000. Manual non-routine tasks, by contrast, are disproportionately located in the middle and at the lower end of the wage distribution. Thus, the wage gains of these occupations at the lower end were offset by wage losses in the middle.
This study describes the Chinese growth model over the past 40 years. We show that China's growth model, with its dominant role of the banking system and "the banker", is a perfect illustration of the necessity and power of Schumpeter's "monetary analysis". This approach has allowed us to elaborate theoretically and empirically the uniqueness of the Chinese model. In our empirical analysis, we use a new dataset of Chinese provincial data to analyze the impact of the financial system, especially banks, on Chinese economic development. We also empirically assess the role of the financial system in Chinese industrial policy and provide case studies of the effects of industrial policy in specific sectors. Finally, we also discuss macroeconomic dimensions of the Chinese growth process and lessons that can be drawn from the Chinese experience for other countries.
The main subject of this dissertation is the analysis of the impact of the creation of the Eurozone on its member countries. The analysis comprises three studies that approach this research agenda from different perspectives.
The first study compares the monetary policy of the ECB with that of selected central banks of the European Monetary System (EMS). More precisely, it asks whether, conditional on aggregate demand and supply shocks, there are differences between the systematic reaction function of the ECB and those of the four most important central banks of the EMS (Germany, France, Italy, and Spain).
The second study analyzes the build-up of internal and external imbalances in Spain, i.e., in the housing market and the current account, during the run-up to the financial crisis of 2007/08. The analysis differentiates between domestic, Spain-specific factors and foreign, Eurozone-wide factors that led to the macroeconomic imbalances.
The third and last study develops a price-theoretic credit supply model. To validate the model empirically, a credit market is estimated using data from the German market for enterprise credit. Finally, the results of the empirical exercise are compared with the predictions of the theoretical model.
Methodologically, all studies draw heavily on time series methods such as (multi-country) vector autoregressions (VARs) and time series regressions.
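The vector autoregressions mentioned above can be sketched in a few lines. This is a generic illustration, not any of the studies' specifications: it simulates a bivariate VAR(1) process and recovers the coefficient matrix by equation-by-equation OLS.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
# True VAR(1) dynamics: y_t = A_true @ y_{t-1} + e_t
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.4]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# OLS estimation, equation by equation: regress y_t on y_{t-1}.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.round(A_hat, 2))
```

With more variables and lags the same least-squares logic applies; in applied work one would typically also choose the lag order by an information criterion and compute impulse responses to the identified shocks.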
This empirical study investigates the determinants of retirement entry. It is based on an option-value model to analyze the importance of financial considerations for postponing retirement; in addition, the influence of the institutional framework is examined. A newly available dataset from the Verband Deutscher Rentenversicherungsträger is used. The results show that unemployment and illness explain a large part of early retirement; in addition, the option value has considerable explanatory power.
Decentralized, competitively organized federal systems, in which key competencies lie at lower institutional levels and in which jurisdictions are comparatively small, offer considerable advantages: citizens' preferences can be served better, and higher economic growth is stimulated. The disadvantages cited in the theoretical literature (unexploited economies of scale, negative consequences of externalities, a race to the bottom in public services and the welfare state), by contrast, find little empirical support. Against this background, the cooperative federalism of the Federal Republic of Germany must be assessed critically. In particular, the fiscal equalization scheme among the federal states (Länderfinanzausgleich), a core element of Germany's federal order, is inefficient and dampens economic growth. To reap the benefits of decentralized, competitive federal systems, the federal states should above all be granted substantial fiscal autonomy. The heterogeneity of political preferences, depending on the level of government, the size of jurisdictions, and simulated territorial reorganizations of the states, was examined using federal election results. The corresponding analysis is attached here as an appendix, while the dissertation itself has appeared in print.
This thesis develops a model that maps the international interconnectedness of banks on the basis of cross-border claims and liabilities. The analysis reveals that systemic risks generally emanate from a small number of institutions. It also shows that such risks arise primarily in banks from economies in which the financial industry holds a prominent position. At the same time, institutions from these economies are disproportionately vulnerable to systemic shocks and are thus exposed to elevated contagion risks. Systemic risks do not emanate only from large banks; the failure of medium-sized or even small institutions can also have serious consequences for the system as a whole. Moreover, higher systemic risks emanate from banks that are highly interconnected within the banking system: the more significant business relationships a bank maintains with other banks, the greater the potential damage to the overall system. Systemic risks generally cannot be contained within a national banking system, since a large share of follow-on defaults occurs across borders. The analysis also reveals that systemic risks have generally declined since 2006.
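The contagion mechanism behind such network models can be illustrated with a minimal default cascade on an interbank exposure matrix. This is a stylized sketch, not the thesis's model: the exposure matrix, capital buffers, and loss-given-default are invented toy numbers.

```python
import numpy as np

# Hypothetical interbank exposures: E[i, j] = claim of bank i on bank j.
E = np.array([[0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0],
              [2.0, 1.0, 0.0]])
capital = np.array([2.0, 2.0, 3.0])   # loss-absorbing capital per bank
loss_given_default = 0.6              # share of a claim lost on default

def cascade(initial_default):
    """Propagate defaults: creditors write down LGD * exposure to each
    failed bank; a bank fails once its losses reach its capital."""
    defaulted = {initial_default}
    while True:
        losses = loss_given_default * E[:, sorted(defaulted)].sum(axis=1)
        newly = {i for i in range(len(capital))
                 if losses[i] >= capital[i]} - defaulted
        if not newly:
            return sorted(defaulted)
        defaulted |= newly

print(cascade(2))   # failure of bank 2 drags down its creditors
print(cascade(0))   # failure of bank 0 stays contained
```

In this toy network, the failure of the heavily borrowed-from bank 2 cascades through the whole system, while the failure of bank 0 remains isolated, mirroring the thesis's point that systemic impact depends on a bank's position in the network rather than on its size alone.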
The thesis first presents regulatory instruments for reducing systemic risks that apply to all banks. Capital increases would substantially strengthen banks' resilience and loss-absorbing capacity, and suitable large-exposure rules can likewise reduce risks to the overall system. To stabilize the system decisively, however, these instruments would have to deviate considerably from the current regulations. The analyses show that a capital endowment of 12% of the risk-unweighted balance sheet (leverage ratio), or large-exposure rules capping exposures to individual counterparties at 18% of liable capital, could contribute substantially to adequate financial market stability.
The thesis further addresses possible regulatory approaches for reducing systemic risks specifically for systemically important banks. One regulatory alternative could be a combination of higher capital requirements and tightened large-exposure rules. A leverage ratio of at least 9% for non-systemically important institutions and a higher ratio of 11% for systemically important banks, combined with a maximum exposure of 23% between any two counterparties and of at most 18% to systemically important banks, would decisively reduce systemic risk in the banking system.
This thesis examines the effects of flexible pay components (performance pay, profit sharing, employee equity participation) at the firm and the macroeconomic level. The starting point is an analysis of prevailing unemployment with regard to its causes and the reasons for its entrenchment and persistence. Existing unemployment can be explained by several theories and decomposed into several components. A considerable part of it can be traced back to inflexible, excessive real wages, with various influences preventing the wage level from falling to a level consistent with full employment. Structural causes, in the sense of labor market distortions and insufficient capacity to adjust to changed conditions, are a further explanation for high and persistent unemployment. Pay that is flexible in its level and in its sectoral, regional, and occupational structure can contribute substantially to reducing this unemployment. Building on this macroeconomic approach, the following chapter presents the main business-economics aspects from the perspective of employers and employees. On this basis, three forms of pay flexibility are examined with regard to their macroeconomic and firm-level effects. Performance pay is tied either to an employee's quantitatively measurable output or to a qualitative appraisal of his or her work; it thus contributes directly to productivity-oriented remuneration, with positive effects for firms and the economy as a whole. Since Martin Weitzman's controversially discussed "share economy", profit sharing has been seen as a way to reduce unemployment: linking pay to firm performance benefits employees, firms, and the unemployed.
Employee equity participation has no direct effect on unemployment, but it indirectly raises motivation and identification within firms, and, depending on the model, the capital provided can secure jobs. Alongside these three main forms, investment wages and stock options are also considered as routes to flexibility. Investment wages (intended to overcome the equity shortage of many firms) have effects analogous to profit sharing and equity participation, whereas stock options typically affect only small groups of employees and executives. A final chapter outlines the design of an optimal pay system, consisting of a base wage, a performance component, and a profit-sharing component, optionally supplemented by equity participation. It is emphasized once more that more flexible pay alone does not solve unemployment: structural reforms addressing the power structures in the labor market, the level and design of wage replacement benefits, and the strengthening of economic growth must go hand in hand with pay flexibility.
The basic idea of this treatise is that competition policy should not concentrate on the contest itself. The traditional approach analyzes restraints of competition on individual markets and, where necessary, calls for competition policy intervention, usually demanding the existence of a "spirit of competition" and thus an active contest. This view, however, is symptomatically focused on the individual market. Instead, the underlying framework conditions should be analyzed; competition policy would then concentrate on creating opportunities for competition. Implementing such a competition policy is likely to be difficult and requires, in particular, a political economy foundation. A concrete proposal is therefore elaborated here, comprising elements of direct democracy, the separation of powers, and strengthened political opinion formation. Conventional competition policy accordingly suffers from three fundamental shortcomings: first, a lack of goal orientation and numerous goal conflicts; second, a symptomatic focus on restraints of competition in individual markets that neglects the relevant framework conditions; and third, frequent neglect of the choice of appropriate competition policy institutions. Based on this, the aims of this work are to justify the need for a reorientation, to develop the outlines of an alternative approach to competition policy, and to distinguish it from common competition policy conceptions. The analysis draws on a reference scheme derived across five levels of competition policy, and seven case studies are integrated into the treatise.