This paper examines the potential reinforcement of motivated beliefs when individuals with identical biases communicate. We propose a controlled online experiment that allows us to manipulate belief biases and the communication environment. We find that communication, even among like-minded individuals, diminishes motivated beliefs if it takes place in an environment without previously declared external opinions. In the presence of external plural opinions, however, communication does not reduce but rather aggravates motivated beliefs. Our results indicate a potential drawback of the plurality of opinions: it may create communication environments in which motivated beliefs not only persist but also become contagious within social networks.
The Macroeconomic Dimensions of Credit: A Comprehensive Analysis of Finance, Inequality and Growth
(2024)
Schumpeter's monetary growth theory is particularly influential for the modern understanding of the macroeconomic role of banks and credit. Based on this theory, this dissertation examines the macroeconomic role of the financial system, especially credit, in (1) generating economic growth, (2) directing economic resources and (3) distributing wealth.
Chapter 3 first shows empirically that (1) there is a positive correlation between credit growth and economic growth, even for developed countries, (2) no empirical correlation between household saving and economic growth can be established, and (3) there are positive, negative, and insignificant effects of credit on economic growth at the country-specific level. Thus, there is broad empirical support for Schumpeter's monetary hypotheses.
A particularly interesting application of Schumpeter's growth theory can be seen in China. The results of the empirical study suggest that there is generally a positive correlation between credit and economic growth in China, although this correlation is not linear across regions, time, and the size of the financial system. Furthermore, the results in Chapter 4 suggest that credit-financed industrial policy in China may have contributed to more investment and GDP growth, although there are non-linearities across individual industries and types of companies.
Finally, Chapter 5 raises the question of the role of the financial system in the distribution of wealth. Credit to households and companies, together with indicators of working and saving behavior and the age structure of the population, is among the most important determinants of wealth inequality. At the same time, there are various non-linearities in the relationship between credit and wealth inequality, including with respect to the level of development of financial systems and home ownership ratios.
Expanding on a general equilibrium model of offshoring, we analyze the effects of a unilateral emissions tax increase on the environment, income, and inequality. Heterogeneous firms allocate labor across production tasks and emissions abatement, while only the most productive firms can offshore and thereby benefit from lower labor and/or emissions costs abroad. We find a non-monotonic effect on global emissions, which decline if the initial difference in emissions taxes is small. For a sufficiently large difference, global emissions rise, implying emissions leakage of more than 100%. The underlying driver is a global technique effect: while the emissions intensity of incumbent non-offshoring firms declines, the cleanest firms start offshoring. Moreover, offshoring firms become dirtier, induced by a reduction in the foreign effective emissions tax in general equilibrium. Implementing a border carbon adjustment (BCA) prevents emissions leakage and reduces income inequality in the reforming country, but raises inequality across countries.
Many countries have recently published updated hydrogen strategies, often including more ambitious targets for hydrogen production. In parallel, accompanying ramp-up mechanisms are increasingly coming into focus, with the first ones already released. However, these proposals usually translate mechanisms from renewable energy (RE) policy without considering the specific uncertainties, spillovers, and externalities of integrating hydrogen electrolysis into electricity grids. This article details how different aspects of a policy can address these specific issues, namely funding, risk mitigation, and the complex relation with electricity markets. It shows that, compared to RE policy, subsidies need to emphasize the input side more strongly, as price risks and intermittency from electricity markets are more prominent than those from hydrogen markets. It also proposes a targeted mechanism to capture the positive externality of mitigating excess electricity in the grid while keeping investment security high. Economic policy should consider such approaches before massively scaling support, in order to avoid the design shortcomings experienced with early RE policy.
We study nominal exchange rate dynamics in the aftermath of U.S. monetary policy announcements. Using high-frequency interest rate and stock price movements around FOMC announcements, we distinguish between pure monetary policy shocks and information shocks, which are associated with new information contained in the announcements. Contractionary pure policy shocks give rise to a strong, but transitory, appreciation on impact. Information shocks also appreciate the exchange rate, but the effect builds up only slowly over time and is highly persistent. Thus, we conclude that although the short-run effects on the exchange rate are primarily due to pure policy shocks, the medium-run response is driven by information effects.
We propose that false beliefs about one's own current economic status are an important factor in explaining populist attitudes. Eliciting subjects' receptiveness to right-wing populism and their perceived relative income positions in a representative survey of German households, we find that people with pessimistic beliefs about their income position are more attuned to populist statements. Key to understanding the misperception-populism relationship are strong gender differences in the mechanism: men are much more likely to channel their discontent into affection for populist ideas. A simple information provision neither sustainably reduces misperception nor curbs populism.
The necessary adjustments to prominent measures of the neutral rate of interest following the COVID pandemic sparked a wide-ranging debate on the measurement and usefulness of r-star. Given high uncertainty about the relevant determinants, trend patterns, and the correct estimation method, in this paper we propose a simple alternative approach derived from a standard macro model. Starting from a loss function, neutral periods can be determined in which a neutral real interest rate is observable. Using these values, a medium-term trend for the neutral interest rate can be derived. An application to the USA shows that our simple calculation of a neutral interest rate delivers results comparable to those of existing studies. A Taylor rule based on our neutral interest rate also does a fairly good job of explaining US monetary policy over the past 60 years.
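The Taylor-rule exercise can be illustrated with a minimal sketch. This is not the paper's specification: the 0.5/0.5 reaction coefficients below are the classic Taylor (1993) values, and the function name and inputs are hypothetical.

```python
# Minimal sketch of a standard Taylor rule built around an estimated
# neutral real rate r_star. Coefficients a_pi and a_y are the classic
# Taylor (1993) weights, not estimates from the paper.

def taylor_rate(r_star, inflation, inflation_target, output_gap,
                a_pi=0.5, a_y=0.5):
    """Nominal policy rate implied by the rule
    i_t = r* + pi_t + a_pi * (pi_t - pi*) + a_y * ygap_t."""
    return (r_star + inflation
            + a_pi * (inflation - inflation_target)
            + a_y * output_gap)

# Example: r* = 1%, inflation 3% vs. a 2% target, output gap +1%
rate = taylor_rate(r_star=1.0, inflation=3.0,
                   inflation_target=2.0, output_gap=1.0)
# 1.0 + 3.0 + 0.5*1.0 + 0.5*1.0 = 5.0 percent
```

In the paper's approach, r_star would be replaced period by period with the medium-term trend of the neutral rate derived from observed neutral periods.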
This study describes the Chinese growth model over the past 40 years. We show that China's growth model, with its dominant role of the banking system and "the banker", is a perfect illustration of the necessity and power of Schumpeter's "monetary analysis". This approach has allowed us to elaborate theoretically and empirically the uniqueness of the Chinese model. In our empirical analysis, we use a new dataset of Chinese provincial data to analyze the impact of the financial system, especially banks, on Chinese economic development. We also empirically assess the role of the financial system in Chinese industrial policy and provide case studies of the effects of industrial policy in specific sectors. Finally, we also discuss macroeconomic dimensions of the Chinese growth process and lessons that can be drawn from the Chinese experience for other countries.
International trade is highly imbalanced, both in terms of values and in terms of embodied carbon emissions. We show that the persistent patterns of value trade imbalances contribute to a higher level of global emissions compared to a world of balanced international trade. Specifically, we build a Ricardian quantitative trade model including sectoral input-output linkages, trade imbalances, fossil fuel extraction, and carbon emissions from fossil fuel combustion, and use this framework to simulate counterfactual changes to countries' trade balances. For individual countries, the emission effects of removing their trade imbalances depend on the carbon intensities of their production and consumption patterns, as well as on their fossil resource abundance. Eliminating the Russian trade surplus and the US trade deficit would lead to the largest environmental benefits in terms of lower global emissions. Globally, the simultaneous removal of all trade imbalances would lower world carbon emissions by 0.9 percent, or 295 million tons of carbon dioxide.
Salience bias and overwork
(2022)
In this study, we enrich a standard principal–agent model with hidden action by introducing salience-biased perception on the agent's side. The agent's misguided focus on salient payoffs, which leads the agent's and the principal's probability assessments to diverge, has two effects: First, the agent focuses too much on obtaining a bonus, which facilitates incentive provision. Second, the principal may exploit the diverging probability assessments to relax participation. We show that salience bias can reverse the nature of the inefficiency arising from moral hazard; i.e., the principal does not necessarily provide insufficient incentives that result in inefficiently low effort but instead may well provide excessive incentives that result in inefficiently high effort.
This thesis is about composite-based structural equation modeling. Structural equation modeling in general can be used to model both theoretical concepts and their relations to one another. In traditional factor-based structural equation modeling, these theoretical concepts are modeled as common factors, i.e., as latent variables which explain the covariance structure of their observed variables. In contrast, in composite-based structural equation modeling, the theoretical concepts can be modeled both as common factors and as composites, i.e., as linear combinations of observed variables that convey all the information between their observed variables and all other variables in the model. This thesis presents some methodological advancements in the field of composite-based structural equation modeling. In all, this thesis is made up of seven chapters. Chapter 1 provides an overview of the underlying model and explicates the meaning of the term composite-based structural equation modeling. Chapter 2 gives guidelines on how to perform Monte Carlo simulations in the statistical software R using the package “cSEM” with various estimators in the context of composite-based structural equation modeling. These guidelines are illustrated by an example simulation study that investigates the finite sample behavior of partial least squares path modeling (PLS-PM) and consistent partial least squares (PLSc) estimates, particularly regarding the consequences of sample correlations between measurement errors for statistical inference. Chapter 3 presents estimators of composite-based structural equation modeling that are robust against outlier distortion. For this purpose, the estimators PLS-PM and PLSc are adapted. Unlike the original estimators, these adjusted estimators can avoid distortion arising from random outliers in samples, as is demonstrated through a simulation study.
Chapter 4 presents an approach to performing predictions based on models estimated with ordinal partial least squares and ordinal consistent partial least squares. Here, the observed variables lie on an ordinal categorical scale which is explicitly taken into account in both estimation and prediction. The prediction performance is evaluated by means of a simulation study. In addition, the chapter gives guidelines on how to perform such predictions using the R package “cSEM”. This is demonstrated by means of an empirical example. Chapter 5 introduces confirmatory composite analysis (CCA) for research in “Human Development”. Using CCA, composite models can be estimated and assessed. This chapter uses the Henseler-Ogasawara specification for composite models, allowing, for example, the maximum likelihood method to be used for parameter estimation. Since the maximum likelihood estimator based on the Henseler-Ogasawara specification has limitations, Chapter 6 presents another specification of the composite model by means of which composite models can be estimated with the maximum likelihood method. The results of this maximum likelihood estimator are compared with those of PLS-PM, thus showing that this maximum likelihood estimator gives valid results even in finite samples. The last chapter, Chapter 7, gives an overview of the development and different strands of composite-based structural equation modeling. Additionally, here I examine the contribution the previous chapters make to the wider distribution of composite-based structural equation modeling.
Within three self-contained studies, this dissertation empirically and quantitatively examines the impact of and interactions between different macroeconomic policy measures in the context of financial markets. The first study sheds light on the financial market effects of unconventional central bank asset purchase programs in the Eurozone, in particular sovereign bond purchase programs. The second study quantifies the direct implications of unconventional monetary policy for decisions by German public debt management regarding the maturity structure of gross issuance. The third study provides novel evidence on the role of private credit markets in the propagation of public spending toward private consumption in the U.S. economy. Across these three studies, a set of different time-series econometric methods is applied, including error correction models and event study frameworks to analyze contemporaneous interactions in financial and macroeconomic data in the context of unconventional monetary policy, as well as vector autoregressions (VARs) and local projections to trace the dynamic consequences of macroeconomic policies over time.
Over the last few decades, hours worked per capita have declined substantially in many OECD economies. Using the standard neoclassical growth model with endogenous work–leisure choice, we assess the role of trend growth slowdown in accounting for the decline in hours worked. In the model, a permanent reduction in technological growth decreases steady‐state hours worked by increasing the consumption–output ratio. Our empirical analysis exploits cross‐country variation in the timing and size of the decline in technological growth to show that technological growth has a highly significant positive effect on hours. A decline in the long‐run trend of technological growth by 1 percentage point is associated with a decline in trend hours worked in the range of 1–3%. This result is robust to controlling for taxes, which have been found in previous studies to be an important determinant of hours. Our empirical finding is quantitatively in line with the one implied by a calibrated version of the model, though evidence for the model’s implication that the effect on hours works via changes in the consumption–output ratio is rather mixed.
This paper examines situations where two vertically integrated firms consider supplying an input to an independent downstream competitor via privately observed contracts. We identify equilibria where competition in the upstream market emerges—the downstream competitor gets supplied—as well as when the downstream firm does not receive the input and is excluded from the market. The likelihood of the outcome in which the downstream firm does not get supplied depends not only on demand parameters, but also on contractual flexibility and observability. We show that when contracts are unobservable, downstream entry will occur less often. Furthermore, our results suggest that permitting contracts that enable the contracting parties to coordinate their behavior in the downstream market may improve welfare by increasing the likelihood that the downstream firm is supplied.
This thesis contributes to the understanding of the labor market effects of international trade, with a focus on the effects on wage and earnings inequality. The thesis draws on high-quality micro data and applies modern econometric techniques and theoretical concepts to improve our understanding of the distributional effects of international trade. The thesis focuses on the effects in Germany and the USA.
The contribution of this dissertation is to empirically analyze the link between income distribution, sectoral financial balances, and the current account. Firstly, it examines the relationship between the personal and the functional income distribution which may have rather different implications for aggregate demand and the current account. Secondly, it analyzes the importance of different sectors of the economy for current account balances and tests whether households are able to fully pierce the institutional veils of the corporate and the government sector. Thirdly, it investigates how changes in the personal and the functional income distribution affect the saving and investment decisions of the household and the corporate sector, and hence the current account. Finally, it shows how different growth regimes are linked to different patterns of personal and functional income distribution, and how differences in wage bargaining institutions contribute to explaining these different patterns of income distribution.
Structural equation modeling (SEM) has been used and developed for decades across various domains and research fields such as, among others, psychology, sociology, and business research. Although no unique definition exists, SEM is best understood as the entirety of a set of related theories, mathematical models, methods, algorithms, and terminologies related to analyzing the relationships between theoretical entities -- so-called concepts --, their statistical representations -- referred to as constructs --, and observables -- usually called indicators, items or manifest variables.
This thesis is concerned with aspects of a particular strand of research within SEM -- namely, composite-based SEM. Composite-based SEM is defined as SEM that involves linear compounds, i.e., linear combinations of observables, when estimating the parameters of interest.
The content of the thesis is based on a working paper (Chapter 2), a published refereed journal article (Chapter 3), a working paper that is, at the time of submission of this thesis, under review for publication (Chapter 4), and a steadily growing documentation that I am writing for the R package cSEM (Chapter 5). The cSEM package -- written by myself and my former colleague at the University of Wuerzburg, Florian Schuberth -- provides functions to estimate, analyze, assess, and test nonlinear, hierarchical and multigroup structural equation models using composite-based approaches and procedures.
In Chapter 1, I briefly discuss some of the key SEM terminology.
Chapter 2 is based on a working paper to be submitted to the Journal of Business Research titled “Assessing overall model fit of composite models in structural equation modeling”. The article is concerned with the topic of overall model fit assessment of the composite model. Three main contributions to the literature are made. First, we discuss the concept of model fit in SEM in general and composite-based SEM in particular. Second, we review common fit indices and explain if and how they can be applied to assess composite models. Third, we show that, if used for overall model fit assessment, the root mean square outer residual covariance (RMS_theta) is identical to another well-known index called the standardized root mean square residual (SRMR).
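The SRMR mentioned in the third contribution is an aggregate of residual correlations between the sample and the model-implied matrix. As a rough illustration of how such an index is typically computed (a minimal sketch, not the cSEM implementation; the function name and the example matrices are made up):

```python
# Illustrative SRMR computation for correlation matrices: the root mean
# square of residuals over the lower triangle (incl. diagonal) of the
# difference between sample and model-implied correlation matrices.
import numpy as np

def srmr(sample_corr, implied_corr):
    """Standardized root mean square residual (sketch)."""
    S = np.asarray(sample_corr, dtype=float)
    Sigma = np.asarray(implied_corr, dtype=float)
    idx = np.tril_indices_from(S)       # lower triangle incl. diagonal
    resid = S[idx] - Sigma[idx]
    return float(np.sqrt(np.mean(resid ** 2)))

S = np.array([[1.0, 0.5],
              [0.5, 1.0]])              # hypothetical sample correlations
Sigma = np.array([[1.0, 0.4],
                  [0.4, 1.0]])          # hypothetical implied correlations
value = srmr(S, Sigma)                  # sqrt(0.01 / 3), about 0.0577
```

A perfect-fitting model (Sigma equal to S) yields an SRMR of exactly zero; larger residual correlations inflate the index.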
Chapter 3 is based on a journal article published in Internet Research called “Measurement error correlation within blocks of indicators in consistent partial least squares: Issues and remedies”. The article enhances consistent partial least squares (PLSc) to yield consistent parameter estimates for population models whose indicator blocks contain a subset of correlated measurement errors. This is achieved by modifying the correction for attenuation as originally applied by PLSc to include a priori assumptions on the structure of the measurement error correlations within blocks of indicators. To assess the efficacy of the modification, a Monte Carlo simulation is conducted. The paper is joint work with Florian Schuberth and Theo Dijkstra.
Chapter 4 is based on a journal article under review for publication in Industrial Management & Data Systems called “Estimating and testing second-order constructs using PLS-PM: the case of composites of composites”. The purpose of this article is threefold: (i) evaluate and compare common approaches to estimating models containing second-order constructs modeled as composites of composites, (ii) provide and statistically assess a two-step testing procedure to test the overall model fit of such models, and (iii) formulate recommendations for practitioners based on our findings. Moreover, a Monte Carlo simulation is conducted to compare the approaches in terms of Fisher consistency, estimated bias, and RMSE. The paper is joint work with Florian Schuberth and Jörg Henseler.
Economists (should) care about regions! On the one hand, this is true because macroeconomic shocks have vastly different effects across regions. The pressing topics of robotization and artificial intelligence, Brexit, or U.S. tariffs will affect Würzburg differently than Berlin, implying varying interests among its population, firms, and politicians. On the other hand, shocks in individual regions, such as inventions, bankruptcies, or the attraction of a major plant, can, through trade and input-output linkages, magnify to aggregate effects of macroeconomic importance. Yet, regional heterogeneities in Germany and the complicated network of linkages that connect regions are still neither well documented nor well understood. This is especially true for local labor markets, which are of core interest to regional policy makers and which also feature substantial heterogeneity.
This thesis provides a thorough quantification of such heterogeneities and an in-depth analysis of the sources and mechanisms that drive these differences.
The present thesis analyzes whether and, if so, under which conditions mergers result in merger-specific efficiency gains. The analysis concentrates on manufacturing firms in Europe that participated in horizontal mergers as either buyer or target in the years 2005 to 2014.
The result of the present study is that mergers are idiosyncratic processes. Thus, the possibilities to define general conditions that predict merger-specific efficiency gains are limited.
However, the results of the present study indicate that efficiency gains are possible as a direct consequence of a merger. Efficiency changes can be measured by a Total Factor Productivity (TFP) approach. Significant merger-specific efficiency gains are more likely for targets than for buyers. Moreover, mergers of firms that mainly operate in the same segment are likely to generate efficiency losses. Efficiency gains most likely result from reductions in material and labor costs, especially from a short- and medium-term perspective. The analysis of conditions that predict efficiency gains indicates that firms that announce the merger themselves are capable of generating efficiency gains in the short and medium term. Furthermore, mid-sized buyers are more likely to generate efficiency gains than small or large buyers. Results also indicate that capital-intensive firms are likely to generate efficiency gains after a merger.
The present study is structured as follows.
Chapter 1 motivates the analysis of merger-specific efficiency gains. Defining conditions that reliably predict when and to what extent mergers will result in merger-specific efficiency gains would improve the merger approval or denial process.
Chapter 2 gives a literature review of relevant empirical studies that analyzed merger-specific efficiency gains. None of these studies has analyzed horizontal mergers of European firms in the manufacturing sector in the years 2005 to 2014. Thus, the present study contributes to the existing literature by analyzing efficiency gains from such mergers.
Chapter 3 focuses on the identification of mergers. The merger term is defined according to the EC Merger Regulation and the Horizontal Merger Guidelines. The definition and the requirements of mergers according to legislation provide the framework for merger identification.
Chapter 4 concentrates on the efficiency measurement methodology. Most empirical studies apply a Total Factor Productivity (TFP) approach to estimate efficiency. The TFP approach uses linear regression in combination with a control function approach. The coefficients are estimated using a Generalized Method of Moments (GMM) approach.
The resulting efficiency estimates are used in the analysis of merger-specific efficiency gains in Chapter 5. This analysis is done separately for buyers and targets by applying a Difference-in-Differences (DID) approach.
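The DID logic compares the change in efficiency of merging firms with the change among non-merging control firms over the same period. A schematic sketch (the numbers are invented for illustration, and the helper function is hypothetical, not the thesis's estimator):

```python
# Schematic two-period difference-in-differences comparison of mean
# (log) TFP between merging (treated) and non-merging (control) firms.
# All values below are fabricated for illustration.

def did(treated_pre, treated_post, control_pre, control_post):
    """DID effect = (treated change) - (control change)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

effect = did(treated_pre=1.00, treated_post=1.08,   # merging firms
             control_pre=0.95, control_post=0.98)   # control firms
# (0.08) - (0.03): roughly a 5 log-point merger-specific efficiency gain
```

The control group nets out common trends, so only the differential change is attributed to the merger; in practice this would be estimated in a regression with firm and period fixed effects rather than raw group means.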
Chapter 6 concentrates on an alternative approach to estimating efficiency, namely a Stochastic Frontier Analysis (SFA) approach. Like the TFP approach, the SFA approach is a stochastic efficiency estimation methodology. In contrast to TFP, SFA estimates the production function as a frontier function instead of an average function. The frontier function allows efficiency to be estimated in percent.
Chapter 7 analyses the impact of different merger- and firm-specific characteristics on efficiency changes of buyers and targets. The analysis is based on a multiple regression, which is applied for short-, mid- and long-term efficiency changes of buyers and targets.
Chapter 8 concludes.
This dissertation deals with composite-based methods for structural equation models with latent variables and their enhancement. It comprises five chapters. Besides a brief introduction in the first chapter, the remaining chapters, consisting of four essays, cover the results of my PhD studies. Two of the essays have already been published in an international journal.
The first essay considers an alternative way of construct modeling in structural equation modeling. While in the social and behavioral sciences theoretical constructs are typically modeled as common factors, in other sciences the common factor model is an inadequate way of modeling constructs due to its assumptions. This essay introduces confirmatory composite analysis (CCA) as an analogue to confirmatory factor analysis (CFA). In contrast to CFA, CCA models theoretical constructs as composites instead of common factors. Besides the theoretical presentation of CCA and its assumptions, a Monte Carlo simulation is conducted which demonstrates that misspecifications of the composite model can be detected by the introduced test for overall model fit.
The second essay raises the question of how parameter differences can be assessed in the framework of partial least squares path modeling. Since the standard errors of the estimated parameters have no closed analytical form, the t- and F-tests known from regression analysis cannot be directly used to test for parameter differences. However, bootstrapping provides a solution to this problem. It can be employed to construct confidence intervals for the estimated parameter differences, which can be used for making inferences about the parameter differences in the population. To guide practitioners, guidelines were developed and demonstrated by means of empirical examples.
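The bootstrap idea behind those guidelines can be sketched generically. The example below builds a percentile bootstrap confidence interval for a difference between two estimated parameters, using simple sample means in place of PLS path coefficients; the function name, data, and settings are made up for illustration, and this is not the essay's specific procedure.

```python
# Percentile-bootstrap confidence interval for a parameter difference.
# Here the "parameters" are two sample means; in PLS path modeling the
# same resampling logic is applied to estimated path coefficients.
import random

def bootstrap_ci_diff(x, y, n_boot=2000, alpha=0.05, seed=42):
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        xs = [rng.choice(x) for _ in x]   # resample with replacement
        ys = [rng.choice(y) for _ in y]
        diffs.append(sum(xs) / len(xs) - sum(ys) / len(ys))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

x = [2.1, 2.5, 2.3, 2.8, 2.6, 2.4]       # fabricated estimates, group 1
y = [1.9, 2.0, 1.8, 2.2, 2.1, 1.7]       # fabricated estimates, group 2
lo, hi = bootstrap_ci_diff(x, y)
# If the interval excludes zero, the parameter difference is judged
# significant at the chosen level.
```

This sidesteps the missing analytical standard errors: the sampling distribution of the difference is approximated empirically rather than derived in closed form.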
The third essay answers the question of how ordinal categorical indicators can be dealt with in partial least squares path modeling. A new consistent estimator is developed which combines the polychoric correlation and partial least squares path modeling to appropriately deal with the qualitative character of ordinal categorical indicators. The new estimator named ordinal consistent partial least squares combines consistent partial least squares with ordinal partial least squares. Besides its derivation, a Monte Carlo simulation is conducted which shows that the new estimator performs well in finite samples. Moreover, for illustration, an empirical example is estimated by ordinal consistent partial least squares.
The last essay introduces a new consistent estimator for polynomial factor models. Similarly to consistent partial least squares, weights are determined to build stand-ins for the latent variables; however, a non-iterative approach is used. A Monte Carlo simulation shows that the new estimator behaves well in finite samples.
This dissertation consists of three contributions. Each addresses one specific aspect of intergenerational income mobility and is intended to be a stand-alone analysis. All chapters use comparable data for Germany and the United States to conduct country comparisons. As there are usually a large number of studies available for the United States, this approach is useful for comparing the empirical results to the existing literature.
The first part conducts a direct country comparison of the structure and extent of intergenerational income mobility in Germany and the United States. In line with existing results, the estimated intergenerational income elasticity of 0.49 in the United States is significantly higher than that of 0.31 in Germany. While the results for intergenerational rank mobility are relatively similar, the level of intergenerational income share mobility is higher in the United States than in Germany. There are no significant indications of a nonlinearity in the intergenerational income elasticity. A final decomposition of intergenerational income inequality shows both greater income mobility and stronger progressive income growth for Germany compared to the United States. Overall, no clear ranking of the two countries can be identified. To conclude, several economic policy recommendations to increase intergenerational income mobility in Germany are discussed.
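The headline numbers are elasticities of children's income with respect to parental income, conventionally estimated as the OLS slope in a regression of children's log income on parents' log income. A minimal sketch (the function name and the income data below are fabricated for illustration):

```python
# Intergenerational income elasticity as the OLS slope of children's
# log income on parents' log income. Data are fabricated; real studies
# use lifetime or multi-year average incomes from panel data.
import math

def income_elasticity(parent_income, child_income):
    lp = [math.log(p) for p in parent_income]
    lc = [math.log(c) for c in child_income]
    mp = sum(lp) / len(lp)
    mc = sum(lc) / len(lc)
    cov = sum((a - mp) * (b - mc) for a, b in zip(lp, lc))
    var = sum((a - mp) ** 2 for a in lp)
    return cov / var                     # OLS slope = elasticity

parents = [30000, 45000, 60000, 80000, 120000]
children = [35000, 48000, 55000, 75000, 100000]
beta = income_elasticity(parents, children)
# beta near 1 indicates low mobility (incomes persist across
# generations); beta near 0 indicates high mobility.
```

On this reading, the US estimate of 0.49 versus 0.31 for Germany implies stronger income persistence, i.e. less mobility, in the United States.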
The second part examines the transmission channels of intergenerational income persistence in Germany and the United States. In principle, there are two ways in which well-off families may influence the adult incomes of their children: first through direct investments in their children's human capital (investment effect), and second through the indirect transmission of human capital from parents to children (endowment effect). In order to disentangle these two effects, a descriptive as well as a structural decomposition method are utilized. The results suggest that the investment effect and the endowment effect each account for approximately half of the estimated intergenerational income elasticity in Germany, while the investment effect is substantially more influential in the United States with a share of around 70 percent. With regard to economic policy, these results imply that equality of opportunity for children born to poor parents cannot be reached by the supply of financial means alone. Conversely, an efficient policy must additionally substitute for the missing direct transmission of human capital within socio-economically weak families.
The third part explicitly focuses on intergenerational income mobility among daughters. The restriction to men is commonly made in the empirical literature due to women's lower labor market participation. While most men work full-time, the majority of (married) women still work only part-time or not at all. Given assortative mating in particular, daughters from well-off families are likely to marry rich men and may decide to reduce their labor supply as a result. Thus, the individual labor income of a daughter might not be a good indicator of her actual economic status. The baseline regression analysis shows a higher intergenerational income elasticity in Germany and a lower intergenerational income elasticity in the United States for women as compared to men. However, a separation by marital status reveals that in both countries unmarried women exhibit a higher intergenerational income elasticity than unmarried men, while married women feature a lower intergenerational income elasticity than married men. The reason for the lower mobility of unmarried women turns out to be a stronger human capital transmission from fathers to daughters than to sons. The higher mobility of married women is driven by a weaker human capital transmission and a higher labor supply elasticity with respect to spousal income for women as compared to men. To further study the effects of assortative mating, the subsample of married children is analyzed by different types of income. The analysis shows that the estimated intergenerational income elasticity of children's household incomes is even higher than that of their individual incomes. This can be seen as an indication of strong assortative mating. If household income is interpreted as a measure of children's actual economic welfare, there are barely any differences between sons and daughters.
The intergenerational income elasticity of spousal income with respect to parental income is again relatively high, which in turn supports the hypothesis of strong assortative mating. The elasticity of the sons-in-law with respect to their fathers-in-law in Germany is even higher than that of the sons with respect to their own fathers.
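The intergenerational income elasticity discussed above is conventionally obtained as the slope of a regression of log child income on log parent income. A minimal sketch on simulated data (the value 0.49 merely mirrors the US estimate cited in the text; all data below are artificial):

```python
import numpy as np

# Hypothetical illustration: the intergenerational income elasticity (IGE)
# is the slope b in the log-log regression
#   log(child_income) = a + b * log(parent_income) + e.
rng = np.random.default_rng(0)
n = 10_000
log_parent = rng.normal(10.0, 0.5, n)                 # log lifetime income of fathers
log_child = 2.0 + 0.49 * log_parent + rng.normal(0.0, 0.4, n)

b, a = np.polyfit(log_parent, log_child, 1)           # OLS slope and intercept
print(f"estimated elasticity: {b:.2f}")               # close to the true 0.49
```

A slope of 0.49 means a 10 percent increase in the father's lifetime income is associated with a 4.9 percent increase in the son's, matching the arithmetic used in the text.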
As a consequence of the financial crisis of 2008/09, some economists have expressed doubts about the adequacy of theoretical models, especially those that claim to model financial markets and banks. Because of these doubts, some economists are following a new paradigm based on a monetary theory rather than a commodity theory. The main difference between these two views is that in the commodity theory money does not play an essential role, whereas in a money economy every transaction is settled with money. It is therefore essential to clarify whether a theory that includes money reaches different conclusions from a theory that leaves money out.
Based on this problem, the second chapter compares the conclusions from the commodity logic of the financial system, modeled by the loanable funds theory, with the monetary logic. Following this comparison, I describe three theories of banking. The so-called endogenous money creation theory, in which the central bank steers bank lending through prices, describes our world best.
In the third chapter, I use the endogenous money creation theory to model the bank credit market. In this model, banks maximize profits: lending generates interest income, while costs arise from credit default risk and refinancing (including regulatory requirements). These are the determinants of the supply of credit, which meets the demand for credit on the credit market. Credit demand is determined by borrowers who borrow from banks for consumption or investment purposes. The interplay between loan supply and loan demand results in the equilibrium loan interest rate and the equilibrium loan volume that banks grant to non-banks. Based on the theoretical model, the supply and demand sides of the credit market are estimated empirically for Germany over the period 1999-2014 using a disequilibrium framework, showing that the determinants from the theoretical model are statistically significant.
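The disequilibrium framework mentioned above rests on the short-side rule: the observed credit volume equals the minimum of supply and demand at the prevailing rate. A stylized sketch with entirely hypothetical parameters:

```python
import numpy as np

# Stylized illustration of the disequilibrium idea (all parameters invented):
# observed credit is the short side of the market,
#   Q_t = min(S_t, D_t),
# with supply rising and demand falling in the loan rate.
rng = np.random.default_rng(1)
rates = rng.uniform(0.02, 0.08, 200)                  # loan interest rates
supply = 50 + 800 * rates + rng.normal(0, 2, 200)     # increasing in the rate
demand = 120 - 600 * rates + rng.normal(0, 2, 200)    # decreasing in the rate
observed = np.minimum(supply, demand)                 # short-side rule

share_supply_constrained = np.mean(supply < demand)
print(f"share of supply-constrained observations: {share_supply_constrained:.2f}")
```

The empirical task is then to recover the supply and demand parameters from `observed` alone, typically by maximum likelihood over the two regimes.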
Building on the theoretical banking model, the fourth chapter extends the model to include the bond market. In contrast to the description in the commodity theory, the bank loan market and the bond market are fundamentally different. Banks create money according to the endogenous money creation theory; once the money is in circulation, non-banks can redistribute it by either using it for the purchase of goods or lending it for longer periods. Due to the focus on the financial system in this dissertation, the case is considered in which money is lent over the longer term. The motive of the suppliers in the bond market, i.e. those who want to lend money, is similar to that of banks and driven by profit maximization. Suppliers can generate income from interest on bonds. Costs arise from the opportunity costs of holding money as deposits, from the credit default of the debtor, and from price losses due to changes in interest rates. The logic described is based on the idea that banks create money, i.e. they are the originators of money, while the money is redistributed on the bond market and thus used several times. The two markets are linked on both the supply and demand sides. On the supply side, banks refinance themselves on the bond market in order to reduce the maturity transformation resulting from lending. On the demand side, borrowers have the option of requesting either bank loans or loans on the bond market.
After this description of the theoretical framework of a financial system consisting of the banking and bond markets, the fifth chapter applies the model to Quantitative Easing. It should be noted that Quantitative Easing already influences the behaviour of credit demanders and suppliers when the central bank announces it. The four major central banks (Bank of Japan, Bank of England, Federal Reserve and European Central Bank) have used the unconventional instrument of bond purchases due to the continuing recession and the already low short-term interest rates. In the theoretical model, the central bank influences bond market rates through the announcement alone: risk premiums decrease, as the central bank acts as a lender of confidence; interest rate expectations decrease (at least in the short term); and long-term interest rates decrease overall. These three hypotheses are tested using empirical methods for the euro area.
Within three self-contained chapters, this dissertation provides new insights into the macroeconomic consequences of income inequality from a global perspective. Following an introduction, which summarizes the main findings and offers a brief overview of trends in income distribution, Chapter 2 evaluates the relationship between the labor share of income and the evolution of aggregate demand. Chapter 3 analyzes the link between income inequality and aggregate saving; and Chapter 4 directly estimates the effect of inequality and public redistribution on economic growth.
This dissertation is concerned with the empirical investigation of the link between globalization and labor market outcomes as well as the determinants of governmental redistribution, with a special focus on the effects of culture and diversity on the welfare state. In recent years, globalization has been criticized for adverse structural effects, e.g. increasing employment volatility and higher inequality.
Following the introduction, the second chapter investigates the relationship between growing import penetration and manufacturing employment growth in 12 OECD countries between 1995 and 2011, accounting for various model specifications, different measures of import penetration, and alternative estimation strategies. The application of the latest version of the World Input-Output Database (WIOD), which has only recently become available, enables measurement of the effect of increases in imported intermediates according to their country of origin. The findings emphasize a weak positive overall impact of growing trade on manufacturing employment. However, while intermediate inputs from China and the new EU members are substitutes for manufacturing employment in highly developed countries, imports from the EU-27 complement domestic manufacturing production. The three-level mixed model utilized implies that the hierarchical structure of the data plays only a minor role, and controlling for endogeneity leaves the results unchanged.
The findings point to ambiguous effects of globalization on labor market outcomes which increase the demand for equalizing public policies. Accordingly, the following chapter examines the relationship between income inequality and redistribution, accounting for the shape of the income distribution, different development levels, and subjective perceptions. Cross-national inequality datasets that have become available only recently allow for the assessment of the link for various sample compositions and several model specifications. The empirical results confirm the Meltzer-Richard hypothesis, but suggest that the relationship between market inequality and redistribution is even stronger when using perceived inequality measures. The findings emphasize a decisive role of the middle class, while also identifying a negative impact of top incomes. The Meltzer-Richard effect is less pronounced in developing economies with weaker political rights, illustrating that it is the political channel through which higher inequality translates into more redistribution.
Chapter 4 extends the framework developed in the preceding chapter by studying the effects of culture and diversity on governmental redistribution for a large sample of countries. To disentangle culture from institutions, the analysis employs regional instruments as well as data on linguistic differences, the frequency of blood types, and the prevalence of the pathogen Toxoplasma gondii. Redistribution is higher in countries with (1) loose family ties and individualistic attitudes, (2) a high prevalence of trust and tolerance, (3) low acceptance of unequally distributed power and obedience, and (4) a prevalent belief that success is the result of luck and connections. Apart from their direct effects, these traits also exert an indirect impact by influencing the transmission of inequality to redistribution. Finally, the findings indicate that redistribution and diversity in terms of culture, ethnic groups, and religion stand in a non-linear relationship, where moderate levels of diversity impede redistribution and higher levels offset the generally negative effect.
In recent decades, international migration has increased worldwide. The influx of people from different cultures and ethnic groups poses new challenges to the labor market and the welfare state of the host countries and causes changes in the social fabric. In general, immigration benefits the economy of the host country. However, these gains from immigration are unevenly distributed among the native population. Natives who are in direct competition with the new workers expect wage losses and a higher probability of becoming unemployed, whereas the remaining natives foresee either no feedback effects or even wage gains. Moreover, the tax and transfer system benefits disproportionately from an influx of highly skilled immigrants. Examinations of 20 European countries in 2010 show that a higher proportion of low-skilled immigrants in the immediate neighborhood of natives increases the difference in the demand for redistribution between high-skilled and low-skilled natives. Thus, high-skilled natives are more opposed to an expansion of governmental redistribution. On the one hand, a higher proportion of low-skilled immigrants generates a higher fiscal burden on the welfare state. On the other hand, high-skilled natives' wages increase due to an influx of low-skilled immigrants, since the relative supply of low-skilled labor increases.
In addition to the economic impact of immigration, the inflow of new citizens is accompanied by natives' fear of changes in the social environment as well as in symbolic values, such as cultural identity or natives' set of values. The latter might generate negative attitudes towards immigrants and increase the demand for a more restrictive immigration policy. On the other hand, more interethnic contact due to higher ethnic diversity could reduce natives' information gaps, prejudices and stereotypes. This, in turn, could foster greater tolerance and solidarity towards immigrants among natives. Examinations of 18 European countries in 2014 show that more interethnic contact in everyday life reduces both natives' social distance from immigrants and their fear of social upheaval caused by the presence of immigrants. However, natives' social distance from immigrants has no effect on their preference for redistribution, whereas their perceived threat to the national culture and social life posed by the presence of immigrants has a significantly negative impact on their demand for redistribution. Thus, natives' concern about the preservation of symbolic norms and values affects the solidarity channel of their redistribution preference.
An individual's upward mobility over time or relative to his or her parents determines his or her attitude towards the welfare state as well as the transfer of these opinions to his or her own children. With regard to intergenerational income mobility, Germany ranks in the middle internationally: higher than the United States (lower mobility) and lower than the Scandinavian countries (higher mobility). For example, if a father's lifetime income increases by 10 percent, his son's lifetime income increases by 4.9 percent in the United States and by 3.1 percent in Germany. Additionally, in Germany, fathers' lifetime incomes tend to have a stronger impact on their sons' incomes at higher income levels. In the United States, fathers' lifetime incomes have a stronger influence on their sons' incomes at the lower and upper ends of the income distribution compared to the middle.
Taking a closer look at intragenerational wage mobility and wage inequality in Germany, recent developments are rather sobering. Since 2000, wage mobility has declined steadily. Furthermore, wage mobility in the services sector has been significantly lower than in the manufacturing sector since the beginning of the 2000s, a result mainly driven by the decrease of wage mobility in the health care and social services sector. Moreover, a worker's unemployment spells and occupation have become more important over time. Since 2006 the increase in German wage inequality has markedly slowed down, and wage growth between 2006 and 2013 has even been polarized, i.e. wages at the lower and upper ends of the wage distribution have increased more than wages in the middle. This development can be partly attributed to the computerization and automation of production processes. Although manual routine tasks were substituted between 2001 and 2013, cognitive routine tasks are still more pronounced in the middle and at the upper end of the wage distribution, and the upper end experienced an increase in wage mobility after 2000. Manual non-routine tasks, by contrast, are located disproportionately in the middle and at the lower end of the wage distribution. Thus, the wage gains of these occupations at the lower end were compensated for by wage losses in the middle.
This dissertation contributes to the empirical analysis of economic development. The continuing poverty in many Sub-Saharan-African countries as well as the declining trend in growth in the advanced economies that was initiated around the turn of the millennium raises a number of new questions which have received little attention in recent empirical studies. Is culture a decisive factor for economic development? Do larger financial markets trigger positive stimuli with regard to incomes, or is the recent increase in their size in advanced economies detrimental to economic growth? What causes secular stagnation, i.e. the reduction in growth rates of the advanced economies observable over the past 20 years? What is the role of inequality in the growth process, and how do governmental attempts to equalize the income distribution affect economic development? And finally: Is the process of democratization accompanied by an increase in living standards? These are the central questions of this doctoral thesis.
To facilitate the empirical analysis of the determinants of economic growth, this dissertation introduces a new method to compute classifications in the field of social sciences. The approach is based on mathematical algorithms of machine learning and pattern recognition. Whereas the construction of indices typically relies on arbitrary assumptions regarding the aggregation strategy of the underlying attributes, utilization of Support Vector Machines transfers the question of how to aggregate the individual components into a non-linear optimization problem.
Following a brief overview of the theoretical models of economic growth provided in the first chapter, the second chapter illustrates the importance of culture in explaining the differences in incomes across the globe. In particular, if inhabitants have a lower average degree of risk aversion, the adoption of new technology proceeds much faster than in countries whose inhabitants are more risk-averse. However, this effect depends on the legal and political framework of the countries, their average level of education, and their stage of development.
The initial wealth of individuals is often not sufficient to cover the cost of investments in both education and new technologies. By providing loans, a developed financial sector may help to overcome this shortage. However, the investigations in the third chapter show that this mechanism depends on the development level of the economies. In poor countries, growth of the financial sector leads to better education and higher investment levels. This effect diminishes along the development process, as intermediary activity is increasingly replaced by speculative transactions. Particularly in times of low technological innovation, a growing financial sector has a negative impact on economic development. In fact, the world economy is currently in a phase of this kind. Since the turn of the millennium, growth rates in the advanced economies have declined across many countries, leading to an intense debate about "secular stagnation" initiated at the beginning of 2015. The fourth chapter deals with this phenomenon and shows that the growth potentials of new technologies have been gradually declining since the beginning of the 2000s.
If incomes are unequally distributed, some individuals can invest less in education and technological innovations, which is why the fifth chapter identifies an overall negative effect of inequality on growth. This influence, however, depends on the development level of countries. While the negative effect is strongly pronounced in poor economies with a low degree of equality of opportunity, this influence disappears during the development process. Accordingly, redistributive policies of governments exert a growth-promoting effect in developing countries, while in advanced economies, the fostering of equal opportunities is much more decisive.
The sixth chapter analyzes the growth effect of the political environment and shows that the ambiguity of earlier studies is mainly due to unsophisticated measurement of the degree of democratization. To solve this problem, the chapter introduces a new method based on mathematical algorithms from machine learning and pattern recognition. While the approach can be used for various classification problems in the social sciences, in this dissertation it is applied to the problem of democracy measurement. Based on different country examples, the chapter shows that the resulting Support Vector Machine Democracy Index (SVMDI) is superior to other indices in modeling the level of democracy. The subsequent empirical analysis emphasizes a significantly positive growth effect of democracy measured via SVMDI.
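The SVMDI rests on Support Vector Machines; the sketch below is only a stylized stand-in, not the dissertation's actual procedure. It trains a minimal linear SVM by sub-gradient descent on the hinge loss over synthetic "regime attribute" vectors and uses the signed distance to the separating hyperplane as a continuous index. All data and parameters are invented for illustration:

```python
import numpy as np

# Synthetic attributes (e.g. free elections, press freedom, rule of law):
# one cluster of clearly democratic and one of clearly autocratic regimes.
rng = np.random.default_rng(2)
X_dem = rng.normal(1.0, 0.3, (50, 3))
X_aut = rng.normal(-1.0, 0.3, (50, 3))
X = np.vstack([X_dem, X_aut])
y = np.array([1.0] * 50 + [-1.0] * 50)

# Linear SVM via stochastic sub-gradient descent on the hinge loss.
w, b, lam, lr = np.zeros(3), 0.0, 0.01, 0.1
for epoch in range(200):
    for i in rng.permutation(len(y)):
        if y[i] * (X[i] @ w + b) < 1:      # point violates the margin
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:
            w -= lr * lam * w

# Signed distance to the hyperplane as a continuous "democracy index".
index = X @ w + b
print("index for one democratic case:", round(float(index[0]), 2))
```

The point of the exercise is that the aggregation weights of the underlying attributes are learned from clear-cut cases rather than fixed by assumption, which is the core idea the text ascribes to the SVM-based classification.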
The main subject of this dissertation is the analysis of the impact of the creation of the Eurozone on its member countries. This analysis comprises three studies that approach this research agenda from different perspectives.
The first study compares the monetary policy of the ECB with the respective monetary policies of selected central banks of the European Monetary System (EMS). More precisely, it asks whether, conditional on aggregate demand and supply shocks, there are differences in the systematic central bank reaction functions of the ECB and the four most important central banks of the EMS (those of Germany, France, Italy and Spain).
The second study analyzes the build-up of internal and external imbalances in Spain, i.e., in the housing market and the current account, during the run-up to the financial crisis of 2007/08. The analysis differentiates between domestic Spain-specific factors and foreign Eurozone factors that led to the macroeconomic imbalances.
The third and last study develops a price-theoretic credit supply model. To validate the model empirically, a credit market is estimated on the basis of data from the German credit market for enterprises. Finally, the results from the empirical exercise are compared to the predictions of the theoretical model.
Methodologically, all studies draw heavily on time series methods such as (multi-country) vector autoregressions (VARs) and time series regressions.
This book produces three main results. First, from publicly available statistics it can be inferred that the interest rate risk from on-balance-sheet term transformation of banks in Germany exceeds the euro area average and is bound to increase even further. German banks push for shorter-term funding and hardly counteract the increased demand for longer-term loans. Within Germany, savings banks and cooperative banks are particularly engaged. Second, the supervisory interest rate shock scenarios are found to be increasingly detached from both the historical and the forecasted development of interest rates in Germany. In particular, German banks have been exposed to fewer and smaller adverse changes of the term structure. This increasingly limits the informative content of mere exposure measures such as the Basel interest rate coefficient when used as risk measures, as is common practice in banking supervision and economic research. An impact assessment further supports the conclusion that, at the least, a more comprehensive set of shock scenarios is required. Third and finally, there is a reasonable theoretical rationale and strong empirical evidence for banks' search for yield in interest rate risk. In addition to the established positive link between the term spread and the taking of interest rate risk by banks, an additional negative link can be explained theoretically, and there is significant empirical evidence for its existence and relevance. There is even an income threshold below which banks' search for yield in interest rate risk surfaces openly.
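The Basel interest rate coefficient mentioned above relates the change in the economic value of the banking book under a standardized interest rate shock to a bank's own funds. A minimal sketch of that idea, with entirely hypothetical cash flows, curve, and capital figure (the actual supervisory methodology involves detailed time-band and scenario rules not reproduced here):

```python
import numpy as np

# Hypothetical banking book: long-dated assets funded short-term,
# valued on a flat yield curve, then revalued under a +200bp parallel shock.
years = np.array([1, 2, 5, 10])
asset_cf = np.array([10.0, 10.0, 40.0, 60.0])     # long-dated asset cash flows
liab_cf = np.array([70.0, 30.0, 0.0, 0.0])        # short-dated funding
base_rate, shock = 0.02, 0.02                     # flat 2% curve, +200bp shock

def pv(cf, r):
    """Present value of the cash flows on a flat curve at rate r."""
    return np.sum(cf / (1 + r) ** years)

eve_base = pv(asset_cf, base_rate) - pv(liab_cf, base_rate)
eve_shock = pv(asset_cf, base_rate + shock) - pv(liab_cf, base_rate + shock)
own_funds = 12.0

# Loss of economic value relative to capital: a stylized exposure measure.
coefficient = (eve_base - eve_shock) / own_funds
print(f"interest rate coefficient: {coefficient:.2%}")
```

Because the asset side has the longer duration, the upward shock erodes economic value, and the resulting ratio is a pure exposure measure: it says nothing about how likely such a shock is, which is exactly the limitation the second result above points to.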
The dissertation deals with the market and welfare effects of different business practices and the firm's incentives to use them: resale price maintenance, revenue sharing of a platform operator, membership fees to buyers using a platform and patent licensing.
In the second chapter we investigate the incentives of two manufacturers with common retailers to use resale price maintenance (RPM). Retailers provide product-specific services that increase demand, and manufacturers use minimum RPM to compete for favorable services for their products. Minimum RPM increases consumer prices by voiding retailer price competition and can create a prisoner's dilemma for manufacturers without increasing, and possibly even decreasing, the overall service level. If manufacturer market power is asymmetric, minimum RPM tends to distort the allocation of sales services towards the high-priced products of the manufacturer with more market power. These results challenge the service argument as an efficiency defense for minimum RPM.
The third chapter deals with trade platforms whose operators not only allow third-party sellers to offer their products to consumers but also offer products themselves. In this context, the platform operator faces a hold-up problem if he uses only classical two-part tariffs (on which the previous literature on two-sided markets has focused), because potential competition between the platform operator and sellers reduces platform attractiveness. Since some sellers refuse to join the platform, some products that are not known to the platform operator will not be offered at all. We discuss the effects of different platform tariffs on this hold-up problem. We find that revenue-based fees lower the platform operator's incentives to compete with sellers, increasing platform attractiveness. Charging such proportional fees can therefore be profitable, which may explain why several trade platforms indeed charge proportional fees.
The fourth chapter investigates the optimal tariff system in a model in which buyers are heterogeneous. A platform model is presented in which transactions are modeled explicitly and buyers can differ in their expected valuations when they decide to join the platform. The main effect the model identifies is that the participation decision sorts buyers according to their expected valuations, which affects the pricing of sellers. Furthermore, differing from the usual approach, in which buyers are ex ante homogeneous, the platform does not internalize the full transaction surplus. Hence, it does not implement the socially efficient price on the platform, even though it can control the price via the transaction fee.
The fifth chapter investigates the effects of licensing on the market outcome after the patent has expired. In a setting with endogenous entry, a licensee has a head start over the competition, which translates into a first-mover advantage if strategies are strategic substitutes. Quantities and informative advertising are explicitly considered as competitive strategies. We find that although licensing increases the joint profit of the patentee and licensee, this does not necessarily come at the expense of consumer surplus or other firms' profits. For the case of quantity competition we show that licensing is welfare-improving. For the case of informative advertising, however, we show that licensing increases prices and is thus detrimental to consumer surplus.
This dissertation studies the interrelations between housing markets and monetary policy from three different perspectives. First, it identifies housing finance specific shocks and analyzes their impact on the broader economy and, most importantly, the systematic monetary policy reaction to such mortgage sector disturbances. Second, it investigates the implications of the institutional arrangement of a currency union for the potential buildup of a housing bubble in a member country of the monetary union by, inter alia, fostering cross-border capital flows and ultimately residential investment activity. Third, this dissertation quantifies the effects of autonomous monetary policy shifts on the macroeconomy and, in particular, on housing markets by conditioning on financial sector conditions. From a methodological perspective, the dissertation draws on time-series econometrics such as vector autoregressions (VARs) and local projection models.
This article introduces a new consistent variance-based estimator called ordinal consistent partial least squares (OrdPLSc). OrdPLSc completes the family of variance-based estimators consisting of PLS, PLSc, and OrdPLS and makes it possible to estimate structural equation models of composites and common factors when some or all indicators are measured on an ordinal categorical scale. A Monte Carlo simulation (N = 500) with different population models shows that OrdPLSc provides almost unbiased estimates. If all constructs are modeled as common factors, OrdPLSc yields estimates close to those of its covariance-based counterpart, WLSMV, but is less efficient. If some constructs are modeled as composites, OrdPLSc is virtually without competition.
This dissertation focuses on the drivers of international capital flows to emerging markets, as well as the determinants of crises in emerging markets. Particular emphasis is devoted to the role of U.S. monetary policy. The dissertation consists of three independent chapters.
Chapter 1 is a survey of the voluminous empirical literature on the drivers of capital flows to emerging markets. The contribution of the survey is to provide a comprehensive assessment of what we can say with relative confidence about the empirical drivers of EM capital flows. The evidence is structured based on the recognition that the drivers of capital flows vary over time and across different types of capital flows. The drivers are classified using the traditional framework for external and domestic factors (often referred to as “push versus pull” drivers), which is augmented by a distinction between cyclical and structural factors. Push factors are found to matter most for portfolio flows, somewhat less for banking flows, and least for foreign direct investment (FDI). Pull factors matter for all three components, but most for banking flows. A historical perspective suggests that the recent literature may have overemphasized the importance of cyclical factors at the expense of longer-term structural trends.
Chapter 2 undertakes an empirical analysis of the drivers of portfolio flows to emerging markets, focusing on the role of Fed policy. A time series model is estimated to analyze two different concepts of high frequency portfolio flows, including monthly data on flows into investment funds and a novel dataset on monthly portfolio flows obtained from individual national sources. The evidence presented in this chapter suggests a more nuanced interpretation of the role of U.S. monetary policy. In the existing literature, it is traditionally argued that Fed policy tightening is unambiguously negative for capital flows to emerging markets. By contrast, the findings presented in this dissertation suggest that it is the surprise element of monetary policy that affects EM portfolio inflows. A shift in market expectations towards easier future U.S. monetary policy leads to greater foreign portfolio inflows and vice versa. Given current market expectations of sustained increases in the federal funds rate in coming years, EM portfolio flows could be boosted by a slower pace of Fed tightening than currently expected or could be reduced by a faster pace of Fed tightening.
Chapter 3 examines the role of U.S. monetary policy in determining the incidence of emerging market crises. A negative binomial count model and a panel logit model are estimated to analyze the determinants of currency crises, banking crises, and sovereign defaults in a group of 27 emerging economies. The estimation results suggest that the probability of crises is substantially higher (1) when the federal funds rate is above its natural level, (2) during Fed policy tightening cycles, and (3) when market participants are surprised by signals that the Fed will tighten policy faster than previously expected. These findings contrast with the existing literature, which generally views domestic factors as the dominant determinants of emerging market crises. The findings also point to a heightened risk of emerging market crises in the coming years if the Fed continues to tighten monetary policy.
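The panel logit for crisis incidence described above can be illustrated with a minimal cross-sectional sketch: a binary logit fit by Newton-Raphson on simulated data, where the single regressor mimics the gap between the federal funds rate and its natural level. All coefficients and data below are hypothetical, not the chapter's estimates:

```python
import numpy as np

# Simulated crisis indicator: more likely when the rate gap is high,
# consistent in sign with finding (1) in the text.
rng = np.random.default_rng(3)
n = 2_000
ffr_gap = rng.normal(0.0, 1.0, n)                    # hypothetical rate gap
p_true = 1 / (1 + np.exp(-(-2.0 + 0.8 * ffr_gap)))   # rare-event probabilities
crisis = rng.binomial(1, p_true)

# Logit by Newton-Raphson: beta <- beta + H^{-1} X'(y - p).
X = np.column_stack([np.ones(n), ffr_gap])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (crisis - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

print(f"estimated effect of the rate gap: {beta[1]:.2f}")
```

A positive slope on the rate gap raises the fitted crisis probability, mirroring the direction of the chapter's result; the actual analysis additionally uses panel structure and a negative binomial count model, which this sketch does not reproduce.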
The standard property rights approach focuses on ex ante investment incentives, while there are no transaction costs that might restrain ex post negotiations. We explore the implications of such transaction costs. Prominent conclusions of the property rights theory may be overturned: a party may have stronger investment incentives when a non-investing party is the owner, and joint ownership can be the uniquely optimal ownership structure. Intuitively, an ownership structure that is unattractive in the standard model may now be desirable because it implies large gains from trade, so that the parties are more inclined to incur the transaction costs.
The aim of this thesis is to examine the competition patterns between originators and generics, focusing on the interplay between regulation and incentives to innovate.
The first chapter reviews the characteristics of regulation in pharmaceutical markets and analyses current challenges related to cost-containment measures and innovation. The second chapter then presents an empirical study of substitution patterns. Based on the EC's merger decisions in the pharmaceutical sector from 1989 to 2011, this study identifies the key criteria for defining the scope of the relevant product market on the basis of substitution patterns and shows a trend towards narrower market definitions over time.
Chapters three and four analyse in depth two widespread measures: the internal reference pricing system in off-patent markets and risk-sharing schemes in patent-protected markets. Taking into account the informational advantages of originators over generics, the third chapter shows the extent to which the implementation of a reference price for off-patent markets can contribute to promoting innovation.
Finally, in the fourth chapter, a model of risk-sharing schemes explains how such schemes can help solve moral hazard and adverse selection problems by continuously giving pharmaceutical companies incentives to innovate and to supply medicinal products of higher quality.
This dissertation deals with the contract choice of upstream suppliers and its consequences for competition and efficiency in a dynamic setting with inter-temporal externalities.
The introduction explains the motivation of the analysis and compares different contract types, such as standard contracts (e.g. simple two-part tariffs) and additional specifications such as contracts referencing the quantity of the contract-offering firm or the relative purchase level. The features of specific market structures should be considered in the analysis of specific vertical agreements and their policy implications. In particular, dynamic changes in demand and cost parameters may influence the results observed.
In the first model, a dominant upstream supplier and a non-strategic rival sell their products to a single downstream firm. The rival supplier benefits from learning effects which decrease its costs as a function of its previous sales. Learning effects therefore represent a dynamic competitive threat to the dominant supplier. In this setup, the dominant supplier can react to inter-temporal externalities by specifying its contract with the downstream firm. The model shows that by offering market-share discounts, instead of simple two-part tariffs or quantity discounts, the dominant supplier maximizes long-run profits and restricts the efficiency gains of its rival. If demand is linear, the market-share discount lowers consumer surplus and welfare.
The second model analyzes the strategic use of bilateral contracts in a sequential bargaining game. A dominant upstream supplier and its rival sequentially negotiate with a single downstream firm. The contract choice of the dominant supplier as well as the rival supplier’s reaction are investigated. In a single-period sequential contracting game, menus of simple two-part tariffs achieve the industry profit maximizing outcome. In a dynamic setting where the suppliers sequentially negotiate in each period, the dominant supplier uses additional contractual terms that condition on the rival’s quantity. Due to the first-mover advantage of the first supplier, the rival supplier is restricted in its contract choice. The consequences of the dominant supplier’s contract choice depend on bargaining power. In particular, market-share contracts can be efficiency enhancing and welfare-improving whenever the second supplier has a relatively high bargaining position vis-à-vis the downstream firm. For a relatively low bargaining position of the rival supplier, the result is similar to the one determined in the first model. We show that results depend on the considered negotiating structure.
The third model studies the contract choice of two upstream competitors that simultaneously deal with a common buyer. In a complete information setting where both suppliers learn whether further negotiations fail or succeed, a single-period model solves for the industry-profit-maximizing outcome as long as contractual terms define at least a wholesale price and a fixed fee. In contrast, this collusive outcome cannot be achieved in a two-period model with inter-temporal externalities.
We characterize the possible market scenarios, their outcomes, and their consequences for competition and efficiency. Our results demonstrate that when a rival supplier is restricted in its contract choice, the contract specification of a dominant supplier can partially exclude the competitor. Whenever equally efficient suppliers can both strategically choose contract specifications, the rivals defend their market shares by adopting appropriate contractual conditions.
The final chapter provides an overview of the main findings and presents some concluding remarks.
This dissertation deals with certain business strategies that have become particularly relevant with the spread and development of new information technologies.
The introduction explains the motivation, discusses different ways of defining the term "two-sided market", and briefly summarizes the subsequent essays.
The first essay examines the effects of product information on the pricing and advertising decisions of a seller who offers an experience good whose quality is unknown to consumers prior to purchase. It comprises two theoretical models which differ in their view of advertising. The analysis addresses the question of how the availability of additional, potentially misleading information affects the seller's quality-dependent pricing and advertising decisions.
In the first model, in which both advertising and product reviews make consumers aware of the product's existence, the seller's optimal price turns out to be increasing in product quality. Under certain circumstances, however, even the seller of a low-quality product prefers to set a high price. Within the given framework, the relationship between product quality and advertising depends on the particular parameter constellation.
In the second model, some consumers are assumed to interpret price as a signal of quality, while others rely on information provided by product reviews. Consequently, and in contrast to the first model, pricing may indirectly inform consumers about product quality. On the one hand, in spite of asymmetric information on product quality, equilibria exist that feature full-information pricing, in line with previous results from the signaling literature. On the other hand, potentially misleading product reviews may rationalize further pricing patterns. Moreover, assuming that firms can manipulate product reviews by investing in concealed marketing, equilibria can arise in which a high price signals low product quality. In these extreme cases, however, only a few (credulous) consumers consider buying the product.
The second essay deals with trade platforms whose operators not only allow sellers to offer their products to consumers, but also offer products themselves. In this context, the platform operator faces a hold-up problem if he sets classical two-part tariffs (on which the previous literature on two-sided markets focussed), as potential competition between the platform operator and sellers reduces platform attractiveness. Since some sellers refuse to join the platform, products whose existence is not known to the platform operator in the first place and which can only be established by better-informed sellers may not be offered at all. However, revenue-based fees lower the platform operator's incentives to compete with sellers, increasing platform attractiveness. Charging such proportional fees can therefore be profitable, which may explain why several trade platforms do indeed charge proportional fees.
The third essay examines settings in which sellers can be active both on an intermediary's trade platform and in other sales channels. It explores the sellers' incentives to set different prices across sales channels within the given setup. Afterwards, it analyzes the intermediary's tariff decision, taking into account the implications for consumers' choice between different sales channels. The analysis particularly focusses on the effects of a no-discrimination rule which several intermediaries impose, but which appears controversial from a competition policy perspective. It identifies the circumstances under which the intermediary prefers to restrict sellers' pricing decisions by imposing a no-discrimination rule, thereby attaining direct control over the allocation of customers across sales channels. Moreover, it illustrates that such rules can have both positive and negative welfare effects within the given framework.
This dissertation provides both empirically and theoretically new insights into the economic effects of housing and housing finance within NK DSGE models. Chapter 1 studies the drivers of the recent housing cycle in Ireland by developing and estimating a two-country NK DSGE model of the European Economic and Monetary Union (EMU). It finds that housing preference (demand) and technology shocks are the most important drivers of real house prices and real residential investment. In particular, housing preference shocks account for about 87% of the variation in real house prices and explain about 60% of the variation in real residential investment. A robustness analysis finally shows that a good part of the variation of the estimated housing preference shocks can be explained by unmodeled demand factors that have been considered in the empirical literature as important determinants of Irish house prices. Chapter 2 deals with the implications of cross-country mortgage market heterogeneity for the EMU. The chapter shows that a change in cross-country institutional characteristics of mortgage markets, such as the loan-to-value (LTV) ratio, is likely to be an important driver of an asymmetric development in the housing market and real economic activity of member states. Chapter 3 asks whether monetary policy shocks can trigger boom-bust periods in house prices and create persistent business cycles. The chapter addresses this question by implementing behavioral expectations into an otherwise standard NK DSGE model with housing and a collateral constraint. Key to the approach in chapter 3 is that agents form heterogeneous and biased expectations on future real house prices. Model simulations and impulse response functions suggest that these assumptions have strong implications for the transmission of monetary policy shocks. 
It is shown that monetary policy shocks might trigger pronounced waves of optimism or pessimism that drive house prices and the broader economy, all in a self-reinforcing fashion. The chapter shows that in an environment in which behavioral mechanisms play a role, an augmented Taylor rule that incorporates house prices is superior because it limits the scope for self-fulfilling waves of optimism and pessimism to arise. Chapter 4 challenges the view that the observed negative correlation between the federal funds rate and the interest rate implied by consumption Euler equations is systematically linked to monetary policy. Using a Monte Carlo experiment based on an estimated NK DSGE model, this chapter shows that risk premium shocks can drive a wedge between the interest rate targeted by the central bank and the implied Euler equation interest rate, so that the correlation between actual and implied rates is negative. Chapter 4 concludes by arguing that the implementation of collateral constraints tied to housing values is a promising way to strengthen the empirical performance of consumption Euler equations.
This thesis comprises three essays that study the impact of trade unions on occupational health and safety (OHS). The first essay proposes a theoretical model that highlights the crucial role that unions have played throughout history in making workplaces safer. Firms traditionally oppose higher health standards: workplace safety is costly for firms but increases the average health of workers and thereby the aggregate labour supply. A laissez-faire approach in which firms set safety standards is suboptimal, as workers are not fully informed of the health risks associated with their jobs. Safety standards set by better-informed trade unions are output- and welfare-increasing. The second essay extends the model to a two-country world consisting of the capital-rich "North" and the capital-poor "South". The North has trade unions that set high OHS standards; there are no unions in the South and OHS standards are low. Trade between these two countries can imply a reduction in safety standards in the North, lowering the positive welfare effects of trade. Moreover, when trade unions are also established in the South, northern OHS standards might be further reduced. The third essay studies the impact of unions on OHS from an empirical perspective, focusing on one component of OHS: occupational injuries. A literature summary covering 25 empirical studies shows that most studies associate unions with fewer fatal occupational injuries. This is fully in line with the anecdotal evidence and the basic model from the first essay. However, the literature summary also shows that most empirical studies associate unions with more nonfatal occupational injuries. This puzzling result has been explained in the literature by (1) lower underreporting in unionized workplaces, (2) unions being more able to organize hazardous workplaces, and (3) unionized workers preferring higher wages at the expense of better working conditions.
Using individual-level panel data, this essay presents evidence against all three explanations. However, it cannot reject the hypothesis that workers reduce their precautionary behaviour when they join a trade union. Hence, the puzzle seems to be due to a strong moral hazard effect. These empirical results suggest that the basic model from the first essay needs to be extended to account for this moral hazard effect.
China’s monetary policy pursues two final targets: a paramount economic target (price stability) and a less important political target (economic growth). The main actor of monetary policy is the central bank, the People’s Bank of China (PBC), but the PBC is not an independent central bank. The State Council approves the goals of monetary policy. Very limited instrument independence means that interest rates cannot be set at the PBC’s discretion, and insufficient personal independence fails to insulate central bank officials from political influence. Monetary policy in China relies on two sets of instruments: (i) instruments of the PBC and (ii) non-central-bank policy instruments. The instruments of the PBC include price-based indirect and quantity-based direct instruments; non-central-bank policy instruments include price and wage controls. The simultaneous use of all these instruments leads to various distortions that ultimately prevent the interest rate channel of monetary transmission from functioning. Moreover, the strong influence of quantity-based direct instruments and non-central-bank policy instruments calls into question the approach of indirect monetary policy in general. The PBC officially follows a monetary targeting approach with monetary aggregates as intermediate targets; domestic loan growth and the exchange rate are defined as additional intermediate targets. An in-depth analysis of the intermediate targets explores two main issues: (i) Are the intermediate targets of Chinese monetary policy controllable? (ii) Is a sufficient relationship between these targets and the inflation rate observable? It is shown that monetary aggregates are very difficult to control but have a satisfactory relationship with the inflation rate.
Similarly, domestic loan growth is difficult to control – a fact largely attributed to the interest rate elasticity of loans – while there is a particularly close relationship between credit growth and the inflation rate. The exchange rate as an intermediate target can be controlled through foreign exchange market interventions; at the same time, the exchange rate appears to have a significant relationship with the domestic inflation rate. Discussing the special issue of sterilizing foreign exchange inflows, the study concludes that between 2002 and 2008 sterilization operations not only incurred no costs, but the central bank was actually able to realize a profit through its foreign exchange market interventions. On this basis, it is concluded that the exchange rate target has not, on the whole, adversely affected the domestic orientation of monetary policy. The final part of the study examines whether any alternative monetary policy approaches may be able to describe the policy approach in China; special focus is placed on nominal GDP targeting, the Taylor rule, and inflation targeting. A literature review reveals that the concept of nominal GDP targeting may be able to detect inflationary tendencies in the economy and, in combination with other indicators, could be a suitable concept for assessing the overall economic situation. The author calculates a Taylor rule for China from 1994 to 2008 and concludes that there is no close relationship between the PBC lending rate and the Taylor rate. The author then designs an augmented Taylor rule expanded to include a credit component (credit-augmented Taylor rule). The study shows that the augmented Taylor rule does not perform much better than the original one, but that it maps high-inflation periods relatively well. This is attributed to direct interventions in the credit markets, which have played a major role in combating inflationary cycles over the past decades.
The analysis ends with an introduction to the concept of inflation targeting and an examination of whether it could describe monetary policy in China. It is clear that the PBC does not currently follow an inflation targeting approach, although the Chinese authorities may actually be able to influence inflation expectations effectively, not least through direct instruments such as price controls. The author notes that the PBC had a good track record of fighting inflation between 1994 and 2008, and that now may be a good time to think about introducing inflation targeting in China. The central conclusion of the study is that the proven gradual approach to economic and monetary reform in China is reaching its limit. To break the vicious cycle in which quantity-based instruments are continuously used to compensate for ineffective price-based instruments – an ineffectiveness which in turn arises from the simultaneous use of both types of instruments – a complete shift away from quantity-based instruments is needed. Only then could the approach of indirect monetary policy, which was officially introduced in 1998, come into full play.
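The Taylor rule comparison described above can be illustrated with a minimal sketch. The coefficients 1.5 and 0.5 are Taylor's (1993) standard values; the credit-gap term and its weight are purely illustrative assumptions, not the specification estimated in the thesis:

```python
def taylor_rate(inflation, output_gap, real_rate=2.0, target=2.0):
    """Standard Taylor (1993) rule for the policy interest rate (percent)."""
    return real_rate + inflation + 1.5 * (inflation - target) + 0.5 * output_gap


def credit_augmented_taylor_rate(inflation, output_gap, credit_gap,
                                 credit_weight=0.5, **kwargs):
    """Taylor rule augmented with a credit-growth gap term.

    The credit term and its weight of 0.5 are illustrative assumptions
    only; the thesis's estimated specification may differ.
    """
    return taylor_rate(inflation, output_gap, **kwargs) + credit_weight * credit_gap


# Inflation at target, closed output gap -> neutral nominal rate of 4 percent
print(taylor_rate(2.0, 0.0))                        # 4.0
# Excess credit growth of 4 pp raises the implied rate by 2 pp
print(credit_augmented_taylor_rate(2.0, 0.0, 4.0))  # 6.0
```

In this stylized form, high-inflation episodes that coincide with rapid credit growth push the credit-augmented rate well above the standard Taylor rate, which is the mechanism by which such a rule can map inflationary periods driven by credit expansion.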
Since their beginnings, central banks have used a wide range of instruments to achieve their ultimate goal of price stability. One measure in the authorities' toolbox is foreign exchange market intervention. The discussion about this instrument has come a long way, but so far it has relied mainly on the experiences of industrialized countries. The negative findings of most studies on the effectiveness of the intervention tool opened up a discussion of whether the authorities should use interventions to manage exchange rates at all. Consequently, the question about the dynamics of foreign exchange market interventions is now being raised for developing and emerging market countries, where monetary policy often involves an active management of exchange rates. However, the basic discussions about intervention dynamics have had one essential drawback: neither the primary literature on industrialized countries nor studies dealing with developing countries have considered that intervention purposes and the corresponding effects are likely to vary over time. This thesis is designed to provide the reader with the essential issues of central bank interventions and aims to make further as well as new contributions to the empirical research on interventions in emerging markets. The main objectives of this study are the analysis of central bank intervention motives and the corresponding effects on exchange rates in emerging markets. The time dependency of both issues is explicitly considered, which is a novelty in the academic research on central bank interventions. Additionally, the outcomes are discussed against the background of underlying economic and monetary policy fundamentals. This could well serve as a starting point for further research.
This thesis analyzes the relationship between market concentration and the efficiency of the market outcome in a differentiated-good context from different points of view. The first chapter introduces the objectives of competition policy and antitrust authorities and outlines the importance of market concentration. Chapter 2 analyzes the relationship between social surplus and market heterogeneity in a differentiated Cournot oligopoly. Market heterogeneity is due to differently efficient firms, each of them producing one variety of a differentiated good. All firms exhibit constant but different marginal costs without fixed costs. Consumers' preferences are given by the standard quadratic utility originated by Dixit (1979). Since preferences are quasi-linear, social surplus is the measure of Pareto-optimality. The main finding is that consumer surplus as well as producer surplus increase with the variance of marginal costs. The third chapter analyzes the relationship between the cost structure and market concentration measured by the Herfindahl-Hirschman Index. Market concentration increases with the variance of marginal costs as well as with the mean of marginal costs. Chapter four analyzes the welfare implications of present antitrust enforcement policy on the basis of the same theoretical model. The European as well as the US Merger Guidelines presume a negative impact of market concentration on the competitiveness of the market and, therefore, on the efficiency of the market outcome. The results of the previous chapters indicate that this assumption is false. The main finding is that the post-merger joint profit of the insiders increases with the size of the merger. Moreover, there is a negative relationship between the size of the merger and the efficiency of the market outcome. Present antitrust enforcement policy increases the disparity of output levels and enforces the removal of the least efficient firm from the market. The welfare gains can be traced back to these two effects. Therefore, neither a minimum of market concentration nor a maximum of product diversity is necessarily welfare enhancing, even in the absence of fixed costs.
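The Herfindahl-Hirschman Index used in chapter three is simply the sum of squared market shares. A short sketch (the firm outputs below are made-up numbers for illustration, not taken from the thesis):

```python
def hhi(outputs):
    """Herfindahl-Hirschman Index: sum of squared market shares.

    Shares are expressed in percent, so the index ranges from
    close to 0 (atomistic market) up to 10,000 (monopoly).
    """
    total = sum(outputs)
    return sum((100.0 * q / total) ** 2 for q in outputs)


# Hypothetical Cournot outputs of four firms
symmetric = [25, 25, 25, 25]   # identical marginal costs
asymmetric = [40, 30, 20, 10]  # dispersed marginal costs

print(hhi(symmetric))   # 2500.0
print(hhi(asymmetric))  # 3000.0
```

The example mirrors the chapter's point: with more dispersed marginal costs, output levels diverge and measured concentration rises, even though the number of firms is unchanged.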
This thesis deals with three selected dimensions of strategic behavior, namely investment in R&D, mergers and acquisitions, and inventory decisions in dynamic oligopolies. The first essay addresses the question of how the market structure evolves as a result of innovative activity when a firm's level of technological competence is valuable for more than one project. The focus of the work is the effect of learning-by-doing and organizational forgetting in R&D on firms' incentives to innovate. A dynamic step-by-step innovation model with history dependency is developed in which firms can accumulate knowledge by investing in R&D. As a benchmark without knowledge accumulation, it is shown that relaxing the usual assumption of imposed imitation yields additional strategic effects: the leader's R&D effort increases with the gap, as she tries to avoid competition in the future. When firms gain experience by performing R&D, the resulting knowledge effect induces technological leaders to rest on their laurels, which allows followers to catch up. Contrary to the benchmark case, the leader's innovation effort declines with the lead. This results in an equilibrium in which the incentives to innovate are highest when competition is most intense. Using a model of oligopoly in general equilibrium, the second essay analyzes the integration of economies that may be accompanied by cross-border merger waves. Studying economies which, prior to trade, were in a stable equilibrium in which mergers were not profitable, we show that globalization can trigger cross-border merger waves if the heterogeneity in marginal costs is sufficiently large. In partial equilibrium, consumers benefit from integration even when a merger wave is triggered that considerably lowers the intensity of competition, and welfare increases.
In contrast, in general equilibrium, where interactions between markets and therefore effects on factor prices are considered, gains from trade can only be realized through the reallocation of resources. The higher the technological dissimilarity between countries, the better efficiency gains can be realized in the integrated general equilibrium. The overall welfare effect of integration is positive when all firms remain active, but indeterminate when firms exit or are absorbed in a merger wave: it is possible for decreasing competition to dominate the welfare gain from a more efficient resource allocation across sectors. Allowing for firm entry alters the results, as in an integrated world the coexistence of firms from different countries is never possible. Comparative advantages with respect to entry and production are important for realizing efficiency gains from trade. The third essay analyzes the interaction between price and inventory decisions in an oligopolistic industry and its implications for the dynamics of prices. The work extends the existing literature, in particular Hall and Rust (2007), to endogenous prices and strategic oligopoly competition. We show that the optimal decision rule is an (S,s) order policy and that prices and inventories are strategic substitutes. Fixed ordering costs generate infrequent orders. Additionally, with strategic competition in prices, (S,s) inventory behavior together with demand uncertainty generates cyclical patterns in prices. The last chapter presents some concluding remarks on the results of the essays.
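The (S,s) order policy from the third essay can be illustrated with a minimal single-firm simulation. The thresholds and demand draws below are invented for illustration; the essay's model additionally features endogenous prices and strategic competition:

```python
def simulate_ss_policy(demands, s=20, S=100, start=100):
    """Simulate an (S,s) inventory rule: order up to S whenever the
    stock falls to or below the reorder point s.

    Returns the inventory level after each period and the order sizes.
    Fixed ordering costs rationalize the infrequent, lumpy orders
    this rule produces.
    """
    stock, levels, orders = start, [], []
    for d in demands:
        stock = max(stock - d, 0)      # serve demand (no backlogs)
        if stock <= s:                 # reorder trigger
            orders.append(S - stock)   # lumpy order back up to S
            stock = S
        else:
            orders.append(0)
        levels.append(stock)
    return levels, orders


levels, orders = simulate_ss_policy([30, 40, 25, 10, 50], s=20, S=100)
print(levels)  # [70, 30, 100, 90, 40]
print(orders)  # [0, 0, 95, 0, 0]
```

Even this deterministic toy run shows the characteristic sawtooth path of inventories: long stretches of zero orders punctuated by a single large replenishment, which is the source of the price cycles discussed in the essay.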
This thesis analyzes the 2001-2006 labor market reforms in Germany. The aim of this work is twofold. First, an overview of the most important reform measures and the intended effects is given. Second, two specific and very fundamental amendments, namely the merging of unemployment assistance and social benefits, as well as changes in the duration of unemployment insurance benefits, are analyzed in detail to evaluate their effects on individuals and the entire economy. Using a matching model with optimal search intensity and Semi-Markov methods, the effects of these two amendments on the duration of unemployment, optimal search intensity and unemployment are analyzed.
A comprehensive approach for currency crises theories stressing the role of the anchor country
(2008)
The approach is based on the observation that new generations of currency crisis theories have always developed ex post, after prominent currency crises. Discussing the main theories of currency crises reveals their disparity. The First Generation of currency crisis models argues on the basis of a chronic budget deficit that is monetized by the domestic central bank. The result is a trade-off between an expansionary monetary policy focused on the internal economic balance and a fixed exchange rate governed by interest parity and purchasing power parity. This imbalance inevitably results in a currency crisis. Altogether, this theory centers on a disrupted external balance on the foreign exchange market. Second Generation currency crisis models, on the other hand, focus on the internal macroeconomic balance: the stability of a fixed exchange rate depends on the economic benefit of the exchange rate system relative to the social costs of maintaining it. As soon as social costs increase and show up in deteriorating fundamentals, a speculative attack on the fixed exchange rate system follows. The term Third Generation of currency crises, finally, summarizes a variety of currency crisis theories, which also argue psychologically in order to explain phenomena such as contagion and spill-over effects and to rationalize crises detached from the fundamental situation. Apart from the apparent inconsistency of the main theories of currency crises, a further observation is that these explanations focus on the crisis country only, while international monetary transmission effects are left out of consideration. These, however, are a central parameter for the stability of fixed exchange rate systems, in exchange rate theory as well as in empirical observations.
Altogether, these findings provide the motivation for developing a theoretical approach which integrates the main elements of the different generations of currency crisis theories as well as international monetary transmission. A macroeconomic approach is therefore chosen, applying the concept of the Monetary Conditions Index (MCI), a linear combination of the real interest rate and the real exchange rate. This index is first extended for international monetary influences and called MCIfix. MCIfix illustrates the monetary conditions required for the stability of a fixed exchange rate system; the central assumption of this concept is that uncovered interest parity holds. The main conclusion is that MCIfix depends only on exogenous parameters. In a second step, the analysis integrates the monetary policy requirements for achieving internal macroeconomic stability. By minimizing a social welfare loss function, an MCI is derived which depicts the economically optimal monetary policy, MCIopt. Instability in a fixed exchange rate system occurs as soon as the monetary conditions for the internal and the external balance deviate from each other. To discuss macroeconomic imbalances, the central parameters determining MCIfix (and therefore the relation of MCIfix to MCIopt) are examined: the real interest rate of the anchor country, the real effective exchange rate, and a risk premium. Applying this framework, four constellations in which MCIfix and MCIopt fall apart are discussed in order to show the central bank's possible reactions and the consequences of that behaviour. The discussion shows that the integrative approach manages to incorporate the central elements of the traditional currency crisis theories and that it includes international monetary transmission instead of reducing the discussion to an inconsistent domestic monetary policy.
The theory framework for fixed exchange rates is finally applied in four case studies: the currency crises in Argentina, the crisis in the Czech Republic, the Asian currency crisis and the crisis of the European Monetary System. The case studies show that the developed monetary framework achieves integration of different generations of crises theories and that the monetary policy of the anchor country plays a decisive role in destabilising fixed exchange rate systems.
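The Monetary Conditions Index at the core of this framework is a weighted linear combination of the real interest rate and the real exchange rate. A minimal sketch (the deviation-from-base form and the weight of 3 are common illustrative conventions, not the thesis's exact specification of MCIfix and MCIopt):

```python
def mci(real_rate, real_fx, base_rate=0.0, base_fx=0.0, weight=3.0):
    """Monetary Conditions Index: weighted sum of the deviations of the
    real interest rate and the real (effective) exchange rate from a
    base period. A weight of 3 means a 1 pp interest-rate move affects
    monetary conditions as much as a 3 percent exchange-rate move."""
    return (real_rate - base_rate) + (real_fx - base_fx) / weight


# Tighter conditions: higher real rate plus an appreciated real exchange rate
print(mci(2.0, 3.0))  # 3.0

# The framework's instability signal is the gap between the MCI required
# by the peg (MCIfix) and the domestically optimal MCI (MCIopt);
# the inputs here are invented numbers
mci_fix, mci_opt = mci(2.0, 3.0), mci(0.5, 0.0)
print(mci_fix - mci_opt)  # 2.5
```

When this gap widens, the monetary conditions needed to defend the peg deviate from those optimal for the domestic economy, which is the framework's criterion for instability in a fixed exchange rate system.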
This thesis deals with the economics of innovation. In a general introduction we illustrate how several aspects of competition policy are linked to firms' innovation incentives. In three individual essays we analyze more specific issues. The first essay deals with the interdependencies of mergers and innovation incentives. This is particularly relevant as both topics are central elements of a firm's competitive strategy. The essay focuses on the impact of mergers on innovative activity and competition in the product market. Possible inefficiencies due to organizational problems of mergers are accounted for. We show that optimal investment strategies depend on the resulting market structure and differ significantly between insiders and outsiders. In our linear model, mergers turn out to increase social surplus. The second essay analyzes the different competitive advantages of large and small firms in innovation competition. While large firms typically have better access to product markets, small firms often have a superior research efficiency. These distinct advantages immediately raise the question of cooperation between firms. In our model we allow large firms to acquire small firms. In a pre-contest acquisition game, large firms bid sequentially for small firms in order to combine the respective advantages. Innovation competition is modeled as a patent contest. Sequential bidding allows the first large firm to bid strategically in order to induce a reaction from its competitor. For high efficiencies, large firms prefer to acquire immediately, leading to a symmetric market structure. For low efficiencies, strategic waiting by the first large firm leads to an asymmetric market structure even though the initial situation is symmetric. Furthermore, acquisitions increase the chances of successful innovation. The third essay deals with government subsidies to innovation.
Government subsidies for research and development are intended to promote projects with high social returns whose private returns are too low to attract private investors. Apart from the direct funding of these projects, government grants may serve as a signal of good investments for private investors. We use a simple signaling model to capture this phenomenon and allow for two risk classes. The agency has a preference for high-risk projects as they promise high expected social returns, whereas banks prefer low-risk projects with high private returns. In a setup where the subsidy can only be used to distinguish between high- and low-risk projects, the government agency's signal is not very helpful for banks' investment decisions. However, if the subsidy is accompanied by a quality signal, it may lead to increased or better-selected private investment. The last chapter summarizes the main findings and presents some concluding remarks on the results of the essays.
Subject of the present study is the agent-based computer simulation of Agent Island, a macroeconomic model belonging to the field of monetary theory. Agent-based modeling is an innovative tool that has made much progress in other scientific fields such as medicine or logistics. In economics this tool is quite new, and in monetary theory virtually no agent-based simulation model has been developed to date. It is therefore the topic of this study to close this gap to some extent. Hence, the model integrates, in a straightforward way, not only the common private sectors (i.e. households, consumer goods firms and capital goods firms) but also, as an innovation, a banking system, a central bank and a monetary circuit. The central bank controls the business cycle via an interest rate policy; the corresponding mechanism builds on the seminal idea of Knut Wicksell (natural rate of interest vs. money rate of interest). In addition, the model contains many Keynesian features and a flow-of-funds accounting system in the tradition of Wolfgang Stützel. Importantly, one objective of the study is the validation of Agent Island, meaning that the individual agents (i.e. their rules, variables and parameters) are adjusted in such a way that certain phenomena emerge at the aggregate level. The crucial aspect of the modeling and the validation is therefore the relation between the micro and the macro level: every phenomenon at the aggregate level (e.g. some stylized facts of the business cycle, the monetary transmission mechanism, the Phillips curve relationship, the Keynesian paradox of thrift or the course of the business cycle) emerges out of the individual actions and interactions of the many thousand agents on Agent Island. In contrast to models comprising a representative agent, we do not model at the aggregate level; and in contrast to orthodox GE models, true interaction between heterogeneous agents takes place (e.g. by face-to-face trading).
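The Wicksellian mechanism described above (natural rate vs. money rate of interest) can be illustrated with a deliberately minimal agent-based sketch. All agent rules, parameters and the Phillips-type link below are invented for illustration and are far simpler than the actual Agent Island model:

```python
import random

random.seed(42)  # reproducible run

class CentralBank:
    """Sets the money rate of interest via a simple inflation-targeting rule."""
    def __init__(self, target_inflation=0.02, rate=0.03):
        self.target = target_inflation
        self.rate = rate

    def update(self, inflation):
        # Lean against inflation deviations (illustrative response coefficient).
        self.rate += 0.5 * (inflation - self.target)

class Firm:
    """Invests whenever its individual natural rate exceeds the money rate."""
    def __init__(self):
        self.natural_rate = random.uniform(0.0, 0.06)

    def invests(self, money_rate):
        return self.natural_rate > money_rate

def simulate(periods=50, n_firms=1000):
    cb = CentralBank()
    firms = [Firm() for _ in range(n_firms)]
    inflation_path = []
    for _ in range(periods):
        # Share of firms for which the natural rate exceeds the money rate.
        demand = sum(f.invests(cb.rate) for f in firms) / n_firms
        # Aggregate demand pressure drives inflation (stylised Phillips link).
        inflation = 0.02 + 0.1 * (demand - 0.5)
        cb.update(inflation)
        inflation_path.append(inflation)
    return inflation_path

path = simulate()
```

In this toy economy the interest rate policy stabilises inflation near the target, because raising the money rate above the median natural rate chokes off investment demand; the emergent stabilisation is the macro-level phenomenon arising from individual rules, in the spirit of the validation approach described above.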
The development of free floating exchange rates can hardly be explained by macroeconomic fundamentals, as traditional economic theories suppose. Prominent economists have therefore concluded that there exists an 'exchange rate disconnect puzzle' (see Obstfeld and Rogoff [2000]). The observable exchange rate trends are often attributed to excessive speculative trading by foreign exchange market participants. In this study we deal with psychological factors that may be important for understanding the observable exchange rate movements. Our study thus belongs to the new research field of behavioral economics, which considers the relevance of psychological factors in economic contexts. The main objective of behavioral economists is to develop a more realistic view of actual human behavior in the context of economics. To this end, behavioral economists often refer to the work of behavioral decision theorists, who introduced new concepts under the general heading of bounded rationality. Central to the concept of bounded rationality is the assumption that humans' actual behavior deviates from the ideal of economic rationality for at least two reasons: first, decisions are usually based on an incomplete information basis (limited information) and, second, the information processing of human beings is limited by their computational capacities (limited cognitive resources). Due to these limitations, people are forced to apply simplification mechanisms in information processing. Important simplification mechanisms, which play a decisive role in the process of judgment and decision making, are simple heuristics. Simple heuristics can be characterized as simple rules of thumb that allow quick and efficient decisions even under a high degree of uncertainty. In this study, our aim is to analyze the relevance of simple heuristics in the context of foreign exchange markets.
In our view, the decision situation in foreign exchange markets can serve as a prime example of decision situations in which simple heuristics are especially relevant, as the complexity of the decision situation is very high. The study is organized as follows. In Chapter II, we deal with the exchange rate disconnect puzzle. In particular, we discuss and test the main implications of the traditional economic approach to explaining exchange rate movements. The asset market theory of exchange rate determination implies that exchange rates are mainly driven by the development of macroeconomic fundamentals. Furthermore, the asset market theory assumes that foreign exchange market participants form rational expectations concerning future exchange rate developments and that exchange rates are determined in efficient markets. Overall, the empirical evidence suggests that the traditional approach to explaining exchange rate changes is at odds with the data. Chapter III addresses the existence of long and persistent trends in exchange rate time series. Overall, our empirical analysis reveals that exchange rates show a clear tendency to move in long and persistent trends. Furthermore, we discuss the relevance of speculation in foreign exchange markets. With regard to the impact of speculation, economic theory states that speculation can have either a stabilizing or a destabilizing effect on exchange rates. At the end of Chapter III, we examine the Keynesian view of the functioning of asset markets. In Chapter IV we explore the main insights from the new research field of behavioral economics. A main building block of behavioral economics is the concept of bounded rationality, first introduced by Herbert Simon [1955]. At the centre of the concept of bounded rationality stands a psychological analysis of actual human judgment and decision behavior.
In Chapter IV, we discuss the concept of bounded rationality in detail and illustrate important insights of behavioral decision theories. In particular, we deal with the relevance of simple heuristics in the context of foreign exchange markets. Chapter V provides experimental and empirical evidence for the suggested relevance of simple heuristics in foreign exchange markets. In the first experiment, we deal with human expectation formation. We compare point forecasts of the EUR/USD exchange rate surveyed from professional analysts with experimentally generated point forecasts of students for a simulated exchange rate time series. The results show that the forecasting performance of the two groups differs substantially. Afterwards we analyze the nature of expectation formation of both groups in detail to reveal similarities and differences, which allows us to draw reasonable explanations for the differences in forecasting performance. In the second experiment, we analyze expectation formation in an experimental foreign exchange market. This approach allows us to consider the relevance of expectation feedback, as individuals' expectations directly influence the actual realization of the time series. Thus, Keynes' predictions on the importance of conventions in asset markets can be analyzed. Overall, both experiments reveal that human beings tend to apply simple trend heuristics when forming their expectations about future exchange rates. In the empirical part of Chapter V we deal with the usefulness of such simple trend heuristics in the real world. Their application can only be recommended if simple trend heuristics lead to profits in the specific environment of foreign exchange markets. Thus, we analyze the profitability of simple technical analysis tools in foreign exchange markets. Finally, Chapter VI provides concluding remarks.
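The kind of simple trend heuristic examined in Chapter V can be illustrated with a minimal sketch. The moving-average rule and the window length below are a generic example of technical analysis, not necessarily the exact tools tested in the study:

```python
def trend_signal(prices, window=5):
    """Trend-following heuristic: go long (+1) when the latest price is
    above its recent moving average, short (-1) otherwise."""
    if len(prices) < window:
        return 0  # not enough history yet
    ma = sum(prices[-window:]) / window
    return 1 if prices[-1] > ma else -1

def backtest(prices, window=5):
    """Cumulative profit of the heuristic on a price series (no trading costs)."""
    profit = 0.0
    for t in range(window, len(prices) - 1):
        signal = trend_signal(prices[: t + 1], window)
        profit += signal * (prices[t + 1] - prices[t])
    return profit
```

On a steadily trending series the rule captures the whole move, which mirrors the intuition that trend heuristics can be profitable precisely when exchange rates move in long and persistent trends.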
A theory of managed floating
(2003)
After the experience with the currency crises of the 1990s, a broad consensus has emerged among economists that such shocks can only be avoided if countries that decide to maintain unrestricted capital mobility adopt either independently floating exchange rates or very hard pegs (currency boards, dollarisation). As a consequence of this view, which has been enshrined in the so-called impossible trinity, all intermediate currency regimes are regarded as inherently unstable. As far as economic theory is concerned, this view has the attractive feature that it not only fits the logic of traditional open economy macro models, but also that solid theoretical frameworks have been developed for both corner solutions (independently floating exchange rates with a domestically oriented interest rate policy; hard pegs with a completely exchange rate oriented monetary policy). Above all, the IMF statistics seem to confirm that intermediate regimes are indeed less and less fashionable among both industrial countries and emerging market economies. However, in the last few years an anomaly has been detected which seriously challenges this paradigm on exchange rate regimes. In their influential cross-country study, Calvo and Reinhart (2000) have shown that many of the countries which had declared themselves 'independent floaters' in the IMF statistics were characterised by a pronounced 'fear of floating' and were actually reacting heavily to exchange rate movements, either in the form of an interest rate response or by intervening in foreign exchange markets. The present analysis can be understood as an approach to develop a theoretical framework for this managed floating behaviour which, even though it is widely used in practice, has not attracted much attention in monetary economics.
In particular, we would like to fill the gap that has recently been criticised by one of the few 'middle-ground' economists, John Williamson, who argued that "managed floating is not a regime with well-defined rules" (Williamson, 2000, p. 47). Our approach is based on a standard open economy macro model typically employed for the analysis of monetary policy strategies. The consequences of independently floating and market-determined exchange rates are evaluated in terms of a social welfare function or, to be more precise, in terms of an intertemporal loss function containing a central bank's final targets, output and inflation. We explicitly model the source of the observable fear of floating by questioning the basic assumption underlying most open economy macro models that the foreign exchange market is an efficient asset market with rational agents. We show that both policy reactions to the fear of floating (an interest rate response to exchange rate movements, which we call indirect managed floating, and sterilised interventions in the foreign exchange markets, which we call direct managed floating) can be rationalised if we allow for deviations from the assumption of perfectly functioning foreign exchange markets and if we assume a central bank that takes these deviations into account and acts so as to reach its final targets. In such a scenario, with a high degree of uncertainty about the true model determining the exchange rate, the rationale for indirect managed floating is the monetary policy maker's quest for a robust interest rate policy rule that performs comparatively well across a range of alternative exchange rate models. We show, however, that the strategy of indirect managed floating still bears the risk that the central bank's final targets might be negatively affected by the unpredictability of the true exchange rate behaviour. This is where the second policy measure comes into play.
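An interest rate rule of the "indirect managed floating" type described above can be sketched as a Taylor-type rule augmented by an exchange rate term. The functional form and all coefficients below are illustrative assumptions, not the rule derived in the thesis:

```python
def policy_rate(neutral, inflation_gap, output_gap, fx_change,
                a=1.5, b=0.5, c=0.25):
    """Exchange-rate-augmented interest rate rule: the usual responses to
    the inflation gap and output gap, plus a response to exchange rate
    movements (c > 0 means the bank leans against depreciation)."""
    return neutral + a * inflation_gap + b * output_gap + c * fx_change
```

With `c = 0` this collapses to a standard domestically oriented rule; a positive `c` captures the "fear of floating" response to exchange rate movements.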
The use of sterilised foreign exchange market interventions to counter movements of market-determined exchange rates can be rationalised by a central bank's effort to lower the risk of missing its final targets when it has only a single instrument at its disposal. We provide a theoretical, model-based foundation of a strategy of direct managed floating in which the central bank targets, in addition to a short-term interest rate, the nominal exchange rate. In particular, we develop a rule for the instrument of intervening in the foreign exchange market that is based on the failure of the foreign exchange market to guarantee a reliable relationship between the exchange rate and other fundamental variables.
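The intertemporal loss function referred to above is typically written in the following standard textbook form (the thesis's exact specification may differ):

```latex
% Expected discounted loss in squared deviations of inflation from
% target and of output from potential; \lambda weights the output goal
L_t = E_t \sum_{i=0}^{\infty} \beta^{i}
      \left[ (\pi_{t+i} - \pi^{*})^{2} + \lambda\, y_{t+i}^{2} \right]
```

Direct managed floating then amounts to choosing the intervention instrument, alongside the short-term interest rate, so as to reduce the expected value of these squared deviations when the exchange rate is not reliably tied to fundamentals.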
This study investigates the credit channel in the transmission of monetary policy in Germany by means of a structural analysis of aggregate bank loan data. We base our analysis on a stylized model of the banking firm, which specifies the loan supply decisions of banks in the light of expectations about the future course of monetary policy. Using the model as a guide, we apply a vector error correction model (VECM) in which we identify long-run cointegration relationships that can be interpreted as loan supply and loan demand equations. In this way, the identification problem inherent in reduced-form approaches based on aggregate data is explicitly addressed. The short-run dynamics are explored by means of innovation analysis, which displays the reaction of the variables in the system to a monetary policy shock. The main implication of our results is that the credit channel in Germany appears to be effective, as we find that loan supply effects, in addition to loan demand effects, contribute to the propagation of monetary policy measures.
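The error-correction logic that a VECM estimates can be illustrated with a stylised pure-Python sketch: a single cointegrated pair in which loans adjust toward a long-run relation with output. The simulated series and the adjustment speed of -0.3 are invented for illustration and are unrelated to the German loan data used in the study:

```python
import random

random.seed(1)  # reproducible simulation

T = 500
# Output: a random-walk "fundamental" (stand-in for the drivers of loan demand).
output = [0.0]
for _ in range(T - 1):
    output.append(output[-1] + random.gauss(0, 1))

# Loans: error-correct toward output with adjustment speed -0.3 (assumed).
loans = [0.0]
for t in range(1, T):
    ect = loans[-1] - output[t - 1]  # lagged deviation from the long-run relation
    loans.append(loans[-1] - 0.3 * ect + random.gauss(0, 0.5))

def ols_slope(y, x):
    """Slope of a no-intercept, one-regressor OLS."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# Estimate the adjustment speed: regress the change in loans on the
# lagged deviation from the cointegration relation.
d_loans = [loans[t] - loans[t - 1] for t in range(1, T)]
ect_lag = [loans[t - 1] - output[t - 1] for t in range(1, T)]
alpha = ols_slope(d_loans, ect_lag)  # should recover a value near -0.3
```

In a full VECM the cointegration vector itself is estimated jointly with the adjustment coefficients (e.g. via Johansen's procedure, as implemented in statsmodels' `VECM`); the sketch only isolates the error-correction step that makes the long-run loan supply and demand relations identifiable.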