Betriebswirtschaftliches Institut
The statutory audit of financial statements aims to confirm the reliability of financial reporting. It can therefore make a substantial contribution to a high level of information in the markets. In view of this great economic importance, the German legislator undertakes numerous efforts to ensure high audit quality.
A review of the German Public Accountant Act (Wirtschaftsprüferordnung) shows that the regulatory measures taken target the core of the statutory audit: the members of the profession themselves. Access to the profession of sworn auditors (vereidigte Buchprüfer) has been closed and reopened several times. Furthermore, marked adjustments to the difficulty of the public auditor examination (Wirtschaftsprüfungsexamen) can be observed over time. In addition, special professional duties apply to the statutory audits of public-interest entities. These far-reaching interventions in the freedom of occupational choice and practice have two things in common: they all address the auditor's qualification, and the corresponding legislative changes are mostly justified by a strengthening of audit quality.
It is questionable, however, to what extent these facets of auditor qualification actually influence audit quality. Given the lack of evidence, an empirical study of the German audit market is needed as a first step toward closing the identified research gap.
The aim of this dissertation is therefore to examine the relationship between auditor qualification and audit quality by means of regression analyses. To this end, a unique data set on German private limited companies subject to mandatory audits, containing unconsolidated financial and auditor information for the period 2006-2018 with a total of 217,585 base observations, was collected, cleaned, and prepared. Since audit quality is not directly observable, a distinction is made between perceived and actual audit quality. In this dissertation, perceived audit quality is proxied by the cost of debt and actual audit quality by absolute discretionary accruals.
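For illustration: discretionary accruals are typically estimated as the residuals of an accruals regression. The abstract does not name the specific model, but a commonly used specification is the modified Jones model, estimated cross-sectionally per industry and year:

$$\frac{TA_{it}}{A_{i,t-1}} = \beta_1 \frac{1}{A_{i,t-1}} + \beta_2 \frac{\Delta REV_{it} - \Delta REC_{it}}{A_{i,t-1}} + \beta_3 \frac{PPE_{it}}{A_{i,t-1}} + \varepsilon_{it}$$

where $TA$ denotes total accruals, $A$ total assets, $\Delta REV$ the change in revenues, $\Delta REC$ the change in receivables, and $PPE$ property, plant, and equipment; the absolute residuals $|\hat{\varepsilon}_{it}|$ then serve as the measure of absolute discretionary accruals.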
The results of the main regressions predominantly show that there is no relationship between the measures of auditor qualification and perceived or actual audit quality. The additional and sensitivity analyses support this finding. With regard to the rules on access to the profession, no quality differences can be demonstrated between the professions of public auditors (Wirtschaftsprüfer) and sworn auditors (vereidigte Buchprüfer). Within the profession of public auditors, there is likewise no indication of a quality gap between groups of auditors who faced different examination requirements. With regard to the rules on professional practice, the additional requirements for statutory audits of public-interest entities are not associated with different audit quality at private companies. In the light of improved audit quality, the legislator's regulatory steps in the area of auditor qualification therefore do not necessarily appear justified.
In a world of constant change, uncertainty has become a daily challenge for businesses. Rapidly shifting market conditions highlight the need for flexible responses to unforeseen events. Operations Management (OM) is crucial for optimizing business processes, including site planning, production control, and inventory management. Traditionally, companies have relied on theoretical models from microeconomics, game theory, optimization, and simulation. However, advancements in machine learning and mathematical optimization have led to a new research field: data-driven OM.
Data-driven OM uses real data, especially time series data, to create more realistic models that better capture decision-making complexities. Despite the promise of this new research area, a significant challenge remains: the availability of extensive historical training data. Synthetic data, which mimics real data, has been used to address this issue in other machine learning applications.
Therefore, this dissertation explores how synthetic data can be leveraged to improve decisions for data-driven inventory management, focusing on the single-period newsvendor problem, a classic stochastic optimization problem in inventory management.
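For context, the newsvendor's optimal order quantity is the critical-ratio quantile of the demand distribution; a data-driven variant simply takes the empirical quantile of historical demand. A minimal sketch (illustrative only, with invented cost parameters, not the dissertation's code):

```python
import numpy as np

def newsvendor_quantity(demand_samples, underage_cost, overage_cost):
    """Data-driven newsvendor: order the empirical quantile at the critical ratio."""
    critical_ratio = underage_cost / (underage_cost + overage_cost)
    return np.quantile(demand_samples, critical_ratio)

# Example with hypothetical demand history
rng = np.random.default_rng(0)
demand = rng.gamma(shape=4.0, scale=25.0, size=200)  # simulated past demand
q = newsvendor_quantity(demand, underage_cost=5.0, overage_cost=2.0)
print(f"Order quantity: {q:.1f}")
```

With scarce historical samples the empirical quantile becomes unreliable, which is exactly the gap the synthetic-data approaches below aim to close.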
The first article, "A Meta Analysis of Data-Driven Newsvendor Approaches", presents a standardized evaluation framework for data-driven prescriptive approaches, tested through a numerical study. Findings suggest model performance is not robust, emphasizing the need for a standardized evaluation process.
The second article, "Application of Generative Adversarial Networks in Inventory Management", examines using synthetic data generated by Generative Adversarial Networks (GANs) for the newsvendor problem. This study shows GANs can model complex demand relationships, offering a promising alternative to traditional methods.
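A minimal sketch of the idea, assuming a simple fully connected generator and discriminator on one-dimensional demand data (the article's actual architecture and data are not reproduced here):

```python
import torch
import torch.nn as nn

# Minimal GAN that learns a 1-D demand distribution (illustrative sketch;
# layer sizes, learning rates, and the stand-in data are assumptions).
class Generator(nn.Module):
    def __init__(self, noise_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x)  # logit: real vs. synthetic

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real_demand = torch.distributions.Gamma(4.0, 0.04).sample((512, 1))  # stand-in data

for step in range(2000):
    z = torch.randn(64, 8)
    fake = gen(z)
    # Discriminator update: distinguish real from synthetic demand
    opt_d.zero_grad()
    idx = torch.randint(0, 512, (64,))
    d_loss = (loss_fn(disc(real_demand[idx]), torch.ones(64, 1))
              + loss_fn(disc(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()
    # Generator update: produce samples the discriminator accepts as real
    opt_g.zero_grad()
    g_loss = loss_fn(disc(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

synthetic_demand = gen(torch.randn(1000, 8)).detach()  # augmented training data
```

The generated samples can then feed any data-driven newsvendor method in place of, or in addition to, the scarce real observations.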
The third article, "Combining Synthetic Data and Transfer Learning for Deep Reinforcement Learning in Inventory Management", proposes a method using Deep Reinforcement Learning (DRL) with synthetic and real data through transfer learning. This approach trains a generative model to learn demand distributions, generates synthetic data, and fine-tunes a DRL agent on a smaller real dataset. This method outperforms traditional approaches in controlled and practical settings, though further research is needed to generalize these findings.
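A heavily simplified sketch of the two-phase idea, substituting tabular bandit-style learning for the article's deep RL agent (all distributions and parameters are invented for illustration):

```python
import numpy as np

# Phase 1: pre-train on plentiful synthetic demand; Phase 2: fine-tune on
# scarce real data (transfer learning), here on a one-step inventory problem.
rng = np.random.default_rng(1)
ACTIONS = np.arange(0, 201, 10)   # candidate order quantities
CU, CO = 5.0, 2.0                 # underage / overage cost

def reward(q, d):
    return -(CU * max(d - q, 0) + CO * max(q - d, 0))

def train(q_values, demand_sampler, steps, lr, eps=0.1):
    for _ in range(steps):
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else q_values.argmax()
        r = reward(ACTIONS[a], demand_sampler())
        q_values[a] += lr * (r - q_values[a])   # one-step value update
    return q_values

q = np.zeros(len(ACTIONS))
synthetic = lambda: rng.gamma(4.0, 25.0)        # e.g., samples from a trained GAN
q = train(q, synthetic, steps=20000, lr=0.05)   # pre-training on synthetic data
real_data = rng.gamma(5.0, 22.0, size=100)      # stand-in for scarce real demand
real = lambda: rng.choice(real_data)
q = train(q, real, steps=2000, lr=0.01)         # fine-tuning at a lower rate
print("Chosen order quantity:", ACTIONS[q.argmax()])
```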
The collection at hand is concerned with learning curve effects in hospitals as highly specialized expert organizations and comprises four papers, each focusing on a different aspect of the topic. Three papers deal with surgeons, and one with the emergency room staff in a conservative (non-surgical) treatment.
The preface compactly addresses steadily increasing health care costs and economic pressure, as well as the hospital landscape in Germany and its development. Furthermore, it outlines the DRG lump-sum compensation scheme and the characteristics of the health sector, which is strongly regulated by the state and in which ethical aspects must be omnipresent. It also addresses the benefit of knowing about learning curve effects in order to cut costs while keeping quality stable or even improving it.
The first paper of the collection investigates learning effects in a hospital specialized in endoprosthetics (total hip and knee replacement). Both the specialized and the non-specialized interventions are studied. Costs are not investigated directly; instead, cost indicators are used. The short-term cost indicator is operating room time; the medium- to long-term indicator is quality, operationalized by complications in the post-anesthesia care unit. The study estimates regression models (OLS and logit). The results indicate that specialization comes with advantages due to learning effects, in the form of shorter operating room times and lower complication rates in endoprosthetic interventions. For the non-specialized interventions, the results are the same: there is no evidence of negative effects of specialization on non-specialized surgeries, but rather advantageous spillover effects. Altogether, specialization can be regarded as reasonable, as it cuts the costs of all surgeries in the short, medium, and long term. The authors are Carsten Bauer, Nele Möbs, Oliver Unger, Andrea Szczesny, and Christian Ernst.
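The first three papers share this empirical strategy: OLS for operating room times and logit for complication occurrence. An illustrative sketch on simulated data (not the papers' actual specifications or variables):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated learning-curve data: OR time falls with log experience.
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "experience": rng.integers(1, 300, n),   # cumulative case count per surgeon
    "specialized": rng.integers(0, 2, n),    # specialized intervention flag
})
df["or_time"] = (120 - 10 * np.log(df["experience"])
                 - 5 * df["specialized"] + rng.normal(0, 8, n))  # minutes
p = 1 / (1 + np.exp(1 + 0.3 * np.log(df["experience"])))         # complication prob.
df["complication"] = rng.binomial(1, p)

# Short-term cost indicator: OLS on operating room time
ols = smf.ols("or_time ~ np.log(experience) + specialized", data=df).fit()
# Quality indicator: logit on complication occurrence
logit = smf.logit("complication ~ np.log(experience) + specialized", data=df).fit()
print(ols.params, logit.params, sep="\n")
```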
The second paper focuses on surgeons' learning curve effects in a teamwork versus an individual work setting. The study thus combines learning curve effects with teamwork in health care, an issue increasingly discussed in the recent literature. The investigated interventions are tonsillectomies (surgical excision of the palatine tonsils), a standard intervention. The short-term cost indicator is again operating room time, and the medium- to long-term indicator is complications as a proxy for quality; complications are secondary bleedings, which usually occur a few days after surgery. The study estimates regression models (OLS and logit). The results show that operating room times decrease with increasing surgeon experience. Surgeons who also operate in teams learn faster than those who always operate on their own; consequently, operating room times are shorter for surgeons who also take part in team interventions. As a special feature, the data set contains the costs per case, which makes it possible to verify that the assumed cost indicators are valid. The findings recommend team surgeries especially for resident physicians. The authors are Carsten Bauer, Oliver Unger, and Martin Holderried.
The third paper is dedicated to stapes surgery, a therapy for conductive hearing loss caused by otosclerosis (abnormal bone growth in the middle ear). The procedure is conceptually simple but technically difficult and is therefore regarded as ideally suited for studying learning curve effects in surgery. The paper aims at a comprehensive investigation: operating room times serve as the short-term cost indicator and quality as the medium- to long-term one. To measure quality, the postoperative difference between air and bone conduction thresholds is used, as well as a combination of this difference and the absence of complications. This paper also estimates different regression models (OLS and logit). Besides investigating effects at the department level, the study also considers the individual level; that is, operating room times and quality are investigated for individual surgeons. This improves the comparison of learning curves, as the surgeons worked under widely identical conditions. It becomes apparent that operating room times initially decrease with increasing experience. The marginal effect of additional experience shrinks until the direction of the effect changes and operating room times increase with increasing experience, probably because difficult cases are allocated to the most experienced surgeons. Regarding quality, no learning curve effects are observed. The authors are Carsten Bauer, Johannes Taeger, and Kristen Rak.
The fourth paper is a systematic literature review on learning effects in the treatment of ischemic strokes. In the case of stroke, every minute counts, so there is an inherent need to reduce the time from symptom onset to treatment. The article is concerned with reducing the time from arrival at the hospital to thrombolysis treatment, the so-called "door-to-needle time". The literature contains studies on learning in a broader sense, driven by quality improvement programs, as well as on learning in a narrower sense, in which learning curve effects are evaluated. In addition, studies on time differences between low-volume and high-volume hospitals are considered, as these differences are probably the result of learning and economies of scale. Virtually all of the 165 evaluated articles report improvements in the time to treatment. Furthermore, the clinical results substantiate the common association of shorter times from arrival to treatment with improved clinical outcomes. The review additionally discusses the economic implications of the results. The author is Carsten Bauer.
The postface points out that, after measuring learning curve effects, further efforts are necessary to use them to increase efficiency, as the issue does not admit of easy, standardized solutions. It also emphasizes the importance of multiperspectivity in research for the patient outcome, the health care system, and society.
Recent computing advances are driving the integration of artificial intelligence (AI)-based systems into nearly every facet of our daily lives. AI is becoming a frontier for algorithmic decision-making by mimicking or even surpassing human intelligence. These AI-based systems can thus function as decision support systems (DSSs) that assist experts in high-stakes use cases where human lives are at risk. Yet all that glitters is not gold: the underlying machine learning (ML) models, which apply mathematical and statistical algorithms to autonomously derive nonlinear decision knowledge, bring considerable complexity. One particular subclass of ML models, deep learning models, achieves unsurpassed performance, with the drawback that these models are no longer explainable to humans. This divergence may result in end-users' unwillingness to utilize this type of AI-based DSS, thus diminishing system acceptance.
Hence, the explainable AI (XAI) research stream has gained momentum, as it develops techniques to unravel this black box while maintaining system performance. Unsurprisingly, such XAI techniques are becoming necessary for justifying, evaluating, improving, and managing the utilization of AI-based DSSs. The result is a plethora of explanation techniques, creating an XAI jungle from which end-users must choose. These techniques, in turn, are primarily engineered by developers for developers, without ensuring an actual end-user fit. It thus remains unknown how an end-user's mental model behaves when encountering such explanation techniques.
This cumulative thesis seeks to address this research deficiency by investigating end-user perceptions when encountering intrinsic ML and post-hoc XAI explanations. Drawing on this, the findings are synthesized into design knowledge to enable the deployment of XAI-based DSSs in practice. The thesis comprises six research contributions that follow the iterative and alternating interplay between behavioral science and design science research employed in information systems (IS) research and contribute to the overall research objectives as follows. First, an in-depth study of the impact of transparency and (initial) trust on end-user acceptance is conducted by extending and validating the unified theory of acceptance and use of technology model; it indicates strong but indirect effects of both factors on system acceptance, motivating further research. In particular, this thesis focuses on the overarching concept of transparency. A systematization in the form of a taxonomy and pattern analysis of existing user-centered XAI studies is derived to structure and guide future research endeavors. This enables the empirical investigation of the theoretical trade-off between performance and explainability in intrinsic ML algorithms, which turns out to be less gradual than assumed, fragmenting into three explainability groups. A further empirical investigation of end-users' perceived explainability of post-hoc explanation types shows that local explanation types perform best. Another empirical investigation emphasizes the correlation between comprehensibility and explainability, with results for the assumed correlation that are almost significant (with outliers). The final empirical investigation examines the effect of XAI explanation types on end-user cognitive load and the effect of cognitive load on end-user task performance and task time; it again positions local explanation types as best and demonstrates correlations between cognitive load and task performance as well as between cognitive load and task time. Finally, the last research paper utilizes, inter alia, the obtained knowledge and derives a nascent design theory for XAI-based DSSs. This design theory encompasses (meta-)design requirements, design principles, and design features in a domain-independent and interdisciplinary fashion, including end-users and developers as potential user groups. It is ultimately tested through a real-world instantiation in a high-stakes maintenance scenario.
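To make "post-hoc, local explanation types" concrete: a minimal sketch using LIME on a standard scikit-learn classifier (an assumed stand-in for illustration; the thesis's own study materials and models are not reproduced here):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train an opaque model, then explain one individual prediction locally.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
# Explain why the model classifies one specific instance the way it does
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())  # top local feature contributions shown to the end-user
```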
From an IS research perspective, this cumulative thesis addresses the lack of research on perception and design knowledge for ensuring the utilization of XAI-based DSSs. It lays the foundation for future research to obtain a holistic understanding of end-users' heuristic behaviors during decision-making and to facilitate the acceptance of XAI-based DSSs in operational practice.
This dissertation examines selected aspects of tax avoidance and cross-border taxation. Part B focuses on empirical work on tax avoidance and profit shifting by multinational enterprises, with three individual essays. Part C examines the differential taxation of human and physical capital on the basis of the two fundamental taxation principles, the benefit principle and the ability-to-pay principle. The final essay (Part D) analyzes the postulate of value freedom in the stakeholder approach and uses a case study to show how corporate taxation can be integrated into different stakeholder approaches. A concluding overall appraisal addresses remaining research questions (Part E).
This dissertation thus examines cross-border taxation from a business-economics perspective, from the perspective of taxation principles and legal dogmatics, and from the viewpoint of the philosophy of science.
Companies are expected to act as international players and to use their capabilities to provide customized products and services quickly and efficiently. Today, consumers expect their requirements to be met within a short time and at a favorable price; order-to-delivery lead time has steadily gained in importance for them. Furthermore, governments can use various emission policies to push companies and customers to reduce their greenhouse gas emissions. This thesis investigates the influence of order-to-delivery lead time and different emission policies on the design of a supply chain. To this end, several supply chain design models are developed. The first model incorporates lead times and total costs, and various emission policies are implemented to illustrate the trade-off between the different measures. The second model reflects the influence of consumers who are sensitive to order-to-delivery lead time, again under different emission policies; the analysis shows that the share of lead-time-sensitive consumers has a significant impact on the design of a supply chain. Demand uncertainty and uncertainty in the design of emission policies are investigated with an appropriate robust mathematical optimization model; results show that uncertainty about the design of an emission policy in particular can significantly impact the total cost of a supply chain. The effects of differently designed emission policies in various countries are investigated in the fourth model. The analyses highlight that both lead times and emission policies can strongly influence companies' offshoring and nearshoring strategies.
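A toy sketch of the underlying modeling idea: choose production quantities per site to minimize cost plus a carbon tax, trading off cheaper offshore production against higher transport emissions and a lead-time penalty (all numbers invented; the thesis's models are far richer):

```python
import pulp

sites = ["near", "off"]                   # nearshore vs. offshore production site
cost = {"near": 12.0, "off": 8.0}         # unit production + transport cost
emis = {"near": 1.0, "off": 3.0}          # kg CO2 per unit (longer transport)
lead_penalty = {"near": 0.0, "off": 1.5}  # cost of longer order-to-delivery time
carbon_price = 0.5                        # emission policy: tax per kg CO2
demand = 1000

m = pulp.LpProblem("supply_chain_design", pulp.LpMinimize)
x = {s: pulp.LpVariable(f"qty_{s}", lowBound=0) for s in sites}
m += pulp.lpSum((cost[s] + lead_penalty[s] + carbon_price * emis[s]) * x[s]
                for s in sites)            # total cost incl. emission tax
m += pulp.lpSum(x[s] for s in sites) == demand
m.solve(pulp.PULP_CBC_CMD(msg=False))
for s in sites:
    print(s, x[s].value())
```

In this toy instance, raising carbon_price above 1.25 flips the optimum from offshore to nearshore production, mirroring the thesis's point that emission policies can drive nearshoring strategies.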
Increasing global competition forces organizations to improve their processes to gain a competitive advantage. In the manufacturing sector, this is facilitated by a tremendous digital transformation. Fundamental components of such digitalized environments are process-aware information systems that record the execution of business processes, assist in process automation, and unlock the potential to analyze processes. However, most enterprise information systems focus on informational aspects, process automation, or data collection and do not tap into predictive or prescriptive analytics to foster data-driven decision-making. This dissertation therefore sets out to investigate the design of analytics-enabled information systems in five independent parts, which step-wise introduce analytics capabilities and assess potential opportunities for process improvement in real-world scenarios.
An essential prerequisite for setting up and extending analytics-enabled information systems is identifying success factors, which we do in the context of process mining as a descriptive analytics technique. We combine an established process mining framework with a success model to provide a structured approach for assessing success factors and identifying challenges, motivations, and the perceived business value of process mining, drawing on employees across organizations as well as process mining experts and consultants. Based on the findings, we extend the existing success model and provide lessons for business value generation through process mining. To assist the realization of process-mining-enabled business value, we design an artifact for context-aware process mining that combines standard process logs with additional context information to support the automated identification of process realization paths associated with specific context events. Yet realizing business value remains challenging, as transforming processes based on informational insights is time-consuming.
To overcome this, we showcase the development of a predictive process monitoring system for disruption handling in a production environment. The system leverages state-of-the-art machine learning algorithms for disruption type classification and duration prediction, and combines them with additional organizational data sources and a simple assignment procedure to assist the disruption handling process. Designing such a system and its analytics models is a challenging task, which we address by engineering a five-phase method for predictive end-to-end enterprise process network monitoring leveraging multi-headed deep neural networks. The method facilitates the integration of heterogeneous data sources through dedicated neural network input heads, which are concatenated for a prediction. An evaluation based on a real-world use case highlights the superior performance of the resulting multi-headed network.
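A minimal sketch of the multi-headed idea in Keras, with one input head per hypothetical data source; head names, dimensions, and the regression target are assumptions for illustration:

```python
from tensorflow import keras
from tensorflow.keras import layers

# One input head per heterogeneous data source, concatenated for prediction.
process_head = keras.Input(shape=(20,), name="process_events")
machine_head = keras.Input(shape=(8,), name="machine_sensors")
order_head = keras.Input(shape=(5,), name="order_context")

merged = layers.Concatenate()([
    layers.Dense(32, activation="relu")(process_head),
    layers.Dense(16, activation="relu")(machine_head),
    layers.Dense(8, activation="relu")(order_head),
])
hidden = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(1, name="disruption_duration")(hidden)

model = keras.Model([process_head, machine_head, order_head], out)
model.compile(optimizer="adam", loss="mse")
# model.fit([X_process, X_machine, X_order], y_duration, epochs=10)  # with real data
```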
Even with improved model performance, predictions are not perfect, so decisions about assigning agents to resolve disruptions must be made under uncertainty. Mathematical models can assist here, but under complex real-world conditions the number of potential scenarios increases massively and limits the solvability of assignment models. To overcome this and tap into the potential of prescriptive process monitoring systems, we devise a data-driven approximate dynamic stochastic programming approach that incorporates multiple uncertainties into the assignment decision. The resulting model yields a significant performance improvement and ultimately highlights the particular importance of analytics-enabled information systems for organizational process improvement.
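To illustrate the flavor of such a data-driven, scenario-based decision (a drastic simplification of the dissertation's approach; agents, costs, and distributions are invented):

```python
import numpy as np

# Sample disruption-duration scenarios from data, then pick the agent with
# the lowest expected cost across those scenarios.
rng = np.random.default_rng(3)
agents = {"A": 1.0, "B": 1.4, "C": 0.7}    # speed factor per maintenance agent
penalty = {"A": 5.0, "B": 0.0, "C": 12.0}  # cost of pulling the agent off other work
scenarios = rng.lognormal(mean=3.0, sigma=0.5, size=500)  # sampled durations (min)

def expected_cost(agent):
    # Mean handling time over sampled scenarios plus opportunity cost
    return (scenarios / agents[agent]).mean() + penalty[agent]

best = min(agents, key=expected_cost)
print("Assign disruption to agent:", best)
```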
The world is undergoing a profound transformation from an industrial to a knowledge society. The automation of both physical and cognitive work is shifting labor market demand toward highly qualified employees known as high potentials. Beyond their intelligence, these individuals are characterized by diverse abilities such as empathy, creativity, and problem-solving skills. Human capital is considered the competitive factor of the future, yet companies were already complaining about a shortage of skilled staff and executives at the end of the 20th century, a shortage further aggravated by the pandemic. For this reason, concepts for recruiting and retaining employees are moving into companies' focus.
As ethical and ecological awareness gains importance in the population, it can be assumed that applicants will increasingly prefer responsible employers. Sustainability, or corporate responsibility, thus becomes a competitive factor for attracting and retaining talent. Drawing on the identity-based brand management approach, the thesis develops an understanding of how companies succeed in building a strong employer brand. A conceptual, practical, and empirical investigation using Unilever as a company example analyzes the effects of comprehensive ecological and social engagement on employer attractiveness.
The thesis shows that sustainability, operationalized through the 17 Sustainable Development Goals (SDGs) and anchored in the core of the brand, enables the successful management of an employer brand. This result follows from both the theoretical and the empirical part of the thesis. In the latter, a structural equation model confirmed three general positive relationships: applicants feel attracted to responsible companies and therefore perceive a person-organization fit (P-O fit). This perceived fit with the company increases employer attractiveness from the perspective of potential applicants, which in turn increases the likelihood of an intention to apply and of accepting a job offer. This confirms the assumption that the challenges of recruitment can be met by consistently aligning business activities with sustainability and communicating this credibly through the employer brand.
The digital transformation facilitates new forms of collaboration between companies along the supply chain and between companies and consumers. Besides sharing information on centralized platforms, blockchain technology is often regarded as a potential basis for this kind of collaboration. However, the technology is surrounded by considerable hype due to the rising popularity of cryptocurrencies, decentralized finance (DeFi), and non-fungible tokens (NFTs), which causes potential issues to be overlooked. This thesis therefore aims to investigate, highlight, and address the current weaknesses of blockchain technology: inefficient consensus, privacy, smart contract security, and scalability.
First, to provide a foundation, the four key challenges are introduced, and the research objectives are defined, followed by a brief presentation of the preliminary work for this thesis.
The following four parts highlight the four main problem areas of blockchain. Using big data analytics, we extracted and analyzed the blockchain data of six major blockchains to identify potential weaknesses in their consensus algorithms. To improve smart contract security, we classified smart contract functionalities to identify similarities in structure and design; the resulting taxonomy serves as a basis for future standardization efforts for security-relevant features, such as safe math functions and oracle services. To challenge privacy assumptions, we examined consortium blockchains from an adversary's perspective: we chose four blockchains with misconfigured nodes and extracted as much information from those nodes as possible. Finally, we compared scalability solutions for blockchain applications and developed a decision process that serves as a guideline for developers to improve the scalability of their applications.
Building on the scalability framework, we showcase three potential applications of blockchain technology. First, we develop a token-based approach for inter-company value stream mapping; by relying only on simple tokens instead of complex smart contracts, the computational load on the network is expected to be much lower than with other solutions. The other two solutions offload transactions and computations from the main blockchain. The first uses secure multiparty computation to offload the matching of supply and demand for manufacturing capacities to a trustless network; the transaction is written to the main blockchain only after the match is made. The second uses the concept of payment channel networks to enable high-frequency bidirectional micropayments for WiFi sharing: the host gets paid for every second of data usage through an off-chain channel, and the full payment is written to the blockchain only after the connection to the client is terminated.
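A conceptual sketch of the off-chain channel logic (plain Python, no real blockchain API; class and method names are illustrative):

```python
# Bidirectional payment channel for WiFi sharing: balances are updated
# off-chain every second; only the final state is settled on-chain.
class PaymentChannel:
    def __init__(self, client_deposit, host_deposit):
        self.balances = {"client": client_deposit, "host": host_deposit}
        self.open = True

    def pay_per_second(self, rate):
        """Off-chain micropayment from client to host for one second of data."""
        assert self.open and self.balances["client"] >= rate
        self.balances["client"] -= rate
        self.balances["host"] += rate

    def close(self):
        """Terminate the connection; only this final state goes on-chain."""
        self.open = False
        return dict(self.balances)   # settlement transaction payload

channel = PaymentChannel(client_deposit=100.0, host_deposit=0.0)
for _ in range(30):                  # 30 seconds of WiFi usage
    channel.pay_per_second(rate=0.01)
settlement = channel.close()
print(settlement)                    # final balances after 30 one-cent payments
```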
Finally, the thesis concludes by briefly summarizing and discussing the results and providing avenues for further research.
This dissertation examines three selected reforms, or areas in need of reform, in the German three-pillar pension system:
In the pillar of statutory pension insurance, options are developed for reinstating the catch-up factor (Nachholfaktor) in the statutory pension scheme, which was suspended in 2018. Depending on whether increases in the current pension value caused by the level protection clause (Niveauschutzklausel) are to be offset in future years or not, two different procedures are presented, the Separate Procedure (Getrenntes Verfahren) and the Integrated Procedure (Integriertes Verfahren), into which the catch-up factor fits consistently while the protection clause and the level protection clause are active.
In the pillar of occupational pensions, options for reforming the statutory discount rate of 6% for pension provisions under German tax law are analyzed. The analysis considers the consequences for employers if the discount rate were set to a new value at the legislator's discretion, if it followed a reference rate in a rule-based manner, if tax accounting followed the commercial-law valuation, and if an innovative tranching procedure were introduced. It is then discussed to what extent there is any need for legislative adjustment at all. The impression emerges that the statutory discount rate typifies a return on total capital. The hypothesis that 6% is quite realistic for German companies cannot be rejected.
In the pillar of private pension provision, the thesis derives when, in the case of a Riester-subsidized purchase of residential property, the optimal time comes during the retirement phase for the homeowner to exercise the option of terminating deferred taxation early. Upon early termination, all outstanding amounts are taxed at once, but only at 70%. When this 30% discount becomes advantageous is demonstrated by varying the balance of the housing subsidy account (Wohnförderkonto), pension income, the market interest rate, the start of retirement, survival probabilities, and the taxable share of the pension.
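Stylized, as a rough decision rule (our simplification, ignoring the survival probabilities and detailed tax schedule that the dissertation varies): with an outstanding housing subsidy account balance $W$, $n$ remaining distribution years, market interest rate $i$, and marginal tax rates $\tau_t$, early termination is advantageous roughly when

$$0.7\,\tau_0\,W \;<\; \sum_{t=1}^{n} \frac{\tau_t\,\frac{W}{n}}{(1+i)^{t}},$$

i.e., when the immediate tax on 70% of the balance is lower than the present value of taxing the annual installments $W/n$ in full.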