The microbial communities that live inside the human gastrointestinal tract, the human gut
microbiome, are important for host health and wellbeing. Characterizing this new “organ”,
made up of as many cells as the human body itself, has recently become possible through
technological advances. Metagenomics, the high-throughput sequencing of DNA directly from
microbial communities, enables us to take genomic snapshots of thousands of microbes living
together in this complex ecosystem, without the need for isolating and growing them.
Quantifying the composition of the human gut microbiome allows us to investigate its
properties and connect it to host physiology and disease. The wealth of such connections was
unexpected and is probably still underestimated. Because most of our dietary and medicinal
intake affects the microbiome, and because the microbiome itself interacts with our
immune system through a multitude of pathways, many mechanisms have been proposed to
explain the observed correlations, though most have yet to be understood in depth.
An obvious prerequisite to characterizing the microbiome and its interactions with the host is
the accurate quantification of its composition, i.e. determining which microbes are present and
in what numbers they occur. Standard practices for sample handling,
DNA extraction and data analysis have existed for many years. However, these were generally developed for
single-microbe cultures, and it is not always feasible to implement them in large-scale
metagenomic studies. Partly because of this and partly because of the excitement that new
technology brings about, the first metagenomic studies each took the liberty to define their own
approach and protocols. From early meta-analyses of these studies it became clear that the
differences in sample handling, as well as differences in computational approaches, made
comparisons across studies very difficult. This restricts our ability to cross-validate findings of
individual studies and to pool samples from larger cohorts. To address the pressing need for
standardization, we undertook an extensive comparison of 21 different DNA extraction methods
as well as a series of other sample manipulations that affect quantification. We developed a
number of criteria for determining the measurement quality in the absence of a mock
community and used these to propose best practices for sampling, DNA extraction and library
preparation. If these were to be accepted as standards in the field, it would greatly improve
comparability across studies, which would dramatically increase the power of our inferences
and our ability to draw general conclusions about the microbiome.
Most metagenomics studies involve comparisons between microbial communities, for example
between fecal samples from cases and controls. A multitude of approaches have been proposed
to calculate community dissimilarities (beta diversity) and they are often combined with
various preprocessing techniques. Direct metagenomics quantification usually counts
sequencing reads mapped to specific taxonomic units, which can be species, genera, etc. Due to
technology-inherent differences in sampling depth, normalizing counts is necessary, for
instance by dividing each count by the sum of all counts in a sample (i.e. total sum scaling), or by
subsampling. To derive a single value for community (dis-)similarity, multiple distance
measures have been proposed. Although it is theoretically difficult to benchmark these
approaches, we developed a biologically motivated framework in which distance measures can
be evaluated. This highlights the importance of data transformations and their impact on the
measured distances.
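As a minimal illustration of the preprocessing described above, the following sketch normalizes a small count matrix by total sum scaling and computes the Bray-Curtis dissimilarity, one commonly used community distance measure (the count data and function names are our own, for illustration only):

```python
import numpy as np

def total_sum_scaling(counts):
    """Normalize each sample's read counts to relative abundances."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two relative-abundance profiles."""
    return np.abs(a - b).sum() / (a + b).sum()

# Two samples with different sequencing depths but identical composition
sample1 = [100, 300, 600]   # read counts per taxonomic unit
sample2 = [10, 30, 60]
rel = total_sum_scaling([sample1, sample2])
print(bray_curtis(rel[0], rel[1]))  # prints 0.0: identical after normalization
```

The choice of normalization matters: without it, the raw counts of these two samples would appear very different even though the underlying communities are compositionally the same.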
Building on our experience with accurate abundance estimation and data preprocessing
techniques, we can now try to understand some of the basic properties of microbial
communities. In 2011, it was proposed that the space of genus level variation of the human gut
microbial community is structured into three basic types, termed enterotypes. These were
described in a multi-country cohort and found to be independent of geography, age and other host
properties. Operationally defined through a clustering approach, they are “densely populated
areas in a multidimensional space of community composition”(source) and were proposed as a
general stratifier for the human population. Later studies that applied this concept to other
datasets raised concerns about the optimum number of clusters and robustness of the
clustering approach. This heralded a long-standing debate about the existence of structure and
the best ways to determine and capture it. Here, we reconsider the concept of enterotypes, in
the context of the vastly increased amounts of available data. We propose a refined framework
in which the different types should be thought of as weak attractors in compositional space and
we try to implement an approach to determining which attractor a sample is closest to. To this
end, we train a classifier on a reference dataset to assign membership to new samples. This way,
enterotype assignment is no longer dataset-dependent and effects due to biased sampling are
minimized. Using a model in which we assume the existence of three enterotypes characterized
by the same driver genera, as originally postulated, we show the relevance of this stratification
and propose it to be used in a clinical setting as a potential marker for disease development.
Moreover, we believe that these attractors underlie different rules of community assembly and
we recommend they be accounted for when analyzing gut microbiome samples.
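The reference-based assignment idea can be sketched as a nearest-centroid classifier. The centroids below are invented for illustration and merely stand in for a model trained on a large reference cohort; the driver genera follow the original three-enterotype description:

```python
import numpy as np

# Hypothetical reference centroids (relative abundances), columns:
# Bacteroides, Prevotella, Ruminococcus, other. Values are illustrative.
REFERENCE_CENTROIDS = {
    "ET-Bacteroides":  np.array([0.60, 0.05, 0.10, 0.25]),
    "ET-Prevotella":   np.array([0.10, 0.55, 0.10, 0.25]),
    "ET-Ruminococcus": np.array([0.15, 0.05, 0.50, 0.30]),
}

def assign_enterotype(sample):
    """Assign a new sample to the nearest reference attractor.
    A real classifier would be trained on a reference dataset;
    plain Euclidean distance is used here for simplicity."""
    sample = np.asarray(sample, dtype=float)
    sample = sample / sample.sum()          # total sum scaling
    return min(REFERENCE_CENTROIDS,
               key=lambda et: np.linalg.norm(sample - REFERENCE_CENTROIDS[et]))

print(assign_enterotype([55, 8, 12, 25]))   # prints ET-Bacteroides
```

Because the centroids are fixed once, assigning a new sample does not depend on which other samples happen to be in the dataset, which is the point of the reference-based approach.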
While enterotypes describe structure in the community at genus level, metagenomic sequencing
can in principle achieve single-nucleotide resolution, allowing us to identify single nucleotide
polymorphisms (SNPs) and other genomic variants in the gut microbiome. Analysis
methodology for this level of resolution has only recently been developed and little exploration
has been done to date. Assessing SNPs in a large, multinational cohort, we discovered that the
landscape of genomic variation seems highly structured even beyond species resolution,
indicating that clearly distinguishable subspecies are prevalent among gut microbes. In several
cases, these subspecies exhibit geo-stratification, with some subspecies only found in the
Chinese population. Generally however, they present only minor dispersion limitations and are
seen across most of our study populations. Within one individual, one subspecies is commonly
found to dominate and only rarely are several subspecies observed to co-occur in the same
ecosystem. Analysis of longitudinal data indicates that the dominant subspecies remains stable
over periods of more than three years. When interrogating their functional properties we find
many differences, with specific ones appearing relevant to the host. For example, we identify a
subspecies of E. rectale that lacks the flagellum operon and find its presence to be
significantly associated with lower body mass index and lower insulin resistance in its hosts;
it also correlates with higher microbial community diversity. These associations could not be
seen at the species level (where multiple subspecies are conflated), which illustrates the
importance of this increased resolution for a more comprehensive understanding of microbial
interactions within the microbiome and with the host.
Taken together, our results provide a rigorous basis for performing comparative metagenomics
of the human gut, encompassing recommendations for both experimental sample processing
and computational analysis. We furthermore refine the concept of community stratification into
enterotypes, develop a reference-based approach for enterotype assignment and provide
compelling evidence for their relevance. Lastly, by harnessing the full resolution of
metagenomics, we discover a highly structured genomic variation landscape below the
microbial species level and identify common subspecies of the human gut microbiome. By
developing these high-precision metagenomics analysis tools, we thus hope to contribute to a
greatly improved understanding of the properties and dynamics of the human gut microbiome.
The present thesis considers the development and analysis of arbitrary Lagrangian-Eulerian
discontinuous Galerkin (ALE-DG) methods with time-dependent approximation spaces for
conservation laws and the Hamilton-Jacobi equations.
Fundamentals about conservation laws, Hamilton-Jacobi equations and discontinuous Galerkin
methods are presented. In particular, issues in the development of discontinuous Galerkin (DG)
methods for the Hamilton-Jacobi equations are discussed.
The development of the ALE-DG methods is based on the assumption that the distribution of
the grid points is explicitly given for an upcoming time level. This assumption allows the construction of a time-dependent local affine linear mapping to a reference cell and a time-dependent
finite element test function space. In addition, a version of Reynolds’ transport theorem can be
proven.
For the fully-discrete ALE-DG method for nonlinear scalar conservation laws the geometric
conservation law and a local maximum principle are proven. Furthermore, conditions for slope
limiters are stated. These conditions ensure the total variation stability of the method. In addition, entropy stability is discussed. For the corresponding semi-discrete ALE-DG method,
error estimates are proven. If a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell, the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence for monotone fluxes and the optimal $(k+1)$ convergence for an upwind flux are proven in the $\mathrm{L}^{2}$-norm. The capability of the method is shown by numerical examples for nonlinear conservation laws.
Likewise, for the semi-discrete ALE-DG method for nonlinear Hamilton-Jacobi equations, error
estimates are proven. In the one dimensional case the optimal $\left(k+1\right)$ convergence and in the two dimensional case the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence are proven in the $\mathrm{L}^{2}$-norm, if a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell. For the fully-discrete method, the geometric conservation law is proven, and for the piecewise constant forward Euler step the convergence of the method to the unique physically relevant solution is discussed.
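Convergence rates of this kind are typically checked numerically by computing the empirical order of convergence from errors on successively refined grids. A small sketch, using hypothetical error values that happen to follow the optimal rate for $k = 2$:

```python
import math

def empirical_order(h_coarse, e_coarse, h_fine, e_fine):
    """Empirical order of convergence p from errors on two grids:
    e ~ C * h^p  =>  p = log(e_coarse / e_fine) / log(h_coarse / h_fine)."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Hypothetical L2 errors behaving like h^(k+1) with k = 2 (optimal rate):
h = [0.1, 0.05, 0.025]
e = [2.0e-4, 2.5e-5, 3.125e-6]
for i in range(len(h) - 1):
    print(empirical_order(h[i], e[i], h[i + 1], e[i + 1]))  # prints 3.0 each time
```

An observed order near $k + \frac{1}{2}$ on such a table would instead be consistent with the sub-optimal estimate for monotone fluxes.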
Gambling is a popular activity in Germany, with 40% of a representative sample reporting having gambled at least once in the past year (Bundeszentrale für gesundheitliche Aufklärung, 2014). While the majority of gamblers show harmless gambling behavior, a subset develops serious problems due to their gambling, affecting their psychological well-being, social life and work. According to recent estimates, up to 0.8% of the German population are affected by such pathological gambling. People in general, and pathological gamblers in particular, show several cognitive distortions in gambling, that is, misconceptions about the chances of winning and the involvement of skill. The current work aimed at elucidating the biopsychological basis of two such kinds of cognitive distortions, the illusion of control and the gambler’s and hot hand fallacies, and their modulation by gambling problems. To this end, four studies were conducted assessing the processing of near outcomes (used as a proxy for the illusion of control) and outcome sequences (used as a proxy for the gambler’s and hot hand fallacies) in samples of varying degrees of gambling problems, using a multimethod approach.
The first study analyzed the processing and evaluation of near outcomes as well as choice behavior in a wheel of fortune paradigm using electroencephalography (EEG). To assess the influence of gambling problems, a group of problem gamblers was compared to a group of controls. The results showed that there were no differences in the processing of near outcomes between the two groups. Near compared to full outcomes elicited smaller P300 amplitudes. Furthermore, at a trend level, the choice behavior of participants showed signs of a pattern opposite to the gambler’s fallacy, with longer runs of an outcome color leading to increased probabilities of choosing this color again on the subsequent trial. Finally, problem gamblers showed smaller feedback-related negativity (FRN) amplitudes relative to controls.
The second study also targeted the processing of near outcomes in a wheel of fortune paradigm, this time using functional magnetic resonance imaging and a group of participants with varying degrees of gambling problems. The results showed increased activity in the bilateral superior parietal cortex following near compared to full outcomes.
The third study examined the peripheral physiology reactions to near outcomes in the wheel of fortune. Heart period and skin conductance were measured while participants with varying degrees of gambling problems played on the wheel of fortune. Near compared to full outcomes led to increased heart period duration shortly after the outcome. Furthermore, heart period reactions and skin conductance responses (SCRs) were modulated by gambling problems. Participants with high relative to low levels of gambling problems showed increased SCRs to near outcomes and similar heart period reactions to near outcomes and full wins.
The fourth study analyzed choice behavior and sequence effects in the processing of outcomes in a coin toss paradigm using EEG in a group of problem gamblers and controls. Again, problem gamblers showed generally smaller FRN amplitudes compared to controls. There were no differences between groups in the processing of outcome sequences. The break of an outcome streak led to increased power in the theta frequency band. Furthermore, the P300 amplitude was increased after a sequence of previous wins. Finally, problem gamblers compared to controls showed a trend of switching the outcome symbol relative to the previous outcome symbol more often.
In sum, the results point towards differences in the processing of near compared to full outcomes in brain areas and measures implicated in attentional and salience processes. The processing of outcome sequences involves processes of salience attribution and violation of expectations. Furthermore, problem gamblers seem to process near outcomes as more win-like compared to controls. The results and their implications for problem gambling as well as further possible lines of research are discussed.
Spermiogenesis describes the differentiation of haploid germ cells into motile, fertilization-competent spermatozoa. During this fundamental transition the species-specific sperm head is formed, which necessitates profound nuclear restructuring coincident with the assembly of sperm-specific structures and chromatin compaction. In the case of the mouse, it is characterized by reshaping of the early round spermatid nucleus into an elongated sickle-shaped sperm head. This tremendous shape change requires the transduction of cytoskeletal forces onto the nuclear envelope (NE) or even further into the nuclear interior. LINC (linkers of nucleoskeleton and cytoskeleton) complexes might be involved in this process, due to their general function in bridging the NE and thereby physically connecting the nucleus to the peripheral cytoskeleton.
LINC complexes consist of inner nuclear membrane integral SUN-domain proteins and outer nuclear membrane KASH-domain counterparts. SUN- and KASH-domain proteins are directly connected to each other within the perinuclear space, and are thus capable of transferring forces across the NE. To date, these protein complexes are known for their essential functions in nuclear migration, anchoring and positioning of the nucleus, and even for chromosome movements and the maintenance of cell polarity and nuclear shape.
In this study, LINC complexes were investigated with regard to their potential role in sperm head formation, in order to gain further insight into the processes occurring during spermiogenesis. To this end, the behavior and function of the testis-specific SUN4 protein was studied. The SUN-domain protein SUN4, which had received only limited characterization prior to this work, was found to be exclusively expressed in haploid stages during germ cell development. In these cell stages, it specifically localized to the posterior NE at regions decorated by the manchette, a spermatid-specific structure previously shown to be involved in nuclear shaping. Mice deficient for SUN4 exhibited severely disorganized manchette residues and gravely misshapen sperm heads. These defects resulted in a globozoospermia-like phenotype and male infertility. SUN4 was thus found to be mandatory not only for the correct assembly and anchorage of the manchette, but also for the correct localization of SUN3 and Nesprin1, as well as of other NE components. Interaction studies revealed that SUN4 can interact with SUN3, Nesprin1, and itself, and is thus likely to build functional LINC complexes that anchor the manchette and transfer cytoskeletal forces onto the nucleus.
Taken together, the severe impact of SUN4 deficiency on the nucleocytoplasmic junction during sperm development provided direct evidence for a crucial role of SUN4 and other LINC complex components in mammalian sperm head formation and fertility.
Software frameworks for Realtime Interactive Systems (RIS), e.g., in the areas of Virtual, Augmented, and Mixed Reality (VR, AR, and MR) or computer games, facilitate a multitude of functionalities by coupling diverse software modules. In this context, no uniform methodology for coupling these modules exists; instead, various purpose-built solutions have been proposed. As a consequence, important software qualities, such as maintainability, reusability, and adaptability, are impeded.
Many modern systems provide additional support for the integration of Artificial Intelligence (AI) methods to create so-called intelligent virtual environments. These methods further exacerbate the above-mentioned problem of coupling software modules in the resulting Intelligent Realtime Interactive Systems (IRIS). This is due, on the one hand, to the specialized data structures and asynchronous execution schemes commonly applied and, on the other, to the requirement for high consistency between content-wise coupled but functionally decoupled forms of data representation.
This work proposes an approach to decoupling software modules in IRIS, which is based on the abstraction of architecture elements using a semantic Knowledge Representation Layer (KRL). The layer facilitates decoupling the required modules, provides a means for ensuring interface compatibility and consistency, and in the end constitutes an interface for symbolic AI methods.
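The idea of decoupling modules through a shared semantic layer can be sketched as a simple publish/subscribe knowledge store. The class, method, and key names below are illustrative only, not the architecture developed in this work:

```python
# Minimal sketch of a semantic Knowledge Representation Layer (KRL):
# modules never call each other directly; they assert and query facts
# under shared semantic keys, and subscribers are notified on change.
class KnowledgeLayer:
    def __init__(self):
        self._facts = {}          # semantic key -> current value
        self._subscribers = {}    # semantic key -> list of callbacks

    def assert_fact(self, concept, value):
        """Publish a fact; all modules subscribed to the concept are notified."""
        self._facts[concept] = value
        for callback in self._subscribers.get(concept, []):
            callback(value)

    def query(self, concept):
        """Symbolic AI methods can query the current world state."""
        return self._facts.get(concept)

    def subscribe(self, concept, callback):
        self._subscribers.setdefault(concept, []).append(callback)

# A renderer and an AI module stay decoupled: they share only the key.
krl = KnowledgeLayer()
krl.subscribe("avatar/position", lambda pos: print("AI module sees", pos))
krl.assert_fact("avatar/position", (1.0, 0.0, 3.0))  # prints: AI module sees (1.0, 0.0, 3.0)
```

Neither module holds a reference to the other; swapping the renderer or the AI method leaves the rest of the system untouched, which is the maintainability gain the abstraction aims at.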
Adjuvants are compounds added to an agrochemical spray formulation to improve or modify the action of an active ingredient (AI) or the physico-chemical characteristics of the spray liquid. Adjuvants can have more than one distinct mode of action (MoA) during the foliar spray application process, and they are generally regarded as the best tools to improve agrochemical formulations. The main objective of this work was to elucidate the basic MoA of adjuvants by uncoupling different aspects of the spray application. Laboratory experiments, beginning with retention and spreading characteristics, followed by humectant effects concerning the spray deposit on the leaf surface and ultimately the cuticular penetration of an AI, were carried out to evaluate the overall in vivo effects of adjuvants, which were also assessed in a greenhouse spray test. For this comprehensive study, the surfactant classes of non-ionic sorbitan esters (Span), polysorbates (Tween) and oleyl alcohol polyglycol ether (Genapol O) were considered because of their common promoting potential in agrochemical formulations and their structural diversity.
The reduction of interfacial tension is one of the most crucial physico-chemical properties of surfactants. The dynamic surface tension (DST) was monitored to characterise the surface tension lowering behaviour which is known to influence the droplet formation and retention characteristics. The DST is a function of time and the critical time frame of droplet impact might be at about 100 ms. None of the selected surfactants were found to lower the surface tension sufficiently during this short timeframe (chapter I). At ca. 100 ms, Tween 20 resulted in the lowest DST value. When surfactant monomers are fully saturated at the droplet-air interface, an equilibrium surface tension (STeq) value can be determined which may be used to predict spreading or run-off effects. The majority of selected surfactants resulted in a narrow distribution of STeq values, ranging between 30 and 45 mN m⁻¹. Nevertheless, all surfactants were able to decrease the surface tension considerably compared to pure water (72 mN m⁻¹). The influence of different surfactants on the wetting process was evaluated by studying time-dependent static contact angles on different surfaces and the droplet spread area on Triticum aestivum leaves after water evaporation. The spreading potential was observed to be better for Spans than for Tweens. Especially Span 20 showed maximum spreading results. To transfer laboratory findings to spray application, related to field conditions, retention and leaf coverage was measured quantitatively on wheat leaves by using a variable track sprayer. Since the retention process involves short time dynamics, it is well-known that the spray retention on a plant surface is not correlated to STeq but to DST values. The relationship between DST at ca. 100 ms and results from the track sprayer showed increasing retention results with decreasing DST, whereas at DST values below ca. 60 mN m⁻¹ no further retention improvement could be observed.
Under field conditions, water evaporates from the droplet within a few seconds to minutes after droplet deposition on the leaf surface. Since precipitation of the AI must be avoided by holding the AI in solution, so-called humectants are used as tank-mix adjuvants. The ability of pure surfactants to absorb water from the surrounding atmosphere was investigated comprehensively by analysing water sorption isotherms (chapter II). These isotherms showed an exponential shape with a steep water sorption increase starting at 60% to 70% RH. Water sorption was low for Spans and much more distinct for the polyethoxylated surfactants (Tweens and the Genapol O series). The relationship between the water sorption behaviour and the molecular structure of surfactants was considered as the so-called humectant activity. With increasing ethylene oxide (EO) content, the humectant activity increased within the Genapol O class. However, it could be shown that the moisture absorption across all classes of selected surfactants correlates better with their hydrophilic-lipophilic balance values than with the EO content.
All aboveground organs of plants are covered by the cuticular membrane, which is therefore the first rate-limiting barrier for AI uptake. In vitro penetration experiments through an astomatous model cuticle were performed to study the effects of adjuvants on the penetration of the lipophilic herbicide Pinoxaden (PXD) (chapter III). In order to understand the influence of different adjuvant MoA, such as humectancy, experiments were performed under three different humidity levels. No explicit relationship could be found between humidity levels and PXD penetration, which might be explained by the fact that humidity effects would affect hydrophilic AIs rather than lipophilic ones. Especially for Tween 20, it became obvious that a complex balance between multiple MoA, such as spreading, humectancy and plasticising effects, has to be considered.
Greenhouse trials, focussing on the adjuvant impact on the in vivo action of PXD, were evaluated on five different grass-weed species (chapter III). Since agrochemical spray application and its subsequent action on living plants also include translocation processes in planta and species-dependent physiological effects, this investigation may help to simulate the situation in the field. Even though the absolute weed damage differed, depending both on plant species and on PXD rates, adjuvant effects in greenhouse experiments displayed the same ranking as in cuticular penetration studies: Tween 20 > Tween 80 > Span 20 ≥ Span 80.
Thus, the present work shows for the first time that findings obtained in laboratory experiments can be successfully transferred to spray application studies on living plants concerning adjuvant MoA. A comparative analysis using radar charts demonstrated systematic relationships between the structural similarities of adjuvants and their MoA (summarising discussion and outlook). For example, Tween 20 and Tween 80 cover a wide range of selected variables while having no outstanding MoA improving one distinct process during foliar application, compared to the non-ethoxylated Span 20 and Span 80, which primarily revealed a surface-active action. Most adjuvants used in this study represent polydisperse mixtures bearing a complex distribution of EO and aliphatic chains. From this study it appears that adjuvants with a wide EO distribution offer broader potential than adjuvants with a narrow EO distribution. One might speculate that, due to this broad distribution of single molecules, each bearing its individual specific physico-chemical nature, a wide range of properties concerning their MoA is covered.
Mathematical modelling, simulation, and optimisation are core methodologies for future
developments in engineering, natural, and life sciences. This work aims at applying these
mathematical techniques in the field of biological processes with a focus on the wine
fermentation process that is chosen as a representative model.
In the literature, basic models for the wine fermentation process consist of a system of
ordinary differential equations. They model the evolution of the yeast population number
as well as the concentrations of assimilable nitrogen, sugar, and ethanol. In this thesis,
the concentration of molecular oxygen is also included in order to model the change of
the metabolism of the yeast from an aerobic to an anaerobic one. Further, a more sophisticated
toxicity function is used. It provides simulation results that match experimental
measurements better than a linear toxicity model. Moreover, a further equation for the
temperature plays a crucial role in this work as it opens a way to influence the fermentation
process in a desired way by changing the temperature of the system via a cooling
mechanism. From the view of the wine industry, it is necessary to cope with large scale
fermentation vessels, where spatial inhomogeneities of concentrations and temperature
are likely to arise. Therefore, a system of reaction-diffusion equations is formulated in
this work, which acts as an approximation for a model including computationally very
expensive fluid dynamics.
In addition to the modelling issues, an optimal control problem for the proposed
reaction-diffusion fermentation model with temperature boundary control is presented
and analysed. Variational methods are used to prove the existence of unique weak solutions
to this non-linear problem. In this framework, it is possible to exploit the Hilbert
space structure of state and control spaces to prove the existence of optimal controls.
Additionally, first-order necessary optimality conditions are presented. They characterise
controls that minimise an objective functional with the purpose to minimise the final
sugar concentration. A numerical experiment shows that the final concentration of sugar
can be reduced by a suitably chosen temperature control.
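A minimal sketch of the kind of ODE model described above, with state variables for yeast biomass, assimilable nitrogen, sugar, and ethanol. All rate constants and the simple linear toxicity term are illustrative placeholders, not the thesis model, which uses a more sophisticated toxicity function and additionally includes oxygen and temperature:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not fitted) growth, half-saturation, toxicity and yield constants
mu_max, K_N, K_S, k_tox = 0.5, 0.2, 10.0, 0.02
Y_XN, Y_XS, Y_ES = 20.0, 0.1, 0.45

def rhs(t, y):
    X, N, S, E = y                      # biomass, nitrogen, sugar, ethanol
    growth = mu_max * (N / (K_N + N)) * (S / (K_S + S)) * X
    dX = growth - k_tox * E * X         # linear toxicity placeholder
    dN = -growth / Y_XN                 # nitrogen consumed for growth
    dS = -growth / Y_XS                 # sugar consumed for growth
    dE = -Y_ES * dS                     # ethanol produced from consumed sugar
    return [dX, dN, dS, dE]

# Ferment for 100 time units from X=0.1, N=1, S=200, E=0 (illustrative units)
sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 1.0, 200.0, 0.0], max_step=0.1)
X, N, S, E = sol.y[:, -1]
print(f"final sugar: {S:.1f} g/L, ethanol: {E:.1f} g/L")
```

Adding a temperature equation with boundary control, as in the thesis, would make the rate constants temperature-dependent and turn the choice of cooling profile into the control variable of the optimisation problem.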
The second part of this thesis deals with the identification of an unknown function
that participates in a dynamical model. For models with ordinary differential equations,
where parts of the dynamic cannot be deduced due to the complexity of the underlying
phenomena, a minimisation problem is formulated. By minimising the deviations of simulation
results and measurements the best possible function from a trial function space
is found. The analysis of this function identification problem covers the proof of the
differentiability of the function–to–state operator, the existence of minimisers, and the
sensitivity analysis by means of the data–to–function mapping. Moreover, the presented
function identification method is extended to stochastic differential equations. Here, the
objective functional consists of the difference of measured values and the statistical expected
value of the stochastic process solving the stochastic differential equation. Using a
Fokker-Planck equation that governs the probability density function of the process, the
probabilistic problem of simulating a stochastic process is recast as a deterministic partial
differential equation. Proofs of unique solvability of the forward equation, the existence of
minimisers, and first-order necessary optimality conditions are presented. The application
of the function identification framework to the wine fermentation model aims at finding
the shape of the toxicity function and is carried out for the deterministic as well as the
stochastic case.
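The minimisation idea behind the function identification can be sketched on a toy problem: recover an unknown right-hand-side function of an ODE from measurements by searching a polynomial trial space for the coefficients that minimise the simulation-measurement misfit. The dynamics, data, and trial space below are invented for illustration; the thesis applies the framework to the toxicity function of the fermentation model:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_meas = np.linspace(0.0, 2.0, 21)
x_meas = 1.0 / (1.0 + t_meas)        # "measurements": exact solution of x' = -x^2

def simulate(coeffs):
    """Solve x' = -f(x) for a trial function f from the polynomial space."""
    f = np.polynomial.Polynomial(coeffs)
    sol = solve_ivp(lambda t, x: -f(x[0]), (0.0, 2.0), [1.0],
                    t_eval=t_meas, rtol=1e-8, atol=1e-10)
    if not sol.success or sol.y.shape[1] != t_meas.size:
        return np.full_like(t_meas, 1e3)   # penalize failed integrations
    return sol.y[0]

# Best trial function = coefficients minimizing the misfit to the data
res = least_squares(lambda c: simulate(c) - x_meas, x0=np.zeros(3))
print(np.round(res.x, 3))            # close to [0, 0, 1], i.e. f(x) ≈ x^2
```

In the stochastic setting described above, the simulated trajectory would be replaced by the expected value of the process, obtained from the Fokker-Planck equation rather than by sampling.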
Small satellites contribute significantly to the rapidly evolving innovation in space engineering, in particular in distributed space systems for global Earth observation and communication services. Significant mass reduction through miniaturization, increased utilization of commercial high-tech components, and in particular standardization are the key drivers for modern miniature space technology.
This thesis addresses key fields in research and development on miniature satellite technology regarding efficiency, flexibility, and robustness. Here, these challenges are addressed by the University of Wuerzburg’s advanced pico-satellite bus, realizing a generic modular satellite architecture and standardized interfaces for all subsystems. The modular platform ensures reusability, scalability, and increased testability due to its flexible subsystem interface which allows efficient and compact integration of the entire satellite in a plug-and-play manner.
Besides systematic design for testability, a high degree of operational robustness is achieved by consistently implementing redundancy for crucial subsystems, combined with efficient fault detection, isolation, and recovery mechanisms. Thus, the UWE-3 platform, and in particular its on-board data handling system and electrical power system, offers one of the most efficient pico-satellite architectures launched in recent years and provides a solid basis for future extensions.
The in-orbit performance results of the pico-satellite UWE-3 are presented, summarizing successful operations since its launch in 2013. Several software extensions and adaptations have been uploaded to UWE-3, increasing its capabilities. The result is a very flexible, flight-proven platform for in-orbit software experiments and for the evaluation of innovative concepts.
Amyotrophic lateral sclerosis and spinal muscular atrophy are the two most common motoneuron diseases. Both are characterized by destabilization of axon terminals, axon degeneration, and alterations in the neuronal cytoskeleton. Accumulation of neurofilaments has been observed in several neurodegenerative diseases, but the mechanisms by which elevated neurofilament levels destabilize axons have so far been unknown. Here, I show that increased neurofilament expression in motor nerves of pmn mutant mice disturbs microtubule dynamics. Depletion of neurofilament by Nefl knockout increases the number and regrowth of microtubules in pmn mutant motoneurons and restores axon elongation. This effect is mediated by the interaction of neurofilament with the stathmin complex: depletion of neurofilament increases the stathmin-Stat3 interaction and stabilizes the microtubules. Consequently, axonal maintenance is improved and the pmn mutant mice survive longer. We propose that this mechanism could also be relevant for other neurodegenerative diseases in which neurofilament accumulation is a prominent feature.
Next, using the Smn-/-;SMN2 mouse as a model, the molecular mechanism behind synapse loss in SMA is studied. SMA is characterized by degeneration of lower α-motoneurons in the spinal cord; however, how reduction of the ubiquitously expressed SMN protein leads to motoneuron-specific degeneration remains unclear. SMN is involved in pre-mRNA splicing (Pellizzoni, Kataoka et al. 1998), and its deficiency in SMA affects the splicing machinery. Neuromuscular junction denervation precedes neurodegeneration in SMA, yet so far there has been no evidence linking aberrant splicing of transcripts downstream of Smn to the reduced presynaptic axon excitability observed in SMA. In this study, we observed that expression and splicing of Nrxn2, which encodes a presynaptic protein, are affected in the SMA mouse, making Nrxn2 a candidate that relates aberrant splicing to the synaptic motoneuron defects in SMA.
At hadron colliders such as the LHC or the Tevatron, the production of a photon in association with a leptonically decaying vector boson represents an important class of processes. These processes stand out due to a very clean signal of a photon and two leptons. Furthermore, they
provide direct access to the photon–vector-boson couplings and thus an easy opportunity to test the
gauge sector of the Standard Model. Within the scope of this work we present a full calculation of the next-to-leading-order corrections, which include the O(αs) corrections of the strong interaction as well as the electroweak corrections of O(α), including all photon-induced contributions. For the construction of the matrix elements we use methods based on Feynman diagrams. The IR singularities are treated with the dipole subtraction technique. In order to separate photons from jets, a quark-to-photon fragmentation function à la Glover/Morgan or Frixione's cone isolation is employed. Moreover, two different scenarios for charged leptons in the final state were considered. The first scenario, for dressed leptons, assumes that a charged lepton and a photon are recombined if they are collinear. The second scenario, for bare muons, assumes that leptons and photons can be separated in the detector even if they are collinear.
For our calculation we implemented all corrections into a flexible Monte Carlo program. Besides the computation of the total cross section, this program is also able to generate differential distributions of several experimentally motivated observables. Apart from the expected large electroweak corrections in the high-transverse-momentum regions and sizeable corrections in the resonance regions of the transverse or invariant masses, we found photon-induced corrections of up to several 10% for high transverse momenta. Within Run I at the LHC at 7/8 TeV, the experimental accuracy for Vγ production was roughly 10%. Due to the higher luminosity of Run II, this uncertainty
will be reduced to the level of a few percent, so that corrections of the same order in the theoretical predictions may become relevant. In this work we present results for the total cross section at the LHC for 7, 8, and 14 TeV, and the corresponding distributions
for 14 TeV.
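Schematically, the perturbative structure of such a prediction can be written as follows; this is a generic NLO decomposition, not the thesis's exact bookkeeping:

```latex
% Leading order plus O(alpha_s) QCD and O(alpha) electroweak corrections,
% with photon-induced channels (photons from the proton) entering at O(alpha):
\sigma_{\mathrm{NLO}}
  \;=\; \sigma_{\mathrm{LO}}
  \;+\; \Delta\sigma_{\mathrm{QCD}}^{\,\mathcal{O}(\alpha_s)}
  \;+\; \Delta\sigma_{\mathrm{EW}}^{\,\mathcal{O}(\alpha)}
  \;+\; \Delta\sigma_{\gamma\text{-induced}} .
```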