The rapid development of green and sustainable materials opens up new possibilities in applied research. Such materials include nanocellulose composites, which can integrate many components and provide a good chassis for smart devices. In our study, we evaluate four approaches for turning a nanocellulose composite into an information storage or processing device: 1) nanocellulose can serve as a carrier material and protect information stored in DNA. 2) Nucleotide-processing enzymes (polymerase and exonuclease) can be controlled by light after fusion with light-gating domains; nucleotide substrate specificity can be changed by mutation or pH change (read-in and read-out of the information). 3) Semiconductor and electronic capabilities can be achieved: we show that iodine treatment renders nanocellulose electronic, allowing it to replace silicon, including in microstructures. We measure the semiconductor properties of nanocellulose and model the resulting potential, including single-electron transistors (SETs) and their properties. Electric current can also be transported by DNA through G-quadruplex molecules; these, as well as classical silicon semiconductors, can easily be integrated into the nanocellulose composite. 4) To elaborate on miniaturization and integration for a smart nanocellulose chip device, we demonstrate pH-sensitive dyes in nanocellulose, nanopore creation, and kinase micropatterning on bacterial membranes, as well as digital PCR micro-wells. Future application potential includes nano-3D printing and fast molecular processors (e.g., SETs) integrated with DNA storage and conventional electronics. This would also lead to environment-friendly nanocellulose chips for information processing, as well as smart nanocellulose composites for biomedical applications and nano-factories.
Lung cancer is currently the leading cause of cancer-related mortality due to late diagnosis and limited treatment options. Non-coding RNAs are not translated into proteins and have emerged as fundamental regulators of gene expression. Recent studies have reported that microRNAs and long non-coding RNAs are involved in lung cancer development and progression. Moreover, they appear to be promising new non-invasive biomarkers for early lung cancer diagnosis. Here, we highlight their potential as biomarkers in lung cancer and present how bioinformatics can contribute to the development of non-invasive diagnostic tools. To this end, we discuss several bioinformatics algorithms and software tools for a comprehensive understanding and functional characterization of microRNAs and long non-coding RNAs.
A precise and rapid adjustment of fluxes through metabolic pathways is crucial for organisms to prevail under changing environmental conditions. Based on this reasoning, many guiding principles that govern the evolution of metabolic networks and their regulation have been uncovered. To this end, methods from dynamic optimization are ideally suited, since they allow us to uncover optimality principles behind the regulation of metabolic networks. We used dynamic optimization to investigate the influence of toxic intermediates, in connection with enzyme efficiency, on the regulation of a linear metabolic pathway. Our results predict that transcriptional regulation favors the control of highly efficient enzymes with less toxic upstream intermediates, in order to reduce the accumulation of toxic downstream intermediates. We show that the derived optimality principles hold by analyzing the interplay between intermediate toxicity and pathway regulation in the metabolic pathways of over 5000 sequenced prokaryotes. Moreover, using lipopolysaccharide biosynthesis in Escherichia coli as an example, we show how knowledge about the relation between regulation, kinetic efficiency, and intermediate toxicity can be used to identify drug targets that control endogenous toxic metabolites and prevent microbial growth. Beyond prokaryotes, we discuss the potential of our findings for the development of antifungal drugs.
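The core intuition of the abstract above — that a less efficient downstream enzyme lets a toxic intermediate accumulate — can be illustrated with a minimal sketch. This is not the paper's dynamic-optimization model; it is a toy linear pathway S → I → P with mass-action kinetics and made-up rate constants, integrated by forward Euler:

```python
# Toy sketch (hypothetical parameters): linear pathway S -> I -> P.
# A less efficient downstream enzyme (smaller k2) lets the
# intermediate I accumulate to a higher peak concentration.

def simulate(k1, k2, s0=1.0, dt=0.001, t_end=20.0):
    """Integrate dS/dt=-k1*S, dI/dt=k1*S-k2*I, dP/dt=k2*I; return peak I."""
    s, i, p = s0, 0.0, 0.0
    peak_i, t = 0.0, 0.0
    while t < t_end:
        ds = -k1 * s
        di = k1 * s - k2 * i
        dp = k2 * i
        s += ds * dt
        i += di * dt
        p += dp * dt
        peak_i = max(peak_i, i)
        t += dt
    return peak_i

peak_fast = simulate(k1=1.0, k2=5.0)   # efficient downstream enzyme
peak_slow = simulate(k1=1.0, k2=0.2)   # inefficient downstream enzyme
print(peak_fast < peak_slow)  # True: inefficiency -> intermediate accumulation
```

The same ordering follows analytically from the closed-form solution I(t) = k1·S0/(k2−k1)·(e^(−k1·t) − e^(−k2·t)); the simulation merely makes it visible.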
Machine learning techniques are excellent for analyzing expression data from single cells. These techniques impact all fields, ranging from cell annotation and clustering to signature identification. The presented framework evaluates how well selected gene sets separate defined phenotypes or cell groups. This innovation overcomes a present limitation: objectively and correctly identifying a small gene set of high information content for separating phenotypes; the corresponding code scripts are provided. The small but meaningful subset of the original genes (or feature space) facilitates human interpretation of the differences between phenotypes, including those found by machine learning, and may even turn correlations between genes and phenotypes into causal explanations. For the feature selection task, principal feature analysis is utilized, which reduces redundant information while selecting genes that carry the information for separating the phenotypes. In this context, the presented framework demonstrates the explainability of unsupervised learning, as it reveals cell-type-specific signatures. Apart from a Seurat preprocessing tool and the PFA script, the pipeline uses mutual information to balance accuracy and size of the gene set if desired. A validation part to evaluate the gene selection for its information content regarding the separation of the phenotypes is provided as well; binary and multiclass classification of three or four groups are studied. Results from different single-cell datasets are presented. In each, only about ten out of more than 30,000 genes are identified as carrying the relevant information. The code is provided in a GitHub repository at https://github.com/AC-PHD/Seurat_PFA_pipeline.
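The mutual-information step mentioned above can be sketched in miniature. This is not the published pipeline (which works on Seurat-preprocessed single-cell data); it is a hedged toy example with invented gene names, ranking binarized "genes" by their mutual information with a phenotype label:

```python
# Toy sketch of mutual-information-based gene ranking (data and gene
# names are made up; the real pipeline operates on single-cell data).
from collections import Counter
from math import log2

def mutual_info(xs, ys):
    """Mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy data: geneA tracks the phenotype, geneB is noise.
phenotype = [0, 0, 0, 0, 1, 1, 1, 1]
genes = {
    "geneA": [0, 0, 0, 1, 1, 1, 1, 1],   # informative
    "geneB": [0, 1, 0, 1, 0, 1, 0, 1],   # uninformative
}
ranked = sorted(genes, key=lambda g: mutual_info(genes[g], phenotype),
                reverse=True)
print(ranked[0])  # geneA
```

In the real setting the ranking is applied after principal feature analysis has removed redundant genes, so mutual information only has to trade off accuracy against the size of an already small candidate set.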
Our universe may have started by qubit decoherence:
In quantum computers, qubits have all their states undefined during calculation and become defined as output (“decoherence”). We study the transition from an uncontrolled, chaotic quantum vacuum (“before”) to a clearly interacting “real world”. In such a cosmology, the Big Bang singularity is replaced by a condensation event of interacting strings, which triggers a crystallization process. This avoids inflation, which does not fit current observations: increasing long-range interactions limit growth, and crystal symmetries ensure the same laws of nature and basic symmetries over the whole crystal. Tiny mis-arrangements provide the nuclei of superclusters and galaxies, and the crystal structure allows the arrangement of dark matter (halo regions) and normal matter (galaxy nuclei) for galaxy formation. Crystals come and go, so an evolutionary cosmology is explored: entropic forces from the quantum soup “outside” the crystal try to dissolve it. This corresponds to dark energy and leads to a “big rip” in 70 gigayears. Selection for best growth and condensation events over generations of crystals favors multiple self-organizing processes within the crystal, including life or even conscious observers in our universe. Philosophically, this theory shows harmony with nature and replaces the absurd perspectives of current cosmology.
Independent of cosmology, we suggest that a “real world” (that is, our everyday macroscopic world) happens only inside a crystal. “Outside” there is wild quantum foam and a superposition of all possibilities. In our crystallized world the vacuum no longer boils but is cooled down by the crystallization event; space-time exists and general relativity holds. Vacuum energy becomes 10^20 times smaller, exactly as observed in our everyday world. We live in a “solid” state within a crystal: the n quanta which build our world have all their different m states nicely separated, so only n·m states are available for this local “multiverse”. The arrow of entropy along each edge of the crystal forms one fate, one world-line, one clear development of our world, while layers of the crystal represent different system states. Mathematical leads from loop quantum gravity (LQG) point to the required interactions and potentials. Interaction potentials for strings or loop quanta of any dimension allow a solid, decoherent state of quanta, which is challenging to calculate. However, if we introduce the heuristic that any type of physical interaction of strings corresponds to a type of calculation, then the Hurwitz theorem, known since 1898, shows that only 1D, 2D, 4D, and 8D (octonion) systems allow complex or hypercomplex number calculations. No other hypercomplex numbers, and hence no other dimensions or symmetries, allow calculations without yielding division by zero. Notably, the richest solution allowed by the Hurwitz theorem, the octonions, corresponds to the observed symmetry of our universe, E8.
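The Hurwitz theorem invoked above restricts normed division algebras to dimensions 1, 2, 4, and 8. Its defining property — the norm is multiplicative, |pq| = |p|·|q| — can be checked numerically for the 4D case (quaternions). This is an illustration of the theorem only, not of the paper's cosmological model; the sample values are arbitrary:

```python
# Illustrative check of the Hurwitz property for quaternions:
# the norm of a product equals the product of the norms.
from math import isclose, sqrt

def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) = a + bi + cj + dk."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qnorm(p):
    """Euclidean norm of a quaternion."""
    return sqrt(sum(x * x for x in p))

p, q = (1.0, 2.0, -3.0, 0.5), (-2.0, 1.0, 4.0, -1.5)
print(isclose(qnorm(qmul(p, q)), qnorm(p) * qnorm(q)))  # True
```

The same identity holds for the reals, complex numbers, and octonions, and — by Hurwitz — for no other finite-dimensional real algebra, which is the restriction the text appeals to.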
Standard physics — condensation, crystallization, and magnetization, but also solid-state physics and quantum computing — allows us to give an initial mathematical treatment of our new theory in terms of LQG: we describe the cosmological state transformations by equations and, most importantly, point out routes to the parametrization of free parameters by looking at testable phenomena, experiments, and formulas that describe crystallization, protein folding, magnetization, solid-state physics, and quantum computing. This is presented here for LQG; for string theory it would be more elegant, but that was too demanding to be shown here.
Note: While my previous OPUS server preprint “A new cosmology of a crystallization process (decoherence) from the surrounding quantum soup provides heuristics to unify general relativity and quantum physics by solid state physics” (https://doi.org/10.25972/OPUS-23076) deals with the same topics and basic formulas, this new version is improved: it has a clearer title, a better introduction, more stringent mathematics, and an improved discussion of the implications, including quantum computing, hints for parametrization, and connections to LQG and other current cosmological efforts.
This version of 5 June 2021 is again an OPUS preprint, but it will next be edited for arXiv (https://arxiv.org).
Egress of malaria parasites from the host cell requires the concerted rupture of its enveloping membranes. Hence, we investigated the role of the plasmodial perforin-like protein PPLP2 in the egress of Plasmodium falciparum from erythrocytes. PPLP2 is expressed in blood-stage schizonts and mature gametocytes. The protein localizes in vesicular structures, which in activated gametocytes discharge PPLP2 in a calcium-dependent manner. PPLP2 comprises a MACPF domain, and recombinant PPLP2 has haemolytic activity towards erythrocytes. PPLP2-deficient [PPLP2(−)] merozoites showed normal egress dynamics during the erythrocytic replication cycle, but activated PPLP2(−) gametocytes were unable to leave erythrocytes and stayed trapped within these cells. While the parasitophorous vacuole membrane ruptured normally, the activated PPLP2(−) gametocytes were unable to permeabilize the erythrocyte membrane and to release the erythrocyte cytoplasm. In consequence, transmission of PPLP2(−) parasites to the Anopheles vector was reduced. Pore-forming equinatoxin II rescued both PPLP2(−) gametocyte exflagellation and parasite transmission. The pore sealant Tetronic 90R4, on the other hand, caused trapping of activated wild-type gametocytes within the enveloping erythrocytes, thus mimicking the PPLP2(−) loss-of-function phenotype. We propose that the haemolytic activity of PPLP2 is essential for gametocyte egress, through permeabilization of the erythrocyte membrane and depletion of the erythrocyte cytoplasm.
Background
Phytoplankton communities are often used as a marker for determining freshwater quality. The routine analysis, however, is very time-consuming and expensive, as it is carried out manually by trained personnel. The goal of this work is to develop a system for automated analysis.
Results
A novel open-source system for the automated recognition of phytoplankton by means of microscopy and image analysis was developed. It integrates the segmentation of organisms from the background, the calculation of a large range of features, and a neural network for the classification of imaged organisms into different groups of plankton taxa. The analysis of samples containing 10 different taxa showed an average recognition rate of 94.7% and an average error rate of 5.5%. The presented system has a flexible framework that easily allows it to be expanded to include additional taxa in the future.
Conclusions
The implemented automated microscopy and the new open-source image analysis system, PlanktoVision, showed classification results that were comparable to or better than those of existing systems, and the exclusion of non-plankton particles could be greatly improved. The software package is published as free software and is available to anyone, to help make the analysis of water quality more reproducible and cost-effective.
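The three pipeline stages described in the abstract — segmentation, feature calculation, classification — can be sketched on synthetic data. This is not PlanktoVision itself: the real system works on microscopy images and uses a neural network, which is replaced here by simple thresholding and a nearest-centroid classifier; the image, features, and taxon names are invented for illustration:

```python
# Hedged sketch of a segmentation -> features -> classification pipeline
# on a tiny synthetic "image" (nested lists of pixel intensities).

def segment(image, threshold=0.5):
    """Foreground mask: pixels brighter than the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def features(mask):
    """Two simple shape features: area and bounding-box fill ratio."""
    area = sum(map(sum, mask))
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    box = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
    return (area, area / box)

def classify(feat, centroids):
    """Nearest-centroid stand-in for the neural network classifier."""
    return min(centroids,
               key=lambda t: sum((a - b) ** 2
                                 for a, b in zip(feat, centroids[t])))

image = [[0.1, 0.9, 0.9],
         [0.1, 0.9, 0.9],
         [0.1, 0.1, 0.1]]
centroids = {"taxonA": (4.0, 1.0), "taxonB": (2.0, 0.5)}  # invented classes
print(classify(features(segment(image)), centroids))  # taxonA
```

The real system's advantage lies precisely in the stages this sketch trivializes: a much richer feature set and a trained network that also rejects non-plankton particles.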