N\(^6\)-methyladenosine (m\(^6\)A) is an important modified nucleoside in cellular RNA associated with multiple cellular processes and is implicated in diseases. The enzymes associated with the dynamic installation and removal of m\(^6\)A are heavily investigated targets for drug research, which requires detailed knowledge of the recognition modes of m\(^6\)A by proteins. Here, we use atomic mutagenesis of m\(^6\)A to systematically investigate the mechanisms of the two human m\(^6\)A demethylase enzymes FTO and ALKBH5 and the binding modes of YTH reader proteins YTHDF2/DC1/DC2. Atomic mutagenesis refers to atom-specific changes that are introduced by chemical synthesis, such as the replacement of nitrogen by carbon atoms. Synthetic RNA oligonucleotides containing site-specifically incorporated 1-deaza-, 3-deaza-, and 7-deaza-m\(^6\)A nucleosides were prepared by solid-phase synthesis and their RNA binding and demethylation by recombinant proteins were evaluated. We found distinct differences in substrate recognition and transformation and revealed structural preferences for the enzymatic activity. The deaza m\(^6\)A analogues introduced in this work will be useful probes for other proteins in m\(^6\)A research.
Deoxyribozymes are synthetic enzymes made of DNA that can catalyze the cleavage or formation of phosphodiester bonds and are useful tools for RNA biochemistry. Here we report new RNA-cleaving deoxyribozymes to interrogate the methylation status of target RNAs, thereby providing an alternative method for the biochemical validation of RNA methylation sites containing N\(^6\)-methyladenosine, which is the most widespread and extensively investigated natural RNA modification. Using in vitro selection from random DNA, we developed deoxyribozymes that are sensitive to the presence of N\(^6\)-methyladenosine in RNA near the cleavage site. One class of these DNA enzymes shows faster cleavage of methylated RNA, while others are strongly inhibited by the modified nucleotide. The general applicability of the new deoxyribozymes is demonstrated for several examples of natural RNA sequences, including a lncRNA and a set of C/D box snoRNAs, which have been suggested to contain m\(^6\)A as a regulatory element that influences RNA folding and protein binding.
RNA-cleaving deoxyribozymes have found broad application as useful tools for RNA biochemistry. However, tedious in vitro selection procedures combined with laborious characterization of individual candidate catalysts hinder the discovery of novel catalytic motifs. Here, we present a new high-throughput sequencing method, DZ-seq, which directly measures activity and localizes cleavage sites of thousands of deoxyribozymes. DZ-seq exploits A-tailing followed by reverse transcription with an oligo-dT primer to capture the cleavage status and sequences of both deoxyribozyme and RNA substrate. We validated DZ-seq by conventional analytical methods and demonstrated its utility by the discovery of novel deoxyribozymes that allow cleavage of challenging RNA targets or analysis of RNA modification states.
Whereas the reduction of N-heterocyclic carbene (NHC)-stabilised cymantrenyldibromoboranes, (NHC)BBr\(_2\)Cym, in benzene results in formation of the corresponding diborenes (NHC)\(_2\)B\(_2\)Cym\(_2\), a change of solvent to THF yields a borylene of the form (NHC)\(_2\)BCym, stabilised through its boratafulvene resonance form.
Nearly all classes of coding and non-coding RNA undergo post-transcriptional modification, including RNA methylation. Methylated nucleotides belong to the evolutionarily most conserved features of tRNA and rRNA [1,2]. Many contemporary methyltransferases use the universal cofactor S-adenosylmethionine (SAM) as methyl group donor. This and other nucleotide-derived cofactors are considered evolutionary leftovers from an RNA World, in which ribozymes may have catalysed essential metabolic reactions beyond self-replication [3]. Chemically diverse ribozymes seem to have been lost in Nature, but may be reconstructed in the laboratory by in vitro selection. Here, we report a methyltransferase ribozyme that catalyses the site-specific installation of 1-methyladenosine (m\(^1\)A) in a substrate RNA, utilizing O\(^6\)-methylguanine (m\(^6\)G) as a small-molecule cofactor. The ribozyme shows a broad RNA sequence scope, as exemplified by site-specific adenosine methylation in tRNAs. This finding provides fundamental insights into RNA's catalytic abilities, serves as a synthetic tool to install m\(^1\)A in RNA, and may pave the way to in vitro evolution of other methyltransferase and demethylase ribozymes.
RNA-catalysed RNA methylation was recently shown to be part of the catalytic repertoire of ribozymes. The methyltransferase ribozyme MTR1 catalyses the site-specific synthesis of 1-methyladenosine (m\(^1\)A) in RNA, using O\(^6\)-methylguanine (m\(^6\)G) as methyl group donor. Here we report the crystal structure of MTR1 at a resolution of 2.8 Å, which reveals a guanine binding site reminiscent of natural guanine riboswitches. The structure represents the postcatalytic state of a split ribozyme in complex with the m\(^1\)A-containing RNA product and the demethylated cofactor guanine. The structural data suggest the mechanistic involvement of a protonated cytidine in the methyl transfer reaction. A synergistic effect of two 2'-O-methylated ribose residues in the active site results in accelerated methyl group transfer. Supported by these results, it seems plausible that modified nucleotides may have enhanced early RNA catalysis and that metabolite-binding riboswitches may resemble inactivated ribozymes that have lost their catalytic activity during evolution.
2D electrophysiology is often used to determine the electrical properties of neurons, whereas in the brain neurons form extensive 3D networks. Performing electrophysiology in a 3D environment thus comes closer to the physiological situation and serves as a useful tool for various applications in the field of neuroscience. In this study, we established 3D electrophysiology within a fiber-reinforced matrix to enable fast readouts from transfected cells, which are often used as model systems for 2D electrophysiology. Using melt electrowriting (MEW) of scaffolds to reinforce Matrigel, we performed 3D electrophysiology on a glycine receptor-transfected Ltk-11 mouse fibroblast cell line. The glycine receptor is an inhibitory ion channel that, when mutated, is associated with impaired neuromotor behaviour. The average thickness of the MEW scaffold was 141.4 ± 5.7 µm, using fibers of 9.7 ± 0.2 µm diameter and square pore spacings of 100 µm, 200 µm, and 400 µm. We demonstrate, for the first time, the electrophysiological characterization of glycine receptor-transfected cells with respect to agonist efficacy and potency in a 3D matrix. As the MEW scaffold reinforcement does not interfere with the electrophysiology measurement, this approach can now be further adapted and developed for different kinds of neuronal cultures to study and understand pathological mechanisms under disease conditions.
In this communication we describe a helically chiral push-pull molecule, 9,10-dimethoxy-[7]helicene diimide, displaying fluorescence (FL) and circularly polarised luminescence (CPL) over nearly the entire visible spectrum, depending on solvent polarity. The synthesised molecule exhibits an unusual solvent-polarity dependence of its FL quantum yield and nonradiative rate constant, as well as remarkable g\(_{abs}\) and g\(_{lum}\) values along with high configurational stability.
Reactive hydrocarbon molecules like radicals, biradicals and carbenes are not only key players in combustion processes and interstellar and atmospheric chemistry, but some of them are also important intermediates in organic synthesis. These systems typically possess many low-lying, strongly coupled electronic states. After light absorption, this leads to rich photodynamics characterized by a complex interplay of nuclear and electronic motion, which is still not comprehensively understood and not easy to investigate both experimentally and theoretically. In order to elucidate trends and contribute to a more general understanding, we here review our recent work on excited-state dynamics of open-shell hydrocarbon species using time-resolved photoelectron spectroscopy and field-induced surface hopping simulations, and report new results on the excited-state dynamics of the tropyl and the 1-methylallyl radical. The different dynamics are compared, and the difficulties and future directions of time-resolved photoelectron spectroscopy and excited state dynamics simulations of open-shell hydrocarbon molecules are discussed.
We present a joint experimental and computational study of the nonradiative deactivation of the benzyl radical, C\(_7\)H\(_7\), after UV excitation. Femtosecond time-resolved photoelectron imaging was applied to investigate the photodynamics of the radical. The experiments were accompanied by excited-state dynamics simulations using surface hopping. Benzyl was excited at 265 nm into the D-band (\(\pi\pi^*\)) and the dynamics were probed at wavelengths of 398 nm or 798 nm. With the 398 nm probe, a single time constant of around 70-80 fs was observed. When the dynamics were probed at 798 nm, a second time constant \(\tau_2\) = 1.5 ps was visible, which is assigned to further non-radiative deactivation to the lower-lying D\(_1\)/D\(_2\) states.
The analysis presented in this paper applies to experimental situations where observers or objects to be studied, all at stationary positions, are located in environments whose optical thicknesses are strongly different. Non-transparent media comprise thin metallic films, packed or fluidised beds, superconductors, the Earth's crust, and even dark clouds and other cosmological objects. The analysis applies mapping functions that correlate physical events, e, in non-transparent media with their images, f(e), tentatively located on a standard physical time scale. The analysis demonstrates, however, that physical time, in its rigorous sense, does not exist under non-transparency conditions. A proof of this conclusion is attempted in three steps: (i) the theorem “there is no time without space and events” is accepted, (ii) images f[e(s,t)] do not constitute a dense, uncountably infinite set, and (iii) sets of images that are not uncountably infinite do not create physical time but only time-like sequences. As a consequence, mapping f[e(s,t)] in non-transparent space does not create physical analogues to the mathematical structure of the ordered, dense half-set R\(^+\) of real numbers, and the reverse mapping, f\(^{-1}\)f[e(s,t)], the mathematical inverse problem, would not allow unique identification and reconstruction of original events from their images. In these cases, causality, as well as invariance of physical processes under time reversal, might be violated. An interesting problem is whether temporal cloaking (a time hole) in a transparent medium, as very recently reported in the literature, can be explained by the present analysis. The existence of time holes could perhaps be possible, not in transparent but in non-transparent media, as follows from the sequence of images, f[e(s,t)], which is not uncountably infinite, in contrast to R\(^+\). Impacts are expected for the understanding of physical diffusion-like, radiative transfer processes and for stability models to protect superconductors against quenches. There might be impacts also in relativity, quantum mechanics, nuclear decay, or in systems close to their phase transitions. The analysis is not restricted to objects of laboratory dimensions.
The analysis presented in this paper applies to experimental situations where observers or objects to be studied (both stationary with respect to each other) are located in environments whose optical thicknesses are strongly different. By their large optical thickness, non-transparent media are clearly distinguished from their transparent counterparts. Non-transparent media comprise thin metallic films, packed or fluidised beds, the Earth's crust, and even dark clouds and other cosmological objects. As a representative example, a non-transparent slab is subjected to transient disturbances, and a rigorous analysis is presented of whether physical time could reasonably be constructed under such conditions. The analysis incorporates mapping functions that correlate physical events, e, in non-transparent media with their images, f(e), tentatively located on a standard physical time scale. The analysis demonstrates, however, that physical time, in its rigorous sense, does not exist under non-transparency conditions. A proof of this conclusion is attempted in three steps: (i) the theorem “there is no time without space and events” is accepted, (ii) images f[e(s,t)] do not constitute a dense, uncountably infinite set, and (iii) sets of images that are not uncountably infinite do not create physical time but only time-like sequences. As a consequence, mapping f[e(s,t)] in non-transparent space does not create physical analogues to the mathematical structure of the ordered, dense half-set R\(^+\) of real numbers, and the reverse mapping, f\(^{-1}\)f[e(s,t)], would not allow unique identification and reconstruction of original events from their images. In these cases, causality and determinism, as well as invariance of physical processes under time reversal, might be violated. The existence of time holes could be possible, as follows from the sequence of images, f[e(s,t)], which is not uncountably infinite, in contrast to R\(^+\). Practical impacts are expected for the understanding of physical diffusion-like, radiative transfer processes and for stability models to protect superconductors against quenches, or for the description of their transient local pair density and critical currents. Impacts would also be expected in the mathematical formulations (differential equations) of classical physics, in relativity, and perhaps in quantum mechanics, as far as transient processes in non-transparent space are concerned. An interesting problem is whether temporal cloaking (a time hole) in a transparent medium, as very recently reported in the literature, can be explained by the present analysis. The analysis is not restricted to objects of laboratory dimensions: because of obviously existing radiation transfer analogues, it is tempting to discuss consequences also for much larger structures, in particular if an origin of time is postulated.
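The set-theoretic core of steps (ii) and (iii) can be restated compactly (a sketch in our own notation, not taken verbatim from the paper):

\[
f : E \to \mathbb{R}^{+}, \qquad
F = \bigl\{\, f[e(s,t)] : e \in E \,\bigr\} \subset \mathbb{R}^{+}.
\]

If \(F\) is at most countable, it cannot be order-isomorphic to any interval of \(\mathbb{R}^{+}\), because every interval is dense and uncountable; \(F\) then admits only an enumeration \(f_{1} < f_{2} < f_{3} < \dots\), i.e. a time-like sequence rather than a continuum of instants.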
We have investigated the photodynamics of \(\beta\)-D-glucose employing our field-induced surface hopping method (FISH), which allows us to simulate the coupled electron-nuclear dynamics, explicitly including nonadiabatic effects and light-induced excitation. Our results reveal that from the initially populated S\(_{1}\) and S\(_{2}\) states, glucose returns nonradiatively to the ground state within about 200 fs. This takes place mainly via conical intersections (CIs) whose geometries in most cases involve the elongation of a single O-H bond, while in some instances ring-opening due to dissociation of a C-O bond is observed. Experimentally, excitation to a distinct excited electronic state is improbable due to the presence of a dense manifold of states bearing similar oscillator strengths. Our FISH simulations explicitly including a UV laser pulse of 6.43 eV photon energy reveal that after initial excitation the population is almost equally spread over several close-lying electronic states. This is followed by a fast nonradiative decay on the time scale of 100-200 fs, with the final return to the ground state proceeding via the S\(_{1}\) state through the same types of CIs as observed in the field-free simulations.
Immunofluorescence is a common method to localise proteins within their cellular context via fluorophore-labelled antibodies, and for some applications it is without alternative. However, some protein targets evade detection due to low protein abundance or accessibility issues. In addition, some imaging methods require a massive reduction in antigen density, thus impeding detection of even medium-abundance proteins. Here, we show that fusion of the target protein to TurboID, a biotin ligase labelling lysine residues in close proximity, and subsequent detection of biotinylation by fluorescent streptavidin offers an “all in one” solution to the above-mentioned restrictions. For a wide range of target proteins tested, the streptavidin signal was significantly stronger than an antibody signal, markedly improving the imaging sensitivity in expansion microscopy and correlative light and electron microscopy, with no loss in resolution. Importantly, proteins within phase-separated regions, such as the central channel of the nuclear pores, the nucleolus or RNA granules, were readily detected with streptavidin, while most antibodies fail to label proteins in these environments. When TurboID is used in tandem with an HA epitope tag, co-probing with streptavidin and anti-HA can be used to map antibody accessibility to certain cellular regions. As a proof of principle, we mapped antibody access to all trypanosome nuclear pore proteins (NUPs) and found restricted antibody labelling of all FG NUPs of the central channel that are known to be phase-separated, while most non-FG NUPs could be labelled. Lastly, we show that streptavidin imaging can resolve dynamic, temporally and spatially distinct sub-complexes and, in specific cases, reveal a history of dynamic protein interaction. In conclusion, streptavidin imaging has major advantages for the detection of low-abundance or inaccessible proteins and, in addition, can provide information on protein interactions and biophysical environment.
Background: Recent developments in cellular reprogramming technology enable the production of virtually unlimited numbers of human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CM). Although hiPSC-CM share various characteristic hallmarks with endogenous cardiomyocytes, it remains a question to what extent their metabolic characteristics are equivalent to those of mature mammalian cardiomyocytes. Here we set out to functionally characterize the metabolic status of hiPSC-CM in vitro by employing a radionuclide tracer uptake assay. Material and Methods: Cardiac differentiation of hiPSC was induced using a combination of well-orchestrated extrinsic stimuli such as WNT activation (by CHIR99021) and BMP signalling, followed by WNT inhibition and lactate-based cardiomyocyte enrichment. For characterization of metabolic substrates, dual tracer uptake studies were performed with \(^{18}\)F-2-fluoro-2-deoxy-D-glucose (\(^{18}\)F-FDG) and \(^{125}\)I-β-methyl-iodophenyl-pentadecanoic acid (\(^{125}\)I-BMIPP) as transport markers of glucose and fatty acids, respectively. Results: After cardiac differentiation of hiPSC, in vitro tracer uptake assays confirmed a metabolic substrate shift from glucose to fatty acids that was comparable to that observed in native isolated human cardiomyocytes. Immunostaining further confirmed expression of fatty acid transport and binding proteins on hiPSC-CM. Conclusions: During in vitro cardiac maturation, we observed a metabolic shift to fatty acids, which are known as a main energy source of mammalian hearts, suggesting hiPSC-CM as a potential functional phenotype for investigating alterations of cardiac metabolism in cardiac diseases. The results also highlight the use of available clinical nuclear medicine tracers as functional assays in stem cell research for improved generation of autologous differentiated cells for numerous biomedical applications.
A tolane-modified 5-ethynyluridine as a universal and fluorogenic photochemical DNA crosslinker
(2023)
We report the fluorescent nucleoside ToldU and its application as a photoresponsive crosslinker in three different DNA architectures with enhanced fluorescence emission of the crosslinked products. The fluorogenic ToldU crosslinking reaction enables the assembly of DNA polymers in a hybridization chain reaction for the concentration-dependent detection of a specific DNA sequence.
Covalent crosslinking of DNA strands provides a useful tool for medical, biochemical and DNA nanotechnology applications. Here we present a light-induced interstrand DNA crosslinking reaction using the modified nucleoside 5-phenylethynyl-2’-deoxyuridine (\(^{Phe}\)dU). The crosslinking ability of \(^{Phe}\)dU was programmed by base pairing and by metal ion interaction at the Watson-Crick base pairing site. Rotation to intrahelical positions was favored by hydrophobic stacking and enabled an unexpected photochemical alkene-alkyne [2+2] cycloaddition within the DNA duplex, resulting in efficient formation of a \(^{Phe}\)dU-dimer after short irradiation times of a few seconds. A \(^{Phe}\)dU dimer-containing DNA was shown to efficiently bind a helicase complex, but the covalent crosslink completely prevented DNA unwinding, suggesting possible applications in biochemistry or structural biology.
Given a collection of diverging documents about some lost original text, any person interested in the text would try reconstructing it from the diverging documents. Whether one follows eclecticism, stemmatics, or the copy-text method, one is expected to explicitly or indirectly select one of the documents as a starting point or base text, which is then emended through comparison with the remaining documents, so that a text that could be designated as the original document is generated. Unfortunately, the process of giving priority to one of the documents, also known as witnesses, is subjective. In fact, even cladistics, which could be considered a computer-based implementation of stemmatics, does not recommend a certain witness as a starting point for reconstructing the original document. In this study, a computational method using a rule-based Bayesian classifier is presented to assist text scholars in their attempts to reconstruct a non-existing document from available witnesses. The method consists of successively selecting each document as base text and collating it with the remaining documents. Each completed collation cycle stores the selected base text and its closest witness, along with a weighted score of their similarities and differences. At the end of the collation process, the witness selected most often by the majority of base texts is considered the probable base text of the collection. Witnesses' scores are weighted using a weighting system based on the effects that types of textual modifications have on the process of reconstructing original documents. Users can choose between baseless and base-text collation. If a base text is selected, the task is reduced to ranking the witnesses with respect to the base text; otherwise, a base text as well as a ranking of the witnesses with respect to it are computed and displayed on a histogram.
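A minimal sketch of the successive-collation vote described above (Python; the token-level scoring and the weight values are illustrative stand-ins for the paper's weighting system):

```python
# Illustrative weights for how strongly each change type counts against
# closeness (stand-ins for the paper's weighting system).
WEIGHTS = {"addition": 1.0, "mutation": 3.0}

def collation_score(base, witness):
    """Toy pairwise collation: weighted agreement of token sequences."""
    matches = sum(b == w for b, w in zip(base, witness))
    penalty = WEIGHTS["mutation"] * sum(b != w for b, w in zip(base, witness))
    penalty += WEIGHTS["addition"] * abs(len(base) - len(witness))
    return matches - penalty

def probable_base_text(witnesses):
    """Each witness serves once as base text and votes for its closest
    witness; the witness collecting the most votes is the probable base."""
    votes = dict.fromkeys(witnesses, 0)
    for name, base in witnesses.items():
        closest = max((w for w in witnesses if w != name),
                      key=lambda w: collation_score(base, witnesses[w]))
        votes[closest] += 1
    return max(votes, key=votes.get)

docs = {"W1": "in the beginning was the word".split(),
        "W2": "in the beginning was word".split(),
        "W3": "at beginning was the word indeed".split()}
print(probable_base_text(docs))  # witness most often judged closest
```

A real implementation would replace the zip-based comparison with a proper alignment, but the voting structure is the same.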
Learning a book in general involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts. Once deeper understanding is achieved, one would like to organize and manage this knowledge in such a way that it can be easily remembered and efficiently transmitted to others. In this paper, books organized in terms of chapters consisting of verses are considered as the source of knowledge to be modeled. The knowledge model consists of verses with their metadata and semantic annotations. The metadata represent the multiple perspectives of knowledge modeling. Verses with their metadata and annotations form a meta-model, which is published on a web mashup. The meta-model, with links between its elements, constitutes a knowledge base. An XML-based annotation system that breaks the learning process down into specific tasks helps construct the desired meta-model. The system is made up of user interfaces for creating metadata, annotating chapters' contents according to user-selected semantics, and templates for publishing the generated knowledge on the Internet. The proposed software system improves comprehension and retention of the knowledge contained in religious texts through modeling and visualization. The system has been applied to the Quran, and the results show that multiple perspectives of information modeling can be successfully applied to religious texts. It is hoped that this short ongoing study will motivate others to devise and offer software systems for cross-religion learning.
Design and Implementation of Architectures for Interactive Textual Documents Collation Systems
(2011)
One of the main purposes of collating textual documents is to identify a base text, or the witness closest to the base text, by analyzing and interpreting the differences, also known as types of changes, that might exist between those documents. Based on this fact, it is reasonable to argue that explicit identification of types of changes such as deletions, additions, transpositions, and mutations should be part of the collation process. The identification could be carried out by an interpretation module after alignment has taken place. Unfortunately, existing collation software such as CollateX and Juxta's collation engine lacks interpretation modules. In fact, both implement the Gothenburg model [1] for the collation process, which does not include an interpretation unit. Currently, neither CollateX nor Juxta's collation engine distinguishes in its critical apparatus between the types of changes, and neither offers statistics about those changes. This paper presents a model for both integrated and distributed collation processes that improves on the Gothenburg model. The model introduces an interpretation component for computing and distinguishing between the types of changes that documents could have undergone. Moreover, two architectures implementing the model to solve the problem of interactive collation are discussed. Each architecture uses the CollateX library and provides, on the one hand, preprocessing functions for transforming input documents into the CollateX input format and, on the other hand, a post-processing module for enabling interactive collation. Finally, simple algorithms for distinguishing between types of changes and for linking collated source documents with the collation results are also introduced.
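As an illustration of the kind of interpretation step such a module performs, the following sketch classifies aligner output into change types (the alignment format and the transposition heuristic are assumptions, not taken from the paper):

```python
def classify_changes(aligned):
    """Interpret aligned token pairs (base, witness) into change types;
    None marks a gap inserted by the aligner."""
    changes = []
    for base_tok, wit_tok in aligned:
        if base_tok == wit_tok:
            continue                      # agreement: no change
        elif base_tok is None:
            changes.append(("addition", wit_tok))
        elif wit_tok is None:
            changes.append(("deletion", base_tok))
        else:
            changes.append(("mutation", (base_tok, wit_tok)))
    # Heuristic: a token deleted in one place and added in another
    # is reinterpreted as a transposition.
    deleted = {tok for kind, tok in changes if kind == "deletion"}
    return [("transposition", tok) if kind == "addition" and tok in deleted
            else (kind, tok) for kind, tok in changes]

aligned = [("in", "in"), ("the", None), ("beginning", "beginning"),
           (None, "the"), ("word", "verb")]
print(classify_changes(aligned))
# [('deletion', 'the'), ('transposition', 'the'), ('mutation', ('word', 'verb'))]
```

Counting these tuples per witness directly yields the change statistics that the critical apparatus could report.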
The Quran is the holy book of Islam, consisting of 6236 verses divided into 114 chapters called suras. Many verses are similar and even identical. Searching for similar texts (e.g. verses) could return thousands of verses that, when displayed completely or partly as a textual list, would make analysis and understanding difficult and confusing. Moreover, it would be visually impossible to instantly figure out the overall distribution of the retrieved verses in the Quran. As a consequence, reading and analyzing the verses would be tedious and unintuitive. In this study, a combination of interactive scatter plots and tables has been developed to assist analysis and understanding of the search results. Retrieved verses are clustered by chapter, and a weight is assigned to each cluster according to the number of verses it contains, so that users can visually identify the most relevant areas and figure out the places of revelation of the verses. Users visualize the complete result and can select a region of the plot to zoom in, or click on a marker to display a table containing the verses with English translations side by side.
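A minimal sketch of such a weighted scatter plot (Python/matplotlib; the search hits are invented, and the interactive zoom and click-to-table behaviour would be supplied by the plotting toolkit's event handling):

```python
import matplotlib.pyplot as plt

# Hypothetical search result: chapter number -> matching verse numbers.
hits = {2: [10, 45, 107, 255], 3: [7, 14], 18: [9, 10, 11, 12], 112: [1]}

fig, ax = plt.subplots()
for chapter, verses in sorted(hits.items()):
    # One marker per retrieved verse; marker size encodes the cluster
    # weight (number of matching verses in that chapter).
    ax.scatter([chapter] * len(verses), verses,
               s=20 * len(verses), alpha=0.6)
ax.set_xlabel("Chapter (sura)")
ax.set_ylabel("Verse number")
ax.set_title("Distribution of retrieved verses across the Quran")
plt.show()
```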
A Knowledge-based Hybrid Statistical Classifier for Reconstructing the Chronology of the Quran
(2011)
Computational categorization of the Quran's chapters has mainly been confined to determining the chapters' places of revelation. However, this broad classification is not sufficient to effectively and thoroughly understand and interpret the Quran. The chronology of revelation would not only improve comprehension of the philosophy of Islam, but also ease the implementation and memorization of its laws and recommendations. This paper attempts to estimate the chapters' possible dates of revelation from their lexical frequency profiles. A hybrid statistical classifier consisting of stemming and clustering algorithms for comparing lexical frequency profiles of chapters and deriving dates of revelation has been developed. The classifier is trained using chapters with known dates of revelation. It then classifies chapters with uncertain dates of revelation by computing their proximity to the training ones. The results reported here indicate that the proposed methodology yields usable results in estimating the dates of revelation of the Quran's chapters based on their lexical content.
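A toy sketch of dating by lexical frequency profiles (Python; the stems, dates, and the similarity-weighted dating rule are illustrative assumptions, and the published classifier additionally uses stemming and clustering):

```python
import math
from collections import Counter

def profile(tokens):
    """Lexical frequency profile: relative frequency of each stem."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {stem: n / total for stem, n in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[s] * q.get(s, 0.0) for s in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Training chapters with (hypothetical) known dates of revelation.
training = {"ch96": (613, ["read", "create", "man"] * 3),
            "ch110": (632, ["help", "victory", "people"] * 3)}

def estimate_date(tokens):
    """Date a chapter by proximity of its profile to the training profiles."""
    p = profile(tokens)
    weighted = [(cosine(p, profile(t)), year) for year, t in training.values()]
    total = sum(w for w, _ in weighted)
    return sum(w * y for w, y in weighted) / total  # similarity-weighted mean

print(round(estimate_date(["read", "man", "help", "create"])))
```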
Overlapping is the common word used to describe documents whose structural dimensions cannot be adequately represented using a tree structure, for instance a quotation that starts in one verse and ends in another. The problem of overlapping hierarchies is a recurring one, which has been addressed by a variety of approaches. There are XML-based solutions as well as non-XML ones. The XML-based solutions are: multiple documents, empty elements, fragmentation, out-of-line markup, JITT, and BUVH. The non-XML approaches comprise CONCUR/XCONCUR, MECS, LMNL, etc. This paper briefly presents the state of the art in overlapping hierarchies and introduces two variations on the TEI fragmentation markup that have several advantages.
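For concreteness, the standard TEI fragmentation mechanism that such variations build on splits an overlapping element into linked pieces via the part attribute (I/M/F for initial, medial, and final fragments). A small generator sketch (Python; the verse data are invented, and the paper's two variations differ in detail):

```python
def fragment(verses, span, tag="quote"):
    """Wrap a quotation spanning several verses as TEI-style fragments;
    `span` is (start_verse, end_verse), 0-based and inclusive."""
    start, end = span
    out = []
    for i, text in enumerate(verses):
        if start <= i <= end:
            part = "I" if i == start else ("F" if i == end else "M")
            text = f'<{tag} part="{part}">{text}</{tag}>'
        out.append(f'<verse n="{i + 1}">{text}</verse>')
    return "\n".join(out)

# A quotation starting in verse 1 and ending in verse 3 cannot be one
# well-formed element; it is represented as three linked fragments.
print(fragment(["He said:", "truly, the promise", "is near.", "Amen."], (0, 2)))
```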
The Visual Editor for XML (Vex) [1], used by TextGrid [2] and other applications, has rendering and layout engines. The layout engine is well documented, but the rendering engine is not. This lack of documentation has made refactoring and extending the editor hard and tedious. For instance, many CSS 2.1 and upcoming CSS3 properties have not been implemented. Software developers in different projects such as TextGrid that use Vex would like to update its CSS rendering engine in order to provide advanced user interfaces and to support different document types. To minimize the effort of extending Vex's functionality, I found it beneficial to write a basic documentation of Vex's software architecture in general and its CSS rendering engine in particular. The documentation is mainly based on the idea of architectural layered diagrams. Layered diagrams can help developers understand a software's source code faster and more easily in order to alter it and fix errors. This paper is written to provide direct support for exploration in the comprehension process of the Vex source code. It discusses the Vex software architecture: the organization of the packages that make up the software, the architecture of its CSS rendering engine, and an algorithm explaining the working principle of the rendering engine.
The technique of using Cascading Style Sheets (CSS) to format and present structured data is called a CSS processing model. For instance, a CSS processing model for XML documents describes the steps involved in formatting and presenting XML documents on screens or paper. Many software applications, such as browsers and XML editors, have their own CSS processing models as part of their rendering engines. Each browser renders CSS layout according to its own processing model, and as a result an inconsistency in the support of CSS features arises: some browsers support more CSS features than others, and the rendering itself varies. Moreover, the W3C standards are not even adhered to by some browsers, such as Internet Explorer. Test suites and other hacks and filters cannot definitively solve these problems, because such solutions are temporary and fragile. To mitigate this inconsistency and the browser compatibility issues with respect to CSS, a reference CSS processing model is needed. By extension, it could even allow interoperability across CSS rendering engines. A reference architecture would provide common software architecture and interfaces, and facilitate refactoring, reuse, and automated unit testing. In [2] a reference architecture for browsers has been proposed. However, that reference architecture is a macro reference model which does not consider the individual components of rendering and layout engines separately. In this paper, an attempt to develop a reference architecture for CSS processing models is discussed. In addition, the rendering and layout engines of the Vex editor [3], as well as an extended version of the editor used in the TextGrid project [5], are presented in order to validate the proposed reference architecture.
Empirical Study on Screen Scraping Web Service Creation: Case of Rhein-Main-Verkehrsverbund (RMV)
(2010)
The Internet is the biggest database that science and technology have ever produced. The World Wide Web is a large repository of information that cannot be used for automation by many applications, because its pages are designed for human readers. One solution to the automation problem is to develop wrappers. Wrapping is a process whereby unstructured extracted information is transformed into a more structured form such as XML, which can be provided as a web service to other applications. A web service is a web page whose content is well structured so that a computer program can consume it automatically. This paper describes the steps involved in constructing wrappers manually in order to automatically generate web services.
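A minimal wrapper sketch in this spirit (Python with BeautifulSoup; the departure-table markup is invented, and a real wrapper would first fetch the live page, e.g. with urllib):

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4
from xml.etree import ElementTree as ET

# In a real wrapper the HTML would be fetched from the provider's site;
# here a static snippet stands in for the scraped page.
html = """
<table class="departures">
  <tr><td>RE 4615</td><td>Frankfurt Hbf</td><td>12:35</td></tr>
  <tr><td>S8</td><td>Wiesbaden</td><td>12:41</td></tr>
</table>
"""

def wrap_departures(page):
    """Extract the unstructured departure table and re-emit it as XML,
    the payload a screen-scraping web service would return."""
    soup = BeautifulSoup(page, "html.parser")
    root = ET.Element("departures")
    for row in soup.select("table.departures tr"):
        train, destination, time = (td.get_text() for td in row.find_all("td"))
        dep = ET.SubElement(root, "departure", time=time)
        ET.SubElement(dep, "train").text = train
        ET.SubElement(dep, "destination").text = destination
    return ET.tostring(root, encoding="unicode")

print(wrap_departures(html))
```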
This article discusses web frameworks that are available to software developers in the Java language. It introduces the MVC paradigm and some frameworks that implement it. The article presents an overview of the Struts, Spring MVC, and JSF frameworks, as well as guidelines for selecting one of them as a development environment.
Web service composition is traditionally carried out using composition technologies such as the Business Process Execution Language (BPEL) [1] and the Web Service Choreography Interface (WSCI) [2]. Composition involves the processes of web service discovery, invocation, and composition. However, these technologies are neither easy nor flexible enough, because they are mainly developer-centric. Moreover, the majority of websites have not yet embarked into the world of web services, although they have very important and useful information to offer. Is it because they have not understood the usefulness of web services, or is it because of the costs? Whatever the answers to these questions may be, time and money are definitely required to create and offer web services. To avoid these expenditures, wrappers [7] that automatically generate web services from websites would be a cheaper and easier solution. Mashups offer a different way of doing web service composition. In the web environment, a mashup is a web application that brings together data from several sources using web services, APIs, wrappers, and so on, in order to create an entirely new application that was not provided before. This paper first presents an overview of mashups and the process of web service invocation and composition based on mashups, and then describes an example of a web-based simulator for a navigation system in Germany.
This paper discusses the categorization of Quranic chapters by the major phases of Prophet Mohammad's messengership using machine learning algorithms. First, the chapters were categorized by place of revelation using Support Vector Machine and naïve Bayesian classifiers separately, and their results were compared to each other, as well as to the existing traditional Islamic and Western orientalist classifications. The chapters were categorized into Meccan (revealed in Mecca) and Medinan (revealed in Medina). After that, the chapters of each category were clustered using a kind of fuzzy single-linkage clustering approach, in order to correspond to the major phases of Prophet Mohammad's life. The major phases of the Prophet's life were manually derived from the Quranic text, as well as from the secondary Islamic literature, e.g. hadiths and exegesis. Previous studies on computing the places of revelation of Quranic chapters relied heavily on features extracted from existing background knowledge of the chapters. For instance, it is known that Meccan chapters contain mostly verses about faith and related problems, while Medinan ones encompass verses dealing with social issues, battles, etc. These features are by themselves insufficient as a basis for assigning the chapters to their respective places of revelation; in fact, there are exceptions, since some chapters contain both Meccan and Medinan features. In this study, features of each category were automatically created from very few chapters whose places of revelation had been determined through the identification of historical facts and events such as battles, the migration to Medina, etc. Chapters with unanimously agreed places of revelation were used as the initial training set, while the remaining chapters formed the testing set. The classification process was made recursive by regularly augmenting the training set with correctly classified chapters, in order to classify the whole testing set. Each chapter was preprocessed by removing unimportant words, stemming, and representation with a vector space model. The results of this study show that the two classifiers produced usable results, with the support vector machine classifier outperforming. This study indicates that the proposed methodology yields encouraging results for arranging Quranic chapters by the phases of Prophet Mohammad's messengership.
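A compact sketch of the recursive augmentation loop (Python/scikit-learn; seed texts and features are invented, and only the SVM branch is shown):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical seed chapters whose place of revelation is undisputed.
seed_texts = ["faith resurrection warning", "battle treaty community law"]
seed_labels = ["Meccan", "Medinan"]
pool = ["warning faith judgement", "law inheritance community",
        "faith community battle"]

vec = TfidfVectorizer().fit(seed_texts + pool)

# Recursive classification: the most confidently classified chapter is
# moved into the training set until no unlabelled chapter remains.
texts, labels = list(seed_texts), list(seed_labels)
while pool:
    clf = LinearSVC().fit(vec.transform(texts), labels)
    scores = clf.decision_function(vec.transform(pool))
    best = int(np.argmax(np.abs(scores)))        # most confident chapter
    texts.append(pool.pop(best))
    # classes_ is sorted, so a positive score means clf.classes_[1].
    labels.append(clf.classes_[1] if scores[best] > 0 else clf.classes_[0])

print(list(zip(texts[len(seed_texts):], labels[len(seed_labels):])))
```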
In this research, an attempt has been made to create a knowledge-based learning system for the Quranic text. The knowledge base is made up of the Quranic text along with detailed information about each chapter and verse, and some rules. The system offers the possibility to study the Quran through web-based interfaces, implementing novel visualization techniques for browsing, querying, consulting, and testing the acquired knowledge. Additionally, the system possesses knowledge acquisition facilities for maintaining the knowledge base.
Computing Generic Causes of Revelation of the Quranic Verses Using Machine Learning Techniques
(2011)
Because many verses of the holy Quran are similar, there is a high probability that similar verses addressing the same issues share the same generic causes of revelation. In this study, machine learning techniques have been employed to automatically derive the causes of revelation of Quranic verses. The derivation of the causes of revelation is viewed as a classification problem. Initially, the categories are based on the verses with known causes of revelation, and the testing set consists of the remaining verses. Based on a computed threshold value, a naïve Bayesian classifier is used to categorize some verses. After that, using a decision tree classifier, the remaining uncategorized verses are separated into verses that contain indicators (resultative connectors, causative expressions, etc.) and those that do not. Each verse with indicators is segmented into its constituent clauses by identification of the linking indicators. A dominant clause is then extracted and either considered the cause of revelation directly, or post-processed by adding or subtracting some terms to form a causal clause that constitutes the cause of revelation. The remaining unclassified verses without indicators are assigned by a naïve Bayesian classifier to one of the existing classes based on feature and topic similarity. Verses that could not be classified in any of these steps were classified manually, by considering each such verse as a category of its own. The results obtained in this study are encouraging and show that automatic derivation of the generic causes of revelation of Quranic verses is achievable and reasonably reliable for understanding and implementing the teachings of the Quran.
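A toy sketch of the first two stages of this pipeline (Python/scikit-learn; the threshold, indicator list, and training data are invented, and the decision-tree separation is reduced to a simple indicator test):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical linking indicators used to segment verses into clauses.
INDICATORS = ["because", "so that", "therefore"]

train_texts = ["charity orphans reward", "battle permission fight"]
train_causes = ["cause_charity", "cause_battle"]
THRESHOLD = 0.7  # minimum posterior to accept a Bayesian assignment

vec = CountVectorizer().fit(train_texts)
nb = MultinomialNB().fit(vec.transform(train_texts), train_causes)

def derive_cause(verse):
    proba = nb.predict_proba(vec.transform([verse]))[0]
    if proba.max() >= THRESHOLD:                 # confident Bayesian match
        return nb.classes_[proba.argmax()]
    for ind in INDICATORS:                       # fall back to indicators
        if ind in verse:
            # dominant clause after the indicator as the causal clause
            return verse.split(ind, 1)[1].strip()
    return None                                  # left for manual treatment

print(derive_cause("reward charity orphans kindness"))
print(derive_cause("they asked because the orphans needed care"))
```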
Learning a book in general involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts. Once deeper understanding is achieved, one would like to organize and manage this knowledge in such a way that it can be easily remembered and efficiently transmitted to others. This paper discusses modeling religious texts using semantic XML markup based on frame-based knowledge representation, with the purpose of assisting understanding, retention, and sharing of the knowledge they contain. In this study, books organized in terms of chapters made up of verses are considered as the source of knowledge to model. Metadata representing the multiple perspectives of knowledge modeling are assigned to each chapter and verse. Chapters and verses with their metadata form a meta-model, which is represented using frames and published on a web mashup. An XML-based annotation and visualization system has been developed, equipped with user interfaces for creating static and dynamic metadata, annotating chapters' contents according to user-selected semantics, and templates for publishing the generated knowledge on the Internet. The system has been applied to the Quran, and the results show that multiple perspectives of information modeling can be successfully applied to religious texts in order to support analysis, understanding, and retention of the texts.
Given a collection of diverging documents about some lost original text, any person interested in the text would try reconstructing it from the diverging documents. Whether one follows eclecticism, stemmatics, or the copy-text method, one is expected to explicitly or indirectly select one of the documents as a starting point or base text, which is then emended through comparison with the remaining documents, so that a text that could be designated as the original document is generated. Unfortunately, the process of giving priority to one of the documents, also known as witnesses, is subjective. In fact, even cladistics, which could be considered a computer-based implementation of stemmatics, does not recommend a certain witness as a starting point for reconstructing the original document. In this study, a computational method using a rule-based Bayesian classifier is presented to assist text scholars in their attempts to reconstruct a non-existing document from available witnesses. The method consists of successively selecting each document as base text and collating it with the remaining documents. Each completed collation cycle stores the selected base text and its closest witness, along with a weighted score of their similarities and differences. At the end of the collation process, the witness selected most often by the majority of base texts is considered the probable base text of the collection. Witnesses' scores are weighted using a weighting system based on the effects that types of textual modifications have on the process of reconstructing original documents. Users can choose between baseless and base-text collation. If a base text is selected, the task is reduced to ranking the witnesses with respect to the base text; otherwise, a base text as well as a ranking of the witnesses with respect to it are computed and displayed on a bar diagram. Additionally, this study includes a recursive algorithm for automatically reconstructing the original text from the identified base text and the ranked witnesses.
The question of why the structure of the Quran does not follow its chronology of revelation is a recurring one. Some Islamic scholars such as [1] have answered the question using hadiths, as well as other philosophical reasons based on internal evidence of the Quran itself. Unfortunately, many still wonder about this issue today. Muslims believe that the Quran is a summary and a copy of the content of a preserved tablet called the Lawhul-Mahfuz, located in heaven. Logically speaking, this suggests that the arrangement of the verses and chapters is expected to be similar to that of the Lawhul-Mahfuz. As for the arrangement of the verses within each chapter, there is unanimity that it was carried out by the Prophet himself under the guidance of the Angel Gabriel at the recommendation of God. But concerning the ordering of the chapters, there are reports about some divergences [3] among the Prophet's companions as to which chapter should precede which. This paper argues that the Quranic chapters might have been arranged according to the months and seasons of revelation. In fact, based on some verses of the Quran, it is defensible that the Lawhul-Mahfuz itself was structured in terms of the months of the year. In this study, philosophical and mathematical arguments for computing the chapters' months of revelation are discussed, and the result is displayed on an interactive scatter plot.
We demonstrate two-quantum (2Q) coherent two-dimensional (2D) electronic spectroscopy using a shot-to-shot-modulated pulse shaper and fluorescence detection. Broadband collinear excitation is realized with the supercontinuum output of an argon-filled hollow-core fiber, enabling us to excite multiple transitions simultaneously in the visible range. The 2Q contribution is extracted via a three-pulse sequence with 16-fold phase cycling and simulated employing cresyl violet as a model system. Furthermore, we report the first experimental realization of one-quantum–two-quantum (1Q-2Q) 2D spectroscopy, offering less congested spectra as compared with the 2Q implementation. We avoid scattering artifacts and nonresonant solvent contributions by using fluorescence as the observable. This allows us to extract quantitative information about doubly excited states that agrees with literature expectations. The high sensitivity and background-free nature of fluorescence detection allow for a general applicability of this method to many other systems.
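As a generic sketch of how a 16-fold (4 × 4) scheme isolates a given pathway (notation is assumed here, not taken from the paper): with the two interpulse phases cycled in steps of \(\Delta\phi = 2\pi/4\), a contribution with phase signature \((\alpha, \beta)\) is extracted as

\[
S_{\alpha\beta} \;=\; \frac{1}{16}\sum_{m=0}^{3}\sum_{n=0}^{3}
e^{-\mathrm{i}(\alpha m + \beta n)\,\Delta\phi}\;
S\!\left(m\,\Delta\phi,\; n\,\Delta\phi\right),
\]

where \(S(\phi_{21}, \phi_{31})\) is the fluorescence signal recorded for the two interpulse phases. The weights act as a two-dimensional discrete Fourier transform and cancel every contribution whose phase signature differs from \((\alpha, \beta)\).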
The lability of B=B, B-P and B-halide bonds is combined in the syntheses of the first diiododiborenes. In a series of reactivity tests, these diiododiborenes demonstrate cleavage of all six of their central bonds in different ways, leading to products of B=B hydrogenation and dihalogenation as well as halide exchange.
General and efficient tools for site-specific fluorescent or bioorthogonal labeling of RNA are in high demand. Here, we report direct in vitro selection, characterization, and application of versatile trans-acting 2'-5' adenylyl transferase ribozymes for covalent and site-specific RNA labeling. The design of our partially structured RNA pool allowed for in vitro evolution of ribozymes that modify a predetermined nucleotide in cis (i.e. intramolecular reaction), and were then easily engineered for applications in trans (i.e. in an intermolecular setup). The resulting ribozymes are readily designed for specific target sites in small and large RNAs and accept a wide variety of N\(^6\)-modified ATP analogues as small molecule substrates. The most efficient new ribozyme (FH14) shows excellent specificity towards its target sequence also in the context of total cellular RNA.
Protein kinase D1 deletion in adipocytes enhances energy dissipation and protects against adiposity
(2018)
Nutrient overload in combination with decreased energy dissipation promotes obesity and diabetes. Obesity results in a hormonal imbalance, which, among others, activates G-protein coupled receptors utilizing diacylglycerol (DAG) as a second messenger. Protein kinase D1 (PKD1) is a DAG effector that integrates multiple nutritional and hormonal inputs, but its physiological role in adipocytes is unknown. Here, we show that PKD1 promotes lipogenesis and suppresses mitochondrial fragmentation, biogenesis, respiration, and energy dissipation in an AMP-activated protein kinase (AMPK)-dependent manner. Moreover, mice lacking PKD1 in adipocytes are resistant to diet-induced obesity due to elevated energy expenditure. Beiging of adipocytes promotes energy expenditure and counteracts obesity. Consistently, deletion of PKD1 promotes expression of the β3-adrenergic receptor (ADRB3) in a CCAAT/enhancer-binding protein (C/EBP)-α- and δ-dependent manner, which leads to elevated expression of beige markers in adipocytes and subcutaneous adipose tissue. Finally, deletion of PKD1 in adipocytes improves insulin sensitivity and ameliorates liver steatosis. Thus, loss of PKD1 in adipocytes increases energy dissipation by several complementary mechanisms and might represent an attractive strategy to treat obesity and its related complications.
The study of main-group molecules that behave and react similarly to transition-metal (TM) complexes has attracted significant interest in recent decades. Most notably, the attractive idea of replacing the all-too-often rare and costly metals in catalysis has motivated efforts to develop main-group-element-mediated reactions. Main-group elements, however, lack the electronic flexibility of TM complexes that arises from combinations of empty and filled d orbitals and that seems ideally suited to bind and activate many substrates. In this review, we look at boron, an element that, despite its nonmetal nature, low atomic weight, and relative redox staticity, has achieved great milestones in terms of TM-like reactivity. We show how in interelement cooperative systems, diboron molecules, and hypovalent complexes the fifth element can acquire a truly metallomimetic character. As we discuss, this character is powerfully demonstrated by the reactivity of boron-based molecules with H2, CO, alkynes, alkenes and even with N2.
Collective Response in DNA-Stabilized Silver Cluster Assemblies from First-Principles Simulations
(2019)
We investigate fluorescence resonant energy transfer and concurrent electron dynamics in a pair of DNA-stabilized silver clusters. For this purpose we introduce a methodology for the simulation of collective optoelectronic properties of coupled molecular aggregates starting from first-principles quantum chemistry, which can be further applied to a broad range of coupled molecular systems to study their electro-optical response. Our simulations reveal the existence of low-energy coupled excitonic states, which enable ultrafast energy transport between subunits, and give insight into the origin of the fluorescence signal in coupled DNA-stabilized silver clusters, which has recently been detected experimentally. Hence, we demonstrate the possibility of constructing ultrasmall energy transmission lines and optical converters based on these hybrid molecular systems.
The multistate metadynamics method for automatic exploration of conical intersection seams and systematic location of minimum energy crossing points in molecular systems, together with its implementation in the software package metaFALCON, is presented. Based on a locally modified energy gap between two Born–Oppenheimer electronic states as a collective variable, multistate metadynamics trajectories are driven toward an intersection point starting from an arbitrary ground state geometry and are subsequently forced to explore the conical intersection seam landscape. For this purpose, an additional collective variable capable of distinguishing structures within the seam needs to be defined, and an additional bias is introduced into the off-diagonal elements of an extended (multistate) electronic Hamiltonian. We demonstrate the performance of the algorithm on the examples of the 1,3-butadiene, benzene, and 9H-adenine molecules, where multiple minimum energy crossing points could be systematically located using the Wiener number or Cremer–Pople parameters as collective variables. Finally, with the example of 9H-adenine, we show that the multistate metadynamics potential can be used to obtain a global picture of a conical intersection seam. Our method can be straightforwardly connected with any ab initio or semiempirical electronic structure theory that provides energies and gradients of the respective electronic states and can serve for systematic elucidation of the role of conical intersections in the photophysics and photochemistry of complex molecular systems, thus complementing nonadiabatic dynamics simulations.
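For orientation, in conventional metadynamics form the history-dependent bias on the energy-gap collective variable reads (a generic sketch; the paper's locally modified gap, the seam-distinguishing second variable, and the off-diagonal bias add further terms):

\[
s(\mathbf{R}) = E_2(\mathbf{R}) - E_1(\mathbf{R}), \qquad
V_{\mathrm{bias}}(s,t) = \sum_{t' < t} w\,
\exp\!\left[-\frac{\bigl(s - s(\mathbf{R}(t'))\bigr)^{2}}{2\sigma^{2}}\right].
\]

Gaussians of height \(w\) and width \(\sigma\), deposited at regular intervals along the trajectory, accumulate until the system is pushed toward \(s \to 0\), i.e. toward degeneracy of the two states, and subsequently along the seam.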
Energy Transfer Between Squaraine Polymer Sections: From helix to zig-zag and All the Way Back
(2015)
A joint experimental and theoretical study of the absorption spectra of squaraine polymers in solution provides evidence that two different conformations are present in solution: a helix and a zig-zag structure. This unique situation allows investigating ultrafast energy transfer processes between different structural segments within a single polymer chain in solution. Understanding the underlying dynamics is of fundamental importance for the development of novel materials for light-harvesting and optoelectronic applications. We combine femtosecond transient absorption spectroscopy with time-resolved 2D electronic spectroscopy, showing that ultrafast energy transfer within the squaraine polymer chains proceeds from initially excited helix segments to zig-zag segments or vice versa, depending on the solvent as well as on the excitation wavenumber. These observations contrast with other conjugated polymers such as MEH-PPV, where much slower intrachain energy transfer was reported. The reason for the very fast energy transfer in squaraine polymers is most likely a close matching of the density of states between donor and acceptor polymer segments, owing to the very small reorganization energy in these cyanine-like chromophores.
Large Stokes shift (LSS) fluorescent proteins (FPs) exploit excited-state proton transfer pathways to enable fluorescence emission from the phenolate intermediate of their internal 4-hydroxybenzylidene imidazolone (HBI) chromophore. An RNA aptamer named Chili mimics LSS FPs by inducing highly Stokes-shifted emission from several new green and red HBI analogs that are non-fluorescent when free in solution. The ligands are bound by the RNA in their protonated phenol form and feature a cationic aromatic side chain for increased RNA affinity and reduced magnesium dependence. In combination with oxidative functionalization at the C2 position of the imidazolone, this strategy yielded DMHBO\(^+\), which binds to the Chili aptamer with a low-nanomolar K\(_D\). Because of its highly red-shifted fluorescence emission at 592 nm, the Chili–DMHBO\(^+\) complex is an ideal fluorescence donor for Förster resonance energy transfer (FRET) to the rhodamine dye Atto 590 and will therefore find applications in FRET-based analytical RNA systems.
Animals, just like humans, can freely move. They do so for various important reasons, such as finding food and escaping predators. Observing these behaviors can inform us about the underlying cognitive processes. In addition, while humans can convey complicated information easily through speaking, animals need to move their bodies to communicate. This has prompted many creative solutions by animal neuroscientists to enable studying the brain during movement. In this review, we first summarize how animal researchers record from the brain while an animal is moving, by describing the most common neural recording techniques in animals and how they were adapted to record during movement. We further discuss the challenge of controlling or monitoring sensory input during free movement.
However, free movement is not only a necessity for expressing the outcome of certain internal cognitive processes in animals; it is also a fascinating field of research in its own right, since certain crucial behavioral patterns can only be observed and studied during free movement. Therefore, in the second part of the review, we focus on some key findings in animal research that specifically address the interaction between free movement and brain activity. First, focusing on walking as a fundamental form of free movement, we discuss how important such intentional movements are for understanding processes as diverse as spatial navigation, active sensing, and complex motor planning. Second, we propose the idea of regarding free movement as the expression of a behavioral state, a view that can help to understand the general influence of movement on brain function.
Together, the technological advancements toward recording from the brain during movement and the scientific questions asked about the brain engaged in movement make animal research highly valuable for research into the human “moving brain”.
For the rational design of new fluorophores, reliable predictions of fluorescence quantum yields from first principles would be of great help. However, efficient computational approaches for predicting transition rates usually assume that the vibrational structure is harmonic. While the harmonic approximation has been used successfully to predict vibrationally resolved spectra and radiative rates, its reliability for non-radiative rates is much more questionable. Since non-adiabatic transitions convert large amounts of electronic energy into vibrational energy, the highly excited final vibrational states deviate greatly from harmonic oscillator eigenfunctions. We employ a time-dependent formalism to compute radiative and non-radiative transition rates and study their dependence on model parameters. For several coumarin dyes we compare different adiabatic and vertical harmonic models (AS, ASF, AH, VG, VGF, VH) in order to dissect the importance of displacements, frequency changes, and Duschinsky rotations. In addition, we analyze the effect of different broadening functions (Gaussian, Lorentzian, or Voigt). Moreover, to assess the qualitative influence of anharmonicity on the internal conversion rate, we develop a simplified anharmonic model. We address the reliability of these models, considering the potential errors introduced by the harmonic approximation and by the phenomenological width of the broadening function.
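The three broadening functions compared above can be written down compactly; the sketch below evaluates analytically normalized Gaussian, Lorentzian, and Voigt line shapes, computing the Voigt profile via the Faddeeva function. The widths are arbitrary illustrative values, not the phenomenological widths used in the study. (The Lorentzian's slow tails lose some area on the finite grid.)

import numpy as np
from scipy.special import wofz

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def gaussian(x, sigma):
    return np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def lorentzian(x, gamma):
    return gamma / (np.pi * (x ** 2 + gamma ** 2))

def voigt(x, sigma, gamma):
    """Gaussian-Lorentzian convolution: Re[w(z)] / (sigma * sqrt(2*pi))."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2 * np.pi))

for name, y in (("Gaussian", gaussian(x, 1.0)),
                ("Lorentzian", lorentzian(x, 1.0)),
                ("Voigt", voigt(x, 1.0, 1.0))):
    print(f"{name:10s} grid area = {np.sum(y) * dx:.3f}, peak = {y.max():.3f}")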
Space- and time-resolved UV-to-NIR surface spectroscopy and 2D nanoscopy at 1 MHz repetition rate
(2019)
We describe a setup for time-resolved photoemission electron microscopy (TRPEEM) with aberration correction, enabling 3 nm spatial resolution and sub-20 fs temporal resolution. The latter is realized by our development of a widely tunable (215–970 nm) noncollinear optical parametric amplifier (NOPA) at 1 MHz repetition rate. We discuss several exemplary applications. Efficient photoemission from plasmonic Au nanoresonators is investigated with phase-coherent pulse pairs from an actively stabilized interferometer. More complex excitation fields are created with a liquid-crystal-based pulse shaper enabling amplitude and phase shaping of NOPA pulses with spectral components from 600 to 800 nm. With this system we demonstrate spectroscopy within a single plasmonic nanoslit resonator by spectral amplitude shaping and investigate the local field dynamics with coherent two-dimensional (2D) spectroscopy at the nanometer length scale (“2D nanoscopy”). We show that the local response varies across a distance as small as 33 nm in our sample. Further, we report two-color pump–probe experiments using two independent NOPA beamlines. We extract local variations of the excited-state dynamics of a monolayered 2D material (WSe\(_2\)) that we correlate with low-energy electron microscopy (LEEM) and reflectivity (LEER) measurements. Finally, we demonstrate the in situ sample preparation capabilities for organic thin films and their characterization via spatially resolved electron diffraction and dark-field LEEM.
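To illustrate the phase-coherent pulse-pair excitation scheme mentioned above, the toy calculation below models a nonlinear (two-photon-like) photoemission yield as a function of interpulse delay for two replicas of a Gaussian pulse; well-separated pulses recover the familiar 8:1 peak-to-background ratio of interferometric autocorrelation. All pulse parameters are arbitrary assumptions, not the setup's actual values.

import numpy as np

t = np.linspace(-150.0, 150.0, 6001)   # time axis (fs)
dt = t[1] - t[0]

def pulse(t0=0.0, fwhm=20.0, omega=2.4):   # omega in rad/fs (~780 nm carrier)
    s = fwhm / 2.3548
    return np.exp(-(t - t0) ** 2 / (2 * s ** 2)) * np.cos(omega * (t - t0))

def yield2(tau):
    """Two-photon-like yield: time-integrated fourth power of the total field."""
    field = pulse() + pulse(t0=tau)
    return np.sum(field ** 4) * dt

print(f"peak/background ratio = {yield2(0.0) / yield2(100.0):.2f}")  # ~8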
The mechanism of excimer formation: an experimental and theoretical study on the pyrene dimer
(2017)
The understanding of excimer formation in organic materials is of fundamental importance, since excimers profoundly influence the functional performance of these materials in applications such as light-harvesting, photovoltaics, or organic electronics. We present a joint experimental and theoretical study of the ultrafast dynamics of excimer formation in the pyrene dimer in a supersonic jet, which is the archetype of an excimer-forming system. We perform simulations of the nonadiabatic photodynamics in the framework of TDDFT that reveal two distinct excimer formation pathways in the gas-phase dimer. The first pathway involves local excited state relaxation close to the initial Franck–Condon geometry and is characterized by a strong excitation of the stacking coordinate, exhibiting damped oscillations with a period of 350 fs that persist for several picoseconds. The second excimer-forming pathway involves large amplitude oscillations along the parallel shift coordinate with a period of ≈900 fs, which, after intramolecular vibrational energy redistribution, lead to the formation of a perfectly stacked dimer. The electronic relaxation within the excitonic manifold is mediated by the presence of intermolecular conical intersections formed between fully delocalized excitonic states. Such conical intersections may generally arise in stacked π-conjugated aggregates due to the interplay between the long-range and short-range electronic coupling. The simulations are supported by picosecond photoionization experiments in a supersonic jet that provide a time constant for the excimer formation of around 6–7 ps, in good agreement with theory. Finally, in order to explore how the crystal environment influences the excimer formation dynamics, we perform large scale QM/MM nonadiabatic dynamics simulations on a pyrene crystal in the framework of long-range corrected tight-binding TDDFT. In contrast to the isolated dimer, the excimer formation in the crystal follows a single reaction pathway in which the initially excited parallel slip motion is strongly damped by the interaction with the surrounding molecules, leading to slow excimer stabilization on a picosecond time scale.
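As a small methodological aside, an oscillation such as the ~350 fs stacking-coordinate motion can be quantified by fitting a damped cosine to the coordinate trace. The trace below is synthetic with added noise, standing in for a trajectory observable; none of the parameters are taken from the simulations.

import numpy as np
from scipy.optimize import curve_fit

def damped_cos(t, a, tau, period, phi):
    return a * np.exp(-t / tau) * np.cos(2 * np.pi * t / period + phi)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 3000.0, 600)                         # time (fs)
trace = damped_cos(t, 1.0, 1200.0, 350.0, 0.0) + 0.05 * rng.normal(size=t.size)

popt, _ = curve_fit(damped_cos, t, trace, p0=(1.0, 1000.0, 330.0, 0.0))
print(f"fitted period = {popt[2]:.0f} fs, damping time = {popt[1]:.0f} fs")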
sp\(^2\)–sp\(^3\) diborane species based on bis(catecholato)diboron and N-heterocyclic carbenes (NHCs) are subjected to catechol/bromide exchange selectively at the sp\(^3\) boron atom. Reduction of the resulting 1,1-dibromodiborane adducts leads to reductive coupling and the isolation of doubly NHC-stabilized 1,2-diboryldiborenes. These compounds are the first examples of molecules exhibiting π-electron delocalization over an all-boron chain.
The reductive coupling of an NHC-stabilized aryldibromoborane yields a mixture of trans- and cis-diborenes in which the aryl groups are coplanar with the diborene core. Under dilute reduction conditions, two diastereomers of a borirane–borane intermediate are isolated, which upon further reduction give rise to the aforementioned diborene mixture. DFT calculations suggest a mechanism proceeding via nucleophilic attack of a dicoordinate borylene intermediate on the aryl ring and subsequent intramolecular B–B bond formation.
Herpesviruses have mastered host cell modulation and immune evasion to augment productive infection, life-long latency and reactivation thereof\(^{1,2}\). A long-appreciated yet elusively defined relationship exists between the lytic–latent switch and viral non-coding RNAs\(^{3,4}\). Here, we identify miRNA-mediated inhibition of miRNA processing as a novel cellular mechanism that human herpesvirus 6A (HHV-6A) exploits to disrupt mitochondrial architecture, evade intrinsic host defense and drive the latent–lytic switch. We demonstrate that virus-encoded miR-aU14 selectively inhibits the processing of multiple miR-30 family members by direct interaction with the respective pri-miRNA hairpin loops. The subsequent loss of miR-30 and activation of the miR-30/p53/Drp1 axis trigger a profound disruption of mitochondrial architecture, which impairs the induction of type I interferons and is necessary for both productive infection and virus reactivation. Ectopic expression of miR-aU14 was sufficient to trigger virus reactivation from latency, thereby identifying it as a readily druggable master regulator of the herpesvirus latent–lytic switch. Our results show that miRNA-mediated inhibition of miRNA processing represents a generalized cellular mechanism that can be exploited to selectively target individual members of miRNA families. We anticipate that targeting miR-aU14 will provide exciting therapeutic options for preventing herpesvirus reactivation in HHV-6-associated disorders such as myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) and Long-COVID.