In vitro rearing of honeybee larvae is an established method that enables exact control and monitoring of developmental factors and allows controlled application of pesticides or pathogens. However, only a few studies have investigated how the rearing method itself affects the behavior of the resulting adult honeybees. We raised honeybees in vitro according to a standardized protocol, marked the emerging bees individually, and introduced them into established colonies. Subsequently, we investigated the behavioral performance of nurse bees and foragers and quantified the physiological factors underlying social organization. Adult honeybees raised in vitro differed from naturally reared honeybees in their probability of performing social tasks. Further, in vitro-reared bees foraged for a shorter period of their lives and performed fewer foraging trips. Nursing behavior and body weight appeared to be unaffected by rearing condition. Interestingly, juvenile hormone titers, which normally increase strongly around the time a honeybee becomes a forager, were significantly lower in three- and four-week-old in vitro bees. The effects of the rearing environment on individual sucrose responsiveness and lipid levels were minor. These data suggest that larval rearing conditions can affect the task performance and physiology of adult bees despite equal weight, pointing to an important role of the colony environment for these factors. Our observations of behavior and metabolic pathways offer important novel insights into how the rearing environment affects adult honeybees.
Solitary bees build their nests by modifying the interior of natural cavities, and they provision them with collected pollen. As a result, the microbiota of solitary bee nests may depend strongly on introduced materials. To investigate how the collected pollen is associated with the nest microbiota, we used metabarcoding of the ITS2 rDNA and the 16S rDNA to simultaneously characterize the pollen composition and the bacterial communities of 100 solitary bee nest chambers belonging to seven megachilid species. We found a weak correlation between bacterial and pollen alpha diversity and significant associations between the composition of pollen and that of the nest microbiota, contributing to the understanding of the link between foraging and bacterial acquisition in solitary bees. Since solitary bees cannot establish bacterial transmission routes through eusociality, this link could be essential for obtaining bacterial symbionts in this group of valuable pollinators.
Individual-based models are doubly complex: as well as representing complex ecological systems, the software that implements them is complex in itself. Both forms of complexity must be managed to create reliable models. However, the ecological modelling literature to date has focussed almost exclusively on the biological complexity. Here, we discuss methods for containing software complexity.
Strategies for containing complexity include avoiding, subdividing, documenting, and reviewing it. Computer science has long-established techniques for all of these strategies. We present some of these techniques and set them in the context of individual-based model (IBM) development, giving examples from published models.
Techniques for avoiding software complexity include following best practices for coding style, choosing suitable programming languages and file formats, and setting up an automated workflow. Complex software systems can be made more tractable by encapsulating individual subsystems. Good documentation needs to take into account the perspectives of scientists, users, and developers. Code reviews are an effective way to check for errors and can be combined with manual or automated unit and integration tests.
Ecological modellers can learn from computer scientists how to deal with complex software systems. Many techniques are readily available, but must be disseminated among modellers. There is a need for further work to adapt software development techniques to the requirements of academic research groups and individual-based modelling.
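To make the testing strategy above concrete, here is a minimal sketch of an automated unit test for an individual-based model. The model (`step`), its state representation, and the invariants checked are purely illustrative assumptions, not taken from any published IBM.

```python
# Hypothetical example: unit tests for a minimal individual-based model.
# Individuals are represented by their energy level; the invariants tested
# (non-negative energy, empty population stays empty) are illustrative.
import random


def step(population, birth_rate=0.1, death_rate=0.05, rng=None):
    """Advance a list of individual energy levels by one time step."""
    if rng is None:
        rng = random.Random(42)  # fixed seed keeps the test deterministic
    survivors = [e - 1 for e in population
                 if rng.random() > death_rate and e > 1]
    births = [10.0 for _ in population if rng.random() < birth_rate]
    return survivors + births


def test_population_never_negative():
    pop = [10.0] * 100
    for _ in range(50):
        pop = step(pop)
        assert all(e >= 0 for e in pop), "energy levels must stay non-negative"


def test_empty_population_stays_empty():
    assert step([]) == []


test_population_never_negative()
test_empty_population_stays_empty()
print("all IBM unit tests passed")
```

Tests of this kind can run automatically on every change to the model code, catching regressions in the software without re-running full simulation experiments.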
Sensitivity analysis for interpretation of machine learning based segmentation models in cardiac MRI
(2021)
Background
Image segmentation is a common task in medical imaging, e.g. for volumetry analysis in cardiac MRI. Artificial neural networks are used to automate this task with performance similar to that of manual operators. However, this performance is only achieved on the narrow tasks the networks were trained on, and it drops dramatically when data characteristics differ from the training set properties. Moreover, neural networks are commonly considered black boxes, because it is hard to understand how they make decisions and why they fail; therefore, it is also hard to predict whether they will generalize and work well on new data. Here we present a generic method for segmentation model interpretation. Sensitivity analysis is an approach in which the model input is modified in a controlled manner and the effect of these modifications on the model output is evaluated. This yields insights into the sensitivity of the model to these alterations and thus into the influence of certain input features on segmentation performance.
Results
We present misas, an open-source Python library that facilitates the use of sensitivity analysis with arbitrary data and models. We show that this method is a suitable approach to answer practical questions regarding the use and functionality of segmentation models, and we demonstrate this in two case studies on cardiac magnetic resonance imaging. The first case study explores the suitability of a published network for use on a public dataset the network has not been trained on. The second demonstrates how sensitivity analysis can be used to evaluate the robustness of a newly trained model.
Conclusions
Sensitivity analysis is a useful tool for deep learning developers as well as for users such as clinicians. It extends their toolbox, enabling and improving the interpretability of segmentation models, and enhancing our understanding of neural networks in this way also assists decision making. Although demonstrated only on cardiac magnetic resonance images, the approach and software are much more broadly applicable.
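The core idea of sensitivity analysis — modify the input in a controlled manner and quantify the change in model output — can be sketched in a few lines. The "model" below is a toy intensity-threshold segmenter and the perturbation is a brightness shift; both are illustrative assumptions, and the function names are not the misas API.

```python
# Sketch of sensitivity analysis for a segmentation model. A toy threshold
# segmenter stands in for a neural network; a brightness shift stands in for
# the controlled input modifications (all names here are hypothetical).

def segment(image, threshold=0.5):
    """Toy segmentation model: label pixels above an intensity threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]


def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a = [px for row in mask_a for px in row]
    b = [px for row in mask_b for px in row]
    intersection = sum(x * y for x, y in zip(a, b))
    return 2 * intersection / (sum(a) + sum(b))


def brightness_sensitivity(image, offsets):
    """Shift intensities by each offset and compare outputs to the baseline."""
    baseline = segment(image)
    return {off: dice(baseline, segment([[px + off for px in row]
                                         for row in image]))
            for off in offsets}


image = [[0.2, 0.6], [0.8, 0.4]]
print(brightness_sensitivity(image, [-0.1, 0.0, 0.1]))
```

A Dice score near 1.0 for a given perturbation indicates the model is robust to it; a sharp drop flags a feature the model is sensitive to.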
Purpose
Artificial neural networks show promising performance in automatic segmentation of cardiac MRI. However, training requires large amounts of annotated data, and generalization to different vendors, field strengths, sequence parameters, and pathologies is limited. Transfer learning addresses this challenge, but specific recommendations regarding the type and amount of data required are lacking. In this study, we assess the data requirements for transfer learning to experimental cardiac MRI at 7T, where the segmentation task can be challenging. In addition, we provide guidelines, tools, and annotated data to enable transfer learning approaches by other researchers and clinicians.
Methods
A publicly available segmentation model was used to annotate a publicly available data set. This labeled data set was subsequently used to train a neural network for segmentation of the left ventricle and myocardium in cardiac cine MRI. The network was then used as a starting point for transfer learning to 7T cine data of healthy volunteers (n = 22; 7873 images) by updating the pre-trained weights. Structured and random data subsets of different sizes were used to systematically assess the data requirements for successful transfer learning.
Results
Inconsistencies in the publicly available data set were corrected, labels were created, and a neural network was trained. On 7T cardiac cine images, the model pre-trained on public imaging data acquired at 1.5T and 3T achieved DICE\(_{LV}\) = 0.835 and DICE\(_{MY}\) = 0.670. Transfer learning using 7T cine data and ImageNet weight initialization improved model performance to DICE\(_{LV}\) = 0.900 and DICE\(_{MY}\) = 0.791. Using only end-systolic and end-diastolic images reduced the training data by 90%, with no negative impact on segmentation performance (DICE\(_{LV}\) = 0.908, DICE\(_{MY}\) = 0.805).
Conclusions
This work demonstrates and quantifies the benefits of transfer learning for cardiac cine image segmentation. We provide practical guidelines for researchers planning transfer learning projects in cardiac MRI and make data, models, and code publicly available.
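The structured subsetting reported above — keeping only the end-diastolic (ED) and end-systolic (ES) frames of each cine series — can be sketched as follows. The data layout, frame counts, and phase indices are hypothetical assumptions for illustration.

```python
# Illustrative sketch of structured data subsetting for transfer learning:
# keep only the ED and ES frames of each cine series. Series layout and
# phase indices are made up for this example.

def select_ed_es(series):
    """Keep only the ED and ES frames of one cine series.

    `series` is a dict holding a list of frames plus the indices of the
    ED and ES phases (assumed to be annotated per series).
    """
    keep = {series["ed_index"], series["es_index"]}
    return [f for i, f in enumerate(series["frames"]) if i in keep]


# Toy dataset: 10 series of 25 frames each (250 images in total).
dataset = [
    {"frames": [f"s{s}_f{f}" for f in range(25)], "ed_index": 0, "es_index": 12}
    for s in range(10)
]

subset = [frame for series in dataset for frame in select_ed_es(series)]
reduction = 1 - len(subset) / sum(len(s["frames"]) for s in dataset)
print(f"kept {len(subset)} of 250 images ({reduction:.0%} reduction)")
```

With roughly 25 frames per series, keeping two frames per series yields a data reduction on the order of the 90% reported above.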
Purpose
Image acquisition and subsequent manual analysis of cardiac cine MRI is time-consuming. The purpose of this study was to train and evaluate a 3D artificial neural network for semantic segmentation of radially undersampled cardiac MRI to accelerate both scan time and postprocessing.
Methods
A database of Cartesian short-axis MR images of the heart (148,500 images, 484 examinations) was assembled from an openly accessible database and radial undersampling was simulated. A 3D U-Net architecture was pretrained for segmentation of undersampled spatiotemporal cine MRI. Transfer learning was then performed using samples from a second database, comprising 108 non-Cartesian radial cine series of the midventricular myocardium to optimize the performance for authentic data. The performance was evaluated for different levels of undersampling by the Dice similarity coefficient (DSC) with respect to reference labels, as well as by deriving ventricular volumes and myocardial masses.
Results
Without transfer learning, the pretrained model performed only moderately on true radial data [maximum number of projections tested, P = 196; DSC = 0.87 (left ventricle), 0.76 (myocardium), and 0.64 (right ventricle)]. After transfer learning with authentic data, the predictions reached human level even for high undersampling rates (P = 33; DSC = 0.95, 0.87, and 0.93), with no significant difference from segmentations derived from fully sampled data.
Conclusion
A 3D U-Net architecture can be used for semantic segmentation of radially undersampled cine acquisitions, achieving a performance comparable with human experts in fully sampled data. This approach can jointly accelerate time-consuming cine image acquisition and cumbersome manual image analysis.
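Deriving ventricular volumes and myocardial masses from a segmentation, as in the evaluation described above, amounts to counting labeled voxels and scaling by the voxel volume. The label values, voxel size, and the myocardial density constant below are illustrative assumptions (1.05 g/ml is a commonly used literature value).

```python
# Sketch: derive ventricular volume and myocardial mass from a 3D label mask.
# Label conventions and voxel geometry are hypothetical for this example.

MYOCARDIAL_DENSITY_G_PER_ML = 1.05  # commonly assumed tissue density


def derive_measures(mask, voxel_volume_ml, lv_label=1, myo_label=2):
    """Count labeled voxels and convert to volume (ml) and mass (g)."""
    flat = [v for sl in mask for row in sl for v in row]
    lv_volume = flat.count(lv_label) * voxel_volume_ml
    myo_volume = flat.count(myo_label) * voxel_volume_ml
    return {"lv_volume_ml": lv_volume,
            "myo_mass_g": myo_volume * MYOCARDIAL_DENSITY_G_PER_ML}


# Toy 3D mask: 2 slices of 2x3 voxels, voxel volume 0.5 ml.
mask = [[[1, 1, 2], [0, 2, 2]],
        [[1, 0, 2], [0, 0, 0]]]
print(derive_measures(mask, voxel_volume_ml=0.5))
```

Because these clinical measures are simple sums over the mask, small segmentation errors (low DSC) propagate directly into volume and mass estimates, which is why both metrics are reported.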
Compensatory base changes (CBCs) in internal transcribed spacer 2 (ITS2) rDNA secondary structures correlate with Ernst Mayr’s biological species concept. This hypothesis, also referred to as the CBC species concept, was recently subjected to large-scale testing, which indicated two distinct probabilities: (1) if there is a CBC, the two organisms belong to different species with a probability of ~0.93; (2) if there is no CBC, they belong to the same species with a probability of ~0.76. In ITS2 research, however, the main problem is the multicopy nature of ITS2 sequences. Most recently, 454 pyrosequencing data have been used to characterize more than 5000 intragenomic variations of ITS2 regions from 178 plant species, demonstrating that mutation of ITS2 is frequent, with a mean of 35 variants per species, i.e. per individual organism. In this study, using those 454 data, the CBC criterion is reconsidered in the light of intragenomic variability as a proof of concept: as a necessary criterion, no intragenomic CBCs should occur among variant ITS2 copies. In accordance with the CBC species concept, we could demonstrate that the probability of finding no intragenomic CBC is ~0.99.
New experimental methods have drastically accelerated the pace at which biological data are generated and increased their quantity. High-throughput DNA sequencing is one of the pivotal new technologies. It offers a number of novel applications in various fields of biology, including ecology, evolution, and genomics. However, these opportunities come with many new challenges. Specialized algorithms and software are required to cope with the amount of data, often requiring substantial training in bioinformatic methods. Another way to make those data accessible to non-bioinformaticians is the development of programs with intuitive user interfaces.
In my thesis I developed analyses and programs to tackle current problems with high-throughput data in biology. In the field of ecology, this covers the establishment of a bioinformatic workflow for pollen DNA meta-barcoding. Furthermore, I developed an application that facilitates the analysis of ecological communities in the context of their traits: information from multiple public databases has been aggregated and can now be mapped automatically to existing community tables for interactive inspection. In evolutionary biology, the new data are used to reconstruct phylogenetic trees from multiple genes; I developed the tool bcgTree to automate this process for bacteria. Many plant genomes have been sequenced in recent years, and the sequencing reads of those projects also contain data from the chloroplasts. The tool chloroExtractor supports the targeted extraction and analysis of the chloroplast genome. Comparing the structure of multiple genomes requires specialized software for calculating and visualizing the relationships; I developed AliTV to address this. In contrast to existing programs for this task, it allows interactive adjustment of the produced graphics, thus facilitating the discovery of biologically relevant information. Another application I developed helps to analyze transcriptomes even when no reference genome is available. This is achieved by aggregating the different pieces of information, such as functional annotation and expression level, for each transcript in a web platform. Scientists can then search, filter, subset, and visualize the transcriptome.
Together, these methods and tools expedite insights into biological systems that were not possible before.
Background
Meta-barcoding of mixed pollen samples constitutes a suitable alternative to conventional pollen identification via light microscopy. Current approaches, however, have limited practicability due to low sample throughput and/or inefficient processing methods, e.g. separate steps for amplification and sample indexing.
Results
We therefore developed a new primer-adapter design for high-throughput sequencing with Illumina technology that remedies these issues. It uses a dual-indexing strategy, in which sample-specific combinations of forward and reverse identifiers attached to the barcode marker allow high sample throughput within a single sequencing run, and it requires no further adapter ligation steps after amplification. We applied this protocol to 384 pollen samples collected by solitary bees and sequenced all samples together on a single Illumina MiSeq v2 flow cell. According to rarefaction curves, 2,000–3,000 high-quality reads per sample were sufficient to assess the complete diversity of 95% of the samples. We were able to detect 650 different plant taxa in total, of which 95% were classified at the species level. Together with the laboratory protocol, we also present an update of the reference database used by the classifier software, which increases the number of global plant species covered from 37,403 to 72,325 (a 93% increase).
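The combinatorial payoff of dual indexing is easy to see: a modest number of forward and reverse identifiers addresses many samples. The sketch below uses hypothetical index names; with 16 forward and 24 reverse identifiers, every pair uniquely labels one of 384 samples, matching the number multiplexed on one MiSeq flow cell above.

```python
# Sketch of the dual-indexing combinatorics: each sample is identified by a
# unique (forward, reverse) index pair. The index names are made up.
from itertools import product

forward_indices = [f"F{i:02d}" for i in range(1, 17)]  # 16 forward identifiers
reverse_indices = [f"R{i:02d}" for i in range(1, 25)]  # 24 reverse identifiers

# Every forward/reverse pair addresses one sample: 16 x 24 = 384 samples.
sample_indices = list(product(forward_indices, reverse_indices))
assert len(sample_indices) == 384

# Demultiplexing then maps each observed index pair back to its sample.
demultiplex = {pair: f"sample_{n:03d}" for n, pair in enumerate(sample_indices)}
print(demultiplex[("F01", "R01")])
```

Because indexing happens during amplification, no separate adapter ligation step is needed, which is what keeps the per-sample processing effort low.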
Conclusions
This study thus improves on existing approaches to the laboratory and bioinformatic workflow in terms of data quantity and quality as well as processing effort and cost-effectiveness. Although tested only on pollen samples, it is furthermore applicable to other research questions requiring plant identification in mixed and challenging samples.
1. Honeybees (Apis mellifera) and other pollinating insects suffer from pesticides in agricultural landscapes. Flupyradifurone is the active ingredient of a novel pesticide marketed under the name ‘Sivanto’, introduced by Bayer AG (Crop Science Division, Monheim am Rhein, Germany). It is recommended for use against sucking insects and marketed as ‘harmless’ to honeybees. Like the neonicotinoids, flupyradifurone binds to nicotinergic acetylcholine receptors, but it has a different mode of action. So far, little is known about how sublethal flupyradifurone doses affect honeybees.
2. We chronically applied a sublethal and field‐realistic concentration of flupyradifurone to test for long‐term effects on flight behaviour using radio‐frequency identification. We examined haematoxylin/eosin‐stained brains of flupyradifurone‐treated bees to investigate possible changes in brain morphology and brain damage.
3. A field‐realistic flupyradifurone dose of approximately 1.0 μg/bee/day significantly increased mortality. Pesticide‐treated bees initiated foraging earlier than control bees. No morphological damage in the brain was observed.
4. Synthesis and applications. The early onset of foraging induced by chronic application of flupyradifurone could be disadvantageous for honeybee colonies, reducing the period of in‐hive tasks and the life expectancy of individuals. Radio‐frequency identification technology is a valuable tool for studying pesticide effects on the lifetime foraging behaviour of insects.