TY - JOUR
A1 - Stebani, Jannik
A1 - Blaimer, Martin
A1 - Zabler, Simon
A1 - Neun, Tilmann
A1 - Pelt, Daniël M.
A1 - Rak, Kristen
T1 - Towards fully automated inner ear analysis with deep-learning-based joint segmentation and landmark detection framework
JF - Scientific Reports
N2 - Automated analysis of the inner ear anatomy in radiological data instead of time-consuming manual assessment is a worthwhile goal that could facilitate preoperative planning and clinical research. We propose a framework encompassing joint semantic segmentation of the inner ear and anatomical landmark detection of the helicotrema and the oval and round windows. A fully automated pipeline with a single, dual-headed volumetric 3D U-Net was implemented, trained and evaluated using manually labeled in-house datasets from cadaveric specimens (N = 43) and clinical practice (N = 9). The model robustness was further evaluated on three independent open-source datasets (N = 23 + 7 + 17 scans) consisting of cadaveric specimen scans. For the in-house datasets, Dice scores of 0.97 and 0.94, intersection-over-union scores of 0.94 and 0.89, and average Hausdorff distances of 0.065 and 0.14 voxel units were achieved. The landmark localization task was performed automatically with an average localization error of 3.3 and 5.2 voxel units. A robust, albeit reduced, performance was attained on the catalogue of three open-source datasets. Results of the ablation studies with 43 mono-parametric variations of the basal architecture and training protocol provided task-optimal parameters for both categories. Ablation studies against single-task variants of the basal architecture showed a clear performance benefit of coupling landmark localization with segmentation and a dataset-dependent performance impact on segmentation ability.
KW - anatomy
KW - bone imaging
KW - diagnosis
KW - medical imaging
KW - software
KW - three-dimensional imaging
KW - tomography
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-357411
VL - 13
ER -
TY - THES
A1 - Jenett, Arnim
T1 - The Virtual Insect Brain Protocol : development and application of software for the standardization of neuroanatomy
T1 - Das Virtual Insect Brain Protocol
N2 - Since the fruit fly Drosophila melanogaster entered the laboratories as a model organism, new genetic, physiological, molecular and behavioral techniques for the functional analysis of the brain have rapidly accumulated. Nowadays this concerted assault obtains its main thrust from Gal4 expression patterns that can be visualized and provide the means for manipulating, in unrestrained animals, groups of neurons of the brain. To take advantage of these patterns one needs to know their anatomy. This thesis describes the Virtual Insect Brain (VIB) protocol, a software package for the quantitative assessment, comparison, and presentation of neuroanatomical data. It is based on the 3D-reconstruction and visualization software Amira (Mercury Inc.). Its main part is a standardization procedure which aligns individual 3D images (series of virtual sections obtained by confocal microscopy) to a common coordinate system and computes average intensities for each voxel (volume pixel). The VIB protocol facilitates direct comparison of gene expression patterns and describes their interindividual variability. It provides volumetry of brain regions and helps to characterize the phenotypes of brain structure mutants. Using the VIB protocol does not require any programming skills, since all operations are carried out on a (near to) self-explanatory graphical user interface. Although the VIB protocol has been developed for the standardization of Drosophila neuroanatomy, the program structure can be used for the standardization of other 3D structures as well.
Standardizing brains and gene expression patterns is a new approach to biological shape and its variability. Consistent use of the VIB protocol may help to integrate knowledge on the correlation of form and function of the insect brain. The VIB protocol provides a first set of tools supporting this endeavor in Drosophila. The software is freely available at http://www.neurofly.de.
N2 - Since the fruit fly Drosophila melanogaster entered research as a model organism, more and more genetic, physiological and molecular techniques for the functional analysis of the brain have accumulated. Today these mostly rely on Gal4 expression patterns, which can be visualized and allow the targeted manipulation of defined groups of cells. To relate the results of different studies to one another, however, one must know the typical anatomy of the underlying expression patterns. This thesis describes the Virtual Insect Brain (VIB) protocol, a software package for the presentation, quantitative assessment and comparison of neuroanatomical data, together with several exemplary applications of the VIB protocol. The software is based on the 3D-reconstruction and visualization software Amira (Mercury Inc.). Its main component is a standardization procedure that maps 3D image stacks (series of virtual sections obtained by confocal microscopy) onto a common coordinate system and computes the average intensity for each voxel (three-dimensional image point). The VIB protocol thereby facilitates the direct comparison of expression patterns and describes their interindividual variability. It provides volumetric measurements of defined brain regions and helps to detect the changes in brain structure caused by mutation.
Using the VIB protocol requires no programming skills, since all operations can be carried out on a self-explanatory graphical user interface. Although the VIB protocol was developed for the standardization of the neuroanatomy of fruit flies, the program structure can also be used for the standardization of other 3D structures. Standardizing brains and expression patterns is a new approach to probing the variability of neuroanatomy. Used consistently, the VIB protocol can help to interconnect knowledge about the form and function of the insect brain. The VIB protocol provides a first set of tools that make this endeavor possible in the fruit fly. The software can be downloaded free of charge from http://www.neurofly.de.
KW - fruit fly
KW - brain
KW - neuroanatomy
KW - software
KW - Drosophila melanogaster
KW - standard brain
KW - VIB
KW - standardization
Y1 - 2007
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-22297
ER -
TY - JOUR
A1 - Ahmed, Zeeshan
A1 - Zeeshan, Saman
A1 - Dandekar, Thomas
T1 - Mining biomedical images towards valuable information retrieval in biomedical and life sciences
JF - Database - The Journal of Biological Databases and Curation
N2 - Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which creates a need for bioimaging platforms that extract and analyze text and content in biomedical images as a basis for effective information retrieval systems. In this review, we summarize technologies related to data mining of figures.
We describe and compare the potential of different approaches in terms of their developmental aspects, used methodologies, produced results, achieved accuracies and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries.
KW - humans
KW - software
KW - image processing
KW - animals
KW - computer-assisted
KW - data mining/methods
KW - natural language processing
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-162697
VL - 2016
ER -
TY - JOUR
A1 - Hartrampf, Philipp E.
A1 - Heinrich, Marieke
A1 - Seitz, Anna Katharina
A1 - Brumberg, Joachim
A1 - Sokolakis, Ioannis
A1 - Kalogirou, Charis
A1 - Schirbel, Andreas
A1 - Kübler, Hubert
A1 - Buck, Andreas K.
A1 - Lapa, Constantin
A1 - Krebs, Markus
T1 - Metabolic Tumour Volume from PSMA PET/CT Scans of Prostate Cancer Patients during Chemotherapy — Do Different Software Solutions Deliver Comparable Results?
JF - Journal of Clinical Medicine
N2 - (1) Background: Prostate-specific membrane antigen (PSMA)-derived tumour volume (PSMA-TV) and total lesion PSMA (TL-PSMA) from PSMA PET/CT scans are promising biomarkers for assessing treatment response in prostate cancer (PCa). Currently, it is unclear whether different software tools for assessing PSMA-TV and TL-PSMA produce comparable results. (2) Methods: \(^{68}\)Ga-PSMA PET/CT scans from n = 21 patients with castration-resistant PCa (CRPC) receiving chemotherapy were identified from our single-centre database. PSMA-TV and TL-PSMA were calculated with Syngo.via (Siemens) as well as with the freely available Beth Israel plugin for FIJI (Fiji Is Just ImageJ) before and after chemotherapy.
While statistical comparability was illustrated and quantified via Bland–Altman diagrams, the clinical agreement was estimated by matching PSMA-TV, TL-PSMA and the relative changes of both variables during chemotherapy with changes in serum PSA (ΔPSA) and PERCIST (Positron Emission Response Criteria in Solid Tumors). (3) Results: Comparing absolute PSMA-TV and TL-PSMA values as well as Bland–Altman plotting revealed a good statistical comparability of both software algorithms. For clinical agreement, the classification of therapy response did not differ between PSMA-TV and TL-PSMA for either software solution and showed highly positive correlations with BR. (4) Conclusions: Due to the high levels of statistical and clinical agreement in our CRPC patient cohort undergoing taxane chemotherapy, comparing PSMA-TV and TL-PSMA determined by Syngo.via and FIJI appears feasible.
KW - prostate-specific membrane antigen (PSMA)
KW - metabolic tumour volume (MTV)
KW - total lesion PSMA
KW - biomarker
KW - software
KW - comparability
KW - agreement
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-205893
SN - 2077-0383
VL - 9
IS - 5
ER -
TY - JOUR
A1 - Karulin, Alexey Y.
A1 - Karacsony, Kinga
A1 - Zhang, Wenji
A1 - Targoni, Oleg S.
A1 - Moldova, Ioana
A1 - Dittrich, Marcus
A1 - Sundararaman, Srividya
A1 - Lehmann, Paul V.
T1 - ELISPOTs produced by CD8 and CD4 cells follow Log Normal size distribution permitting objective counting
JF - Cells
N2 - Each positive well in ELISPOT assays contains spots of variable sizes that can range from tens of micrometers up to a millimeter in diameter. Therefore, when counting these spots, the decision on where to set the lower and the upper spot size thresholds to discriminate between non-specific background noise, spots produced by individual T cells, and spots formed by T cell clusters is critical.
If the spot sizes follow a known statistical distribution, precise predictions on the minimal and maximal spot sizes belonging to a given T cell population can be made. We studied the size distributional properties of IFN-γ, IL-2, IL-4, IL-5 and IL-17 spots elicited in ELISPOT assays with PBMC from 172 healthy donors, upon stimulation with 32 individual viral peptides representing defined HLA Class I-restricted epitopes for CD8 cells, and with protein antigens of CMV and EBV activating CD4 cells. A total of 334 CD8 and 80 CD4 positive T cell responses were analyzed. In 99.7% of the test cases, spot size distributions followed a Log Normal function. These data formally demonstrate that it is possible to establish objective, statistically validated parameters for counting T cell ELISPOTs.
KW - ELISPOT
KW - software
KW - IFN-γ
KW - IL-17
KW - T cells
KW - Normal Distribution
KW - spot size
KW - gating
KW - cytokines
KW - IL-2
KW - IL-4
KW - IL-5
KW - CD8
KW - CD4
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-149648
VL - 4
IS - 1
ER -
TY - JOUR
A1 - Griebel, Matthias
A1 - Segebarth, Dennis
A1 - Stein, Nikolai
A1 - Schukraft, Nina
A1 - Tovote, Philip
A1 - Blum, Robert
A1 - Flath, Christoph M.
T1 - Deep learning-enabled segmentation of ambiguous bioimages with deepflash2
JF - Nature Communications
N2 - Bioimages frequently exhibit low signal-to-noise ratios due to experimental conditions, specimen characteristics, and imaging trade-offs. Reliable segmentation of such ambiguous images is difficult and laborious. Here we introduce deepflash2, a deep learning-enabled segmentation tool for bioimage analysis. The tool addresses typical challenges that may arise during the training, evaluation, and application of deep learning models on ambiguous data. The tool’s training and evaluation pipeline uses multiple expert annotations and deep model ensembles to achieve accurate results.
The application pipeline supports various use cases for expert annotations and includes a quality assurance mechanism in the form of uncertainty measures. Benchmarked against other tools, deepflash2 offers both high predictive accuracy and efficient computational resource usage. The tool is built upon established deep learning libraries and enables sharing of trained model ensembles with the research community. deepflash2 aims to simplify the integration of deep learning into bioimage analysis projects while improving accuracy and reliability.
KW - machine learning
KW - microscopy
KW - quality control
KW - software
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-357286
VL - 14
ER -
TY - JOUR
A1 - Al-Zaben, Naim
A1 - Medyukhina, Anna
A1 - Dietrich, Stefanie
A1 - Marolda, Alessandra
A1 - Hünniger, Kerstin
A1 - Kurzai, Oliver
A1 - Figge, Marc Thilo
T1 - Automated tracking of label-free cells with enhanced recognition of whole tracks
JF - Scientific Reports
N2 - Migration and interactions of immune cells are routinely studied by time-lapse microscopy of in vitro migration and confrontation assays. To objectively quantify the dynamic behavior of cells, software tools for automated cell tracking can be applied. However, many existing tracking algorithms recognize only rather short fragments of a whole cell track and rely on cell staining to enhance cell segmentation. While our previously developed segmentation approach enables tracking of label-free cells, it still suffers from frequently recognizing only short track fragments. In this study, we identify sources of track fragmentation and provide solutions to obtain longer cell tracks. This is achieved by improving the detection of low-contrast cells and by optimizing the value of the gap size parameter, which defines the number of missing cell positions between track fragments that is still accepted for connecting them into one track.
We find that the enhanced track recognition increases the average length of cell tracks by up to 2.2-fold. Recognizing cell tracks as a whole will enable studying and quantifying more complex patterns of cell behavior, e.g. switches in migration mode or the dependence of phagocytosis efficiency on the number and type of preceding interactions. Such quantitative analyses will improve our understanding of how immune cells interact and function in health and disease.
KW - image processing
KW - software
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-221093
VL - 9
ER -
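The gap size parameter described in the last abstract (the number of missing cell positions tolerated when joining track fragments into one track) can be sketched as a simple greedy fragment linker. This is a minimal illustration only: the function name, the (frame, x, y) fragment representation, and the threshold values are assumptions for the sketch, not the authors' implementation.

```python
from math import dist

def link_fragments(fragments, max_gap=3, max_dist=20.0):
    """Greedily chain track fragments into whole tracks.

    Each fragment is a list of (frame, x, y) tuples sorted by frame.
    Two fragments are joined when the number of missing frames between
    them is at most `max_gap` and the spatial jump is at most `max_dist`.
    """
    # Process fragments in order of their starting frame.
    fragments = sorted(fragments, key=lambda frag: frag[0][0])
    tracks = []
    for frag in fragments:
        for track in tracks:
            last_frame, lx, ly = track[-1]
            first_frame, fx, fy = frag[0]
            gap = first_frame - last_frame - 1  # missing positions in between
            if 0 <= gap <= max_gap and dist((lx, ly), (fx, fy)) <= max_dist:
                track.extend(frag)  # close the gap: append fragment to track
                break
        else:
            tracks.append(list(frag))  # no match: start a new track
    return tracks
```

With a larger `max_gap`, more fragments are bridged and tracks get longer, at the risk of joining unrelated cells; the spatial threshold guards against linking fragments that are temporally close but spatially far apart.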