TY - JOUR
A1 - Karulin, Alexey Y.
A1 - Karacsony, Kinga
A1 - Zhang, Wenji
A1 - Targoni, Oleg S.
A1 - Moldova, Ioana
A1 - Dittrich, Marcus
A1 - Sundararaman, Srividya
A1 - Lehmann, Paul V.
T1 - ELISPOTs produced by CD8 and CD4 cells follow Log Normal size distribution permitting objective counting
JF - Cells
N2 - Each positive well in ELISPOT assays contains spots of variable sizes that can range from tens of micrometers up to a millimeter in diameter. Therefore, when it comes to counting these spots, the decision on setting the lower and the upper spot size thresholds to discriminate between non-specific background noise, spots produced by individual T cells, and spots formed by T cell clusters is critical. If the spot sizes follow a known statistical distribution, precise predictions on the minimal and maximal spot sizes belonging to a given T cell population can be made. We studied the size distributional properties of IFN-γ, IL-2, IL-4, IL-5 and IL-17 spots elicited in ELISPOT assays with PBMC from 172 healthy donors, upon stimulation with 32 individual viral peptides representing defined HLA Class I-restricted epitopes for CD8 cells, and with protein antigens of CMV and EBV activating CD4 cells. A total of 334 CD8 and 80 CD4 positive T cell responses were analyzed. In 99.7% of the test cases, spot size distributions followed a Log Normal function. These data formally demonstrate that it is possible to establish objective, statistically validated parameters for counting T cell ELISPOTs.
KW - ELISPOT
KW - software
KW - IFN-γ
KW - IL-17
KW - T cells
KW - Normal Distribution
KW - spot size
KW - gating
KW - cytokines
KW - IL-2
KW - IL-4
KW - IL-5
KW - CD8
KW - CD4
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-149648
VL - 4
IS - 1
ER -
TY - JOUR
A1 - Ahmed, Zeeshan
A1 - Zeeshan, Saman
A1 - Dandekar, Thomas
T1 - Mining biomedical images towards valuable information retrieval in biomedical and life sciences
JF - Database - The Journal of Biological Databases and Curation
N2 - Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the production and publication of heterogeneous biomedical images, creating a need for bioimaging platforms that extract and analyze text and content in biomedical images in order to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, used methodologies, produced results, achieved accuracies and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries.
KW - humans
KW - software
KW - image processing
KW - animals
KW - computer-assisted
KW - data mining/methods
KW - natural language processing
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-162697
VL - 2016
ER -
TY - JOUR
A1 - Hartrampf, Philipp E.
A1 - Heinrich, Marieke
A1 - Seitz, Anna Katharina
A1 - Brumberg, Joachim
A1 - Sokolakis, Ioannis
A1 - Kalogirou, Charis
A1 - Schirbel, Andreas
A1 - Kübler, Hubert
A1 - Buck, Andreas K.
A1 - Lapa, Constantin
A1 - Krebs, Markus
T1 - Metabolic Tumour Volume from PSMA PET/CT Scans of Prostate Cancer Patients during Chemotherapy — Do Different Software Solutions Deliver Comparable Results?
JF - Journal of Clinical Medicine
N2 - (1) Background: Prostate-specific membrane antigen (PSMA)-derived tumour volume (PSMA-TV) and total lesion PSMA (TL-PSMA) from PSMA PET/CT scans are promising biomarkers for assessing treatment response in prostate cancer (PCa). Currently, it is unclear whether different software tools for assessing PSMA-TV and TL-PSMA produce comparable results. (2) Methods: \(^{68}\)Ga-PSMA PET/CT scans from n = 21 patients with castration-resistant PCa (CRPC) receiving chemotherapy were identified from our single-centre database. PSMA-TV and TL-PSMA were calculated with Syngo.via (Siemens) as well as the freely available Beth Israel plugin for FIJI (Fiji Is Just ImageJ) before and after chemotherapy. While statistical comparability was illustrated and quantified via Bland–Altman diagrams, clinical agreement was estimated by matching PSMA-TV, TL-PSMA and the relative changes of both variables during chemotherapy with changes in serum PSA (ΔPSA) and PERCIST (Positron Emission Response Criteria in Solid Tumors). (3) Results: Comparing absolute PSMA-TV and TL-PSMA as well as Bland–Altman plotting revealed good statistical comparability of both software algorithms. Regarding clinical agreement, the classification of therapy response did not differ between PSMA-TV and TL-PSMA for either software solution and showed highly positive correlations with BR. (4) Conclusions: Due to the high levels of statistical and clinical agreement in our CRPC patient cohort undergoing taxane chemotherapy, comparing PSMA-TV and TL-PSMA determined by Syngo.via and FIJI appears feasible.
KW - prostate-specific membrane antigen (PSMA)
KW - metabolic tumour volume (MTV)
KW - total lesion PSMA
KW - biomarker
KW - software
KW - comparability
KW - agreement
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-205893
SN - 2077-0383
VL - 9
IS - 5
ER -
TY - JOUR
A1 - Stebani, Jannik
A1 - Blaimer, Martin
A1 - Zabler, Simon
A1 - Neun, Tilmann
A1 - Pelt, Daniël M.
A1 - Rak, Kristen
T1 - Towards fully automated inner ear analysis with deep-learning-based joint segmentation and landmark detection framework
JF - Scientific Reports
N2 - Automated analysis of the inner ear anatomy in radiological data, instead of time-consuming manual assessment, is a worthwhile goal that could facilitate preoperative planning and clinical research. We propose a framework encompassing joint semantic segmentation of the inner ear and anatomical landmark detection of the helicotrema, oval and round window. A fully automated pipeline with a single, dual-headed volumetric 3D U-Net was implemented, trained and evaluated using manually labeled in-house datasets from cadaveric specimens (N = 43) and clinical practice (N = 9). The model robustness was further evaluated on three independent open-source datasets (N = 23 + 7 + 17 scans) consisting of cadaveric specimen scans. For the in-house datasets, Dice scores of 0.97 and 0.94, intersection-over-union scores of 0.94 and 0.89, and average Hausdorff distances of 0.065 and 0.14 voxel units were achieved. The landmark localization task was performed automatically with an average localization error of 3.3 and 5.2 voxel units. A robust, albeit reduced, performance could be attained for the catalogue of three open-source datasets. Results of the ablation studies with 43 mono-parametric variations of the basal architecture and training protocol provided task-optimal parameters for both categories. Ablation studies against single-task variants of the basal architecture showed a clear performance benefit of coupling landmark localization with segmentation and a dataset-dependent performance impact on segmentation ability.
KW - anatomy
KW - bone imaging
KW - diagnosis
KW - medical imaging
KW - software
KW - three-dimensional imaging
KW - tomography
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-357411
VL - 13
ER -
TY - JOUR
A1 - Griebel, Matthias
A1 - Segebarth, Dennis
A1 - Stein, Nikolai
A1 - Schukraft, Nina
A1 - Tovote, Philip
A1 - Blum, Robert
A1 - Flath, Christoph M.
T1 - Deep learning-enabled segmentation of ambiguous bioimages with deepflash2
JF - Nature Communications
N2 - Bioimages frequently exhibit low signal-to-noise ratios due to experimental conditions, specimen characteristics, and imaging trade-offs. Reliable segmentation of such ambiguous images is difficult and laborious. Here we introduce deepflash2, a deep learning-enabled segmentation tool for bioimage analysis. The tool addresses typical challenges that may arise during the training, evaluation, and application of deep learning models on ambiguous data. The tool’s training and evaluation pipeline uses multiple expert annotations and deep model ensembles to achieve accurate results. The application pipeline supports various use-cases for expert annotations and includes a quality assurance mechanism in the form of uncertainty measures. Benchmarked against other tools, deepflash2 offers both high predictive accuracy and efficient computational resource usage. The tool is built upon established deep learning libraries and enables sharing of trained model ensembles with the research community. deepflash2 aims to simplify the integration of deep learning into bioimage analysis projects while improving accuracy and reliability.
KW - machine learning
KW - microscopy
KW - quality control
KW - software
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-357286
VL - 14
ER -