In vitro evidence for senescent multinucleated melanocytes as a source for tumor-initiating cells
(2015)
Oncogenic signaling in melanocytes results in oncogene-induced senescence (OIS), a stable cell-cycle arrest frequently characterized by a bi- or multinuclear phenotype that is considered a barrier to cancer progression. However, the long-held conviction that senescence is a truly irreversible process has recently been challenged. Still, it is not known whether cells driven into OIS can progress to cancer and thereby pose a potential threat. Here, we show that prolonged expression of the melanoma oncogene N-RAS\(^{61K}\) in pigment cells overcomes OIS by triggering the emergence of tumor-initiating mononucleated stem-like cells from senescent cells. This progeny is dedifferentiated, highly proliferative, anoikis-resistant and induces fast-growing, metastatic tumors. Our data show that differentiated cells driven into senescence by an oncogene can use this senescent state as a trigger for tumor transformation, giving rise to highly aggressive tumor-initiating cells. These observations provide the first experimental in vitro evidence for the evasion of OIS at the cellular level and the ensuing transformation.
Background: Inactivation of the p53 pathway, which controls cell-cycle progression, apoptosis and senescence, has been proposed to occur in virtually all human tumors, and p53 is the protein most frequently mutated in human cancer. However, the mutational status of p53 in melanoma is still controversial; to clarify this notion we analysed the largest series of melanoma samples reported to date. Methodology/Principal Findings: Immunohistochemical analysis of more than 180 melanoma specimens demonstrated that high levels of p53 are expressed in the vast majority of cases. Subsequent sequencing of p53 exons 5–8, however, revealed a mutation in only one case. Nevertheless, by means of two different p53 reporter constructs, we demonstrate transcriptional inactivity of wild-type p53 in 6 out of 10 melanoma cell lines; the 4 other p53 wild-type melanoma cell lines exhibit p53 reporter gene activity, which can be blocked by shRNA knockdown of p53. Conclusions/Significance: In melanomas expressing high levels of wild-type p53, this tumor suppressor is frequently inactivated at the transcriptional level.
p53 is a central tumor suppressor protein, and its inhibition is believed to be a prerequisite for cancer development. In approximately 50% of all malignancies this is achieved by inactivating mutations in the p53 gene. However, in several cancer entities, including melanoma, p53 mutations are rare. It has recently been proposed that tyrosinase-related protein 2 (TRP2), a protein involved in melanin synthesis, may act as a suppressor of the p53 pathway in melanoma. To scrutinize this notion, we analyzed p53 and TRP2 expression by immunohistochemistry in 172 melanoma tissues and did not find any correlation. Furthermore, we applied three different TRP2 shRNAs to five melanoma cell lines and could not observe a target-specific effect of the TRP2 knockdown on either p53 expression or p53 reporter gene activity. Likewise, ectopic expression of TRP2 in a TRP2-negative melanoma cell line had no impact on p53 expression. In conclusion, our data suggest that TRP2-mediated repression of p53 is not a general event in melanoma.
Background
Several recent publications have demonstrated the use of convolutional neural networks to classify images of melanoma on par with board-certified dermatologists. However, the lack of a public human benchmark limits the comparability of these algorithms' performance and thereby technical progress in this field.
Methods
An electronic questionnaire was sent to dermatologists at 12 German university hospitals. Each questionnaire comprised 100 dermoscopic and 100 clinical images (each set containing 80 nevus images and 20 biopsy-verified melanoma images), all open source. The questionnaire recorded factors such as years of experience in dermatology, number of skin checks performed, age, sex and rank within the university hospital or status as a resident physician. For each image, the dermatologists were asked to provide a management decision (treat/biopsy the lesion or reassure the patient). The main outcome measures were sensitivity, specificity and the area under the receiver operating characteristic curve (ROC).
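The two headline outcome measures can be computed from a reader's management decisions in a few lines. This is an illustrative sketch, not the study's code; the encoding (1 = melanoma/treat, 0 = nevus/reassure) is an assumption made here for clarity.

```python
# Illustrative scoring of one reader's binary management decisions against
# biopsy-verified ground truth (as in the benchmark: 80 nevi, 20 melanomas).
# Encoding (assumed): truth 1 = melanoma, 0 = nevus;
#                     decision 1 = treat/biopsy, 0 = reassure.

def sensitivity_specificity(truth, decisions):
    """Return (sensitivity, specificity) for paired binary labels."""
    tp = sum(1 for t, d in zip(truth, decisions) if t == 1 and d == 1)
    fn = sum(1 for t, d in zip(truth, decisions) if t == 1 and d == 0)
    tn = sum(1 for t, d in zip(truth, decisions) if t == 0 and d == 0)
    fp = sum(1 for t, d in zip(truth, decisions) if t == 0 and d == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Sensitivity is the fraction of melanomas correctly sent to biopsy; specificity is the fraction of nevi correctly reassured, so the treat/reassure trade-off maps directly onto these two numbers.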
Results
In total, 157 dermatologists assessed all 100 dermoscopic images with an overall sensitivity of 74.1%, a specificity of 60.0% and a ROC area of 0.67 (range = 0.538–0.769); 145 dermatologists assessed all 100 clinical images with an overall sensitivity of 89.4%, a specificity of 64.4% and a ROC area of 0.769 (range = 0.613–0.9). Results between the test sets differed significantly (P < 0.05), confirming the need for a standardised benchmark.
Conclusions
We present the first public melanoma classification benchmark for both non-dermoscopic and dermoscopic images, allowing artificial intelligence algorithms to be compared with the diagnostic performance of 145 or 157 dermatologists, respectively. The Melanoma Classification Benchmark should be considered a reference standard for white-skinned Western populations in the field of binary algorithmic melanoma classification.
Background
Melanoma is the most dangerous type of skin cancer but is curable if detected early. Recent publications have demonstrated that artificial intelligence is capable of classifying images of benign nevi and melanoma with dermatologist-level precision. However, a statistically significant improvement over dermatologist classification has not been reported to date.
Methods
For this comparative study, 4204 biopsy-proven images of melanoma and nevi (1:1) were used to train a convolutional neural network (CNN). Recent deep-learning techniques were integrated. For the experiment, an additional 804 biopsy-proven dermoscopic images of melanoma and nevi (1:1) were randomly presented to dermatologists at nine German university hospitals, who evaluated the quality of each image and stated their recommended treatment (19,296 recommendations in total). Three McNemar's tests comparing the results of the CNN's test runs in terms of sensitivity, specificity and overall correctness were predefined as the main outcomes.
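McNemar's test compares two classifiers evaluated on the same cases using only the discordant pairs (cases on which exactly one classifier is correct). A minimal sketch of the standard chi-square version with continuity correction follows; the study's exact test configuration is not specified here, and the discordant counts in the usage line are hypothetical.

```python
import math

def mcnemar(b, c):
    """McNemar's chi-square test with continuity correction (1 df).

    b: discordant cases classifier A got right and classifier B got wrong
    c: discordant cases classifier B got right and classifier A got wrong
    Returns (chi-square statistic, two-sided p-value).
    """
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of the chi-square distribution with 1 degree of
    # freedom: P(X > x) = erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical discordant counts: CNN correct/readers wrong on 120 cases,
# readers correct/CNN wrong on 40 cases.
chi2, p = mcnemar(120, 40)
```

Because the concordant cases cancel out, the test is sensitive precisely to the asymmetry between the two error patterns, which is why it suits paired CNN-versus-dermatologist comparisons on a shared test set.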
Findings
The respective sensitivity and specificity of lesion classification by the dermatologists were 67.2% (95% confidence interval [CI]: 62.6%–71.7%) and 62.2% (95% CI: 57.6%–66.9%). In comparison, the trained CNN achieved a higher sensitivity of 82.3% (95% CI: 78.3%–85.7%) and a higher specificity of 77.9% (95% CI: 73.8%–81.8%). All three McNemar's tests on the 2 × 2 tables reached a significance level of p < 0.001, and this significance level held for both subgroups.
Interpretation
For the first time, automated dermoscopic melanoma image classification was shown to be significantly superior to both junior and board-certified dermatologists (p < 0.001).
Background
A basic requirement for artificial intelligence (AI)–based image analysis systems that are to be integrated into clinical practice is high robustness. Minor changes in how images are acquired, for example during routine skin cancer screening, should not change the diagnosis of such assistance systems.
Objective
To quantify to what extent minor image perturbations affect the convolutional neural network (CNN)–mediated skin lesion classification and to evaluate three possible solutions for this problem (additional data augmentation, test-time augmentation, anti-aliasing).
Methods
We trained three commonly used CNN architectures to differentiate between dermoscopic melanoma and nevus images. Subsequently, their performance and susceptibility to minor changes ('brittleness') were tested on two distinct test sets with multiple images per lesion. For the first set, image changes such as rotations or zooms were generated artificially. The second set contained natural changes stemming from multiple photographs taken of the same lesions.
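The notions of brittleness and test-time augmentation used above can be sketched generically. In this illustrative sketch, `predict_prob` is a stand-in for any trained CNN returning a melanoma probability, and defining brittleness as the fraction of perturbed copies whose predicted class flips relative to the first copy is an assumption made here, not necessarily the paper's exact metric.

```python
def brittleness(predict_prob, versions, threshold=0.5):
    """Fraction of perturbed copies of one lesion whose predicted class
    differs from the prediction on the first (reference) copy."""
    labels = [predict_prob(img) >= threshold for img in versions]
    return sum(1 for lab in labels if lab != labels[0]) / len(labels)

def tta_predict(predict_prob, versions):
    """Test-time augmentation: average the predicted probability over
    all perturbed copies instead of trusting a single view."""
    return sum(predict_prob(img) for img in versions) / len(versions)

# Toy stand-in for a CNN: treat each 'image' as its own probability.
identity = lambda x: x
```

A perfectly robust model would score 0.0 brittleness on every lesion; averaging over views (TTA) smooths out single-view flips at the cost of extra forward passes.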
Results
All architectures exhibited brittleness on both the artificial and the natural test set. The three evaluated methods were able to decrease brittleness to varying degrees while maintaining performance. The observed improvement was greater for the artificial than for the natural test set, where enhancements were minor.
Conclusions
Minor image changes that are relatively inconspicuous to humans can affect the robustness of CNNs differentiating skin lesions. The methods tested here can reduce, but not fully eliminate, this effect. Thus, further research into sustaining the performance of AI classifiers is needed to facilitate the translation of such systems into the clinic.