
Diagnostic accuracy in detecting malignancy in suspicious skin lesions using Artificial Intelligence

Sanchez-Viera, M.; Medela, A.; del Campo, I.; Barrachina, J.; D'Alessandro, C.; Vallejos, A.; Aguilar, A.; Mac Carthy, T.; Fernandez, G.; Martorell, A.

2025-03-14 dermatology
10.1101/2025.03.11.25323753 medRxiv

Background: Artificial Intelligence (AI) has demonstrated high image-processing capacity and improved diagnostic accuracy in dermatology. In this context, Computer-Aided Diagnosis (CAD) systems have shown diagnostic performance comparable to that of specialists in classifying skin lesions, particularly pigmented lesions. The present study aims to validate that Legit.Health is a reliable tool for diagnosing and assessing the severity of skin lesions suspicious of malignancy.

Objective: To validate that the Legit.Health medical device optimises the clinical workflow by enhancing diagnostic accuracy and determining the malignancy or severity of skin lesions suspicious of malignancy.

Methods: An observational, prospective study was conducted, incorporating both longitudinal and retrospective cases. A total of 76 retrospective patients with 88 lesions and 32 prospective patients with 42 lesions attending the Instituto de Dermatología Integral (Madrid, Spain) were recruited. The diagnostic performance of Legit.Health on the retrospective images was compared with that of dermatologists against a gold standard (biopsy results). In the prospective phase, the current Legit.Health medical device was evaluated alongside dermatologists assisted by the device and the latest version of the device (Legit.Health Plus). Analyses were performed to calculate the area under the curve (AUC), accuracy, sensitivity, and specificity.

Results: In the retrospective analysis, the device demonstrated an AUC of 0.76, compared to 0.79 for dermatologists, in detecting malignant lesions. For these images, the device achieved accuracy scores of top-1 = 0.23, top-3 = 0.38, and top-5 = 0.47, whereas dermatologists achieved top-1 = 0.33 and top-3 = 0.45 (providing only three possible diagnoses). When the specific histologic subtype of naevus was not considered in the diagnosis, Legit.Health achieved top-1 = 0.50, top-3 = 0.71, and top-5 = 0.78, compared to dermatologists' top-1 = 0.50 and top-3 = 0.70. In the prospective analysis, we examined the performance of dermatologists using the Legit.Health medical device, the device alone, and the latest version of the device. In the malignancy analysis, they achieved AUCs of 0.94, 0.95, and 0.97, respectively. Regarding diagnostic accuracy, dermatologists assisted by the medical device achieved a top-1 accuracy of 0.30, while the medical device alone and its latest version achieved top-1 accuracies of 0.22 and 0.26, which increased to 0.44 and 0.52 at top-5. When the specific histologic subtype of naevus was not considered in the diagnosis, accuracies increased to 0.85, 0.74, and 0.81, respectively, improving further to 0.89 and 0.93 as top-K was increased to top-5.

Conclusions: The device's diagnostic capability in distinguishing malignant conditions is on par with that of expert dermatologists. This confirms its reliability as a tool for detecting malignant skin categories in ICD-11, assisting in prioritising patients based on urgency and directing them to the appropriate specialist or consultation.
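The abstract reports accuracy as top-1, top-3, and top-5 scores. A minimal sketch of the standard top-K accuracy metric (a case counts as a hit when the true diagnosis appears among the model's K highest-ranked candidates); the diagnoses and rankings below are invented for illustration, not taken from the study:

```python
def top_k_accuracy(ranked_predictions, true_labels, k):
    """ranked_predictions: one candidate-diagnosis list per case, best first.
    Returns the fraction of cases whose true label is in the top K candidates."""
    hits = sum(1 for preds, truth in zip(ranked_predictions, true_labels)
               if truth in preds[:k])
    return hits / len(true_labels)

# Hypothetical ranked outputs for three cases (illustrative only).
preds = [["melanoma", "naevus", "basal cell carcinoma"],
         ["naevus", "melanoma", "squamous cell carcinoma"],
         ["basal cell carcinoma", "squamous cell carcinoma", "naevus"]]
truth = ["melanoma", "melanoma", "naevus"]

print(top_k_accuracy(preds, truth, 1))  # only the first case is a top-1 hit
print(top_k_accuracy(preds, truth, 3))  # all three truths appear in the top 3
```

Top-K accuracy is always monotonically non-decreasing in K, which is why the reported scores rise from top-1 to top-5.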

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

Rank  Journal                                             Papers in training set  Top-% rank  Probability
   1  Frontiers in Medicine                                                  113  Top 0.1%         27.9%
   2  Scientific Reports                                                    3102  Top 4%           11.2%
   3  PLOS ONE                                                              4510  Top 23%           7.7%
   4  Cureus                                                                  67  Top 0.4%          7.3%
      ---- 50% of probability mass above this line ----
   5  Informatics in Medicine Unlocked                                        21  Top 0.1%          4.0%
   6  Frontiers in Public Health                                             140  Top 2%            4.0%
   7  JMIR Formative Research                                                 32  Top 0.4%          3.1%
   8  British Journal of Ophthalmology                                        14  Top 0.1%          2.5%
   9  PLOS Neglected Tropical Diseases                                       378  Top 3%            2.3%
  10  JAMA Network Open                                                      127  Top 1%            2.2%
  11  PLOS Medicine                                                           98  Top 3%            1.4%
  12  European Journal of Cancer                                              10  Top 0.3%          1.3%
  13  Computational and Structural Biotechnology Journal                     216  Top 6%            1.3%
  14  Applied Sciences                                                        24  Top 0.5%          1.2%
  15  JMIR Public Health and Surveillance                                     45  Top 3%            1.2%
  16  GigaScience                                                            172  Top 2%            1.2%
  17  Translational Vision Science & Technology                               35  Top 0.5%          1.2%
  18  Ophthalmology Science                                                   20  Top 0.2%          1.0%
  19  BMC Cancer                                                              52  Top 2%            1.0%
  20  Experimental Dermatology                                                10  Top 0.2%          1.0%
  21  Diagnostics                                                             48  Top 2%            1.0%
  22  Frontiers in Digital Health                                             20  Top 1%            0.8%
  23  Frontiers in Immunology                                                586  Top 8%            0.7%
  24  PLOS Digital Health                                                     91  Top 3%            0.5%
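The "top 4 journals account for 50% of the predicted probability mass" cutoff can be reproduced from the listed probabilities. A sketch under the assumption that the site simply accumulates probabilities in rank order until the running total reaches 50% (its actual computation is not shown); the values are copied from the table above:

```python
# Per-journal predicted probabilities (%) for ranks 1..7, from the table.
probs = [27.9, 11.2, 7.7, 7.3, 4.0, 4.0, 3.1]

# Accumulate in rank order until 50% of the probability mass is covered.
cum = 0.0
n = 0
for p in probs:
    cum += p
    n += 1
    if cum >= 50.0:
        break

print(n, round(cum, 1))  # 4 journals reach 54.1% cumulative probability
```

Ranks 1–3 sum to only 46.8%, so the fourth journal (Cureus) is the one that pushes the cumulative mass past the 50% threshold, matching the divider's placement in the table.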