Advancing Hair Loss Assessment in Alopecia Areata: The Mathematical Case for Centralised, Standardised Imaging

Fleet, D. M.; Messenger, A.; Bryden, A.; Harris, M. J.; Holmes, S.; Farrant, P.; Leaker, B.; Takwale, A.; Oakford, M.; Kaur, M.; Mowbray, M.; Macbeth, A.; Gangwani, P.; Gkini, M. A.; Jolliffe, V.

medRxiv preprint | 2026-04-04 | dermatology | DOI: 10.64898/2026.04.02.26349939

Background: In clinical trials for alopecia areata (AA), the treatment effect (percentage of hair loss) is estimated using the Severity of Alopecia Tool (SALT) score. Trials in patients with severe AA (>=50% hair loss) employed local rating of the SALT score, performed at trial sites by different investigators. However, in mild-to-moderate AA (<=50% hair loss), where SALT scores are lower, potential inter-rater variability and margin of error may compromise the results.

Objectives: To compare centralised and local measurement of hair loss in mild-to-moderate AA.

Methods: In a Phase 2 clinical trial, centralised measurement of hair loss was performed from photographic images taken using a standardised protocol and professional camera equipment. Local scoring was also undertaken at screening/baseline for eligibility. We assessed: the repeatability of the central system (screening vs baseline values); the reproducibility of the central versus the local rating system; and the potential impact of each method on the endpoints, using a Monte Carlo simulation method.

Results: There was good agreement and consistency of scoring with central rating, which yielded margins of error 50% smaller than local rating. The simulations demonstrated that substituting local rating for central rating would reduce the likelihood of a statistically significant outcome by at least 50%, depending on the SALT-score-defined clinical response endpoint.

Conclusions: Central rating is most appropriate in the Phase 2 learning stage of clinical development. It provides an accurate representation of the quantity of hair loss, minimising error and ensuring consistency of measurement.
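The abstract's Monte Carlo approach can be sketched as follows. This is an illustrative simulation, not the authors' actual model: all parameters (arm size, true treatment effect, biological and rating noise SDs) are hypothetical. The only assumption taken from the abstract is that the local rating's margin of error is roughly twice the central rating's, so local rating noise is modelled at twice the central SD. With these made-up numbers the simulation shows the direction of the effect (noisier rating reduces the chance of a significant trial outcome) but will not reproduce the paper's exact figures.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_power(rating_sd, n_per_arm=40, true_effect=15.0,
                   biological_sd=20.0, n_trials=5000):
    """Fraction of simulated two-arm trials reaching p < 0.05, using a
    normal approximation to the two-sample t-test (|z| > 1.96, which is
    close to the exact critical t for ~80 degrees of freedom).

    Each patient's observed SALT change is true biological change plus
    rating measurement noise of SD `rating_sd`.
    """
    hits = 0
    for _ in range(n_trials):
        placebo = (rng.normal(0.0, biological_sd, n_per_arm)
                   + rng.normal(0.0, rating_sd, n_per_arm))
        active = (rng.normal(true_effect, biological_sd, n_per_arm)
                  + rng.normal(0.0, rating_sd, n_per_arm))
        se = np.sqrt(placebo.var(ddof=1) / n_per_arm
                     + active.var(ddof=1) / n_per_arm)
        z = (active.mean() - placebo.mean()) / se
        if abs(z) > 1.96:
            hits += 1
    return hits / n_trials

# Hypothetical noise levels: local rating SD set to twice the central SD,
# reflecting the abstract's report of ~50% smaller margins of error centrally.
power_central = simulate_power(rating_sd=5.0)
power_local = simulate_power(rating_sd=10.0)
print(f"central: {power_central:.3f}, local: {power_local:.3f}")
```

Under these assumptions, central rating yields higher estimated power than local rating; the size of the gap depends strongly on how rating noise compares with the biological variability and on the chosen endpoint.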

Matching journals

The top 6 journals account for 50% of the predicted probability mass.

Rank | Journal                                  | Papers in training set | Percentile | Probability
1    | Experimental Dermatology                 | 10                     | Top 0.1%   | 18.1%
2    | PLOS ONE                                 | 4510                   | Top 16%    | 10.8%
3    | JAMA Network Open                        | 127                    | Top 0.3%   | 7.0%
4    | Frontiers in Medicine                    | 113                    | Top 0.4%   | 7.0%
5    | Trials                                   | 25                     | Top 0.2%   | 6.6%
6    | Blood Advances                           | 54                     | Top 0.3%   | 4.4%
(50% of probability mass above this line)
7    | Scientific Reports                       | 3102                   | Top 33%    | 3.8%
8    | BMC Cancer                               | 52                     | Top 0.6%   | 3.7%
9    | PLOS Medicine                            | 98                     | Top 1%     | 3.2%
10   | Eye                                      | 11                     | Top 0.2%   | 3.2%
11   | PLOS Neglected Tropical Diseases         | 378                    | Top 2%     | 2.8%
12   | Cureus                                   | 67                     | Top 2%     | 2.8%
13   | European Journal of Cancer               | 10                     | Top 0.1%   | 1.9%
14   | Nature Communications                    | 4913                   | Top 48%    | 1.9%
15   | eClinicalMedicine                        | 55                     | Top 0.4%   | 1.8%
16   | BMJ Open                                 | 554                    | Top 9%     | 1.7%
17   | Journal of Investigative Dermatology     | 42                     | Top 0.3%   | 1.7%
18   | RMD Open                                 | 13                     | Top 0.2%   | 1.3%
19   | Tropical Medicine and Infectious Disease | 12                     | Top 0.2%   | 1.3%
20   | Frontiers in Nutrition                   | 23                     | Top 1%     | 0.7%
21   | eLife                                    | 5422                   | Top 60%    | 0.7%
22   | Pilot and Feasibility Studies            | 12                     | Top 0.8%   | 0.5%
23   | Frontiers in Public Health               | 140                    | Top 9%     | 0.5%
24   | Frontiers in Immunology                  | 586                    | Top 9%     | 0.5%