Predicting self-harm at one year in female prisoners: a retrospective cohort study using machine learning

Tiffin, P. A.; Leelamanthep, S.; Paton, L. W.; Perry, A.

2023-09-22 psychiatry and clinical psychology
10.1101/2023.09.20.23295770 medRxiv

Background: Self-harm and suicide are relatively overrepresented in incarcerated populations, especially in female prisons. Identifying those most at risk of significant self-harm could provide opportunities for effective, targeted interventions.

Aims: To develop and validate a machine learning-based algorithm capable of achieving a clinically useful level of accuracy when predicting the risk of self-harm in female prisoners.

Method: Data were available on 31 variables for 286 female prisoners from a single UK-based prison, including sociodemographic factors, the nature of the index offence, and responses to several psychometric assessment tools administered at baseline. Any self-harm incidents were recorded at 12-month follow-up. A machine learning algorithm (CatBoost) to predict self-harm at one year was developed and tested. To quantify uncertainty about the accuracy of the algorithm, the model-building and evaluation process was repeated 2,000 times and the distribution of results summarised.

Results: The mean Area Under the Curve (AUC) for the model on unseen (validation) data was 0.92 (SD 0.04). Sensitivity was 0.83 (SD 0.07), specificity 0.94 (SD 0.03), positive predictive value 0.78 (SD 0.08) and negative predictive value 0.95 (SD 0.02). If the algorithm were used in this population, for every 100 women screened this would equate to approximately 17 true positives and five false positives.

Conclusions: The accuracy of the algorithm was superior to that previously reported for predicting future self-harm in general and prison populations, and is likely to provide clinically useful levels of prediction. Research is needed to evaluate the feasibility of implementing this approach in a prison setting.
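The "17 true positives and five false positives per 100 screened" figure follows directly from the reported sensitivity and specificity once a prevalence is fixed. The sketch below is a hypothetical illustration of that arithmetic, not the authors' code; the ~20% prevalence is an assumption implied by the reported counts, as the abstract does not state it.

```python
# Hypothetical illustration (not the authors' code): expected screening
# outcomes per 100 women, given the reported operating characteristics.
def screening_counts(sensitivity, specificity, prevalence, n=100):
    """Return (true positives, false positives, PPV, NPV) for n screened."""
    positives = prevalence * n            # women who go on to self-harm
    negatives = n - positives
    tp = sensitivity * positives          # correctly flagged
    fp = (1 - specificity) * negatives    # incorrectly flagged
    tn = specificity * negatives
    fn = positives - tp
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return tp, fp, ppv, npv

# Assuming roughly 20% of the cohort self-harmed at follow-up
# (an assumed prevalence consistent with the reported figures):
tp, fp, ppv, npv = screening_counts(0.83, 0.94, 0.20)
print(round(tp), round(fp))   # roughly 17 true and 5 false positives
```

Under this assumed prevalence the implied PPV (~0.78) and NPV (~0.96) also line up with the values reported in the abstract, which is a useful internal consistency check.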

Matching journals

The top 10 journals account for 50% of the predicted probability mass.

Rank  Journal                                          Papers in training set  Top %     Probability
 1    PLOS ONE                                         4510                    Top 16%   12.4%
 2    European Psychiatry                              10                      Top 0.1%  6.8%
 3    BMJ Open                                         554                     Top 4%    4.9%
 4    PLOS Medicine                                    98                      Top 0.6%  4.9%
 5    The British Journal of Psychiatry                21                      Top 0.2%  4.0%
 6    BJPsych Open                                     25                      Top 0.1%  4.0%
 7    Acta Psychiatrica Scandinavica                   10                      Top 0.1%  3.6%
 8    Frontiers in Psychiatry                          83                      Top 1%    3.6%
 9    Frontiers in Artificial Intelligence             18                      Top 0.1%  3.3%
10    Journal of Affective Disorders                   81                      Top 0.7%  2.7%
      -- 50% of probability mass above this point --
11    Drug and Alcohol Dependence                      37                      Top 0.3%  2.6%
12    BMC Public Health                                147                     Top 2%    2.4%
13    Acta Neuropsychiatrica                           12                      Top 0.3%  2.1%
14    Scientific Reports                               3102                    Top 50%   2.1%
15    Public Health in Practice                        11                      Top 0.1%  1.9%
16    eClinicalMedicine                                55                      Top 0.4%  1.9%
17    BMC Medicine                                     163                     Top 3%    1.7%
18    Psychiatry Research                              35                      Top 0.9%  1.7%
19    Journal of Neurology, Neurosurgery & Psychiatry  29                      Top 0.7%  1.7%
20    Translational Psychiatry                         219                     Top 3%    1.5%
21    Journal of Psychiatric Research                  28                      Top 0.5%  1.3%
22    Epidemiology and Infection                       84                      Top 2%    1.3%
23    Cureus                                           67                      Top 4%    1.2%
24    Neuropsychopharmacology                          134                     Top 2%    1.2%
25    International Journal of Medical Informatics     25                      Top 1%    1.0%
26    Frontiers in Digital Health                      20                      Top 1%    1.0%
27    Frontiers in Public Health                       140                     Top 7%    1.0%
28    Occupational and Environmental Medicine          15                      Top 0.1%  1.0%
29    EClinicalMedicine                                21                      Top 0.7%  0.9%
30    Epidemiology and Psychiatric Sciences            10                      Top 0.3%  0.9%