
Scaling text de-identification using locally augmented ensembles

Murugadoss, K.; Kilamsetty, S.; Doddahonnaiah, D.; Iyer, N.; Pencina, M.; Ferranti, J.; Halamka, J.; Malin, B. A.; Ardhanari, S.

2024-06-20 | health informatics
medRxiv | DOI: 10.1101/2024.06.20.24308896

The natural language text in electronic health records (EHRs), such as clinical notes, often contains information that is not captured elsewhere (e.g., degree of disease progression and responsiveness to treatment) and is thus invaluable for downstream clinical analysis. However, to make such data available for broader research purposes in the United States, personally identifiable information (PII) is typically removed from the EHR in accordance with the Privacy Rule of the Health Insurance Portability and Accountability Act (HIPAA). Automated de-identification systems that match human accuracy in identifier detection can enable access, at scale, to more diverse de-identified datasets, thereby fostering robust findings in medical research to advance patient care. The best-performing of such systems employ language models that require time and effort for retraining or fine-tuning on newer datasets to achieve consistent results, as well as revalidation on older datasets. Hence, there is a need to adapt text de-identification methods to datasets across health institutions. Given the success of foundational large language models (LLMs), such as ChatGPT, in a wide array of natural language processing (NLP) tasks, they seem a natural fit for identifying PII across varied datasets. In this paper, we introduce locally augmented ensembles, which adapt an existing PII detection ensemble method trained at one health institution to others by using institution-specific dictionaries to capture location-specific PII and to recover medically relevant information that was previously misclassified as PII. We augment an ensemble model created at Mayo Clinic and test it on a dataset of 15,716 clinical notes at Duke University Health System. We further compare the task-specific, fine-tuned ensemble against LLM-based prompt-engineering solutions on the 2014 i2b2 and 2003 CoNLL NER datasets for prediction accuracy, speed, and cost.
On the Duke notes, our approach achieves recall and precision of 0.996 and 0.982, respectively, compared to 0.989 and 0.979 without the augmentation. Our results indicate that LLMs may require significant prompt-engineering effort to reach the levels attained by ensemble approaches. Further, given the current state of the technology, they are at least 3 times slower and 5 times more expensive to operate than the ensemble approach.
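The local-augmentation idea in the abstract (institution-specific dictionaries that add location-specific PII the base ensemble misses and recover medical terms it wrongly flags) can be sketched as a simple post-processing step. This is a minimal, hypothetical illustration; the dictionary contents, function names, and token-level flag representation are assumptions, not the authors' implementation.

```python
# Hedged sketch of a "locally augmented ensemble": post-process an ensemble's
# per-token PII flags with two institution-specific dictionaries.
# All dictionary entries below are illustrative examples only.

LOCAL_PII = {"Durham"}                  # hypothetical: location-specific PII the base ensemble misses
MEDICAL_ALLOWLIST = {"Parkinson"}       # hypothetical: medical terms often misflagged as person names

def augment(tokens, ensemble_flags):
    """Adjust per-token PII flags (True = redact) using the local dictionaries."""
    out = []
    for tok, is_pii in zip(tokens, ensemble_flags):
        if tok in MEDICAL_ALLOWLIST:
            out.append(False)           # recover a medically relevant term
        elif tok in LOCAL_PII:
            out.append(True)            # catch location-specific PII
        else:
            out.append(is_pii)          # keep the ensemble's decision
    return out

tokens = ["Patient", "with", "Parkinson", "seen", "in", "Durham"]
flags = [False, False, True, False, False, False]  # base ensemble flags "Parkinson", misses "Durham"
print(augment(tokens, flags))  # [False, False, False, False, False, True]
```

In this toy example, the allowlist recovers "Parkinson" (boosting effective precision) while the local dictionary catches "Durham" (boosting recall), mirroring the direction of the reported gains.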

Matching journals

The top 2 journals account for 50% of the predicted probability mass.

Rank  Journal                                                  Papers in training set  Percentile  Probability
 1    Journal of Biomedical Informatics                                            45  Top 0.1%    32.6%
 2    Journal of the American Medical Informatics Association                      61  Top 0.1%    18.4%
      (50% of probability mass above this line)
 3    Scientific Reports                                                         3102  Top 28%      4.3%
 4    BMC Medical Informatics and Decision Making                                  39  Top 0.7%     3.9%
 5    npj Digital Medicine                                                         97  Top 1%       3.8%
 6    JAMIA Open                                                                   37  Top 0.4%     3.5%
 7    PLOS Digital Health                                                          91  Top 0.8%     3.5%
 8    International Journal of Medical Informatics                                 25  Top 0.5%     2.7%
 9    Artificial Intelligence in Medicine                                          15  Top 0.2%     2.1%
10    JCO Clinical Cancer Informatics                                              18  Top 0.5%     1.6%
11    IEEE Journal of Biomedical and Health Informatics                            34  Top 1%       1.6%
12    Nature Communications                                                      4913  Top 54%      1.5%
13    PLOS ONE                                                                   4510  Top 59%      1.3%
14    JMIR Medical Informatics                                                     17  Top 1%       1.2%
15    Biology Methods and Protocols                                                53  Top 2%       1.1%
16    Journal of Medical Internet Research                                         85  Top 4%       0.9%
17    Patterns                                                                     70  Top 2%       0.9%
18    iScience                                                                   1063  Top 27%      0.9%
19    Bioinformatics                                                             1061  Top 9%       0.8%
20    Frontiers in Digital Health                                                  20  Top 1%       0.7%
21    NAR Genomics and Bioinformatics                                             214  Top 4%       0.6%
22    BioData Mining                                                               15  Top 1%       0.6%