Mining for Health: A Comparison of Word Embedding Methods for Analysis of EHRs Data
Getzen, E.; Ruan, Y.; Ungar, L.; Long, Q.
Electronic health records (EHRs), routinely collected as part of healthcare delivery, offer great promise for advancing precision health. At the same time, they present significant analytical challenges. In EHRs, data for individual patients are collected at irregular time intervals and with varying frequencies; they include both structured and unstructured data. Advanced statistical and machine learning methods have been developed to tackle these challenges, for example, for predicting diagnoses earlier and more accurately. One powerful tool for extracting useful information from EHR data is word embedding algorithms, which represent words as vectors of real numbers that capture the words' semantic and syntactic similarities. Learning embeddings can be viewed as automated feature engineering, producing features that can be used for predictive modeling of medical events. Methods such as Word2Vec, BERT, FastText, ELMo, and GloVe have been developed for word embedding, but there has been little work on re-purposing these algorithms for the analysis of structured medical data. Our work seeks to fill this important gap. We extended word embedding methods to embed (structured) medical codes from a patient's entire medical history, and used the resultant embeddings to build prediction models for diseases. We assessed the performance of multiple embedding methods in terms of predictive accuracy and computation time using the Medical Information Mart for Intensive Care (MIMIC) database. We found that the Word2Vec, FastText, and GloVe algorithms yield comparable models, while more recent contextual embeddings provide only marginal further improvement. Our results provide insights and guidance to practitioners regarding the use of word embedding methods for the analysis of EHR data.
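The core idea described above, treating structured medical codes as "words" and each patient's history as a "sentence", can be sketched in a few lines. This is a minimal illustration, not the paper's method: in place of a trained Word2Vec model it uses simple co-occurrence count vectors, and the diagnosis codes and patient histories are invented for the example.

```python
# Hedged sketch: medical codes as "words", patient histories as "sentences".
# Co-occurrence count vectors stand in for trained embeddings; the codes
# and histories below are hypothetical.
from collections import defaultdict
from math import sqrt

# Each patient's history is a sequence of (hypothetical) diagnosis codes.
histories = [
    ["E11.9", "I10", "E78.5"],     # diabetes, hypertension, dyslipidemia
    ["E11.9", "E78.5", "I25.10"],
    ["I10", "I25.10", "E78.5"],
    ["J45.909", "J30.1"],          # asthma, allergic rhinitis
    ["J45.909", "J30.1", "J45.909"],
]

# vec[code][other] = number of times the two codes share a patient history.
vocab = sorted({c for h in histories for c in h})
vec = {c: defaultdict(int) for c in vocab}
for h in histories:
    for a in h:
        for b in h:
            if a != b:
                vec[a][b] += 1

def cosine(a, b):
    """Cosine similarity between the count vectors of two codes."""
    dot = sum(vec[a][k] * vec[b][k] for k in vocab)
    na = sqrt(sum(v * v for v in vec[a].values()))
    nb = sqrt(sum(v * v for v in vec[b].values()))
    return dot / (na * nb) if na and nb else 0.0

# Codes occurring in similar patient contexts get similar vectors, mirroring
# the distributional similarity that word embeddings capture.
print(cosine("E11.9", "I10") > cosine("E11.9", "J45.909"))  # → True
```

A real pipeline would instead feed these code sequences to an embedding algorithm such as Word2Vec or GloVe and use the resulting vectors as features for a downstream disease-prediction model, as the abstract describes.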