Background & Objective: Biomedical text data are increasingly available for research. Tokenization is an initial step in many biomedical text mining pipelines: the process of parsing an input biomedical sentence (represented as a digital character sequence) into a discrete set of word/token symbols that convey focused semantic/syntactic meaning. The objective of this study is to explore variation in tokenizer outputs when applied across a series of challenging biomedical sentences. Method: Diaz [2015] introduces 24 challenging example biomedical sentences for comparing tokenizer performance. In this study, we descriptively explore variation in the outputs of eight tokenizers applied to each example biomedical sentence. The tokenizers compared in this study are the NLTK white space tokenizer, the NLTK Penn Treebank tokenizer, the spaCy and scispaCy tokenizers, the Stanza and Stanza-CRAFT tokenizers, the UDPipe tokenizer, and the R tokenizers package. Results: For many examples, tokenizers performed similarly effectively; however, for certain examples, there was meaningful variation in the returned outputs. The white space tokenizer often performed differently from the other tokenizers. We observed performance similarities between tokenizers implementing rule-based systems (e.g. pattern matching and regular expressions) and tokenizers implementing neural architectures for token classification. Oftentimes, the challenging tokens resulting in the greatest variation in outputs are those words which convey substantive and focused biomedical/clinical meaning (e.g. x-ray, IL-10, TCR/CD3, CD4+ CD8+, and (Ca2+)-regulated). Conclusion: When state-of-the-art, open-source tokenizers from Python and R were applied to a series of challenging biomedical example sentences, we observed subtle variation in the returned outputs.
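To make the comparison concrete, below is a minimal sketch of how two rule-based NLTK tokenizers and the spaCy tokenizer might be run side by side on a single sentence. The sentence is illustrative only (echoing the token types noted above), not one of the 24 examples from Diaz [2015], and the sketch assumes NLTK, spaCy, and the en_core_web_sm model are installed.

```python
# Minimal sketch: side-by-side tokenizer outputs on one sentence.
# Assumes nltk and spacy are installed and en_core_web_sm is downloaded;
# the sentence is illustrative, not drawn from Diaz [2015].
from nltk.tokenize import WhitespaceTokenizer, TreebankWordTokenizer
import spacy

sentence = "IL-10 and TCR/CD3 levels differed in CD4+ CD8+ cells after x-ray."

# NLTK white space tokenizer: splits on runs of whitespace only.
print(WhitespaceTokenizer().tokenize(sentence))

# NLTK Penn Treebank tokenizer: rule-based, regular-expression splitting.
print(TreebankWordTokenizer().tokenize(sentence))

# spaCy tokenizer (rule-based; the pipeline's other components go unused here).
nlp = spacy.load("en_core_web_sm")
print([t.text for t in nlp(sentence)])
```

Printing the three token lists side by side makes divergences on tokens like IL-10 or CD4+ immediately visible, which is the descriptive comparison the study performs across all eight tokenizers.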
Objective: To comparatively evaluate several transformer model architectures at identifying protected health information (PHI) in the i2b2/UTHealth 2014 clinical text de-identification challenge corpus. Methods: The i2b2/UTHealth 2014 corpus contains N=1304 clinical notes obtained from N=296 patients. Using a transfer learning framework, we fine-tune several transformer model architectures on the corpus, including BERT-base, BERT-large, RoBERTa-base, RoBERTa-large, ALBERT-base, and ALBERT-xxlarge. During fine-tuning we vary the following model hyper-parameters: batch size, number of training epochs, learning rate, and weight decay. We fine-tune models on a training dataset, evaluate and select optimally performing models on an independent validation dataset, and lastly assess generalization performance on a held-out test dataset. We assess model performance in terms of accuracy, precision (positive predictive value), recall (sensitivity), and F1 score (harmonic mean of precision and recall). We are interested in overall model performance (PHI identified vs. PHI not identified), as well as PHI-class-specific performance. Results: We observe that the RoBERTa-large models perform best at identifying PHI in the i2b2/UTHealth 2014 corpus, achieving >99% overall accuracy and 96.7% recall/precision on the held-out test corpus. Performance was strong across many PHI classes; however, accuracy, precision, and recall decreased for the following entity classes: professions, organizations, ages, and certain locations. Conclusions: Transformers are a promising model class/architecture for clinical text de-identification. With minimal hyper-parameter tuning, transformers afford researchers/clinicians the opportunity to obtain (near) state-of-the-art performance.
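As a rough illustration of this fine-tuning framework, the sketch below uses the HuggingFace transformers Trainer API for token classification. The PHI label set, the dataset variables (train_ds, val_ds), and the hyper-parameter values are placeholders for exposition, not the study's exact configuration.

```python
# Hedged sketch of transformer fine-tuning for PHI token classification,
# assuming the HuggingFace transformers library. Labels, datasets, and
# hyper-parameter values below are placeholders, not the study's setup.
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

phi_labels = ["O", "B-NAME", "I-NAME", "B-DATE", "I-DATE"]  # illustrative subset

model_name = "roberta-large"  # alternatives tried: BERT and ALBERT variants
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(phi_labels))

# The four hyper-parameters varied in the study: batch size, number of
# training epochs, learning rate, and weight decay (placeholder values).
args = TrainingArguments(
    output_dir="phi-deid",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
    weight_decay=0.01,
)

# train_ds / val_ds: assumed pre-built datasets of tokenized notes with
# word-piece-aligned BIO PHI labels; their construction from the
# i2b2/UTHealth 2014 corpus is omitted here.
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=val_ds)
trainer.train()
```

Model selection then proceeds as described above: candidate configurations are scored on the validation split, and only the selected model is evaluated once on the held-out test split to estimate generalization performance.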