Specialised or Generic? Tokenization Choices for Radiology Language Models

Warr H., Xu W., Anthony H., Ibrahim Y., McGowan DR., Kamnitsas K.

The vocabulary used by language models (LMs), defined by the tokenizer, plays a key role in text generation quality. However, its impact remains under-explored in radiology. In this work, we address this gap by systematically comparing general, medical, and domain-specific tokenizers on the task of radiology report summarisation across three imaging modalities. We also investigate scenarios with and without LM pre-training on PubMed abstracts. Our findings demonstrate that medical and domain-specific vocabularies outperform widely used natural language alternatives when models are trained from scratch. Pre-training partially mitigates performance differences between tokenizers, whilst domain-specific tokenizers still achieve the most favourable results. Domain-specific tokenizers also reduce memory requirements owing to their smaller vocabularies and the shorter sequences they produce. These results demonstrate that adapting the vocabulary of LMs to the clinical domain provides practical benefits, including improved performance and reduced computational demands, making such models more accessible and effective for both research and real-world healthcare settings. Code available at: GitHub.
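The abstract's point that a domain-specific vocabulary yields shorter sequences (and hence lower memory demands) can be illustrated with a brief sketch. The snippet below is not the authors' code: the corpus file name, vocabulary size, and example report are assumptions. It trains a compact BPE tokenizer on in-domain text with the Hugging Face tokenizers library and compares tokenised sequence lengths against a general-purpose GPT-2 vocabulary.

```python
# Minimal sketch: train a small domain-specific BPE vocabulary and compare
# sequence lengths with a general-purpose tokenizer. Hypothetical corpus file
# and vocab size; not the paper's actual setup.
from tokenizers import Tokenizer, models, trainers, pre_tokenizers
from transformers import AutoTokenizer

# Domain-specific tokenizer trained on (hypothetical) radiology report text.
domain_tok = Tokenizer(models.BPE(unk_token="[UNK]"))
domain_tok.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=8000, special_tokens=["[UNK]", "[PAD]"])
domain_tok.train(files=["radiology_reports.txt"], trainer=trainer)

# General-purpose baseline vocabulary for comparison.
general_tok = AutoTokenizer.from_pretrained("gpt2")

report = "No focal consolidation, pleural effusion or pneumothorax identified."
print("domain-specific tokens:", len(domain_tok.encode(report).tokens))
print("general tokens:        ", len(general_tok.encode(report)))
```

On clinical text, the in-domain vocabulary typically keeps frequent radiological terms as single tokens, so the encoded sequence is shorter, which is the mechanism behind the memory savings the abstract reports.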

DOI

10.1007/978-3-032-07502-4_8

Type

Conference paper

Publication Date

2026-01-01

Volume

16146 LNCS

Pages

62 - 70

Total pages

8
