Inexpensive Domain Adaptation Of Pretrained Language Models: Case Studies On Biomedical NER And Covid-19 QA

Nina Poerner, Ulli Waltinger, Hinrich Schütze · Findings of the Association for Computational Linguistics: EMNLP 2020 · 2020

Domain adaptation of Pretrained Language Models (PTLMs) is typically achieved by unsupervised pretraining on target-domain text. While successful, this approach is expensive in terms of hardware, runtime and CO₂ emissions. Here, we propose a cheaper alternative: We train Word2Vec on target-domain text and align the resulting word vectors with the wordpiece vectors of a general-domain PTLM. We evaluate on eight biomedical Named Entity Recognition (NER) tasks and compare against the recently proposed BioBERT model. We cover over 60% of the BioBERT-BERT F1 delta, at 5% of BioBERT's CO₂ footprint and 2% of its cloud compute cost. We also show how to quickly adapt an existing general-domain Question Answering (QA) model to an emerging domain: the Covid-19 pandemic.
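The alignment step described in the abstract can be sketched as fitting a linear map between two embedding spaces over a shared vocabulary. The snippet below is a minimal illustration with random toy matrices standing in for the domain Word2Vec vectors and the PTLM wordpiece vectors; the matrix shapes, the least-squares objective, and the anchor-word setup are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

# Toy stand-ins: rows are vectors for words present in BOTH vocabularies
# (hypothetical shared anchor words). In the paper's setting, X would hold
# target-domain Word2Vec vectors and Y the general-domain PTLM vectors.
rng = np.random.default_rng(0)
d_src, d_tgt, n_shared = 8, 6, 50
X = rng.normal(size=(n_shared, d_src))            # domain Word2Vec vectors
true_W = rng.normal(size=(d_src, d_tgt))
Y = X @ true_W + 0.01 * rng.normal(size=(n_shared, d_tgt))  # PTLM vectors

# Learn a linear map W minimizing ||XW - Y||_F over the shared vocabulary
# (ordinary least squares, one simple choice of alignment objective).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Project a domain word's vector into the PTLM embedding space.
domain_vec = rng.normal(size=(d_src,))
aligned_vec = domain_vec @ W
print(aligned_vec.shape)  # (6,)
```

Once aligned, domain-specific word vectors live in the same space as the PTLM's wordpiece embeddings, so they can extend the model's input vocabulary without any target-domain pretraining of the transformer itself.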
