Efficient Continual Pre-training For Building Domain Specific Large Language Models

Yong Xie, Karan Aggarwal, Aitzaz Ahmad · Findings of the Association for Computational Linguistics: ACL 2024 · 2023

Large language models (LLMs) have demonstrated remarkable open-domain capabilities. LLMs tailored to a domain are typically trained entirely on a domain-specific corpus to excel at domain tasks. In this work, we explore an alternative strategy: continual pre-training as a means to develop domain-specific LLMs on top of an existing open-domain LLM. We introduce FinPythia-6.9B, developed through domain-adaptive continual pre-training on the financial domain. The continually pre-trained FinPythia shows consistent improvements on financial tasks over the original foundation model. We further explore simple but effective data selection strategies for continual pre-training. Our data selection strategies outperform vanilla continual pre-training with just 10% of its corpus size and cost, without any degradation on open-domain standard tasks. Our work proposes an alternative, cost-effective solution for building domain-specific LLMs.
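
The core idea of domain-adaptive continual pre-training is to resume the standard causal language modeling objective on a domain corpus, starting from the open-domain checkpoint. Below is a minimal sketch of this setup using Hugging Face Transformers; the corpus path `financial_corpus.txt` and all hyperparameters are illustrative assumptions, not the authors' exact configuration, and the data selection strategies from the paper are not reproduced here.

```python
# Hedged sketch: continual pre-training of an open-domain LLM (Pythia-6.9B)
# on a domain corpus with the standard causal-LM objective.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/pythia-6.9b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Pythia's tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load a raw-text domain corpus (placeholder path, one document per line).
raw = load_dataset("text", data_files={"train": "financial_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False gives the next-token (causal LM) objective used in pre-training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="finpythia-6.9b",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,
    learning_rate=1e-5,          # smaller LR than from-scratch pre-training (assumed)
    num_train_epochs=1,
    bf16=True,
    logging_steps=100,
    save_steps=1000,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

In practice, the paper's data selection step would filter the domain corpus down to roughly 10% of its size before this training loop is run.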
