MSP: Multi-stage Prompting For Making Pre-trained Language Models Better Translators | Awesome LLM Papers

Zhixing Tan, Xiangwen Zhang, Shuo Wang, Yang Liu · Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) · 2022

Prompting has recently been shown to be a promising approach for applying pre-trained language models to downstream tasks. We present Multi-Stage Prompting (MSP), a simple and automatic approach for adapting pre-trained language models to translation tasks. To better mitigate the discrepancy between pre-training and translation, MSP divides the translation process through a pre-trained language model into three separate stages: an encoding stage, a re-encoding stage, and a decoding stage. At each stage, we independently apply a different continuous prompt, allowing the pre-trained language model to better shift toward translation. We conduct extensive experiments on three translation tasks, which show that our method significantly improves the translation performance of pre-trained language models.
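The core idea above can be sketched in a few lines: a frozen language model is called three times, and each call is conditioned on its own trainable continuous prompt. The sketch below is illustrative only; the names (`frozen_lm`, `Stage`), the toy linear "model", and all dimensions are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden size of the toy frozen LM (assumed)

def frozen_lm(states: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen pre-trained LM: a fixed linear map."""
    W = np.eye(D)  # LM parameters stay fixed; only prompts would be trained
    return states @ W

class Stage:
    """One MSP stage with its own continuous prompt of length p."""
    def __init__(self, prompt_len: int):
        # Trainable stage-specific prompt vectors (randomly initialized here).
        self.prompt = rng.normal(size=(prompt_len, D))

    def __call__(self, states: np.ndarray) -> np.ndarray:
        # Prepend the stage's prompt, run the frozen LM,
        # then drop the prompt positions from the output.
        x = np.concatenate([self.prompt, states], axis=0)
        return frozen_lm(x)[len(self.prompt):]

# Three separate stages, each with an independent prompt:
# encode the source, re-encode it, then decode.
encode, reencode, decode = Stage(4), Stage(4), Stage(4)

src = rng.normal(size=(5, D))  # toy source-token embeddings
enc = encode(src)              # encoding stage
re_enc = reencode(enc)         # re-encoding stage
out = decode(re_enc)           # decoding stage
print(out.shape)               # (5, 8)
```

In an actual system the decoding stage would run autoregressively and the prompts would be optimized by backpropagation while the LM stays frozen; this sketch only shows the stage-wise prompt structure.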
