Recurrent Stacking Of Layers For Compact Neural Machine Translation Models

Raj Dabre, Atsushi Fujita · Proceedings of the AAAI Conference on Artificial Intelligence · 2018

In neural machine translation (NMT), the most common practice is to stack a number of recurrent or feed-forward layers in the encoder and the decoder. While the addition of each new layer typically improves translation quality, it also leads to a significant increase in the number of parameters. In this paper, we propose to share parameters across all the layers, leading to a recurrently stacked NMT model. We empirically show that a model that recurrently stacks a single layer 6 times achieves translation quality comparable to that of a model with 6 separately parameterized layers. We also show that using pseudo-parallel corpora generated by back-translation leads to further significant improvements in translation quality.
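The core idea is that the same layer parameters are reused at every depth of the stack. Below is a minimal PyTorch sketch of this, contrasting a single shared encoder layer applied 6 times with a conventional stack of 6 independently parameterized layers; the layer type, dimensions, and pass count are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn

class RecurrentlyStackedEncoder(nn.Module):
    """Applies one shared encoder layer repeatedly instead of
    stacking num_passes independently parameterized layers."""
    def __init__(self, d_model=512, nhead=8, num_passes=6):
        super().__init__()
        # A single layer whose parameters are reused on every pass.
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.num_passes = num_passes

    def forward(self, x):
        for _ in range(self.num_passes):
            x = self.shared_layer(x)  # same weights at every depth
        return x

def count_params(m):
    return sum(p.numel() for p in m.parameters())

shared = RecurrentlyStackedEncoder()
baseline = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=6)  # six separately parameterized layers

# The shared-layer encoder holds roughly 1/6 of the baseline's encoder parameters.
print(count_params(shared), count_params(baseline))
```

The sketch uses a Transformer-style layer for concreteness; the same weight-sharing scheme applies to recurrent layers, and the paper reports that such a compact model matches the translation quality of the unshared 6-layer baseline.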
