Character N-gram Embeddings To Improve RNN Language Models

Sho Takase, Jun Suzuki, Masaaki Nagata · Proceedings of the AAAI Conference on Artificial Intelligence · 2019

This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character-level information. We focus on character n-grams, drawing on research in word embedding construction (Wieting et al. 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on three language modeling datasets: Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on two application tasks, machine translation and headline generation, and the results indicate that our proposed method also improves performance on these tasks.
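The core idea, building a word representation from the embeddings of its character n-grams and combining it with a standard word embedding, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the n-gram range (2–3), boundary markers, mean pooling, and elementwise-sum combination are all assumptions chosen for simplicity, and the embedding tables here are random stand-ins for learned parameters.

```python
import numpy as np

def char_ngrams(word, n_min=2, n_max=3):
    # Pad with boundary markers so prefix/suffix n-grams are distinguishable
    # from word-internal ones (e.g. "^c" vs. "ca").
    w = "^" + word + "$"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(w) - n + 1):
            grams.append(w[i:i + n])
    return grams

rng = np.random.default_rng(0)
DIM = 8

# Hypothetical embedding tables; in the actual model these are learned.
ngram_emb = {}
word_emb = {}

def lookup(table, key):
    # Lazily initialize a random vector for unseen keys (stand-in for training).
    if key not in table:
        table[key] = rng.normal(size=DIM)
    return table[key]

def embed(word):
    # Pool the character n-gram embeddings (mean pooling, one of several options),
    # then combine with the ordinary word embedding by elementwise sum.
    char_vec = np.mean([lookup(ngram_emb, g) for g in char_ngrams(word)], axis=0)
    return lookup(word_emb, word) + char_vec
```

Because the character n-gram vocabulary is far smaller than the word vocabulary, rare words still receive informative representations through n-grams they share with frequent words.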
