
Scheduled Drophead: A Regularization Method For Transformer Models

Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou · Findings of the Association for Computational Linguistics: EMNLP 2020 · 2020

In this paper, we introduce DropHead, a structured dropout method specifically designed for regularizing the multi-head attention mechanism, a key component of the transformer, a state-of-the-art model for various NLP tasks. In contrast to conventional dropout mechanisms, which randomly drop individual units or connections, DropHead drops entire attention heads during training. This prevents the multi-head attention model from being dominated by a small subset of attention heads and reduces the risk of overfitting the training data, thereby making more efficient use of the multi-head attention mechanism. Motivated by recent studies of the learning dynamics of multi-head attention, we propose a specific dropout rate schedule that adaptively adjusts the dropout rate of DropHead to achieve a better regularization effect. Experimental results on both machine translation and text classification benchmarks demonstrate the effectiveness of the proposed approach.
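To make the idea concrete, below is a minimal PyTorch sketch of head-level structured dropout as described in the abstract: entire attention heads are zeroed during training rather than individual units. The tensor layout, the `1/(1-p)` rescaling, and the V-shaped `scheduled_rate` helper are illustrative assumptions, not the authors' reference implementation.

```python
import torch


def drop_head(attn_output: torch.Tensor, p: float, training: bool = True) -> torch.Tensor:
    """Structured dropout over attention heads (DropHead-style sketch).

    attn_output: per-head outputs of shape (batch, num_heads, seq_len, head_dim),
                 taken before the heads are concatenated and projected.
    p: probability of dropping each head independently per example.
    """
    if not training or p <= 0.0:
        return attn_output
    batch, num_heads = attn_output.shape[:2]
    # One keep/drop decision per head per example: this is what makes the
    # dropout "structured", in contrast to element-wise dropout.
    keep = (torch.rand(batch, num_heads, 1, 1, device=attn_output.device) >= p)
    # Rescale surviving heads so the expected magnitude of the combined
    # output is unchanged (a common dropout convention, assumed here).
    return attn_output * keep.to(attn_output.dtype) / (1.0 - p)


def scheduled_rate(step: int, total_steps: int, p_max: float = 0.2) -> float:
    """Hypothetical V-shaped schedule for the DropHead rate: start at p_max,
    anneal to 0 by the midpoint of training, then ramp back up to p_max.
    The exact schedule shape is specified in the paper; this function only
    illustrates the notion of a scheduled dropout rate."""
    mid = total_steps // 2
    if step <= mid:
        return p_max * (1.0 - step / mid)
    return p_max * (step - mid) / (total_steps - mid)
```

In a training loop, one would compute `p = scheduled_rate(step, total_steps)` each step and apply `drop_head` to the per-head attention outputs before the output projection.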
