Negative Training For Neural Dialogue Response Generation

Tianxing He, James Glass · Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics · 2020

Although deep learning models have brought tremendous advancements to the field of open-domain dialogue response generation, recent research has revealed that trained models exhibit undesirable generation behaviors, such as malicious responses and generic (boring) responses. In this work, we propose a framework named "Negative Training" to minimize such behaviors. Given a trained model, the framework first finds generated samples that exhibit the undesirable behavior, and then uses them to provide negative training signals for fine-tuning the model. Our experiments show that negative training can significantly reduce the hit rate of malicious responses, and can discourage frequent responses, improving response diversity.
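The core mechanism can be sketched in miniature. The following is our own toy illustration, not the paper's model or code: a categorical "response model" over a few canned responses, where a negative-training step lowers the log-probability of a response flagged as undesirable (here, a generic one) by stepping against the gradient of its log-likelihood. The response list and function names are hypothetical.

```python
import math

# Hypothetical tiny response inventory; "i don't know" plays the role of
# a generic (boring) response flagged for negative training.
RESPONSES = ["i don't know", "that sounds great", "tell me more"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def negative_step(logits, bad_idx, lr=0.5):
    """One negative-training update on a categorical model.

    d log p(bad) / d logit_j = (1 if j == bad else 0) - p_j,
    so stepping *against* this gradient decreases the flagged
    response's probability (the opposite of an MLE step).
    """
    p = softmax(logits)
    return [lj - lr * ((1.0 if j == bad_idx else 0.0) - p[j])
            for j, lj in enumerate(logits)]

logits = [0.0] * len(RESPONSES)          # uniform model to start
bad = RESPONSES.index("i don't know")    # sample flagged as undesirable
before = softmax(logits)[bad]
for _ in range(20):
    logits = negative_step(logits, bad)
after = softmax(logits)[bad]
print(f"p(generic) before={before:.3f} after={after:.3f}")
```

In the paper's setting the same sign-flipped likelihood gradient is applied to a neural dialogue model's parameters rather than to raw logits, with the flagged samples drawn from the model's own generations.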
