HETFORMER: Heterogeneous Transformer With Sparse Attention For Long-text Extractive Summarization

Ye Liu, Jian-Guo Zhang, Yao Wan, Congying Xia, Lifang He, Philip S. Yu · Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing · 2021

To capture the semantic graph structure of raw text, most existing summarization approaches build GNNs on top of a pre-trained model. However, these methods involve cumbersome pipelines and inefficient computation for long documents. To mitigate these issues, this paper proposes HETFORMER, a Transformer-based pre-trained model with multi-granularity sparse attention for long-text extractive summarization. Specifically, it models different types of semantic nodes in raw text as a latent heterogeneous graph and learns the heterogeneous relationships (edges) among nodes directly with the Transformer. Extensive experiments on both single- and multi-document summarization tasks show that HETFORMER achieves state-of-the-art ROUGE F1 while using less memory and fewer parameters.
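The core idea of multi-granularity sparse attention can be illustrated with a small mask-construction sketch: fine-grained token nodes attend only within a local window, while coarse-grained sentence-level nodes attend (and are attended to) globally. The node-type names, window size, and exact pattern below are illustrative assumptions, not HETFORMER's precise configuration.

```python
def sparse_attention_mask(node_types, window=2):
    """Build a boolean attention mask over a mixed sequence of nodes.

    node_types: list of type labels, e.g. "token" or "sent" (hypothetical
    labels for illustration). Sentence nodes get global attention; token
    nodes use a sliding window, so the mask stays sparse for long inputs.
    """
    n = len(node_types)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if node_types[i] == "sent" or node_types[j] == "sent":
                mask[i][j] = True   # global attention for sentence-level nodes
            elif abs(i - j) <= window:
                mask[i][j] = True   # local sliding-window attention for tokens
    return mask

# One sentence node followed by six token nodes.
types = ["sent"] + ["token"] * 6
m = sparse_attention_mask(types, window=1)
active = sum(map(sum, m))
print(active)  # far fewer active entries than a dense 7x7 = 49 mask
```

In a full model this boolean mask would be applied inside scaled dot-product attention (masked positions set to -inf before the softmax), which is what reduces memory relative to dense attention over long texts.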
