Few-shot Text Classification With Triplet Networks, Data Augmentation, And Curriculum Learning

Jason Wei, Chengyu Huang, Soroush Vosoughi, Yu Cheng, Shiqi Xu · Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies · 2021

Few-shot text classification is a fundamental NLP task in which a model aims to classify text into a large number of categories, given only a few training examples per category. This paper explores data augmentation – a technique particularly suitable for training with limited data – for this few-shot, highly-multiclass text classification setting. On four diverse text classification tasks, we find that common data augmentation techniques can improve the performance of triplet networks by up to 3.0% on average. To further boost performance, we present a simple training strategy called curriculum data augmentation, which leverages curriculum learning by first training on only original examples and then introducing augmented data as training progresses. We explore a two-stage and a gradual schedule, and find that, compared with standard single-stage training, curriculum data augmentation trains faster, improves performance, and remains robust to high amounts of noising from augmentation.
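The two-stage schedule described above can be sketched as a small training-data generator: early epochs yield only the original examples, and later epochs mix in augmented copies. The `swap_augment` helper and the `switch_epoch` parameter are hypothetical stand-ins (the swap operation loosely mimics word-swap augmentation such as EDA), not the paper's exact implementation.

```python
import random

def swap_augment(text, rng):
    """Toy augmentation (hypothetical): randomly swap two words in the
    text, standing in for word-level noising techniques like EDA."""
    words = text.split()
    if len(words) < 2:
        return text
    i, j = rng.sample(range(len(words)), 2)
    words[i], words[j] = words[j], words[i]
    return " ".join(words)

def two_stage_curriculum(examples, total_epochs, switch_epoch, seed=0):
    """Two-stage curriculum data augmentation: epochs before
    `switch_epoch` train on only the original examples; from
    `switch_epoch` on, one augmented copy per original is added."""
    rng = random.Random(seed)
    for epoch in range(total_epochs):
        batch = list(examples)
        if epoch >= switch_epoch:  # stage 2: introduce noisier augmented data
            batch += [swap_augment(x, rng) for x in examples]
        yield epoch, batch

# Usage: a 4-epoch run that introduces augmentation after epoch 2.
data = ["the movie was great", "terrible plot and acting"]
for epoch, batch in two_stage_curriculum(data, total_epochs=4, switch_epoch=2):
    print(epoch, len(batch))
```

The gradual schedule the paper also explores would replace the single `switch_epoch` cutoff with an augmentation strength (or fraction of augmented examples) that increases over epochs.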
