Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting And Calibrated Confidence Estimation

Zhexin Zhang, Jiaxin Wen, Minlie Huang · Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) · 2023

Large pre-trained language models achieve impressive results across many tasks. However, recent work has shown that pre-trained language models may memorize a considerable fraction of their training data, creating a privacy risk of information leakage. In this paper, we propose a method named Ethicist for targeted training data extraction through loss-smoothed soft prompting and calibrated confidence estimation, investigating how to recover a suffix from the training data when given its prefix. To elicit memorization from the attacked model, we tune soft prompt embeddings while keeping the model itself fixed. We further propose a smoothing loss that smooths the loss distribution over the suffix tokens, making it easier to sample the correct suffix. To select the most probable suffix from a collection of sampled suffixes and estimate the prediction confidence, we propose a calibrated confidence estimation method that normalizes the confidence of each generated suffix with a local estimation. We show that Ethicist significantly improves extraction performance on a recently proposed public benchmark. We also investigate several factors influencing extraction performance, including the decoding strategy, model scale, prefix length, and suffix length. Our code is available at https://github.com/thu-coai/Targeted-Data-Extraction.
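The selection step described above can be sketched in a few lines. This is a minimal, illustrative interpretation, not the paper's implementation: it assumes you already have per-token log-probabilities for each sampled suffix (e.g., from the attacked model), and it normalizes each suffix's likelihood against the other sampled candidates as the "local" estimate. All function names here are hypothetical.

```python
import math
from typing import Dict, List

def sequence_loglik(token_logprobs: List[float]) -> float:
    """Log-likelihood of a suffix = sum of its per-token log-probabilities."""
    return sum(token_logprobs)

def calibrated_confidence(candidates: Dict[str, List[float]]) -> Dict[str, float]:
    """Normalize each sampled suffix's likelihood by the total likelihood of
    all sampled candidates, so the confidences sum to 1 within the sample pool.
    Uses the log-sum-exp trick for numerical stability."""
    logliks = {s: sequence_loglik(lp) for s, lp in candidates.items()}
    m = max(logliks.values())
    total = sum(math.exp(v - m) for v in logliks.values())
    return {s: math.exp(v - m) / total for s, v in logliks.items()}

# Hypothetical usage: two sampled suffixes with their per-token log-probs.
sampled = {
    "suffix A": [-0.1, -0.2],   # high-likelihood candidate
    "suffix B": [-1.0, -1.5],   # low-likelihood candidate
}
conf = calibrated_confidence(sampled)
best = max(conf, key=conf.get)  # the suffix the attack would emit
```

The design point is that a raw sequence probability is hard to compare across different prefixes, whereas normalizing within the locally sampled pool yields a relative confidence that is comparable and can be thresholded.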
