Prompting is regarded as one of the crucial advances in few-shot
natural language processing. Recent research on prompting has moved from
discrete-token-based "hard prompts" to continuous "soft prompts", which employ
learnable vectors as pseudo prompt tokens and achieve better performance.
Though they show promise, these soft-prompting methods are observed
to rely heavily on a good initialization to take effect. Unfortunately, obtaining
a good initialization for soft prompts requires an understanding of the
language model's inner workings as well as elaborate design, which is no easy
task and must be redone from scratch for each new task. To remedy this, we propose a
generalized soft prompting method called MetaPrompting, which adopts the
well-recognized model-agnostic meta-learning algorithm to automatically find
better prompt initialization that facilitates fast adaptation to new prompting
tasks. Extensive experiments show that MetaPrompting tackles the soft prompt
initialization problem and brings significant improvements on four different
datasets (an accuracy improvement of over 6 points in the 1-shot setting), achieving
new state-of-the-art performance.
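The meta-learning idea behind this initialization search can be illustrated with a minimal sketch. The following toy example (not the paper's implementation) uses first-order MAML on a hypothetical family of quadratic "tasks", where each task t pulls a soft-prompt vector p toward a task-specific optimum c_t; the outer loop learns an initialization from which one inner gradient step adapts quickly to any task:

```python
import numpy as np

# Hypothetical toy setup: each "task" t wants the soft prompt p near a
# task-specific target c_t, with loss L_t(p) = ||p - c_t||^2.
# First-order MAML sketch: the inner step adapts p per task; the outer
# step moves the shared initialization using the post-adaptation gradient.

def loss_grad(p, c):
    """Gradient of L(p) = ||p - c||^2 with respect to p."""
    return 2.0 * (p - c)

def fomaml(task_targets, inner_lr=0.1, outer_lr=0.05, steps=500, dim=2):
    rng = np.random.default_rng(0)
    p = rng.normal(size=dim)  # shared soft-prompt initialization
    for _ in range(steps):
        meta_grad = np.zeros(dim)
        for c in task_targets:
            p_adapted = p - inner_lr * loss_grad(p, c)  # one inner SGD step
            meta_grad += loss_grad(p_adapted, c)        # first-order outer grad
        p -= outer_lr * meta_grad / len(task_targets)
    return p

targets = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]), np.array([0.0, 3.0])]
init = fomaml(targets)
# For these quadratic losses, the learned initialization converges to the
# mean of the task optima, i.e. close to [0.0, 1.0], so a single inner
# step adapts it quickly to any individual task.
print(np.round(init, 2))
```

In MetaPrompting proper, the inner loss would be the prompted language-model objective on a task's support set rather than a quadratic, but the two-level structure (per-task adaptation inside, initialization update outside) is the same.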