The concept of "soft prompts" has evolved since its introduction, offering a flexible and parameter-efficient alternative to traditional prompt-based learning methods. Here's an overview of the key works on soft prompts:
The Introduction of Soft Prompts (2021): Brian Lester et al. introduced prompt tuning at EMNLP 2021 ("The Power of Scale for Parameter-Efficient Prompt Tuning"). The method conditions a frozen language model on tunable soft prompts: learnable continuous vectors prepended to the input in place of manually crafted text prompts. The work showed that soft prompts can achieve competitive performance while keeping the model frozen, which is far more parameter-efficient than fine-tuning all weights.
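To make the mechanism concrete, here is a minimal toy sketch of the core idea: the model's token embeddings stay frozen, and the only trainable parameters are a few prompt vectors prepended to the input sequence. All names, dimensions, and the tiny vocabulary below are illustrative assumptions, not the paper's actual implementation.

```python
import random

EMBED_DIM = 4   # toy embedding width (assumption for illustration)
PROMPT_LEN = 3  # number of tunable soft prompt vectors

# Frozen embedding table for a toy vocabulary (never updated during tuning).
frozen_embeddings = {
    "hello": [0.1, 0.2, 0.3, 0.4],
    "world": [0.5, 0.6, 0.7, 0.8],
}

# The only trainable parameters: PROMPT_LEN vectors of width EMBED_DIM.
random.seed(0)
soft_prompt = [[random.uniform(-0.5, 0.5) for _ in range(EMBED_DIM)]
               for _ in range(PROMPT_LEN)]

def build_model_input(tokens):
    """Prepend the tunable soft prompt to the frozen token embeddings."""
    return soft_prompt + [frozen_embeddings[t] for t in tokens]

seq = build_model_input(["hello", "world"])
# The model then consumes a sequence of length PROMPT_LEN + len(tokens);
# gradients flow only into soft_prompt, not into frozen_embeddings.
```

In a real setup the prompt vectors would be updated by backpropagation through the frozen model; the sketch only shows how the input is assembled.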
SPoT: Soft Prompt Transfer (2022): Building on the initial success of prompt tuning, SPoT (Soft Prompt Transfer) was introduced by Tu Vu et al. at ACL 2022. SPoT first learns a soft prompt on one or more source tasks and then uses it to initialize the prompt for a target task, significantly improving prompt tuning performance across a range of NLP tasks.
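The transfer step itself is simple to sketch: the target task's prompt starts as a copy of the source-task prompt and is then tuned further on the target task. The values and the single "update step" below are illustrative assumptions, not SPoT's actual training code.

```python
import copy

# Toy prompt learned on a source task (two vectors of width 2).
source_prompt = [[0.2, -0.1], [0.4, 0.3]]

def transfer_prompt(source):
    """Initialize the target-task prompt from the source-task prompt."""
    return copy.deepcopy(source)

target_prompt = transfer_prompt(source_prompt)

# Target-task tuning then updates target_prompt without touching the
# source prompt; one toy "gradient step" stands in for that here.
target_prompt[0][0] += 0.05
```

The deep copy matters: the source prompt stays reusable for other target tasks while each target's prompt diverges during its own tuning.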
Applications in Multi-task Learning and Flexibility: Soft prompts are particularly advantageous for multi-task settings, since a single frozen model can switch between tasks simply by swapping the prompt attached to the input. They have been applied broadly, including to sentiment analysis, machine translation, and language generation.
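The multi-task pattern above can be sketched as a lookup: keep one trained prompt per task and prepend the selected prompt at inference time, with the backbone model shared and frozen. The task names and vectors below are purely illustrative.

```python
# One stored soft prompt per task (toy 2-dimensional vectors).
task_prompts = {
    "sentiment": [[0.1, 0.0], [0.2, 0.1]],
    "translation": [[-0.3, 0.4], [0.0, 0.5]],
}

def prepare_input(task, token_embeddings):
    """Select the task's soft prompt and prepend it to the input embeddings."""
    return task_prompts[task] + token_embeddings

tokens = [[0.9, 0.9]]  # embeddings of the actual input tokens
sentiment_in = prepare_input("sentiment", tokens)
translation_in = prepare_input("translation", tokens)
# Same tokens, same frozen model: only the prepended prompt differs per task.
```

Because each prompt is only a handful of vectors, serving a new task means storing kilobytes of prompt parameters rather than a full model copy.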
These papers form the foundation of soft prompt research, focusing on optimizing parameter efficiency, enabling multi-task learning, and providing flexibility in handling diverse tasks. For further reading, I recommend checking the EMNLP 2021 and ACL 2022 proceedings.