SELF-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations
Journal
EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings
ISBN
9798891760608
Date Issued
2023-01-01
Author(s)
Abstract
Large language models (LLMs) have exhibited a striking in-context learning (ICL) ability to adapt to target tasks with a few input-output demonstrations. For better ICL, different methods have been proposed to select representative demonstrations from existing training corpora. However, such settings are not aligned with real-world practice, as end-users usually query LLMs without access to demonstration pools. In this work, we introduce SELF-ICL, a simple framework which bootstraps LLMs' intrinsic capabilities to perform zero-shot ICL. Given a test input, SELF-ICL first prompts the model to generate pseudo-inputs. Next, the model predicts pseudo-labels for the pseudo-inputs via zero-shot prompting. Finally, we perform ICL for the test input with the pseudo-input-label pairs as demonstrations. Evaluation on 23 BIG-Bench Hard tasks shows that SELF-ICL outperforms zero-shot baselines on both average accuracy and head-to-head comparison. Moreover, with zero-shot chain-of-thought, SELF-ICL achieves results comparable to using real demonstrations. Additionally, we conduct a range of analyses to validate SELF-ICL's effectiveness and provide insights into its behavior under different settings.
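To make the three-step procedure in the abstract concrete, the following Python sketch wires the steps together. The llm() helper, the prompt wording, and the number of shots are illustrative assumptions, not the authors' implementation or exact prompts.

```python
def llm(prompt: str) -> str:
    """Placeholder for a zero-shot call to a large language model (assumed API)."""
    raise NotImplementedError("Plug in your own LLM call here.")


def self_icl(task_description: str, test_input: str, num_shots: int = 3) -> str:
    # Step 1: prompt the model to generate pseudo-inputs resembling the test input.
    gen_prompt = (
        f"{task_description}\n"
        f"Example instance:\n{test_input}\n"
        f"Write {num_shots} new, diverse instances for this task, one per line."
    )
    pseudo_inputs = [
        line.strip() for line in llm(gen_prompt).splitlines() if line.strip()
    ][:num_shots]

    # Step 2: predict a pseudo-label for each pseudo-input via zero-shot prompting.
    demos = []
    for pseudo_input in pseudo_inputs:
        pseudo_label = llm(f"{task_description}\nQ: {pseudo_input}\nA:").strip()
        demos.append((pseudo_input, pseudo_label))

    # Step 3: use the pseudo input-label pairs as in-context demonstrations
    # and answer the actual test input.
    demo_block = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in demos)
    final_prompt = f"{task_description}\n\n{demo_block}\n\nQ: {test_input}\nA:"
    return llm(final_prompt).strip()
```

In this sketch the same model plays all three roles (pseudo-input generation, pseudo-labeling, and final prediction), matching the zero-shot setting described above in which no external demonstration pool is available.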
Type
conference paper
