Zero-shot Learning
Asking a model to perform a task it was never explicitly trained on, with no examples.
Zero-shot learning is asking the model to do something cold: no examples, just instructions. Modern LLMs are surprisingly good at zero-shot tasks because their pretraining covered such a wide range of them.
Zero-shot is what makes LLMs feel like general-purpose tools. The same model that translates also summarizes, also extracts JSON, also generates SQL, also writes poetry: none of which it was specifically trained to do, but all of which it has seen done in its training data.
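In practice, a zero-shot prompt is nothing more than an instruction plus the input, with no worked examples in between. A minimal sketch in Python showing one template reused across tasks (the task wording and sample document are illustrative, not from any particular API):

```python
def zero_shot_prompt(instruction: str, text: str) -> str:
    """Build a zero-shot prompt: instruction plus input, no examples."""
    return f"{instruction}\n\nInput:\n{text}\n\nOutput:"

# Hypothetical task instructions; the same template serves all of them.
tasks = {
    "translate": "Translate the following text into French.",
    "summarize": "Summarize the following text in one sentence.",
    "extract": "Extract every email address as a JSON list.",
}

doc = "Contact sales@example.com for a quote."
prompts = {name: zero_shot_prompt(inst, doc) for name, inst in tasks.items()}
```

The resulting strings would be sent as-is to the model; nothing task-specific changes except the instruction line.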
Zero-shot performance scales with model capability. Stronger models close the gap between zero-shot and few-shot performance, and sometimes exceed few-shot on creative tasks, where examples can constrain the output.