From zero-shot prompting to RAG - Part 2: Few-shot prompting
Few-shot prompting is an effective and flexible way to prepare language models for new tasks - without any additional training.
Last week we looked at the simplest form of prompting: zero-shot prompting. There, you write a single prompt and take the model's answer as the result. This works well for simple questions that need a quick answer. However, as soon as more context is required or the demands on the answer grow, this method quickly reaches its limits. This is where a decisive strength of language models comes into play: they can pick up on context supplied in the prompt itself. That brings us to the second part of the series: few-shot prompting.
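To make the idea concrete, here is a minimal sketch of how a few-shot prompt can be assembled: a handful of labeled examples is placed in front of the new input so the model can infer the task from context. The task (sentiment classification), the example texts, and the helper function name are all illustrative choices, not part of the original article; the actual model call is omitted.

```python
# Hypothetical few-shot setup: two labeled examples supply the task
# format and context, without any additional training of the model.
examples = [
    ("The delivery was fast and the packaging flawless.", "positive"),
    ("The device stopped working after two days.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Assemble a prompt from labeled examples followed by the new input."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")  # blank line separates the examples
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "Great value for money.")
print(prompt)
```

The resulting string would then be sent to a language model of your choice; because the examples fix the format, the model typically answers with just the missing label.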