Published: Aug 22, 2023

Few Shot and Zero Shot Prompting

The two simplest prompting techniques are zero-shot and few-shot prompting. In zero-shot prompting you directly tell the model what to do in your prompt and expect it to perform the task with zero examples. For example:

Your task is to classify the sentiment of the text.
text: The book was great!
sentiment:

or

Translate the following text into German.
text: I went swimming in the lake yesterday.
translation:
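
If you are calling a model through an API, a zero-shot prompt is simply the instruction plus the input, with nothing else. Below is a minimal sketch assuming the openai Python package (v1 or later) and the gpt-3.5-turbo model; the same idea applies to any chat completion API.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Zero-shot prompt: instruction plus input, no examples.
prompt = (
    "Your task is to classify the sentiment of the text.\n"
    "text: The book was great!\n"
    "sentiment:"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # e.g. "positive"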

Zero-shot prompting can be an effective technique to try on straightforward tasks or with instruction-tuned models. Its main benefit is that it is very cost effective in terms of tokens. If, after testing, the model's output is lacking, you can try few-shot prompting. As with zero-shot prompting, the idea is in the name: instead of zero examples, you provide the model with a few good examples before the input you want it to handle. Continuing the examples above:

Your task is to classify the sentiment of the text.
text: I hated the book, Bobby should have never fallen for Jessica!
sentiment: negative
text: What a funny movie! I laughed a lot and had a good time!
sentiment: positive
text: The book was great!
sentiment:

or:

Translate the following text into German.
text: I like eating ice cream.
translation: Ich mag es, Eis zu essen.
text: The cat is under the table.
translation: Die Katze ist unter dem Tisch.
text: I went swimming in the lake yesterday.
translation:
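
In practice you rarely hard-code the examples into the prompt string; it is easier to build the few-shot prompt from a list of (text, label) pairs. Here is a short sketch in Python, where the helper function name is purely illustrative:

# Assemble a few-shot sentiment prompt from labelled examples.
examples = [
    ("I hated the book, Bobby should have never fallen for Jessica!", "negative"),
    ("What a funny movie! I laughed a lot and had a good time!", "positive"),
]

def build_few_shot_prompt(examples, query):
    lines = ["Your task is to classify the sentiment of the text."]
    for text, label in examples:
        lines.append(f"text: {text}")
        lines.append(f"sentiment: {label}")
    lines.append(f"text: {query}")
    lines.append("sentiment:")
    return "\n".join(lines)

# Reproduces the sentiment prompt shown above.
print(build_few_shot_prompt(examples, "The book was great!"))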

Few-shot prompting can provide better results for complex tasks or tasks where context is particularly important. However, it is more costly in tokens than zero-shot prompting because of the extra examples. For example, if your task is summarization of long texts, you may not have enough tokens left to include more than one example, and if you are using an API that charges by the number of tokens, like OpenAI's, bear in mind that you also pay for every extra example in real dollars. You will have to consider your use case and weigh the extra cost against the improvement in the model's performance.
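
To get a feel for what the extra examples cost, you can count the tokens in a prompt before sending it. Below is a small sketch using the tiktoken library (the model name is an assumption; per-token pricing depends on the model you use):

import tiktoken

# Compare the token counts of the zero-shot and few-shot sentiment prompts.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")  # assumed model

zero_shot = (
    "Your task is to classify the sentiment of the text.\n"
    "text: The book was great!\n"
    "sentiment:"
)
few_shot = (
    "Your task is to classify the sentiment of the text.\n"
    "text: I hated the book, Bobby should have never fallen for Jessica!\n"
    "sentiment: negative\n"
    "text: What a funny movie! I laughed a lot and had a good time!\n"
    "sentiment: positive\n"
    "text: The book was great!\n"
    "sentiment:"
)

print("zero-shot tokens:", len(encoding.encode(zero_shot)))
print("few-shot tokens:", len(encoding.encode(few_shot)))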

After you’ve tried zero-shot and few-shot prompting, you can start experimenting with other techniques, like many-shot prompting or further tweaks to your prompt engineering. Always remember: the key to successful prompting is writing clear and specific instructions.

For more complex tasks, consider the chain of thought and tree of thoughts prompting techniques.

Techniques to improve Few-Shot Prompting

Several techniques have been proposed and empirically researched to improve few-shot prompting.

  • Try to keep the examples varied (but relevant and semantically similar to the task) and in a random order (to prevent the model from inferring an ordering); see the sketch after this list.
  • Define the labels (Min et al., 2022) in your prompt. For example, in the sentiment classification example above, the labels are positive and negative. This is also about being clear and precise in your prompts.
  • [[In Context Instruction Learning]] is a technique that improves zero-shot performance (using a model on a task it was not explicitly trained for) by combining few-shot prompting with instruction prompting.
  • [[Chain of Thought]] prompting (Wei et al., 2022) is a method that generates a series of concise, logical steps, known as reasoning chains, to guide the problem-solving process toward a final answer. It is especially beneficial for complex tasks when using large-scale models.
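
As a concrete illustration of the first two points, the sketch below shuffles the examples on every call and states the allowed labels up front. The extra example sentences and the exact wording are illustrative only, not taken from Min et al.:

import random

# Few-shot prompt with varied examples in random order and explicitly defined labels.
examples = [
    ("I hated the book, Bobby should have never fallen for Jessica!", "negative"),
    ("What a funny movie! I laughed a lot and had a good time!", "positive"),
    ("The plot was dull and the acting was worse.", "negative"),
    ("A heartwarming story I would happily read again.", "positive"),
]

def build_prompt(examples, query):
    shuffled = random.sample(examples, k=len(examples))  # random order, so no pattern to infer
    lines = [
        "Your task is to classify the sentiment of the text.",
        "The sentiment must be one of: positive, negative.",  # labels defined in the prompt
    ]
    for text, label in shuffled:
        lines.append(f"text: {text}")
        lines.append(f"sentiment: {label}")
    lines.append(f"text: {query}")
    lines.append("sentiment:")
    return "\n".join(lines)

print(build_prompt(examples, "The book was great!"))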
