Published: Aug 28, 2023

By now, whatever industry you are in, you have surely heard about AI, and these days, when your colleagues, bosses, and friends talk about AI, they usually mean generative AI. These tools are transforming how we work: they can make you more productive, more efficient, and better at your job if you know how to use them. Unfortunately, like so many other fantastic tools, they are rarely used to their full potential because people do not read the manual and miss most of the functionality. The good news is that you don’t need a PhD, or an understanding of autoregressive models and flash attention, to use generative AI models effectively.

This Pocket Guide to Prompting will teach you enough to make you more efficient than everyone else who is blindly typing prompts and hoping for a good outcome.

As you delve deeper into the world of AI, you’ll quickly realize that the magic lies not just in the technology itself but in how you communicate with it. Think of these AI models as highly skilled assistants; the clearer you are with your instructions, the better they can assist you. This is where the art of prompting comes into play.

Providing Clear and Specific Instructions

Imagine asking someone to fetch you a book, but not specifying which one. You might end up with a cookbook when you wanted a mystery novel. Similarly, when working with generative AI models, clarity is key. The more specific and direct you are with your prompts, the more accurate and relevant the model’s response will be. It’s not about being verbose, but about being precise.

Zero-shot Prompting: Starting Simple

Before we dive into the complexities, let’s start with the basics: zero-shot prompting. In this approach, you’re giving the model a task without any prior examples. It’s like asking a seasoned chef to make you a dish without showing them any recipes. They’ll use their vast knowledge and expertise to whip up something for you.

Zero-shot prompting: asking a chef to prepare a dish without showing any examples.

For instance, if you ask the model, “Describe the process of photosynthesis,” it will provide an answer based on the dataset it was trained on, without needing any examples.
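As a minimal sketch, a zero-shot prompt is nothing more than the task statement itself. The helper below (a name of my own invention, not from any library) just normalizes whitespace before the prompt is sent to whatever model you use:

```python
# A zero-shot prompt is simply a clear task statement with no examples.
# This helper normalizes stray whitespace; the model does the rest.
def build_zero_shot_prompt(task: str) -> str:
    return " ".join(task.split())

prompt = build_zero_shot_prompt("Describe the  process of photosynthesis.")
print(prompt)  # → Describe the process of photosynthesis.
```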

Building on the Basics: Few-shot Prompting

Now, imagine if you showed that chef a couple of dishes first and then asked them to make something similar.

Few-shot prompting: the result of prompting a chef after showing a few example dishes.

This is the essence of few-shot prompting. By providing the model with a few examples, you’re setting a context, a guideline of sorts.

I have a more detailed guide with more examples on Zero and Few Shot Prompting here.

For instance:

  1. English: “Cat” -> French: “Chat”
  2. English: “Dog” -> French: “Chien”
  3. Translate the following English word to French: “Bird”

By giving translations for “Cat” and “Dog”, you’re guiding the model’s response for “Bird”.
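The translation example above can be sketched as a small prompt builder. The function name and the exact line format are illustrative assumptions; the point is that the example pairs establish a pattern the model is expected to continue:

```python
# Build a few-shot translation prompt from (source, target) example pairs.
# The examples set the pattern; the final line poses the actual query.
def build_few_shot_prompt(examples, query):
    lines = [f'English: "{src}" -> French: "{tgt}"' for src, tgt in examples]
    lines.append(f'Translate the following English word to French: "{query}"')
    return "\n".join(lines)

examples = [("Cat", "Chat"), ("Dog", "Chien")]
print(build_few_shot_prompt(examples, "Bird"))
```

Adding or swapping pairs in `examples` is all it takes to steer the model toward a different pattern.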

As you progress through this guide, you’ll discover more nuances and techniques to master the art of prompting. Remember, it’s not about overwhelming the model with information, but guiding it to produce the best possible outcome. Experiment and try a few examples of your own!

Having explored the concepts of zero-shot and few-shot prompting, where we guide the model with no examples or a handful of examples respectively, we now transition to a more dynamic approach: iterative prompting. This method emphasizes the importance of refining and optimizing prompts through a series of iterations, rather than expecting perfection on the first try.

Iterative Prompting: A Summary with Examples

When building applications with large language models, it’s rare to nail the perfect prompt on the first attempt. However, the key isn’t to get it right immediately but to have a robust process for iterative refinement. This mirrors the experience in traditional machine learning, where models often don’t work perfectly on the first training attempt.

The iterative process involves:

  1. Initial Idea Formation: Start with a clear idea of the task you want the model to perform.
  2. First Attempt: Write a prompt based on your idea and observe the model’s response.
  3. Evaluation: Analyze the output. Is it too long? Too technical? Missing details?
  4. Refinement: Modify the prompt based on your observations and run it again.

For instance, consider the task of summarizing a technical fact sheet for a chair. An initial prompt might ask the model to create a product description. The first result might be too verbose. On refining the prompt to limit the description to 50 words, the output becomes more concise. Further iterations can focus on emphasizing technical details or including specific product IDs.
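The refinement loop for the chair example can be sketched as follows. Here `query_model` is a hypothetical stub standing in for your actual LLM call, and the word-count check is a toy stand-in for a real evaluation step (a human review or an automated metric):

```python
# Iterative prompting sketch: evaluate the output, refine the prompt, retry.
def query_model(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a canned answer here.
    return ("A sleek chair with a coated aluminum base and pneumatic "
            "height adjustment, made in Italy.")

def refine(base_prompt: str, max_words: int) -> str:
    prompt = base_prompt
    for _ in range(3):                        # a few refinement rounds
        output = query_model(prompt)
        if len(output.split()) <= max_words:  # evaluation step
            return output
        # Refinement step: tighten the instruction and try again.
        prompt = f"{base_prompt} Use at most {max_words} words."
    return output

result = refine("Write a product description from this chair fact sheet.", 50)
```

In practice the evaluation step is where your judgment lives; the loop structure stays the same whether you check length, tone, or technical accuracy.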

This iterative process is crucial because there’s no one-size-fits-all “perfect prompt.” The goal is to develop a prompt tailored to your specific application.

In a more advanced example, the model was tasked with generating an HTML formatted product description. While the initial output was lengthy, further iterations could refine it to be more succinct.

The takeaway is clear: iterative prompting is a journey. It’s about having a systematic approach to refine prompts, evaluate their effectiveness, and make necessary adjustments. Whether you’re working with a single example or evaluating prompts against a larger dataset, this iterative process is the key to harnessing the full potential of large language models.

After delving into the nuances of iterative prompting, where we emphasized the importance of refining prompts through a series of adjustments, we now transition to another advanced technique: In-Context Instruction Learning. This method offers a structured way to guide large language models in understanding and executing tasks that might be unfamiliar to them.

In-Context Instruction Learning: A Summary with Examples

Imagine you’re teaching a friend, Alex, how to cook. Alex has never cooked before but has seen a lot of cooking shows. One day, you decide to teach Alex how to make an omelette.

Instead of just telling Alex to “make an omelette,” you provide a step-by-step demonstration:

  1. Instruction: “Crack the eggs into a bowl.”
  2. Example Input: You show Alex an egg and a bowl.
  3. Correct Output: You demonstrate cracking the egg and pouring its contents into the bowl.

You repeat this process for each step, from whisking the eggs to adding ingredients and frying the omelette. By the end, Alex has a clear context of what each step looks like and can replicate the process on their own.

Now, think of Alex as a large language model. Just like Alex, the model has seen a lot of data but might not know how to perform a specific task without guidance. ICIL is like giving the model a cooking demonstration. You provide clear instructions, show examples, and then demonstrate the desired outcome. This way, the model learns in context and can perform the task more effectively.

In essence, ICIL is about teaching the model “how to cook” by providing clear demonstrations, ensuring it understands the task and can replicate it accurately.

What is ICIL? In-Context Instruction Learning (ICIL) provides large language models with step-by-step demonstrations, much like the cooking lesson above. These demonstrations are a series of examples that set the context, enabling the model to focus and produce accurate results. The goal of ICIL is to enable a trained model to tackle tasks it wasn’t explicitly trained for by learning from the context of the prompt.

How Does It Work? Just as we provided Alex with clear steps to make an omelette, with ICIL, we give the model a series of demonstrations. For instance, when teaching a model to summarize a technical fact sheet for a chair, we might use:

  1. Instruction: “Summarize the key features of a product.”
  2. Example Input: “A chair with a coated aluminum base, pneumatic adjustments, and comes from Italy.”
  3. Correct Output: “A stylish Italian chair with an aluminum base and pneumatic features.”

These demonstrations act as a guiding light, setting a clear context and helping the model grasp the task at hand. The demonstrations remain consistent across various tasks, giving the model a stable foundation to learn from.

Utilizing ICIL for Better Results:

  1. Identify the Task: Clearly outline what you want the model to achieve.
  2. Craft the Instruction: Provide clear guidelines, just as you’d instruct someone on the steps to cook a dish.
  3. Provide Demonstrations: Offer a set of instructions, example inputs, and correct outputs.
  4. Concatenate Demonstrations: Merge these demonstrations into one guiding prompt.
  5. Test & Refine: Evaluate the model’s performance and adjust as needed.
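The steps above, in particular the concatenation step, can be sketched as prompt assembly: each demonstration is an (instruction, input, output) triple merged into one guiding prompt. The field labels and layout are illustrative assumptions, not a fixed format:

```python
# Concatenate ICIL demonstrations into a single guiding prompt, ending
# with the new task so the model completes the final "Output:" field.
def build_icil_prompt(demos, task_instruction, task_input):
    parts = []
    for instruction, example_input, correct_output in demos:
        parts.append(
            f"Instruction: {instruction}\n"
            f"Input: {example_input}\n"
            f"Output: {correct_output}\n"
        )
    parts.append(f"Instruction: {task_instruction}\n"
                 f"Input: {task_input}\n"
                 f"Output:")
    return "\n".join(parts)

demos = [(
    "Summarize the key features of a product.",
    "A chair with a coated aluminum base, pneumatic adjustments, and comes from Italy.",
    "A stylish Italian chair with an aluminum base and pneumatic features.",
)]
prompt = build_icil_prompt(
    demos,
    "Categorize the product based on its description.",
    "A chair with a luxurious leather finish and ergonomic design.",
)
```

Because the demonstrations are data rather than hard-coded text, the same builder can be reused across tasks by swapping in a different `demos` list.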

Expanding on our chair example, if we wanted the model to categorize products based on their descriptions, our ICIL demonstration might be:

  1. Instruction: “Categorize the product based on its description.”
  2. Example Input: “A chair with a luxurious leather finish and ergonomic design.”
  3. Correct Output: “Luxury ergonomic chair.”

In essence, ICIL is about offering the model a structured “recipe” within the prompt. While it might take some iterations to perfect, mastering ICIL can significantly enhance the effectiveness of your prompts, making it an invaluable tool in prompt engineering.