Top 5 Techniques for Prompt Engineering


Matteo Depascale

Posted on November 2, 2023


Discover the top techniques for prompt engineering, including strategies for crafting effective prompts and optimizing the output of your Generative AI model.

Introduction

Are you tired of feeling like your prompts are a bit… meh? Like, they’re not quite getting the job done, but you’re not sure why? Well, buckle up, because we’re about to dive headfirst into the wild world of prompt engineering!

With these top-secret* techniques, you’ll be whipping up prompts like a pro chef whips up a mean omelette. Your language models will be begging for a break from your never-ending barrage of thoughtful queries.

*Actually they are not secret, but they are top indeed 😜

⚠️ This post highlights 5 prompt engineering techniques you can incorporate into your use of Generative AI for better results. These methods are not necessarily best practices or tips; however, if you’re interested in learning more, feel free to send me a DM and I’ll be happy to give you some.


What Makes Up a Prompt?

Basically, a prompt is made up of the following pieces:

  • Instruction: Tell the model what to do.
  • Context: Give the model extra info to help it understand.
  • Input Data: Provide the data the model will work with.
  • Output Indicator: Say what kind of result you want.

These are the basic parts of a prompt, but you might not always need all of them.
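To make these pieces concrete, here is a minimal Python sketch that assembles the four parts into a single prompt string. The task, wording, and variable names are made up purely for illustration:

# A minimal sketch: composing a prompt from its four building blocks.
# The task and wording are illustrative, not from a real system.
context = "You are an assistant for an AWS operations team."                  # Context
instruction = "Summarize the following support ticket in one sentence."       # Instruction
input_data = "Ticket: our Lambda function keeps timing out after 15 minutes." # Input Data
output_indicator = "Answer with a single bullet point."                       # Output Indicator

prompt = "\n".join([context, instruction, input_data, output_indicator])
print(prompt)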

Let’s look at a few prompt engineering techniques to improve your game with Generative AI, starting from the most basic and moving to the most advanced.

⚠️ Disclaimer: all tests were done using llama-2 70b.


1. Zero-Shot Prompting

This is the simplest technique, and one you may already be using unconsciously in your everyday routine. With zero-shot prompting, you ask the model to perform a task without giving it any examples in the prompt; its broad training data is what allows it to make an educated guess.
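In code, a zero-shot prompt is nothing more than the bare question. Below is a minimal sketch assuming a Hugging Face transformers text-generation pipeline and a Llama-2 chat checkpoint you have access to; the article doesn't say how llama-2 70b was actually served, so treat this setup as an assumption:

# A minimal zero-shot sketch: one bare question, no examples, no extra context.
# Assumes the `transformers` library and access to a Llama-2 chat checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")  # smaller model, for illustration only

prompt = "What does S3 mean?"  # the model can only lean on what it learned during training
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])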

Here are a few examples:

Input: 
  Summarize the main points of the following article: 'Optimizing Content Delivery: The Complete Guide Through S3 Caching and CloudFront.'
Output: 
  ...

If you google "S3" the answer is straightforward, but for this model it's a tricky question:

Input: 
  What does S3 mean?
Output: 
  S3 is an abbreviation that can stand for several things, depending on the context in which it is used...

As you can see, the ‘S3’ output isn’t quite what we had hoped for. Let’s explore another technique instead. 👇


2. Few-Shot Prompting

While Zero-Shot prompting can be effective in some cases, it may not always work well, especially when dealing with more complex tasks. You can enhance this technique by providing a few examples directly in your prompt. This way, your model will have a better understanding of what’s required and will be able to answer with greater accuracy.

Think of it like teaching a dog new tricks with just a few treats — it’s challenging, but definitely possible 🐶.

Let’s take a look at some examples to illustrate this further:

Input: 
  EC2 means Elastic Compute Cloud
  AWS means Amazon Web Services
  What does S3 mean?

Output:
  S3 stands for Simple Storage Service. It's a cloud-based storage service...

With a bit more context, our model’s accuracy finally improves, and it gives us the correct answer ✅.

Input:
  2 + 2 + x = 5
  10 + 3 + x = 14
  5 * x = x + 4
  Using the examples above, which is the value of x in 5 + 5 + x?

Output:
  ... The equations provided do not have a consistent variable or operation, making it impossible to determine the value of x...

However, as you can see in this other prompt, even though we provided several examples, the model was unable to work out that x = 1 ❌.
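Before moving on to the next technique, here is what assembling a few-shot prompt like the S3 one looks like in code. A minimal sketch; the helper name and structure are my own, not from the article:

# A minimal few-shot sketch: prepend worked example pairs to the real question.
def build_few_shot_prompt(examples, question):
    """Concatenate example term/meaning pairs ahead of the actual question."""
    shots = "\n".join(f"{term} means {meaning}" for term, meaning in examples)
    return f"{shots}\nWhat does {question} mean?"

examples = [("EC2", "Elastic Compute Cloud"), ("AWS", "Amazon Web Services")]
print(build_few_shot_prompt(examples, "S3"))
# EC2 means Elastic Compute Cloud
# AWS means Amazon Web Services
# What does S3 mean?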


3. Chain-of-Thought Prompting

The goal of this technique is to create a logical chain of thoughts, much like explaining things to a curious five-year-old. By breaking down the process step by step, we can help the model understand the reasoning behind the answers.

Input:
  2 + 2 + x = 5. Because 2 + 2 = 4 so 4 + x = 5, hence x = 1.
  10 + 3 + x = 14. Because 10 + 3 = 13 so 13 + x = 14, hence x = 1.
  5 * x = x + 4. If we had x = 1 then 5 = 1 + 4, hence x = 1.
  Using the examples above, which is the value of x in 5 + 5 + x?

Output:
  Based on the given examples, the value of x in 5 + 5 + x can be determined using the same logic.
  5 + 5 = 10
  10 + x = 10 + 1 = 11
  Therefore, the value of x in 5 + 5 + x is 1. 

As shown in the answer, we were ultimately successful in getting the correct answer ✅ by walking the model through the problem-solving process.
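In code, the only difference from plain few-shot prompting is that every exemplar spells out its reasoning before stating the answer. A minimal sketch using the same equations as above:

# A minimal Chain-of-Thought sketch: each exemplar carries its reasoning, not just the answer.
cot_examples = [
    "2 + 2 + x = 5. Because 2 + 2 = 4 so 4 + x = 5, hence x = 1.",
    "10 + 3 + x = 14. Because 10 + 3 = 13 so 13 + x = 14, hence x = 1.",
    "5 * x = x + 4. If we had x = 1 then 5 = 1 + 4, hence x = 1.",
]
question = "Using the examples above, which is the value of x in 5 + 5 + x?"

cot_prompt = "\n".join(cot_examples + [question])  # reasoning-rich exemplars nudge the model to reason step by step
print(cot_prompt)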


4. Generated Knowledge Prompting

This technique comes from the “Generated Knowledge Prompting for Commonsense Reasoning” paper. It involves generating an explanation, created with the AI itself 🤖, to be used as part of the prompt. The idea is to use the AI’s ability to generate text to produce a prompt that includes relevant information and explanations 🔮. This can potentially lead to better outputs from the model.

Let’s consider an example to demonstrate this technique. We’ll attempt to answer the following question:

Input:
  After 15 minutes the AWS Lambda times out. Yes or no?

Output:
  No, the AWS Lambda function does not time out after 15 minutes. The default timeout for an AWS Lambda function is 15 seconds, but it can be configured to run for up to 15 minutes. After the configured timeout period has elapsed, the function will stop executing and return an error response.

Well, this is awkward! The answer is incorrect, even though the explanation does get the 15-minute limit right. Let’s generate some knowledge to obtain a more accurate response; the model clearly needs it.

Input: 
  Use this information to generate "knowledge"
  Input: The AWS API Gateway timed out after 30 seconds.
  Knowledge: If a request takes longer than 30 seconds to process, it will automatically time out and return an error response.
  Input: The AWS Lambda function returns an error when the event is too large
  Knowledge: The maximum size of an event that can be processed by an AWS Lambda function is 6 megabytes (MB). If the event exceeds this limit, the function will return an error response. It is recommended to either reduce the size of the event or use a different approach to process larger events.
  Input: After 15 minutes the AWS Lambda times out.

Output:
  Knowledge: The AWS Lambda function has a timeout limit of 15 minutes. If the function does not complete execution within this time frame, it will automatically terminate and return an error response. It is recommended to optimize the function code to reduce execution time.

Looks like we’re on the right track! Now, let’s feed that knowledge back into the prompt and ask the model for the final answer.

Input:
  Input: After 15 minutes the AWS Lambda times out. Yes or no?
  Knowledge: The AWS Lambda function has a timeout limit of 15 minutes. If the function does not complete execution within this time frame, it will automatically terminate and return an error response. It is recommended to optimize the function code to reduce execution time.
  Answer:

Output:
  Yes.

Voila! The answer is correct this time around, and it even provides a satisfactory explanation when asked to elaborate. 🎉

Answer: Yes.
Explanation: According to the knowledge provided, the AWS Lambda function has a timeout limit of 15 minutes. This means that if the function does not complete execution within 15 minutes, it will automatically terminate and return an error response. Therefore, after 15 minutes, the AWS Lambda function will time out. It is important to note that it is recommended to optimize the function code to reduce execution time to avoid hitting the timeout limit.
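Putting the two steps together, a Generated Knowledge flow looks roughly like the sketch below. `generate` is a placeholder for whatever client you use to call your model (not a real library function), and the few-shot knowledge examples from the prompt above are omitted for brevity:

# A minimal sketch of the two-step Generated Knowledge flow shown above.
def generate(prompt: str) -> str:
    raise NotImplementedError("replace with your own model call")  # placeholder, not a real API

def answer_with_generated_knowledge(statement: str, question: str) -> str:
    # Step 1: ask the model to produce "knowledge" about the statement.
    knowledge_prompt = (
        'Use this information to generate "knowledge"\n'
        f"Input: {statement}\n"
        "Knowledge:"
    )
    knowledge = generate(knowledge_prompt)
    # Step 2: feed that knowledge back in alongside the actual yes/no question.
    answer_prompt = f"Input: {question}\nKnowledge: {knowledge}\nAnswer:"
    return generate(answer_prompt)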

5. Self-Consistency Prompting

This technique comes from the “Self-Consistency Improves Chain of Thought Reasoning in Language Models” paper. The essence of it is to ask the model the same prompt several times and take the majority result as the final outcome. It builds on Chain-of-Thought (CoT) and works best when used in conjunction with it.

Self-Consistency Prompting has been shown to improve results on arithmetic, commonsense, and symbolic reasoning tasks.

To illustrate how this technique works, let’s consider an example:

Input: 
  If a store has 101 apples and 9 oranges, and it sells 6 apples and 4 oranges, how many fruits are left in the store?

Output:
  After the sales, the store has 95 apples (101 - 6) and 5 oranges (9 - 4).
  So, there are a total of 95 + 5 = 100 fruits left in the store.

Output #2:
  After the sales, the store has 95 apples (101 - 6) and 5 oranges (9 - 4).
  So, there are a total of 100 fruits left in the store: 95 apples + 5 oranges.

Output #3:
  First, we need to calculate the number of apples left in the store. We can do this by subtracting the number of apples sold (6) from the total number of apples (101):101 - 6 = 95
  Now, we need to calculate the number of oranges left in the store. We can do this by subtracting the number of oranges sold (4) from the total number of oranges (9): 9 - 4 = 5
  Finally, we add the number of apples and oranges left in the store to get the total number of fruits left: 95 + 5 = 100
  Therefore, there are 100 fruits left in the store after the sales.

As we can see, the model consistently outputs 100, which is the correct solution to the problem. This time we were actually hoping for some wrong answers, but the model is too smart for such a simple problem. By switching to llama-2 7b, however, we can get more varied (and wrong 😈) outputs for our use case.

Input: 
  If a store has 101 apples and 9 oranges, and it sells 6 apples and 4 oranges, how many fruits are left in the store?

Output:
  Great question! To find out how many fruits are left in the store after the sales, we need to subtract the number of fruits sold from the total number of fruits available.
  Let's start by identifying the total number of fruits available in the store:
  101 apples + 9 oranges = 110 fruits
  Now, let's calculate the number of fruits sold:
  6 apples + 4 oranges = 10 fruits
  So, there are 110 - 10 = 100 fruits left in the store after the sales.
  I hope this helps! Let me know if you have any other questions.

Output #2:
  Great question! I'm happy to help you with this. Let me see...
  So, if the store had 101 apples and 9 oranges initially, and it sold 6 apples and 4 oranges, that means the total number of fruits sold is 6 + 4 = 10.
  Now, let's calculate the number of fruits left in the store:
  101 - 10 = 91 fruits remaining in the store (including both apples and oranges).
  Therefore, there are 91 fruits left in the store after the sales.

Output #3:
  Thank you for asking! I'm here to help you with your question. However, I noticed that the information provided in the question doesn't add up correctly. The total number of fruits in the store is 101 apples + 9 oranges = 110 fruits.
  After selling 6 apples and 4 oranges, the remaining fruits in the store would be: 110 - 6 = 104 fruits (apples) and 104 - 4 = 100 fruits (oranges).
  So, there are 104 apples and 100 oranges left in the store after the sales.

Output #4:
  Great question! To find out how many fruits are left in the store after the sales, we need to subtract the number of fruits sold from the total number of fruits available.
  Let's start by identifying the total number of fruits available in the store:
  101 apples + 9 oranges = 110 fruits
  Now, let's calculate the number of fruits sold:
  6 apples + 4 oranges = 10 fruits
  So, there are 100 fruits left in the store (110 - 10).

The essence of Self-Consistency Prompting is to take the most common answer, which in this case is 100 fruits left, and treat it as the correct solution. Additionally, by combining this technique with Chain-of-Thought, we may be able to resolve even more complex problems.
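In code, Self-Consistency boils down to sampling the same prompt several times and taking a majority vote over the extracted answers. A minimal sketch; `generate` is again a placeholder for your own (sampling-enabled) model call, and the answer extraction here is deliberately naive:

# A minimal Self-Consistency sketch: sample N times, keep the most common answer.
import re
from collections import Counter

def generate(prompt: str) -> str:
    raise NotImplementedError("replace with your own model call, with sampling/temperature enabled")

def extract_answer(response: str) -> str:
    numbers = re.findall(r"\d+", response)
    return numbers[-1] if numbers else ""  # naive: assume the last number mentioned is the final answer

def self_consistent_answer(prompt: str, samples: int = 5) -> str:
    answers = [extract_answer(generate(prompt)) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]  # majority vote across the sampled answers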


Conclusion

There you have it, folks! With these top-notch prompt engineering techniques, you’ll be able to take your language model interactions to the next level. Don’t be afraid to get creative and think outside the box when it comes to crafting your prompts — sometimes the most unexpected questions yield the most hilarious and insightful responses! 😜

If you enjoyed this article, let me know — I might just turn it into a series on prompt engineering 🚀.

Thank you so much for reading! 🙏 Keep an eye out for more AWS related posts, and feel free to connect with me on LinkedIn 👉 https://www.linkedin.com/in/matteo-depascale/.


References

If you want to read more about these techniques, the two research papers referenced in this post, from which a few of the examples were adapted, are:

  • “Generated Knowledge Prompting for Commonsense Reasoning” (Liu et al., 2022)
  • “Self-Consistency Improves Chain of Thought Reasoning in Language Models” (Wang et al., 2022)

Disclaimer: opinions expressed are solely my own and do not express the views or opinions of my employer.
