Master LLM Hallucinations 💭

tanya rai

Posted on December 15, 2023

Building with AI and LLMs is now a must-know skill for every developer, and nearly every application is trying to integrate AI models. But hallucinations, the phenomenon of AI models generating incorrect or unverified information, remain an unsolved problem.

"Ughh ChatGPT - I told you to NOT make stuff up!"

Andrej Karpathy recently shared his take on hallucinations on Twitter:

"The LLM has no "hallucination problem". Hallucination is not a bug, it is LLM's greatest feature. The LLM Assistant has a hallucination problem, and we should fix it."

So how do we fix it?

Chain-of-Verification (CoVe), a technique introduced by researchers at Meta, is one way. Let's dive into the high-level process of CoVe and then explore how we implemented a CoVe prompt template using AIConfig that you can use to reduce hallucinations in your LLM-powered apps.

The Chain-of-Verification (CoVe) Process 🔗

As documented in the paper, the process involves four steps:

1️⃣ Generate Baseline: Given a query, the large language model (LLM) generates a response.
2️⃣ Plan Verification(s): Using the query and baseline response, the system formulates a list of verification questions that help surface potential inaccuracies in the original response.
3️⃣ Execute Verification(s): Each verification question is answered, then cross-checked against the original response to identify inconsistencies or flaws.
4️⃣ Generate Final Response: If inconsistencies are found, a revised response is generated that factors in the results of the verification process.
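In code, the four steps can be sketched roughly like this. This is a minimal illustration, not the cookbook's implementation: `llm` stands in for any prompt-in, text-out model call, and the prompt wording here is made up for the example.

```python
def chain_of_verification(query, llm):
    """Run the four CoVe steps with a single llm(prompt) -> str callable."""
    # 1. Generate a baseline answer to the query.
    baseline = llm(f"Answer the question: {query}")

    # 2. Plan verification questions that probe the baseline for errors.
    plan = llm(
        "List fact-checking questions, one per line, for this answer.\n"
        f"Question: {query}\nAnswer: {baseline}"
    )
    questions = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3. Execute each verification question on its own, so the answers
    #    are not biased by the baseline response.
    verifications = [(q, llm(q)) for q in questions]

    # 4. Generate a final response that reconciles the baseline
    #    with the verification results.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in verifications)
    return llm(
        f"Original question: {query}\n"
        f"Draft answer: {baseline}\n"
        f"Verification Q&A:\n{evidence}\n"
        "Write a corrected final answer."
    )
```

Note that step 3 sends each verification question as a standalone prompt; keeping the baseline out of that context is what gives the verifier a chance to catch, rather than repeat, the original hallucination.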

Integrate CoVe into your App with AIConfig 💡

We've brought the CoVe technique to life using AIConfig, streamlining the process to help reduce hallucinations in your LLM applications.

Using AIConfig, we can separate the core application logic from the model components (prompts, model routing parameters, etc.). Here's what the prompt template looks like:

1️⃣ GPT-4 + Baseline Generation prompt: This sets the foundation by generating the initial response using GPT-4.
2️⃣ GPT-4 + Verification prompt: This prompt creates a series of verification questions based on the initial response.
3️⃣ GPT-4 + Final Response Generation prompt: Leveraging the findings from the verification stage, this prompt generates a final, more reliable response.
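Under the hood, an AIConfig is just a JSON file holding the prompts and model settings, separate from your application code. A rough sketch of what a three-prompt CoVe template could look like (the prompt names and `{{parameter}}` names here are illustrative, not the cookbook's exact config):

```json
{
  "name": "chain_of_verification",
  "schema_version": "latest",
  "prompts": [
    {
      "name": "baseline_response_gen",
      "input": "Answer the question: {{question}}",
      "metadata": { "model": { "name": "gpt-4" } }
    },
    {
      "name": "verification",
      "input": "List fact-checking questions for this answer:\n{{baseline_response}}",
      "metadata": { "model": { "name": "gpt-4" } }
    },
    {
      "name": "final_response_gen",
      "input": "Given these verification results:\n{{verification_results}}\nRevise the answer: {{baseline_response}}",
      "metadata": { "model": { "name": "gpt-4" } }
    }
  ]
}
```

Because the prompts live in config rather than code, you can edit wording or swap models without touching application logic; see the cookbook for the actual config file.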

🔗 AIConfig CoVe: https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Chain-of-Verification

Want to see it in action? 👀 Try out our demo in Streamlit!

🎁 Streamlit App: https://chain-of-verification.streamlit.app/

Are you already using AIConfig or CoVe in your projects? Feel free to share your experiences in the comments below.

Liked the post?

Show your support by starring our project on GitHub! ⭐️ https://github.com/lastmile-ai/aiconfig
