Making sure you get JSON in your LLM response

Olivier (omills) · Posted on January 10, 2024

Getting a language model to return JSON is very useful: structured output can be parsed and fed straight into your application code.

For OpenAI you can simply pass response_format: { type: "json_object" } to enable JSON mode:

import OpenAI from "openai";

// The client reads OPENAI_API_KEY from the environment by default.
const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  messages: [
    {
      role: "system",
      content: "You are a helpful assistant designed to output JSON.",
    },
    { role: "user", content: "Who won the world series in 2020?" },
  ],
  model: "gpt-3.5-turbo-1106",
  response_format: { type: "json_object" }, // <==== here
});
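Note that JSON mode requires the word "JSON" to appear somewhere in your messages, which the system prompt above takes care of. The reply content is still a plain string, so you parse it yourself. A minimal sketch of consuming the response above:

// message.content can be null in edge cases, so guard before parsing.
const content = completion.choices[0].message.content;
const answer = content ? JSON.parse(content) : null;
console.log(answer);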

For other models it's not always reliable. The model might prefix the reply with "Here is the JSON object you wanted..." or even append something like "I hope this helps!" 😡

Here is a function that makes sure you get only the JSON.

function parseJSONFromString(input: string): any {
    // Greedily match from the first { or [ to the last } or ];
    // the s flag lets . match newlines inside the JSON
    const regex = /(\{.*\}|\[.*\])/s;

    // Extract JSON string using the regular expression
    const match = input.match(regex);

    // Check if a match was found
    if (match) {
        try {
            // Parse and return the JSON object/array
            return JSON.parse(match[0]);
        } catch (error) {
            // Handle parsing error (invalid JSON)
            console.error("Invalid JSON:", error);
            return null;
        }
    } else {
        // No JSON object/array found
        console.warn("No JSON object or array found in the string.");
        return null;
    }
}

// Example usage
const exampleString = "here is your JSON object: { \"key\": \"value\" }";
const result = parseJSONFromString(exampleString);
console.log(result);
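For example, a chatty response with text both before and after the payload still parses cleanly (a quick sketch; the sample reply text is made up for illustration):

const chatty =
  'Sure! Here is the JSON you wanted:\n{ "winner": "Dodgers" }\nI hope this helps!';
console.log(parseJSONFromString(chatty)); // -> { winner: 'Dodgers' }

One caveat: because the regex is greedy, a stray } or ] in the trailing chatter would be swallowed into the match and break parsing. For typical model output that is rarely an issue.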