Create a Next.js AI Chatbot App with Vercel AI SDK

Milu

Posted on July 12, 2024

The recent advancements in Artificial Intelligence have propelled me (and probably many others in the Software Engineering community) to delve deeper into this field. Initially, I wasn't sure where to start, so I enrolled in the Supervised Machine Learning Coursera class to learn the basics. While the course is fantastic, my hands-on learning style led me to build a quick application to dip my toes in and grasp the practical fundamentals. That is how I discovered the Vercel AI SDK, paired with the OpenAI provider. Using one of their existing templates, I developed my own version of an AI chatbot. This exercise introduced me to the variety of available providers and the possibilities of integrating them to bring AI capabilities to users. In this article, I'll define the Vercel AI SDK, detail how to use it, and share my thoughts on the experience.

What is Vercel AI SDK?

The Vercel AI SDK is a TypeScript toolkit designed to bring Large Language Model (LLM) capabilities to frameworks such as React, Next.js, Vue, Svelte, Node.js, and others.

Why Use Vercel AI SDK?

There are several LLM providers available for building AI-powered apps, including:

  • OpenAI
  • Azure
  • Anthropic
  • Amazon Bedrock
  • Google Generative AI
  • Databricks
  • Cohere
  • ...

However, integration varies from provider to provider and is not always straightforward, since some offer SDKs while others only expose raw APIs. With the Vercel AI SDK, you can integrate multiple LLM providers through a single, unified API, reuse the same UI hooks, and stream generative user interfaces.
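
For instance, switching providers is essentially a one-line change. Here's a minimal sketch using the SDK's generateText function (the Anthropic model name is an assumption; check the provider docs for current models):

import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'
import { anthropic } from '@ai-sdk/anthropic'

// The call shape stays the same regardless of provider; only the model changes.
const { text } = await generateText({
  model: openai('gpt-3.5-turbo'),
  // model: anthropic('claude-3-haiku-20240307'),
  prompt: 'Summarize the Vercel AI SDK in one sentence.'
})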

How to Use Vercel AI SDK in a Next.js App?

Vercel offers an AI SDK RSC package (ai/rsc) that supports React Server Components, enabling you to write UI components that render on the server and stream to the client. The package relies on server actions to achieve this. Let's go over the main functions we will use:

useUIState: Acts like React’s useState hook but allows you to access and update the visual representation of the AI state.

const [messages, setMessages] = useUIState<typeof AI>()

useAIState: Provides access to the AI state, which contains context and relevant data shared with the AI model, and allows you to update it.

const [aiState, setAiState] = useAIState()
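
For example, a client component could watch the AI state and react when new messages arrive. A minimal sketch (saveChat is a hypothetical persistence helper, not part of the SDK):

import { useEffect } from 'react'
import { useAIState } from 'ai/rsc'

export function ChatStatePersister() {
  const [aiState] = useAIState<typeof AI>()

  useEffect(() => {
    // saveChat is hypothetical: persist the conversation somewhere
    if (aiState.messages.length > 0) {
      saveChat(aiState)
    }
  }, [aiState])

  return null
}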

getMutableAIState: Provides a mutable copy of the AI state for server-side updates.

const aiState = getMutableAIState<typeof AI>()

useActions: Provides access to the server actions from the client.

const { submitUserMessage } = useActions()

streamUI: Calls a model and streams a React Server Component back as the result.

  const result = await streamUI({
    model: openai('gpt-3.5-turbo'),
    initial: <SpinnerMessage />,
    messages: [...],
    text: ({ content, done, delta }) => {
      ...
      return textNode
    }
  })
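
One thing the snippets above gloss over: the AI referenced in useUIState<typeof AI> is a provider created with createAI, which ties the server actions and the initial AI/UI state together. A minimal sketch based on the official template (file paths are assumptions):

import { createAI } from 'ai/rsc'
import { nanoid } from 'nanoid'
import { submitUserMessage } from './actions' // server action, defined below

export const AI = createAI({
  actions: { submitUserMessage },
  initialAIState: { chatId: nanoid(), messages: [] },
  initialUIState: []
})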

Detailed Tutorial

You can fork the simplified project I’ve worked on or use the official Vercel template. Both repositories include installation instructions in their READMEs.

Let's dive into the key parts that make this integration work.

components/prompt-form.tsx
In this component, we use the useUIState hook to update the visual representation of the AI state, and the useActions hook to access the submitUserMessage action that we will create next.

export function PromptForm({
  input,
  setInput
}: {
  input: string
  setInput: (value: string) => void
}) {
  const { submitUserMessage } = useActions()
  const [_, setMessages] = useUIState<typeof AI>()

  return (
    <form
      onSubmit={async (e: React.FormEvent<HTMLFormElement>) => {
        e.preventDefault()

        const value = input.trim()
        setInput('')
        if (!value) return

        // Optimistically add user message UI
        setMessages(currentMessages => [
          ...currentMessages,
          {
            id: nanoid(),
            display: <UserMessage>{value}</UserMessage>
          }
        ])

        // Submit and get response message
        const responseMessage = await submitUserMessage(value)
        setMessages(currentMessages => [...currentMessages, responseMessage])
      }}
    >
     <div>...</div>
    </form>
  )
}
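
To complete the loop on the client, the messages held in useUIState can be rendered anywhere in the app. Here's a hedged sketch of a chat list (the official template has a more elaborate version; the import path is an assumption):

import { useUIState } from 'ai/rsc'
import type { AI } from '@/lib/chat/actions'

export function ChatList() {
  const [messages] = useUIState<typeof AI>()

  // Each entry already carries a rendered React node in `display`
  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>{message.display}</div>
      ))}
    </div>
  )
}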

lib/chat/actions.tsx
Now, let's explore the submitUserMessage function. First, we use getMutableAIState to grab a mutable copy of the AI state, which we call aiState, and update it with the user-submitted message.

async function submitUserMessage(content: string) {
  'use server'

  const aiState = getMutableAIState<typeof AI>()

  aiState.update({
    ...aiState.get(),
    messages: [
      ...aiState.get().messages,
      {
        id: nanoid(),
        role: 'user',
        content
      }
    ]
  })
  ...
}

Next, we use the streamUI function to define the LLM model we want to use (in this case, gpt-3.5-turbo), set an initial loading state while waiting for the response, and provide an array containing all the messages and their context.

Since we are using a streaming function, we can display the LLM results as they are received, even if they are incomplete. This enhances the user experience by showing results on the screen quickly, rather than waiting for a complete response.

async function submitUserMessage(content: string) {
  ...
  let textStream: undefined | ReturnType<typeof createStreamableValue<string>>
  let textNode: undefined | React.ReactNode

  const result = await streamUI({
    model: openai('gpt-3.5-turbo'),
    initial: <SpinnerMessage />,
    messages: [
      ...aiState.get().messages.map((message: any) => ({
        role: message.role,
        content: message.content,
        name: message.name
      }))
    ],
    text: ({ content, done, delta }) => {
      if (!textStream) {
        textStream = createStreamableValue('')
        textNode = <BotMessage content={textStream.value} />
      }

      if (done) {
        textStream.done()
        aiState.done({
          ...aiState.get(),
          messages: [
            ...aiState.get().messages,
            {
              id: nanoid(),
              role: 'assistant',
              content
            }
          ]
        })
      } else {
        textStream.update(delta)
      }

      return textNode
    }
  })

  // Return the streamed UI so the client can append it to the message list
  return {
    id: nanoid(),
    display: result.value
  }
}
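
Note that the function returns the streamed UI node (result.value) so that PromptForm can append it to the message list. On the client, BotMessage can then read the streamable value with the useStreamableValue hook. A hedged sketch (the official template does something similar, with markdown rendering on top):

import { type StreamableValue, useStreamableValue } from 'ai/rsc'

export function BotMessage({
  content
}: {
  content: string | StreamableValue<string>
}) {
  // Re-renders as new deltas arrive from the server
  const [text] = useStreamableValue(content)

  return <div>{text}</div>
}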

The streamUI function also accepts a tools attribute, which lets you extend your chatbot's capabilities by defining custom tools the model can invoke during the conversation, enabling dynamic, context-aware responses.

async function submitUserMessage(content: string) {
  ...
  const result = await streamUI({
    ...
    tools: {
      getWeather: {
        description: 'Get the current weather for a given location',
        parameters: z.object({
          location: z.string().describe('The city to get the weather for')
        }),
        generate: async function* ({ location }) {
          // Show a spinner while the (hypothetical) weather route is fetched
          yield <SpinnerMessage />
          const response = await fetch(`/api/weather?location=${location}`)
          const data = await response.json()
          return (
            <BotMessage
              content={`The current weather in ${location} is ${data.weather} with a temperature of ${data.temperature}°C.`}
            />
          )
        }
      }
    }
    ...
  })
}

In this example, a getWeather tool is defined that takes a location argument, described with a Zod schema (z comes from the zod package) so the model knows when and how to invoke it. When the model decides to call the tool, its generate function runs on the server: it yields a spinner while fetching weather information from a (hypothetical) /api/weather route, then returns a formatted weather message as a React component.

And there you have it! You can get an AI chatbot working pretty quickly with these functions.

Thoughts on Vercel AI SDK

The Vercel AI SDK was intuitive and easy to use, especially if you already have experience with Next.js or React. While you could use the OpenAI SDK directly, the Vercel AI SDK’s ability to integrate multiple LLM providers without additional boilerplate makes it a good choice.
