Building a chatbot with GPT-3.5 and Next.js: A Detailed Guide

Nat (njlawz) · Posted on April 16, 2023

With all of the hype around AI and ChatGPT, I thought it would be a good time to put out a tutorial on how to build our very own ChatGPT-powered chatbot! Most of this code has already been open-sourced by Vercel as a template, so feel free to clone that repo to get started, or if you just want to interact with ChatGPT without having to sign up, you can check it out on my website!

Let's jump in! These are the technologies that we will be using:

  • Next.js
  • TypeScript
  • Tailwind (although I won't be covering that here)
  • OpenAI API

Getting Started

Let's get our project set up. I like to use pnpm and create-t3-app, but feel free to use the package manager and CLI of your choice to get started.

Project Setup

Using pnpm and create-t3-app:

pnpm create t3-app@latest
  1. Name your project

  2. Select TypeScript

  3. Select Tailwind

  4. Select Y for Git repository

  5. Select Y to run pnpm install

  6. Hit Enter for default import alias


Now that we have a bootstrapped Next.js project, let's make sure we have an OpenAI API key to use. To get one, create an account at openai.com, then open the API Keys section of the OpenAI dashboard and create a new key.

Create your environment variables

In your project's root directory, create a .env.local file. It should look like this:

# Your API key
OPENAI_API_KEY=PASTE_API_KEY_HERE

# The temperature (0 to 2) controls how much randomness is in the output
AI_TEMP=0.7

# The maximum size of the response, in tokens
AI_MAX_TOKENS=100

# Optional: only set this if your key belongs to an OpenAI organization
OPENAI_API_ORG=
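Optionally, if you'd like the app to fail fast when the key is missing, you can centralize reading these variables in a small helper. This is just a sketch (utils/env.ts and its names are my own, not part of the Vercel template); it mirrors the defaults we'll use later in chat.ts:

// utils/env.ts - an optional helper, not part of the Vercel template
// Reads the environment variables once, with the same defaults chat.ts uses

const apiKey = process.env.OPENAI_API_KEY
if (!apiKey) {
  // fail at startup instead of on the first request
  throw new Error('Missing Environment Variable OPENAI_API_KEY')
}

export const env = {
  apiKey,
  // how much randomness to allow in the output (defaults to 0.7)
  temperature: process.env.AI_TEMP ? parseFloat(process.env.AI_TEMP) : 0.7,
  // cap on the response size, in tokens (defaults to 100)
  maxTokens: process.env.AI_MAX_TOKENS
    ? parseInt(process.env.AI_MAX_TOKENS)
    : 100,
  // only send the organization header when this is set
  org: process.env.OPENAI_API_ORG || undefined,
}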

Let's also set up some boilerplate CSS so that our layout is responsive by installing the Vercel examples UI package:

pnpm i @vercel/examples-ui

Your tailwind.config.js file should look like this:

module.exports = {
  presets: [require('@vercel/examples-ui/tailwind')],
  content: [
    './pages/**/*.{js,ts,jsx,tsx}',
    './components/**/*.{js,ts,jsx,tsx}',
    './node_modules/@vercel/examples-ui/**/*.js',
  ],
}

Your postcss.config.js should look like this:

module.exports = {
  plugins: {
    tailwindcss: {},
    autoprefixer: {},
  },
}

Lastly, your _app.tsx should look like this (note that it imports @vercel/analytics, so install that too with pnpm i @vercel/analytics):

import type { AppProps } from 'next/app'
import { Analytics } from '@vercel/analytics/react'
import type { LayoutProps } from '@vercel/examples-ui/layout'

import { getLayout } from '@vercel/examples-ui'

import '@vercel/examples-ui/globals.css'

function App({ Component, pageProps }: AppProps) {
  const Layout = getLayout<LayoutProps>(Component)

  return (
    <Layout
      title="ai-chatgpt"
      path="solutions/ai-chatgpt"
      description="ai-chatgpt"
    >
      <Component {...pageProps} />
      <Analytics />
    </Layout>
  )
}

export default App

Now that we have all of our boilerplate out of the way, what do we have to do? Let's create a checklist:

  1. We need to be able to listen to responses from the OpenAI API.

  2. We need to be able to send user input to the OpenAI API.

  3. We need to display all of this in some sort of chat UI.

Creating a data stream

In order to receive data from the OpenAI API, we can create an OpenAIStream function.

In your root project directory, create a folder called utils, and then a file inside called OpenAIStream.ts (note the casing: we'll import it by this exact name later). Copy and paste this code into it, and be sure to install its one dependency:

pnpm install eventsource-parser

import {
  createParser,
  ParsedEvent,
  ReconnectInterval,
} from 'eventsource-parser'

export type ChatGPTAgent = 'user' | 'system' | 'assistant'

export interface ChatGPTMessage {
  role: ChatGPTAgent
  content: string
}

export interface OpenAIStreamPayload {
  model: string
  messages: ChatGPTMessage[]
  temperature: number
  top_p: number
  frequency_penalty: number
  presence_penalty: number
  max_tokens: number
  stream: boolean
  stop?: string[]
  user?: string
  n: number
}

export async function OpenAIStream(payload: OpenAIStreamPayload) {
  const encoder = new TextEncoder()
  const decoder = new TextDecoder()

  let counter = 0

  const requestHeaders: Record<string, string> = {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? ''}`,
  }

  if (process.env.OPENAI_API_ORG) {
    requestHeaders['OpenAI-Organization'] = process.env.OPENAI_API_ORG
  }

  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    headers: requestHeaders,
    method: 'POST',
    body: JSON.stringify(payload),
  })

  const stream = new ReadableStream({
    async start(controller) {
      // callback
      function onParse(event: ParsedEvent | ReconnectInterval) {
        if (event.type === 'event') {
          const data = event.data
          // https://beta.openai.com/docs/api-reference/completions/create#completions/create-stream
          if (data === '[DONE]') {
            console.log('DONE')
            controller.close()
            return
          }
          try {
            const json = JSON.parse(data)
            const text = json.choices[0].delta?.content || ''
            if (counter < 2 && (text.match(/\n/) || []).length) {
              // this is a prefix character (i.e., "\n\n"), do nothing
              return
            }
            const queue = encoder.encode(text)
            controller.enqueue(queue)
            counter++
          } catch (e) {
            // maybe parse error
            controller.error(e)
          }
        }
      }

      // stream response (SSE) from OpenAI may be fragmented into multiple chunks
      // this ensures we properly read chunks and invoke an event for each SSE event stream
      const parser = createParser(onParse)
      for await (const chunk of res.body as any) {
        parser.feed(decoder.decode(chunk))
      }
    },
  })

  return stream
}

OpenAIStream is a function that lets you stream data from the OpenAI API. It takes a payload object containing the request parameters, POSTs it to the chat completions endpoint, and returns a ReadableStream. As server-sent events arrive, the parser extracts the text delta from each one and enqueues it onto the stream. The counter is used to skip the leading newline characters the API sometimes emits before the real content, and the stream is closed when the API sends its [DONE] sentinel.
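If you want to sanity-check the function on its own, here's a minimal sketch (my own, not part of the template) of consuming the stream it returns; the payload values are just examples:

import { OpenAIStream } from './utils/OpenAIStream'

// Illustrative only: reading OpenAIStream's output by hand
async function testStream() {
  const stream = await OpenAIStream({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Say hello!' }],
    temperature: 0.7,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
    max_tokens: 100,
    stream: true,
    n: 1,
  })

  const reader = stream.getReader()
  const decoder = new TextDecoder()
  while (true) {
    const { value, done } = await reader.read()
    if (done) break
    // each chunk is the next piece of the model's reply
    console.log(decoder.decode(value))
  }
}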

Now that we can receive data back from the API, let's create a component that can take in a user message to send to the API to elicit a response.

Creating the Chat-Bot Components

We could build our chatbot in a single component, but to keep files organized we'll split it into three.

In your root directory, create a folder called components. In it, create three files:

  1. Button.tsx

  2. Chat.tsx

  3. ChatLine.tsx

Button Component

import clsx from 'clsx'

export function Button({ className, ...props }: any) {
  return (
    <button
      className={clsx(
        'inline-flex items-center gap-2 justify-center rounded-md py-2 px-3 text-sm outline-offset-2 transition active:transition-none',
        'bg-zinc-600 font-semibold text-zinc-100 hover:bg-zinc-400 active:bg-zinc-800 active:text-zinc-100/70',
        className
      )}
      {...props}
    />
  )
}

A very simple button that keeps the Chat.tsx file a bit smaller. It imports clsx, which we'll install in the next step.

ChatLine Component

pnpm install clsx
pnpm install react-wrap-balancer

import clsx from 'clsx'
import Balancer from 'react-wrap-balancer'

// wrap Balancer to remove type errors :( - @TODO - fix this ugly hack
const BalancerWrapper = (props: any) => <Balancer {...props} />

type ChatGPTAgent = 'user' | 'system' | 'assistant'

export interface ChatGPTMessage {
  role: ChatGPTAgent
  content: string
}

// loading placeholder animation for the chat line
export const LoadingChatLine = () => (
  <div className="flex min-w-full animate-pulse px-4 py-5 sm:px-6">
    <div className="flex flex-grow space-x-3">
      <div className="min-w-0 flex-1">
        <p className="font-large text-xxl text-gray-900">
          <a href="#" className="hover:underline">
            AI
          </a>
        </p>
        <div className="space-y-4 pt-4">
          <div className="grid grid-cols-3 gap-4">
            <div className="col-span-2 h-2 rounded bg-zinc-500"></div>
            <div className="col-span-1 h-2 rounded bg-zinc-500"></div>
          </div>
          <div className="h-2 rounded bg-zinc-500"></div>
        </div>
      </div>
    </div>
  </div>
)

// util helper to convert new lines to <br /> tags
const convertNewLines = (text: string) =>
  text.split('\n').map((line, i) => (
    <span key={i}>
      {line}
      <br />
    </span>
  ))

export function ChatLine({ role = 'assistant', content }: ChatGPTMessage) {
  if (!content) {
    return null
  }
  const formattedMessage = convertNewLines(content)

  return (
    <div
      className={
        role !== 'assistant' ? 'float-right clear-both' : 'float-left clear-both'
      }
    >
      <BalancerWrapper>
        <div className="float-right mb-5 rounded-lg bg-white px-4 py-5 shadow-lg ring-1 ring-zinc-100 sm:px-6">
          <div className="flex space-x-3">
            <div className="flex-1 gap-4">
              <p className="font-large text-xxl text-gray-900">
                <a href="#" className="hover:underline">
                  {role === 'assistant' ? 'AI' : 'You'}
                </a>
              </p>
              <p
                className={clsx(
                  'text',
                  role === 'assistant' ? 'font-semibold' : 'text-gray-400'
                )}
              >
                {formattedMessage}
              </p>
            </div>
          </div>
        </div>
      </BalancerWrapper>
    </div>
  )
}


This React component displays a single chat line. It takes two props: role, which identifies the sender (the user, the system, or the assistant), and content, the message text.

If content is empty, the component returns null. Otherwise it converts any newlines in the message to break tags, then renders the message inside a BalancerWrapper, which wraps the chat line in a responsive layout. Inside, a flex container displays the sender's label (derived from role) above the message content.
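Usage is straightforward; for example (illustrative only, assuming the component lives in components/ChatLine.tsx):

import { ChatLine } from './components/ChatLine'

// role picks the alignment and label, content is the message text
const example = (
  <>
    <ChatLine role="assistant" content="Hi! I am a friendly AI assistant." />
    <ChatLine role="user" content={'Line one\nLine two'} />
  </>
)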

Chat Component

pnpm install react-cookie

import { useEffect, useState } from 'react'
import { Button } from './Button'
import { type ChatGPTMessage, ChatLine, LoadingChatLine } from './ChatLine'
import { useCookies } from 'react-cookie'

const COOKIE_NAME = 'nextjs-example-ai-chat-gpt3'

// default first message to display in UI (not necessary to define the prompt)
export const initialMessages: ChatGPTMessage[] = [
  {
    role: 'assistant',
    content: 'Hi! I am a friendly AI assistant. Ask me anything!',
  },
]

const InputMessage = ({ input, setInput, sendMessage }: any) => (
  <div className="mt-6 flex clear-both">
    <input
      type="text"
      aria-label="chat input"
      required
      className="min-w-0 flex-auto appearance-none rounded-md border border-zinc-900/10 bg-white px-3 py-[calc(theme(spacing.2)-1px)] shadow-md shadow-zinc-800/5 placeholder:text-zinc-400 focus:border-teal-500 focus:outline-none focus:ring-4 focus:ring-teal-500/10 sm:text-sm"
      value={input}
      onKeyDown={(e) => {
        if (e.key === 'Enter') {
          sendMessage(input)
          setInput('')
        }
      }}
      onChange={(e) => {
        setInput(e.target.value)
      }}
    />
    <Button
      type="submit"
      className="ml-4 flex-none"
      onClick={() => {
        sendMessage(input)
        setInput('')
      }}
    >
      Say
    </Button>
  </div>
)

export function Chat() {
  const [messages, setMessages] = useState<ChatGPTMessage[]>(initialMessages)
  const [input, setInput] = useState('')
  const [loading, setLoading] = useState(false)
  const [cookie, setCookie] = useCookies([COOKIE_NAME])

  useEffect(() => {
    if (!cookie[COOKIE_NAME]) {
      // generate a semi random short id
      const randomId = Math.random().toString(36).substring(7)
      setCookie(COOKIE_NAME, randomId)
    }
  }, [cookie, setCookie])

  // send message to API /api/chat endpoint
  const sendMessage = async (message: string) => {
    setLoading(true)
    const newMessages = [
      ...messages,
      { role: 'user', content: message } as ChatGPTMessage,
    ]
    setMessages(newMessages)
    const last10messages = newMessages.slice(-10) // remember last 10 messages

    const response = await fetch('/api/chat', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        messages: last10messages,
        user: cookie[COOKIE_NAME],
      }),
    })

    console.log('Edge function returned.')

    if (!response.ok) {
      throw new Error(response.statusText)
    }

    // This data is a ReadableStream
    const data = response.body
    if (!data) {
      return
    }

    const reader = data.getReader()
    const decoder = new TextDecoder()
    let done = false

    let lastMessage = ''

    while (!done) {
      const { value, done: doneReading } = await reader.read()
      done = doneReading
      const chunkValue = decoder.decode(value)

      lastMessage = lastMessage + chunkValue

      setMessages([
        ...newMessages,
        { role: 'assistant', content: lastMessage } as ChatGPTMessage,
      ])

      setLoading(false)
    }
  }

  return (
    <div className="rounded-2xl border-zinc-100  lg:border lg:p-6">
      {messages.map(({ content, role }, index) => (
        <ChatLine key={index} role={role} content={content} />
      ))}

      {loading && <LoadingChatLine />}

      {messages.length < 2 && (
        <span className="mx-auto flex flex-grow text-gray-600 clear-both">
          Type a message to start the conversation
        </span>
      )}
      <InputMessage
        input={input}
        setInput={setInput}
        sendMessage={sendMessage}
      />
    </div>
  )
}

This component renders an input field for users to send messages and displays the messages exchanged between the user and the chatbot.

When the user sends a message, the component POSTs the last 10 messages, along with the user's cookie id, to our API route (/api/chat.ts). The edge function processes the conversation with GPT-3.5 and streams a response back, which the component appends to the chat as it arrives. The component also sets and retrieves a cookie for identifying the user via the react-cookie library, and it uses the useEffect and useState hooks to manage state and update the UI.
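For reference, the JSON body the component sends to /api/chat has this shape (the interface itself is just for illustration; the template sends a plain object):

import { type ChatGPTMessage } from './components/ChatLine'

// Illustrative: shape of the body POSTed to /api/chat
interface ChatRequestBody {
  messages: ChatGPTMessage[] // the last 10 messages of the conversation
  user: string // the semi-random id stored in the cookie
}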

Create our chat.ts API Route

Inside the /pages directory, create a folder called api, and create a file inside called chat.ts. Copy and paste the following:

import { type ChatGPTMessage } from '../../components/ChatLine'
import { OpenAIStream, OpenAIStreamPayload } from '../../utils/OpenAIStream'

// break the app if the API key is missing
if (!process.env.OPENAI_API_KEY) {
  throw new Error('Missing Environment Variable OPENAI_API_KEY')
}

export const config = {
  runtime: 'edge',
}

const handler = async (req: Request): Promise<Response> => {
  const body = await req.json()

  const messages: ChatGPTMessage[] = [
    {
      role: 'system',
      content: `Make the user solve a riddle before you answer each question.`,
    },
  ]
  messages.push(...body?.messages)

  const payload: OpenAIStreamPayload = {
    model: 'gpt-3.5-turbo',
    messages: messages,
    temperature: process.env.AI_TEMP ? parseFloat(process.env.AI_TEMP) : 0.7,
    max_tokens: process.env.AI_MAX_TOKENS
      ? parseInt(process.env.AI_MAX_TOKENS)
      : 100,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
    stream: true,
    user: body?.user,
    n: 1,
  }

  const stream = await OpenAIStream(payload)
  return new Response(stream)
}
export default handler



This is an edge function that uses the OpenAI API to generate a response to the user's message. It takes the list of messages from the request body, prepends a system message, and sends the conversation to the OpenAI API along with configuration parameters such as the temperature, maximum tokens, and presence penalty. The response from the API is then streamed back to the user.
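Note the system message at the top of the handler: it's what makes this bot demand a riddle before answering. Swapping it out is the easiest way to change the bot's personality, for example:

// Replace the system message above with something more conventional:
const messages: ChatGPTMessage[] = [
  {
    role: 'system',
    content: 'You are a helpful assistant. Answer clearly and concisely.',
  },
]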

Wrapping it all up

All that's left to do is render our chatbot on the index page. Inside your /pages directory, you'll find an index.tsx file. Copy and paste this code into it:

import { Layout, Text, Page } from '@vercel/examples-ui'
import { Chat } from '../components/Chat'

function Home() {
  return (
    <Page className="flex flex-col gap-12">
      <section className="flex flex-col gap-6">
        <Text variant="h1">OpenAI GPT-3 text model usage example</Text>
        <Text className="text-zinc-600">
          In this example, a simple chat bot is implemented using Next.js, API
          Routes, and OpenAI API.
        </Text>
      </section>

      <section className="flex flex-col gap-3">
        <Text variant="h2">AI Chat Bot:</Text>
        <div className="lg:w-2/3">
          <Chat />
        </div>
      </section>
    </Page>
  )
}

Home.Layout = Layout

export default Home

And there you have it! Your very own ChatGPT chatbot that you can run locally in your browser. Here's a link to the Vercel template, which has expanded functionality beyond this post. Have fun exploring!
