OpenAI Assistant with NextJS
Duoc95
Posted on May 26, 2024
In this blog, I will show you how to use the OpenAI Assistant with NextJS.
What is the OpenAI Assistant?
The OpenAI Assistant is a purpose-built AI that uses OpenAI's models and can access files, maintain persistent threads, and call tools (see the OpenAI Assistants API reference for details).
Let's Get Started
Create Your First Assistant
- Prerequisite: you need an OpenAI account with API access. Go to the OpenAI platform and click "Assistants" in the navigation sidebar.
- On the Assistants page, click "Create your assistant."
- Give it a name and describe what you want it to do. Remember, the more detailed your description is, the more precise the assistant's answers will be. (If you'd rather do this from code, see the sketch below.)
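You can also create the assistant with the openai Node SDK instead of the dashboard. Here is a minimal sketch, assuming your API key is in the environment; the name, instructions, and model are example values:

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env["OPENAI_API_KEY"] });

// Create the assistant once, then save the returned id for later use.
// The name, instructions, and model below are example values.
const assistant = await openai.beta.assistants.create({
  name: "Recipe Assistant",
  instructions:
    "Suggest Fujifilm film-simulation recipes based on the user's camera and shooting conditions.",
  model: "gpt-4o",
});

console.log(assistant.id); // looks like asst_...
```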
Create the NextJS UI
I assume you know how to create a NextJS project. In this project, I use NextJS with Shadcn UI.
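If you're starting from scratch, a setup along these lines should work (commands current as of this writing; the project name is just an example):

```bash
npx create-next-app@latest recipe-assistant
cd recipe-assistant
npx shadcn-ui@latest init
npx shadcn-ui@latest add button textarea
```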
- Create the Chat UI:

```tsx
"use client";
import CustomReactMarkdown from "@/components/CustomMarkdown";
import WithAuth from "@/components/WithAuth";
import { Button } from "@/components/ui/button";
import { Textarea } from "@/components/ui/textarea";
import { cloneDeep } from "lodash";
import { useForm } from "react-hook-form";
import {
BotIcon,
CircleSlash,
SendHorizonalIcon,
User,
Wand,
} from "lucide-react";
import { useEffect, useRef, useState } from "react";
type Message = {
text: string;
role: string;
};
function Page() {
  const { register, handleSubmit, reset, watch } = useForm();
  // Watch the textarea value so the send button can be disabled when it's empty.
  const promptValue = watch("prompt");
const [chatLogs, setChatLogs] = useState<Message[]>([]);
const chatRef = useRef<HTMLDivElement>(null);
const lastMessage = chatLogs?.[chatLogs.length - 1]?.text;
const [processing, setProcessing] = useState(false);
const [isTyping, setIsTyping] = useState(false);
useEffect(() => {
if (chatRef.current) {
chatRef.current.scrollTo({
top: chatRef.current.scrollHeight,
behavior: "smooth",
});
}
}, [lastMessage]);
const onSubmit = async (data: any) => {
const prompt = data.prompt;
if (!prompt) {
return;
} else {
setProcessing(true);
setChatLogs((prev) => [
...prev,
{
text: prompt,
role: "user",
},
]);
const formdata = new FormData();
formdata.append("prompt", prompt);
reset();
const res = await fetch("/api/assistant", {
method: "POST",
body: formdata,
});
      const reader = res.body?.pipeThrough(new TextDecoderStream()).getReader();
      // The stream is newline-delimited JSON: one event per line, but a single
      // chunk can carry several events or end mid-line, so buffer and split.
      let buffer = "";
      while (true) {
        const result = await reader?.read();
        if (!result || result.done) {
          setIsTyping(false);
          break;
        }
        buffer += result.value;
        const lines = buffer.split("\n");
        buffer = lines.pop() ?? ""; // keep a partial trailing line for the next chunk
        for (const line of lines) {
          if (!line.trim()) continue;
          let content;
          try {
            content = JSON.parse(line);
          } catch {
            continue; // skip anything that isn't a complete JSON event
          }
          if (content?.event === "thread.run.in_progress") {
            setProcessing(false);
          }
          if (content?.event === "thread.run.completed") {
            setIsTyping(false);
          }
          if (content?.event === "thread.message.delta") {
            setProcessing(false);
            setIsTyping(true);
            const text = content?.data?.delta?.content?.[0]?.text?.value ?? "";
            setChatLogs((prev) => {
              const newLogs = cloneDeep(prev);
              const last = newLogs[newLogs.length - 1];
              if (last?.role === "assistant") {
                last.text += text;
              } else {
                newLogs.push({ text, role: "assistant" });
              }
              return newLogs;
            });
          }
        }
      }
}
};
return (
<div className="relative max-w-7xl mx-auto min-h-[calc(100vh-80px)]">
<h1 className="text-xl sm:text-2xl text-center mt-2 relative ">
<span className="flex items-center space-x-2 justify-center">
<span>Recipe Assistant </span>
<BotIcon color="blue" />
</span>
</h1>
<div
ref={chatRef}
className="overflow-y-auto mt-2 sm:mt-4 p-3 sm:p-8 rounded-lg no-scrollbar h-[calc(100vh-230px)]"
>
{chatLogs.map((log, index) =>
log.role === "user" ? (
<div key={index} className="relative p-2 sm:p-6">
<span className="text-gray-500">
<User
className="sm:absolute left-0 sm:-translate-x-[120%]"
size={27}
/>
</span>
              <div className="bg-gray-50">{log.text}</div>
</div>
) : (
<div key={index} className="relative ">
<span className="text-gray-500 ">
<BotIcon
className="sm:absolute left-0 sm:-translate-x-[120%]"
size={27}
/>
</span>
<CustomReactMarkdown
content={log.text}
className="p-2 sm:p-6 bg-gray-100 my-3"
/>
</div>
)
)}
{processing && (
<div className="flex items-center space-x-2">
<span className="animate-spin ">
<CircleSlash />
</span>
</div>
)}
</div>
<div className="absolute w-full left-0 bottom-0 text-sm">
<div className="w-10/12 mx-auto sm:hidden"></div>
<form
onSubmit={handleSubmit(onSubmit)}
className="flex gap-4 w-10/12 mx-auto relative "
>
<Textarea
className="text-sm sm:text-md md:text-xl px-8 sm:px-4"
            placeholder="I use the X-S10 camera. I will take pictures of a female model at 6 AM."
id="prompt"
{...register("prompt")}
onKeyDown={(e) => {
if (e.key === "Enter" && !e.shiftKey) {
e.preventDefault();
handleSubmit(onSubmit)();
}
}}
/>
<Button
size={"sm"}
variant={"link"}
type="submit"
className="absolute right-0 sm:right-3 px-1 sm:px-3 top-1/2 -translate-y-1/2"
            disabled={processing || !promptValue?.trim() || isTyping}
>
<SendHorizonalIcon />
</Button>
</form>
</div>
</div>
);
}

export default WithAuth(Page);
```
This is my UI
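The page imports a CustomReactMarkdown component that isn't shown in this post. A minimal stand-in, assuming the react-markdown package, that matches the content and className props used above:

```tsx
import ReactMarkdown from "react-markdown";

// Minimal stand-in for the CustomReactMarkdown component used above:
// renders markdown text inside a styled wrapper.
export default function CustomReactMarkdown({
  content,
  className,
}: {
  content: string;
  className?: string;
}) {
  return (
    <div className={className}>
      <ReactMarkdown>{content}</ReactMarkdown>
    </div>
  );
}
```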
Create the API Route
First, you need to install the OpenAI package:
```bash
npm i openai
```
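The route reads your API key and the assistant's id from environment variables, so add both to .env.local; the values below are placeholders (the assistant id is shown on the assistant's page in the dashboard):

```
OPENAI_API_KEY=sk-...
OPENAI_ASSISTANT_ID=asst_...
```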
- The API route code example:

```typescript
import OpenAI from 'openai';
import { NextRequest } from 'next/server';
const openai = new OpenAI({
apiKey: process.env['OPENAI_API_KEY'],
});
export async function POST(req: NextRequest) {
  const formData = await req.formData();
const prompt = (formData.get("prompt") as string) ?? '';
const thread = await openai.beta.threads.create();
await openai.beta.threads.messages.create(
thread.id,
{
role: "user",
content: prompt,
}
);
const result = await openai.beta.threads.runs.create(thread.id, {
assistant_id: process.env['OPENAI_ASSISTANT_ID'] as string,
stream: true,
});
const response = result.toReadableStream();
return new Response(response, {
headers: {
'Content-Type': 'text/plain',
'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive',
},
});
}
```
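Note that this route creates a brand-new thread on every request, so the assistant won't remember earlier messages. To keep the conversation going, persist the thread id and reuse it. A sketch of the idea, replacing the thread creation inside the POST handler; the threadId form field name is made up for illustration:

```typescript
// Sketch: reuse an existing thread so the conversation keeps its context.
// Assumes the client stores the thread id from a previous response and
// sends it back as a "threadId" form field (an illustrative name).
const existingId = formData.get("threadId") as string | null;
const thread = existingId
  ? await openai.beta.threads.retrieve(existingId)
  : await openai.beta.threads.create();
```

You could return thread.id to the client (for example in a response header) so it can send it back with the next prompt.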
Here is my own Fujifilm recipes assistant: [fujifilm assistant](https://fujixfilm.com/assistant).
And that's it! Create your own assistant and make it work. If you have any questions, feel free to drop a comment below.
Thanks and happy coding!