🤖 From Chatbots to Personal Assistants: Building LLM Apps in JavaScript

Ashish Prajapati (anticoder03) · Posted on November 1, 2024

Large Language Models (LLMs) are transforming the way we build intelligent applications, allowing developers to create everything from helpful chatbots to full-fledged personal assistants! With LLMs like OpenAI's GPT models or Hugging Face’s Transformers, you can build responsive, human-like interactions that can help answer questions, set reminders, and much more.

In this article, we’ll walk through the process of building an LLM-powered app in JavaScript. We’ll cover the basics of setting up your environment, working with API calls, and designing a chatbot or assistant with real-world use cases. Let’s dive in! 🌊


🌟 Why Build with JavaScript?

JavaScript is incredibly popular, especially in web development. Here’s why it’s a great choice for building LLM applications:

  • Wide Adoption 🌍: JavaScript is everywhere, from frontend to backend, and is supported by a vast ecosystem of libraries and frameworks.
  • Easy Integration with APIs 🔌: JavaScript (and Node.js) makes it easy to send HTTP requests to APIs, perfect for working with LLMs.
  • Frontend and Backend Capabilities 🖥️: You can use JavaScript to build both the interface (frontend) and server (backend) of your app.


🛠️ Setting Up Your LLM Application

To get started, we need a basic environment for JavaScript development and access to an LLM API. In this example, we’ll use OpenAI’s API, which provides access to chat models like GPT-4o and GPT-4o mini.

1. Create an OpenAI Account and API Key 🔑

  • Sign up at OpenAI and create an API key from the API settings.
  • This API key will allow us to send requests to OpenAI’s servers to generate responses.

2. Set Up a Project 📁

  • Make a new project directory and initialize it with npm:

     mkdir LLMAssistantApp
     cd LLMAssistantApp
     npm init -y
    
  • Install axios for making HTTP requests and dotenv to manage environment variables:

     npm install axios dotenv
    
  • Create a .env file to securely store your API key:

     OPENAI_API_KEY=your_openai_api_key_here
    

3. Write the API Interaction Code 📝

In your index.js file, write the code to interact with the OpenAI API. Here’s a basic example:

require('dotenv').config();
const axios = require('axios');

async function getLLMResponse(prompt) {
  try {
    const response = await axios.post(
      'https://api.openai.com/v1/chat/completions',
      {
        model: 'gpt-4o-mini',  // any chat-capable model works here (e.g. gpt-4o)
        messages: [{ role: 'user', content: prompt }],
        max_tokens: 100
      },
      {
        headers: {
          'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
        },
      }
    );
    // The Chat Completions API returns the reply under choices[0].message.content
    return response.data.choices[0].message.content.trim();
  } catch (error) {
    console.error('Error fetching response:', error.response?.data || error.message);
  }
}

// Test the function
getLLMResponse("What is the capital of France?").then(console.log);

Now, whenever you run this file with node index.js, it will send your prompt to the OpenAI API and print the response!
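
For example, a quick run from the project root looks roughly like this (the exact wording of the reply will vary between runs):

node index.js
# Example output (responses vary):
# The capital of France is Paris.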


💬 Building a Chatbot Interface

Now that we have the basics in place, let’s turn this into a chatbot that can interact with users. We’ll build a simple HTML frontend to display responses in a chat format.

1. Create an HTML Frontend 🌐

In your project directory, create an index.html file:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>LLM Chatbot</title>
  <style>
    body { font-family: Arial, sans-serif; max-width: 600px; margin: auto; padding: 1rem; }
    .chat-container { display: flex; flex-direction: column; gap: 0.5rem; }
    .chat { padding: 0.75rem; border-radius: 5px; }
    .user { align-self: flex-end; background-color: #007bff; color: white; }
    .bot { align-self: flex-start; background-color: #f1f1f1; }
  </style>
</head>
<body>
  <h1>Chat with AI Assistant</h1>
  <div class="chat-container" id="chatContainer"></div>
  <input type="text" id="userInput" placeholder="Type your message" onkeypress="handleKeyPress(event)">
  <button onclick="sendMessage()">Send</button>

  <script src="app.js"></script>
</body>
</html>

2. Create the JavaScript Frontend Logic 📲

Create an app.js file to handle user input, send it to the backend, and display the response.

// Read the user's message, show it in the chat, send it to the backend, and display the reply
async function sendMessage() {
  const inputField = document.getElementById('userInput');
  const userInput = inputField.value.trim();
  if (!userInput) return;       // ignore empty messages

  addChatMessage('user', userInput);
  inputField.value = '';        // clear the box once the message is shown

  const response = await fetch('http://localhost:3000/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: userInput })
  });

  const data = await response.json();
  addChatMessage('bot', data.reply);
}

// Append a chat bubble to the container, styled by role ('user' or 'bot')
function addChatMessage(role, message) {
  const chatContainer = document.getElementById('chatContainer');
  const chatMessage = document.createElement('div');
  chatMessage.className = `chat ${role}`;
  chatMessage.innerText = message;
  chatContainer.appendChild(chatMessage);
}

// Let the Enter key send the message as well
function handleKeyPress(event) {
  if (event.key === 'Enter') {
    sendMessage();
  }
}

🧠 Enhancing the Assistant with Custom Skills

To make your assistant smarter, you can program it to handle specific types of questions or perform tasks. Here are some fun additions (a small sketch of the weather skill follows the list):

1. Weather Assistant 🌤️

Integrate a weather API (like OpenWeatherMap) to allow your assistant to give weather updates when prompted with phrases like, “What’s the weather in New York?”

2. To-Do List Management 📝

Allow users to create and manage a to-do list within the chat, so they can add reminders, set due dates, and more.

3. Joke or Fact Generator 😆

Add a command to make the assistant tell a joke or share a fun fact when prompted—great for giving users a lighthearted experience!
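
As a rough illustration of how skills like these can be layered on top of the chatbot, here is a minimal sketch of a weather skill that could be wired into the /chat route from the next section. The helper name tryWeatherSkill and the OPENWEATHER_API_KEY environment variable are just names chosen for this example; messages that don’t look like weather questions fall through to the LLM as before.

// Sketch: a simple "weather" skill checked before falling back to the LLM.
// Assumes an OpenWeatherMap key stored in .env as OPENWEATHER_API_KEY (hypothetical name).
const axios = require('axios');

async function tryWeatherSkill(message) {
  // Very naive intent detection: "what's the weather in <city>?"
  const match = message.match(/weather in ([a-zA-Z\s]+)\??$/i);
  if (!match) return null;   // not a weather question, let the LLM handle it

  const city = match[1].trim();
  const { data } = await axios.get('https://api.openweathermap.org/data/2.5/weather', {
    params: { q: city, appid: process.env.OPENWEATHER_API_KEY, units: 'metric' }
  });

  return `It's currently ${data.main.temp}°C with ${data.weather[0].description} in ${data.name}.`;
}

// Inside the /chat route, try the skill first and only call the LLM if it returns null:
//   const skillReply = await tryWeatherSkill(userMessage);
//   if (skillReply) return res.json({ reply: skillReply });

The same pattern works for the to-do list or joke ideas above: check the message for a command, handle it locally, and only forward everything else to the model.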


🔗 Connecting Frontend to Backend

To connect the frontend and backend, let’s set up a simple Express.js server that routes chat requests to OpenAI’s API.

Backend Server Code (Express)

Create server.js:

const express = require('express');
const cors = require('cors');
const bodyParser = require('body-parser');
const axios = require('axios');
require('dotenv').config();

const app = express();
app.use(cors());
app.use(bodyParser.json());

app.post('/chat', async (req, res) => {
  const userMessage = req.body.message;

  try {
    const response = await axios.post(
      'https://api.openai.com/v1/chat/completions',
      {
        model: 'gpt-4o-mini',  // any chat-capable model works here
        messages: [{ role: 'user', content: userMessage }],
        max_tokens: 100
      },
      {
        headers: {
          'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`
        }
      }
    );

    res.json({ reply: response.data.choices[0].message.content.trim() });
  } catch (error) {
    console.error('OpenAI request failed:', error.response?.data || error.message);
    res.status(500).send('Error connecting to OpenAI API');
  }
});

app.listen(3000, () => console.log('Server running on http://localhost:3000'));

Run the server with:

node server.js

Now, when you type a message in the frontend, it sends the message to the server, which retrieves a response from the LLM and returns it to the chat!
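
One small detail: the index.html page needs to be served from somewhere. You can open it directly from disk, or let the same Express app serve it. A minimal way to do the latter (assuming index.html and app.js sit next to server.js) is to add Express’s built-in static middleware before the routes:

// Serve index.html and app.js from the project directory,
// so the chat UI is available at http://localhost:3000
app.use(express.static(__dirname));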


🎉 Wrapping Up

And there you have it! You’ve built a basic LLM-powered chatbot using JavaScript. With this foundation, you can customize it for various use cases, from personal assistants to interactive FAQ bots. Now, take your creativity further and experiment with additional integrations and personalized responses!

Here is a full-featured AI chatbot built with the MERN stack:

https://github.com/Anticoder03/ai-chat-bot

Overview: screenshots of the Sign Up, Login, and Bot chat screens.

Happy coding! 🎈
