Build A Dual-Purpose App: Text-to-Image and Custom Chatbot Using Comet, GPT-3.5, DALL-E 2, and Streamlit

TheophilusOnyejiaku

Posted on May 29, 2024

Overview

In this guide, we will explore how to create a dual-purpose application: a chatbot powered by a custom dataset and a text-to-image generator, using OpenAI’s GPT-3.5 Turbo and DALL-E 2 models, along with Comet and Streamlit.

Now let’s take a brief look at the tools and services we will be using.

Comet

Comet is a platform that offers real-time experiment tracking with additional collaboration features. With Comet you can log your object detection models (YOLO, TensorFlow), large language models, regression and classification models, and the like, along with their various parameters. It also lets you monitor the training and prompting of all of these models and gives you the option to share your logged projects publicly or privately with your team.

One advantage Comet has over similar platforms is its ability to easily integrate with your existing infrastructure and tools so you can manage, visualize, and optimize models from training runs to production monitoring. 
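
To make this concrete, here is a minimal sketch of what experiment tracking with Comet’s Python SDK (comet_ml) typically looks like; the project name, parameter, and metric values below are placeholders. In this guide we will use the companion comet_llm package instead, which is purpose-built for logging LLM prompts.

import comet_ml

# Start an experiment (the API key and project name are placeholders)
experiment = comet_ml.Experiment(api_key="YOUR_COMET_API_KEY", project_name="demo-project")

# Log hyperparameters and metrics as training progresses
experiment.log_parameter("learning_rate", 0.001)
experiment.log_metric("accuracy", 0.92, step=1)

# Mark the experiment as finished
experiment.end()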

GPT-3.5 Turbo Model

According to OpenAI, GPT-3.5 Turbo is a model that improves on GPT-3.5 and can understand as well as generate natural language or code. With the help of user feedback, OpenAI has refined the model, making it more proficient at understanding and following instructions. As a fine-tuned model, it has been trained on examples of inputs and expected outputs for particular tasks. OpenAI created GPT-3.5 Turbo as an expansion of its popular GPT-3 model. GPT-3.5 Turbo Instruct is available in three model sizes: 1.3B, 6B, and 175B parameters.
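
As a quick preview of the API we will call later, here is a minimal (non-streaming) chat completion request with the openai Python package; the API key and prompt are placeholders.

from openai import OpenAI

# Placeholder key; in the app we will read it from Streamlit secrets instead
client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GPT-3.5 Turbo in one sentence."},
    ],
)
print(response.choices[0].message.content)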

DALL-E 2

DALL·E 2 is an AI system that can create realistic images and art from a natural-language description. Below is an image generated by this app from the prompt “A cup pouring fire as a portal to another dimension.”
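
For reference, a minimal image-generation call with the same openai package looks like the sketch below (the key is a placeholder); the app code later in this guide uses this exact endpoint.

from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # placeholder key

# Ask DALL-E 2 for a single 1024x1024 image from a text prompt
response = client.images.generate(
    model="dall-e-2",
    prompt="A cup pouring fire as a portal to another dimension",
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # URL of the generated image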

Streamlit

Streamlit is a platform that enables you to build web applications that can be hosted in the cloud in just minutes. It helps you build interactive dashboards, generate reports, or create chat applications. Once you’ve created an app, you can use the Community Cloud platform to deploy, manage, and share your application.
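
If you have never used Streamlit, a minimal app is only a few lines. The sketch below (the file name hello.py is arbitrary) is started with streamlit run hello.py.

import streamlit as st

# A tiny interactive page: a title, a text box, and a response
st.title("Hello, Streamlit!")
name = st.text_input("What is your name?")
if name:
    st.write(f"Nice to meet you, {name}!")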

This application is an example of deploying with Streamlit.

Prerequisites

In this section, we will quickly take a look at some of the tools you will need to successfully follow along with these steps and ultimately build your own application.

  • Python: A high-level programming language for many use cases. For this project, we will be using Python 3.9, though it should also work with other recent versions of Python. Proceed here to download a version of Python for your operating system. Ensure you add Python to your PC’s environment variables by following this guide.
  • pip: The package installer for Python. It is very important to have pip working on your PC to be able to follow along with this project. See this guide on how to install pip and add it to your PC’s path.
  • PyCharm IDE: PyCharm is the integrated development environment we will use to build the application; it is simply where we will write our code. It is easy to install and saves you a lot of coding time by assisting with code completion, code navigation, code refactoring, and debugging. The community edition of this software is free! Once you create and name a new project, it provides you with a Python virtual environment (venv) that lets you install libraries specifically for that project rather than sharing them with all users of the computer.
  • Dataset: The dataset we will be using in this project to give the LLM its custom knowledge can be found here. Taking a closer look at the dataset structure, as seen in the figure below for the first two movies, we will only need each movie's "title", "year", "genres", and "extract". This structure is important to keep in mind; we will return to it in the coding part of this project. An illustrative way to inspect it is sketched right after this list.
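
Here is a small sketch of how you might inspect that structure yourself before writing the app; it fetches the same JSON file used later in the code and prints the fields we care about from the first entry.

import json
from urllib.request import urlopen

# Fetch the movie dataset (the same URL used in the app code below)
url = "https://raw.githubusercontent.com/prust/wikipedia-movie-data/master/movies-2020s.json"
movies = json.loads(urlopen(url).read())

# Inspect the fields we will rely on: title, year, genres, extract
first = movies[0]
print(first.get("title"), first.get("year"))
print(first.get("genres", []))
print(first.get("extract", "No extract available")[:200])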

Now Let’s Get Started!

To achieve our objective, we will be following just 5 simple steps.

Step 1: Create a Comet account to log your LLM

Now, if you haven't already, go to Comet and create a new account. After successfully creating your account, head to the API key section to get a copy of your Comet API key.

Here, you can generate an API key for all your projects. You will need the API key for this part of the project. Copy and save it somewhere.

Step 2: Create an OpenAI account to access OpenAI API

If you are new to OpenAI, create your OpenAI account here. Once you’ve successfully created an account, log in with the same credentials and go to the API keys section. On the left panel of the screen, click on “API Keys” and then click on “Create new secret key”. This is shown below:

Next, you get a pop-up, as shown below, asking you to give a name for your secret key. Proceed to give it any name and click the “Create secret key” option.

Once done, you get a prompt to save your key. Make sure to copy and save your API key somewhere, as you will lose it if you do not copy it immediately.

 

Note: It is very important to save this API key immediately. Once you close the pop-up, you will not be able to view the key again and will need to create a new key from scratch. Copy it and save it somewhere on your PC; an MS Word document will do just fine.

Building the Application

Now it’s time to build the application. You can use any IDE of your choice (I used PyCharm). We will need the following libraries to develop this application successfully:

  • comet-llm: This is a tool that will be used to log and visualize our LLM prompts.
  • openai: The tool with which we will use the GPT-3.5 Turbo and DALL-E 2 APIs.
  • streamlit: An open-source framework used for building data science and machine learning applications.
  • json: Python module for encoding and decoding JSON data.
  • urllib.request: Python module for making HTTP requests and working with URLs.

Step 3: Install all Dependencies

First, create a new project in your PyCharm IDE and give it any name. This way, you automatically have an environment to start coding with your Python interpreter and other packages.

Now, in your IDE terminal, run the following commands to install all the dependencies:

pip install openai streamlit comet_llm

Once done successfully, you will need to configure your API key from OpenAI.

Step 4: Configure your OpenAI API key

Inside your IDE project directory, create a new folder called “.streamlit” and create a new file named “secrets.toml” inside it. It will look like the snippet shown below:

Now open the “secrets.toml” file and add the following line of text:

MY_KEY = "PASTE_YOUR_OPENAI_API_KEY_HERE"

Make sure to replace “PASTE_YOUR_OPENAI_API_KEY_HERE” with your actual OpenAI API key. After adding this line, save the file.
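
Streamlit automatically loads .streamlit/secrets.toml when the app starts, so the key becomes available through st.secrets. A minimal sketch of reading it (this is exactly what the full code below does):

import streamlit as st

# The name must match the entry in .streamlit/secrets.toml (MY_KEY here)
openai_api_key = st.secrets["MY_KEY"]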

Step 5: Write your Code

Now create a new Python script and give it any name. For this, I named mine “dualapp”. Below is the code to build this dual-purpose app, with inline comments explaining each block of code.

import streamlit as st
from openai import OpenAI
import comet_llm
import json
from urllib.request import urlopen

# Initialize OpenAI client
client = OpenAI(api_key=st.secrets["MY_KEY"])


# Load the JSON content of movies from the provided URL
response = urlopen("https://raw.githubusercontent.com/prust/wikipedia-movie-data/master/movies-2020s.json")

# Limit to the first 100 items
train_data = json.loads(response.read())[:100]

# Extract relevant information for training
movie_info = """
Hi! I am a chatbot designed to assist you. 
Here are some movies you might find interesting:
"""

for entry in train_data:
    title = entry.get('title', '')
    year = entry.get('year', '')
    genres = ", ".join(entry.get('genres', []))
    extract = entry.get('extract', 'No extract available')
    movie_info += f"- {title} ({year}) - Genres: {genres}\n"
    movie_info += f"  Extract: {extract}\n"


# Instruction for the model
instruction = """
You are strictly going to answer questions based on the movies provided to you. Do not discuss any other information that
has nothing to do with the movies provided to you. 
I want you to take note of the year, title, genre, and extract of the movies and be able to answer questions on them.
"""

# Combine movie_info and instruction for the system message
system_message = instruction + "\n\n" + movie_info

selection = st.sidebar.selectbox("Chat Bot to Text to Image", ("Custom Chat Bot", "Text to Image"))

if selection == "Custom Chat Bot":
    # Initialize Streamlit UI
    st.title("This is a chatbot about Theo")

    # Initialize chat history
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Display chat history
    for message in st.session_state.messages:
        if message["role"] == "user":
            st.markdown(f"**You:** {message['content']}")
        elif message["role"] == "assistant":
            st.markdown(f"**💼:** {message['content']}")

    # User input for new chat
    prompt = st.text_input("📝", key="user_input_" + str(len(st.session_state.messages)))

    if prompt:
        st.session_state.messages.append({"role": "user", "content": prompt})

        # Formulate message for OpenAI API
        messages = [{"role": "system", "content": system_message}]
        for message in st.session_state.messages:
            messages.append({"role": message["role"], "content": message["content"]})

        full_response = ""
        for response in client.chat.completions.create(
                messages=messages,
                model="gpt-3.5-turbo",
                stream=True,
        ):
            full_response += (response.choices[0].delta.content or "")
        st.session_state.messages.append({"role": "assistant", "content": full_response})
        st.markdown(f"**💼:** {full_response}")

        # Display user input field for next chat
        st.text_input("📝", key="user_input_" + str(len(st.session_state.messages)))

        # log LLM prompt on comet
        comet_llm.log_prompt(
            api_key="9HibPMbc18shhthis_is_my_api_key",
            prompt=prompt,
            output=full_response,
            metadata={
                "model": "gpt-3.5-turbo"
            }
        )

else:
    # Initialize OpenAI client
    client = OpenAI(api_key=st.secrets["MY_KEY"])

    # Streamlit UI for Text to Image
    st.title("DALL-E-2 Text-to-Image Generation")

    # User input for text prompt
    text_prompt = st.text_input("Enter a text prompt")

    if text_prompt:
        # Use the OpenAI API to generate image from text prompt
        response = client.images.generate(
            model="dall-e-2",
            prompt=text_prompt,
            size="1024x1024",
            quality="standard",
            n=1,
        )

        # Get the generated image URL from the OpenAI response
        image_url = response.data[0].url

        # Display generated image
        st.image(image_url, caption="Generated Image", use_column_width=True)

Key takeaways from the code above:

  • Initialize OpenAI client using the API key you copied from your OpenAI account.
  • With the variable system_message we are able to teach or give instruction to our model about any information.
  • Initialize the chat history.
  • We display the chat history.
  • We also provide a new chat for user input right away.
  • We formulate the message for OpenAI, then iteratively generate completions from a chat client using a GPT-3.5 Turbo model based on the provided messages.
  • We log the LLM prompt on Comet using the API key from Comet (a secrets-based variation is sketched just after this list).
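
One small hardening worth considering: instead of hardcoding the Comet API key in the script, you could store it in the same secrets.toml file and read it with st.secrets. The COMET_KEY entry name below is hypothetical, and the prompt and response values are just examples; this is a sketch, not a required change.

import streamlit as st
import comet_llm

# Hypothetical: add COMET_KEY = "..." to .streamlit/secrets.toml alongside MY_KEY
prompt = "Which 2020 movies are thrillers?"              # example user prompt
full_response = "Here are a few thrillers from 2020..."  # example model output

comet_llm.log_prompt(
    api_key=st.secrets["COMET_KEY"],
    prompt=prompt,
    output=full_response,
    metadata={"model": "gpt-3.5-turbo"},
)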

Run your App!

Run the command below to start your app. As mentioned before, I named this app “dualapp”.

streamlit run dualapp.py

Bravo! You’ll get the response shown below:

Click on the link in the output message to view your app.

This is the home page of the app

Below is a prompt using the chatbot:

 

Below is the corresponding LLM log on Comet. Visit here to view this page. Make sure to click on “Columns” to select the table variables you want to see, as shown in the figure below:

 

Now let’s explore the app:

A video walkthrough of the app is available here: https://www.hitsubscribe.com/wp-content/uploads/2024/04/YouCut_20240409_113404658.mp4

Summary

To successfully create this dual-purpose app that integrates both a text-to-image generator and a custom chatbot, we followed these steps:

  • Step 1: Create a Comet account to log your LLM.
  • Step 2: Create an OpenAI account to access your OpenAI API keys.
  • Step 3: Install all dependencies.
  • Step 4: Configure your OpenAI API key.
  • Step 5: Write your code.

Thank you for your time!

Credit: Dataset from Peter Rust 
