Python & OpenAI beginner journey 6 | Progress first, then frustration

Gregor Schafroth

Posted on January 3, 2024

Hi everyone,

Today I continued working on my Python Flask OpenAI chatbot app and also created a small program that displays the OpenAI message history for a given thread_id (I used this to debug my main chatbot) - you can find all the code below.

So, after a great day with a lot of progress yesterday (see post 5 in this series), I struggled more today.

What I wanted to do today was create a bot that can answer questions about Swiss Value Added Tax in German (specifically because I know someone who will pay me if I can get such a bot working for him).

So I started from the code I deployed yesterday on Heroku, added the relevant knowledge base, and made some modifications (changed bot instructions, button language, etc.). I share the full code below.

The good news is that Celery and Redis successfully solve my 30-second timeout error, and the chatbot now reliably answers even complicated prompts that take 2-3 minutes to complete.

The bad news is that the introduction of the knowledge base leads to rather wide variation in answer quality. Sometimes it's great, other times the answers are totally useless. Perhaps it has something to do with the fact that I just merged several PDFs into one large txt document to make this work (I am having this thought just as I am writing this text, so I will test it tomorrow!)
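One thing I might try for the merged-txt problem: instead of one big file, split the document at its chapter headings and upload the chapters as separate files (the Assistants API accepts a list of file_ids). A minimal sketch of the splitting step, assuming chapter headings look like "Kapitel 1 ..." on their own line (the regex pattern and the sample text here are my own guesses, not the real documents):

```python
import re

def split_into_chapters(text: str) -> list[str]:
    """Split a merged document into chunks, one per chapter heading.

    Assumes headings look like 'Kapitel <n> ...' at the start of a line;
    the pattern would need adjusting for the real source documents.
    """
    # Lookahead split: keep the heading with the chunk that follows it
    parts = re.split(r'(?m)^(?=Kapitel \d+)', text)
    return [p.strip() for p in parts if p.strip()]

# Tiny stand-in for the merged txt document
merged = (
    "Einleitung\n"
    "Kapitel 1 Steuerpflicht\nText zu Kapitel 1.\n"
    "Kapitel 2 Steuersatz\nText zu Kapitel 2.\n"
)
chapters = split_into_chapters(merged)
for i, chapter in enumerate(chapters):
    print(i, chapter.splitlines()[0])
```

Each chunk could then be written to its own file and passed to `client.files.create` individually, so retrieval works over smaller, more focused documents.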

The most frustrating thing about my situation now is that I don't have a clear way to move forward. If any of you sees how I could improve my bot's unpredictable answer quality, I would love to hear it!

And here is today's code:

"""
app.py is the main file for the Flask application. It contains the routes for the home page, the send_message endpoint, and the get_response polling endpoint.
Compared to the last iteration, knowledge retrieval was added.
"""

from celery_app import process_openai_response
from flask import Flask, render_template, request, jsonify
from openai_functions import create_assistant, create_thread, create_message, create_run, wait_for_response, retrieve_response
import logging

# Configuring logging
logging.basicConfig(level=logging.ERROR, format='%(asctime)s - %(levelname)s: %(message)s')
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG) 

# Setting up Flask
app = Flask(__name__)

# Initialize the OpenAI assistant and thread as global variables
assistant = None
thread = None

@app.route('/')
def home():
    return render_template('chat.html')

@app.route('/send_message', methods=['POST'])
def send_message():
    global assistant, thread  # Access the global variables

    # Initialize assistant and thread if they are None (first message)
    if assistant is None or thread is None:
        assistant = create_assistant()
        thread = create_thread()

    message = request.form['message']  # Get the received message
    if not message.strip():
        return jsonify({'message': 'Please enter a question.'})

    create_message(thread.id, message)
    task = process_openai_response.delay(thread.id, assistant.id)  # Dispatch Celery task
    return jsonify({'task_id': task.id})  # Return task ID to client

@app.route('/get_response/<task_id>')
def get_response(task_id):
    task = process_openai_response.AsyncResult(task_id)
    if task.state == 'PENDING':
        return jsonify({'status': 'waiting'})
    elif task.state == 'SUCCESS':
        return jsonify({'status': 'complete', 'message': task.result})
    return jsonify({'status': 'error'})   

if __name__ == '__main__':
    app.run(debug=True)
"""
celery_app.py supports the main Flask app by taking care of potentially long-running tasks.
"""

from celery import Celery
from openai_functions import create_assistant, create_thread, create_message, create_run, wait_for_response, retrieve_response
import logging
import os

# Configuring logging
logging.basicConfig(level=logging.ERROR, format='%(asctime)s - %(levelname)s: %(message)s')
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG) 

# Initialize Celery with the name of your application
celery = Celery(__name__)

# Configure Celery using environment variables
celery.conf.broker_url = os.getenv('REDIS_URL', 'redis://localhost:6379/0')
celery.conf.result_backend = os.getenv('REDIS_URL', 'redis://localhost:6379/0')

# Define tasks to be executed by Celery workers
@celery.task
def process_openai_response(thread_id, assistant_id):
    logger.info("Celery: Starting OpenAI response processing")
    try:
        run = create_run(thread_id, assistant_id)
        wait_for_response(thread_id, run)
        ai_response = retrieve_response(thread_id)
        return ai_response
    except Exception as e:
        logger.error(f"Error processing OpenAI response: {e}")
        raise
"""
openai_functions.py stores OpenAI related functions
"""

from openai import OpenAI
import logging
import os
import time

logging.basicConfig(level=logging.ERROR, format='%(asctime)s - %(levelname)s: %(message)s')
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG) 

api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)

def create_assistant():
    logger.debug('Uploading file...')
    file = client.files.create(
        file=open("mwst-branchen-info.txt", "rb"),
        purpose='assistants'
    )

    logger.debug('Creating assistant...')
    assistant = client.beta.assistants.create(
        name="SwissVAT Bot",
        instructions="You help businesses look up Swiss VAT regulations. Use mwst-branchen-info.txt to find the regulations (the user did not upload this, you were prepared by SwissVAT to have this available). You answer only in German. Your user is a lawyer, so always provide chapter number and name where your information is from, otherwise the information is useless. Provide answers in great detail",
        tools=[{"type": "retrieval"}],
        model="gpt-4-1106-preview",
        file_ids=[file.id]
    )
    return assistant

def create_thread():
    logger.debug('Creating thread...')
    thread = client.beta.threads.create()
    return thread

def create_message(thread_id, user_input):
    logger.debug('Creating message...')
    client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",
        content=user_input
    )

def create_run(thread_id, assistant_id):
    logger.debug('Creating run...')
    run = client.beta.threads.runs.create(
        thread_id=thread_id,
        assistant_id=assistant_id,
    )
    return run

def wait_for_response(thread_id, run):
    logger.info(f'run.status: {run.status}')
    while run.status == 'queued' or run.status == 'in_progress':
        logger.debug('Retrieving run...')
        run = client.beta.threads.runs.retrieve(
            thread_id=thread_id,
            run_id=run.id
        )
        time.sleep(1)
        logger.info(f'run.status: {run.status}')

def retrieve_response(thread_id):
    logger.debug('Retrieving messages...')
    messages = client.beta.threads.messages.list(
        thread_id=thread_id
    )
    if messages.data:
        # Retrieve the latest message object
        message = client.beta.threads.messages.retrieve(
            thread_id=thread_id,
            message_id=messages.data[0].id
        )
        # Extract the message content
        message_content = message.content[0].text
        annotations = message_content.annotations
        citations = []
        # Iterate over the annotations and add footnotes
        for index, annotation in enumerate(annotations):
            # Replace the text with a footnote
            message_content.value = message_content.value.replace(annotation.text, f' [{index}]')

            # Gather citations based on annotation attributes
            if (file_citation := getattr(annotation, 'file_citation', None)):
                cited_file = client.files.retrieve(file_citation.file_id)
                citations.append(f'[{index}] {file_citation.quote} aus {cited_file.filename}')
            elif (file_path := getattr(annotation, 'file_path', None)):
                cited_file = client.files.retrieve(file_path.file_id)
                citations.append(f'[{index}] Click <here> to download {cited_file.filename}')
                # Note: File download functionality not implemented above for brevity

        # Add footnotes to the end of the message before displaying to user
        message_content.value += '\n\n' + '\n'.join(citations)
        ai_response = message_content.value
    else:
        ai_response = 'No response from AI.'

    # Trim trailing whitespace and end with a blank line
    ai_response = ai_response.rstrip() + '\n\n'

    return ai_response
<!-- chat.html has all html, css, and js -->
<!DOCTYPE html>
<html>

<head>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Chatbot</title>
    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
    <style>
        body,
        html {
            height: 100%;
            margin: 0;
            padding: 0;
            display: flex;
            flex-direction: column;
        }

        #messageInput {
            width: calc(100% - 120px);
            padding: 0px;
            margin-bottom: 0px;
        }

        #chatbox {
            flex-grow: 1;
            overflow-y: auto;
            border: 1px solid #ccc;
            padding: 10px;
            margin-bottom: 0px;
        }

        #sendButton {
            padding: 10px 10px;
        }

        #inputArea {
            display: flex;
            justify-content: space-between;
            align-items: center;
            width: 100%;
            padding: 10px;
            box-shadow: 0px -2px 5px rgba(0, 0, 0, 0.1);
            z-index: 1000;
        }

        /* Input field inside the input area */
        #messageInput {
            flex-grow: 1;
            /* Allow the input to grow and fill available space */
            margin-right: 10px;
            /* Add some space between the input field and the button */
            padding: 10px;
            border: 1px solid #ccc;
            height: 40px;
            /* Specify the height */
        }

        /* Send button */
        #sendButton {
            padding: 10px 10px;
            border: 1px solid #ccc;
            height: 40px;
            /* Make sure this is the same as the input field */
            background-color: #f9f9f9;
            /* Just an example, set to your preferred color */
            cursor: pointer;
            /* Changes the cursor to signify this is clickable */
        }

        .bot-message {
            color: #880000;
        }

        @media (max-width: 600px) {

            #messageInput,
            #sendButton {
                flex-grow: 0;
                /* Prevent growing in smaller screens */
                width: calc(50% - 5px);
                /* Adjust width for smaller screens */
                font-size: 16px;
                /* Larger font size for readability on mobile */
            }

            /* This container will hold the chat interface and should fill the screen or the desired portion of the screen */
            #chatContainer {
                display: flex;
                flex-direction: column;
                height: 100vh;
                /* Use 100% of the viewport height, or adjust to desired height */
                /* ... other styles ... */
            }

            #chatbox {
                overflow-y: auto;
                font-size: 1.2em;
                min-height: 300px;
            }

            /* Add this to ensure all text is easily readable without zooming */
            body,
            input,
            button,
            select,
            textarea {
                font-size: 16px;
                /* Minimum recommended font size for mobile devices */
            }
        }

        *,
        *:before,
        *:after {
            -webkit-box-sizing: border-box;
            /* Safari/Chrome, other WebKit */
            -moz-box-sizing: border-box;
            /* Firefox, other Gecko */
            box-sizing: border-box;
            /* Opera/IE 8+ */
        }
    </style>
</head>

<body>
    <div id="chatbox">
        <!-- Messages will be displayed here -->
    </div>

    <!-- Input area fixed to the bottom -->
    <div id="inputArea">
        <input type="text" id="messageInput" placeholder="Nachricht eingeben...">
        <button id="sendButton" onclick="sendMessage()">Senden</button>
    </div>

    <script>
        $(document).ready(function () {
            $('#messageInput').keypress(function (event) {
                if (event.which === 13) { // 13 is the key code for Enter key
                    event.preventDefault(); // Prevents the default action of the Enter key (which is to submit the form)
                    sendMessage();
                }
            });
            setTimeout(function () {
                var welcomeMessage = "Guten Tag, haben Sie eine Frage zur Mehrwertsteuer?<br><br>";
                var timestamp = getCurrentTimestamp();
                $('#chatbox').append('<div class="bot-message">' + timestamp + ' Bot: ' + welcomeMessage + '</div>');
            }, 1000); // 1000 milliseconds = 1 second
        });
        function getCurrentTimestamp() {
            var now = new Date();
            return now.getHours().toString().padStart(2, '0') + ':' +
                now.getMinutes().toString().padStart(2, '0') + ':' +
                now.getSeconds().toString().padStart(2, '0');
        }
        function sendMessage() {
            var message = $('#messageInput').val();
            if (message) {
                var formattedMessage = message.replace(/\n/g, '<br>'); // Replace line breaks with <br>
                var timestamp = getCurrentTimestamp();
                $('#chatbox').append('<div>' + timestamp + ' Nachricht: ' + formattedMessage + '</div>');
                $('#messageInput').val('');
                $('#messageInput').prop('disabled', true); // Disable the input field
                $('#sendButton').prop('disabled', true); // Disable the send button
                $('#messageInput').attr('placeholder', '🤖 AI Antwort wird generiert... '); // Set loading message

                $.post('/send_message', { message: message }, function (data) {
                    if (data.task_id) {
                        checkTask(data.task_id);  // Call checkTask with the returned task ID
                    } else {
                        // Handle the case where no task ID is returned (error handling)
                        $('#chatbox').append('<div>Error: Unable to process message.</div>');
                        $('#messageInput').prop('disabled', false); // Re-enable the input field
                        $('#sendButton').prop('disabled', false); // Re-enable the send button
                        $('#messageInput').attr('placeholder', 'Nachricht eingeben...'); // Reset placeholder
                    }
                });
            }
        }
        function checkTask(taskId) {
            $.get('/get_response/' + taskId, function (data) {
                if (data.status === 'complete') {
                    var formattedResponse = data.message.replace(/\n/g, '<br>'); // Replace line breaks with <br>
                    var timestamp = getCurrentTimestamp();
                    $('#chatbox').append('<div class="bot-message">' + timestamp + ' SwissVAT Bot: ' + formattedResponse + '</div>');
                    $('#messageInput').prop('disabled', false);
                    $('#sendButton').prop('disabled', false);
                    $('#messageInput').attr('placeholder', 'Nachricht eingeben...');
                    $('#messageInput').focus();
                } else if (data.status === 'waiting') {
                    setTimeout(function () { checkTask(taskId); }, 2000);  // Poll every 2 seconds
                }
            });
        }
    </script>
</body>

</html>
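The front end above polls /get_response at a fixed 2-second interval. For prompts that take 2-3 minutes, polling with a capped exponential backoff would cut down the number of requests. A small Python sketch of that pattern (the `fetch` callable stands in for the actual HTTP request, so this is a sketch of the idea, not the deployed JavaScript):

```python
import time
from typing import Callable, Optional

def poll_with_backoff(fetch: Callable[[], dict],
                      first_delay: float = 1.0,
                      max_delay: float = 10.0,
                      max_attempts: int = 50) -> Optional[str]:
    """Poll `fetch` until it reports completion, doubling the delay each try.

    `fetch` should return a dict shaped like the /get_response responses:
    {'status': 'waiting'} or {'status': 'complete', 'message': ...}.
    """
    delay = first_delay
    for _ in range(max_attempts):
        data = fetch()
        if data.get('status') == 'complete':
            return data.get('message')
        if data.get('status') == 'error':
            return None
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # back off, capped at max_delay
    return None

# Stubbed fetch for a quick check: 'waiting' twice, then complete
responses = iter([{'status': 'waiting'},
                  {'status': 'waiting'},
                  {'status': 'complete', 'message': 'Antwort'}])
result = poll_with_backoff(lambda: next(responses), first_delay=0.01)
print(result)
```

The same schedule (1s, 2s, 4s, ... capped at 10s) would translate directly into the `setTimeout` call in `checkTask`.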

And next, my quickly put-together code that displays the message history based on an OpenAI thread_id. It's not very elegant and not finished, but it works 😄

"""
show.py displays a conversation with the OpenAI Assistants API based on a thread ID.
"""
import logging
import os

from openai import OpenAI

# Set up basic configuration for logging
logging.basicConfig(level=logging.ERROR, format='%(asctime)s - %(levelname)s: %(message)s')

# Create own logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)

thread_id = input('Enter thread_id: ')

logger.debug('get messages')
messages = client.beta.threads.messages.list(
    thread_id=thread_id
)

print(f'1 messages: {messages}\n')

logger.debug('get response')
message = client.beta.threads.messages.retrieve(
    thread_id=thread_id,
    message_id=messages.data[0].id
)

print(f'2 message: {message}\n')
print(f'3 message.content[0].text.value: {message.content[0].text.value}\n')

# Extract the message content
message_content = message.content[0].text
annotations = message_content.annotations
citations = []

# Iterate over the annotations and add footnotes
for index, annotation in enumerate(annotations):
    # Replace the text with a footnote
    message_content.value = message_content.value.replace(annotation.text, f' [{index}]')

    # Gather citations based on annotation attributes
    if (file_citation := getattr(annotation, 'file_citation', None)):
        cited_file = client.files.retrieve(file_citation.file_id)
        citations.append(f'[{index}] {file_citation.quote} from {cited_file.filename}')
    elif (file_path := getattr(annotation, 'file_path', None)):
        cited_file = client.files.retrieve(file_path.file_id)
        citations.append(f'[{index}] Click <here> to download {cited_file.filename}')
        # Note: File download functionality not implemented above for brevity

# Add footnotes to the end of the message before displaying to user
message_content.value += '\n' + '\n'.join(citations)

# Print the final message with footnotes
print(f'4 message_content.value: {message_content.value}\n')
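The footnote logic above is duplicated between `retrieve_response` and show.py. Pulling it into one small pure function would make it testable without any API call. A sketch of that refactor (the `filename_for` callable replaces the `client.files.retrieve` lookup so the function stays pure; the stand-in classes below only mimic the annotation shapes and are my own names):

```python
from typing import Callable

def add_footnotes(text: str, annotations: list,
                  filename_for: Callable[[str], str]) -> str:
    """Replace annotation snippets with [n] markers and append citations.

    `annotations` are objects with .text and an optional .file_citation
    (having .file_id and .quote), mirroring the Assistants API objects.
    """
    citations = []
    for index, annotation in enumerate(annotations):
        # Swap the raw annotation text for a footnote marker
        text = text.replace(annotation.text, f' [{index}]')
        file_citation = getattr(annotation, 'file_citation', None)
        if file_citation:
            name = filename_for(file_citation.file_id)
            citations.append(f'[{index}] {file_citation.quote} aus {name}')
    if citations:
        text += '\n\n' + '\n'.join(citations)
    return text

# Tiny stand-in objects for a quick offline check
class _Cite:
    def __init__(self, file_id, quote):
        self.file_id, self.quote = file_id, quote

class _Ann:
    def __init__(self, text, file_citation=None):
        self.text, self.file_citation = text, file_citation

result = add_footnotes(
    'Siehe SRC1.',
    [_Ann('SRC1', _Cite('file-1', 'Art. 10'))],
    lambda file_id: 'mwst-branchen-info.txt',
)
print(result)
```

Both `retrieve_response` and show.py could then call `add_footnotes(message_content.value, message_content.annotations, lambda fid: client.files.retrieve(fid).filename)` and stay in sync.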

Any thoughts? If you made it this far, I would love to see your comment, whatever you might think 😊🙏
