I fine-tuned my model on a new programming language. You can do it too! 🚀


Nevo David

Posted on April 25, 2024


I have been using OpenAI's GPT-4 for a while now.
I don't have much bad to say about it.
But sometimes, it's not enough.

At Winglang, we wanted to use OpenAI's GPT-4 to answer people's questions based on our documentation.

Your options are:

  • Use the OpenAI Assistants API or any other vector database for retrieval-augmented generation (RAG). It worked reasonably well since Wing looks like JS, but there were still many mistakes.
  • Pass the entire documentation into the context window, which is super expensive.
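To see why the first option is attractive, here is a toy sketch of the RAG idea: embed the docs, embed the question, retrieve the closest chunk, and prepend it to the prompt. The vectors below are hand-made stand-ins for a real embedding model, and the doc snippets are invented for illustration:

```python
import math

# Toy corpus: hand-made vectors stand in for a real embedding model,
# and the snippets are invented Wing doc fragments for illustration.
docs = {
    "variables": ([1.0, 0.1, 0.0], "Use `let` to declare a variable in Wing."),
    "functions": ([0.0, 1.0, 0.2], "Use `new cloud.Function(...)` for a lambda."),
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend this vector embeds the question "how do I declare a variable?"
question_vec = [0.9, 0.2, 0.0]

# Retrieve the closest doc chunk and stuff it into the prompt.
best = max(docs.values(), key=lambda d: cosine(question_vec, d[0]))
prompt = f"Answer using this context:\n{best[1]}\n\nQuestion: how do I declare a variable?"
print(prompt)
```

In practice a real embedding model and a vector store replace the toy pieces; the weak spot is that the retrieved context still has to be interpreted by a model that never learned Wing syntax.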

Soon enough, we realized that was not going to work.
It's time to host our own LLM.



Your LLM dataset

Before we train our model, we need to create the data it will be trained on: in our case, the Winglang documentation. I will do something pretty simple.

  1. Extract all the URLs from the sitemap, send a GET request to each, and collect the content.
  2. Parse it; we want to convert all the HTML into readable content.
  3. Run it through GPT-4 to convert the content into a CSV dataset.
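Steps 1 and 2 can be sketched in a few lines of standard-library Python. The sitemap URL and the inline XML sample below are assumptions for illustration; a real run would GET the live sitemap instead:

```python
from xml.etree import ElementTree
from html.parser import HTMLParser

# Hypothetical sitemap snippet -- in practice you would send a GET
# request to the docs sitemap (e.g. https://www.winglang.io/sitemap.xml,
# an assumed URL) and feed the response body in here.
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.winglang.io/docs/concepts/inflights</loc></url>
  <url><loc>https://www.winglang.io/docs/language-reference</loc></url>
</urlset>"""

def extract_urls(sitemap_xml: str) -> list[str]:
    """Step 1: pull every <loc> URL out of the sitemap XML."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ElementTree.fromstring(sitemap_xml)
    return [loc.text for loc in root.findall(".//sm:loc", ns)]

class TextExtractor(HTMLParser):
    """Step 2: collect the text nodes of an HTML page, skipping tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

urls = extract_urls(SITEMAP)
print(urls)
print(html_to_text("<h1>Variables</h1><p>Use <code>let</code> to declare.</p>"))
```

A real crawler would add request throttling and error handling, but the shape of the pipeline is the same.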


Once you finish, save the CSV with a single column named text containing the question and the answer. We will use it later. It should look something like this:

text
<s>[INST]How to define a variable in Winglang[/INST] let a = 'Hello';</s>
<s>[INST]How to create a new lambda[/INST] bring cloud; let func = new cloud.Function(inflight () => { log('Hello from the cloud!'); });</s>

Save it on your computer in a new folder called data.
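Writing that file can be sketched like this: each question/answer pair is wrapped in the same <s>[INST]...[/INST] template shown above. The file name data/train.csv is my choice here; the pairs are the two examples from the table:

```python
import csv
import pathlib

# The two example question/answer pairs from the table above.
pairs = [
    ("How to define a variable in Winglang", "let a = 'Hello';"),
    ("How to create a new lambda",
     "bring cloud; let func = new cloud.Function(inflight () => "
     "{ log('Hello from the cloud!'); });"),
]

# One column named "text", each row wrapped in the [INST] template.
pathlib.Path("data").mkdir(exist_ok=True)
with open("data/train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])
    for question, answer in pairs:
        writer.writerow([f"<s>[INST]{question}[/INST] {answer}</s>"])
```
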


AutoTrain your model

My computer is pretty weak, so I decided to go with a smaller model - 7B parameters: mistralai/Mistral-7B-Instruct-v0.2 (the same checkpoint passed to the training command below).

There are millions of ways to train a model. We will use Hugging Face AutoTrain and its CLI, without running any Python code 🚀

When you use AutoTrain, you can train on your own computer (my approach here) or pay to train larger models on Hugging Face's servers.

I have no GPU on my old MacBook Pro M1 (2021). Thank you, Apple 🍎.

Let's install AutoTrain.

pip install -U autotrain-advanced
autotrain setup > setup_logs.txt

Then, all we need to do is run the autotrain command:

autotrain llm \
--train \
--model "mistralai/Mistral-7B-Instruct-v0.2" \
--project-name "autotrain-wing" \
--data-path data/ \
--text-column text \
--lr "0.0002" \
--batch-size "1" \
--epochs "3" \
--block-size "1024" \
--warmup-ratio "0.1" \
--lora-r "16" \
--lora-alpha "32" \
--lora-dropout "0.05" \
--weight-decay "0.01" \
--gradient-accumulation "4" \
--quantization "int4" \
--mixed-precision "fp16" \
--peft
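A note on two of the flags above: with --batch-size 1 and --gradient-accumulation 4, gradients are accumulated over four forward/backward passes before each optimizer step, so the effective batch size is their product:

```python
batch_size = 1              # --batch-size: samples per forward pass
gradient_accumulation = 4   # --gradient-accumulation: passes per optimizer step
effective_batch_size = batch_size * gradient_accumulation
print(effective_batch_size)  # 4
```

This is the standard trick for training on low-memory hardware: you trade wall-clock time for the gradient quality of a larger batch.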

Once finished, you will have a new directory called "autotrain-wing" containing the fine-tuned model 🚀


Playing with the model

To play with the model, start by running:

pip install transformers torch

Once completed, create a new Python file named invoke.py with the following code:

from transformers import pipeline

# Path to your local fine-tuned model directory
model_path = "./autotrain-wing"

# We fine-tuned a causal language model, so load it with the
# text-generation pipeline (text-classification would be wrong here).
generator = pipeline("text-generation", model=model_path, tokenizer=model_path)

# Prompt in the same [INST] format used in the training data
prompt = "<s>[INST]How to define a variable in Winglang[/INST]"
result = generator(prompt, max_new_tokens=100)
print(result[0]["generated_text"])

Then run it from the CLI:

python invoke.py

And you are done 🚀


Keep on working on your LLMs

I am still learning about LLMs.
One thing I realized is that it's not so easy to track changes to your models.

You can't really use Git for this, because a model can be very large - over 100 GB - and Git doesn't handle files that size nicely.

A better way to do this is with a tool called KitOps.

I think it will soon become a standard in the LLM world, so make sure you star the library so you can find it later.

  1. Download the latest KitOps release and install it.

  2. Go to the model folder and run the command to pack your LLM:

    kit pack .
    
  3. You can also tag it and push it to a registry such as Docker Hub:

    kit pack . -t [your registry address]/[your repository name]/mymodelkit:latest
    kit push [your registry address]/[your repository name]/mymodelkit:latest
    

    💡 To learn how to use Docker Hub, check this


โญ๏ธ Star KitOps so you can find it again later โญ๏ธ



I started a new YouTube channel, mostly about open-source marketing :)

(Like how to get stars, forks, and clients)

If that's something that interests you, feel free to subscribe to it here:
https://www.youtube.com/@nevo-david?sub_confirmation=1
