Working on Different Branches & Merging

arilloid (arinak1017)

Posted on September 27, 2024


We are already in the 4th week of our Open Source course, which means Hacktoberfest is coming! But before September ends, my classmates and I continue to work on the labs and learn/review essential Git features.

This week was more laid back. Instead of working on our classmates' repositories, we were adding new features to our own projects. The main requirement was that each feature had to be developed on a separate branch and merged into the main branch individually.

GitHub: arilloid / addcom

a CLI tool for adding comments to your source code files

ADDCOM

addcom is a CLI source code documenter tool which provides coders with an easy way to add comments to their source code files. Give it a relative/absolute path to your file and it will analyze its contents and add comments using a Large Language Model's chat completion.


See a demo of the functionality on YouTube: addcom-demo

Setup Instructions

Prerequisites

Make sure Python is installed on your system (you can download it here: https://www.python.org/downloads/).

1. After cloning the repo, cd into the project folder and simply run:

pip install .

2. Default: Create an account and generate the API key here: https://console.groq.com/

By default, addcom uses the Groq API endpoint for chat completion. However, you can specify a custom endpoint using the --base-url or -u flag option. (If you do this, make sure to obtain an appropriate API key and specify the model supported by the chosen provider using…

Features

The features I decided to implement were:

  1. Adding support for streaming the LLM responses to stdout (Issue #6)
  2. Allowing users to provide a context file as a comment-style reference (Issue #7)

Implementing the features

Feature 1: Streaming
Implementing the first feature was rather straightforward. I started by reading the 'Streaming' section of the OpenAI API docs, and learnt that the key to implementing the streaming functionality was setting the stream parameter to True. After that, I just needed to add proper parsing and printing of the streamed chunks.
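Here is a minimal sketch of what that looks like with the openai Python client (the endpoint, model name, and message content below are placeholders, not the exact addcom code):

from openai import OpenAI

# Placeholder client setup - addcom defaults to the Groq endpoint,
# but any OpenAI-compatible base URL works.
client = OpenAI(base_url="https://api.groq.com/openai/v1", api_key="YOUR_API_KEY")

stream = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # example model name
    messages=[{"role": "user", "content": "Add comments to this code ..."}],
    stream=True,  # the key parameter: receive the response in chunks
)

for chunk in stream:
    # Each chunk carries a partial delta; print it to stdout as it arrives.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)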


Feature 2: Context File
Figuring out the second feature was more of a hurdle. To ensure consistent results, I had to completely refactor the system prompt and simulate previous interactions using a few-shot approach.

To do this, I added a new function, build_prompt_messages(), which constructs the messages used as input for the LLM: it generates a system prompt that sets the assistant's behavior and then appends the user's code content, forming the structure of the conversation. If a sample file is provided, an additional message is added to simulate the user showing sample code to the assistant, to be used as a reference for the commenting style.

    # Add context file's contents as previous interaction - few-shot learning
    if context is not None:
        messages.append({"role": "user", "content": f"Example:\n{context}"})
        messages.append({
            "role": "assistant", 
            "content": (
                "Great! Please provide another example if you have one, "
                "or share the source code you'd like me to add comments to."
            )})
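For reference, here is a rough sketch of how build_prompt_messages() fits together as a whole; the exact prompt wording and parameter names are illustrative, not the actual addcom source:

def build_prompt_messages(content: str, context: str | None = None) -> list[dict]:
    # System prompt that sets the assistant's behavior.
    messages = [{
        "role": "system",
        "content": (
            "You are a source code documenter. Add helpful comments to the "
            "user's code without changing its behavior."
        ),
    }]

    # Optional few-shot exchange: pretend the user already showed a sample
    # file whose commenting style should be imitated.
    if context is not None:
        messages.append({"role": "user", "content": f"Example:\n{context}"})
        messages.append({
            "role": "assistant",
            "content": (
                "Great! Please provide another example if you have one, "
                "or share the source code you'd like me to add comments to."
            ),
        })

    # Finally, the user's actual file contents to be commented.
    messages.append({"role": "user", "content": content})
    return messages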

Merging

One of the main objectives of the lab was successfully merging the changes from two separate feature (issue) branches into main.

Merging the first branch, issue-6, was easy: it caused no conflicts, and Git completed the merge automatically (merging issue-6 commit).

Merging the issue-7 branch resulted in a three-way recursive merge. To my surprise, even though there were a couple of merge conflicts, everything went smoothly. My screen got split into three sections: current changes, incoming changes, and the target. I carefully examined the differences between the files, combined the changes in my code editor to resolve the merge conflicts, and, luckily, nothing broke (yay!) (merging issue-7 commit)

Outcomes

Although I was already familiar with merging and resolving conflicts, this lab gave me an opportunity to revisit these concepts and apply best practices, such as creating separate branches for each feature.
