chatminal

fadingNA · Posted on September 5, 2024
How Large Language Models are Transforming Software Development

In today’s fast-paced digital world, Large Language Models (LLMs) are revolutionizing the way we work, communicate, and solve problems. Riding that wave, I developed an open-source project that builds a foundation for chat completion on top of LangChain and OpenAI, and released version 0.1.0 in the GitHub repository below. Feel free to contribute and make it better!

GitHub: fadingNA / chat-completion-api

Chatminal provides chat completions plus predefined prompts for common tasks, e.g. summarize, analyze, and more.

chat-minal

chat-minal is a command-line tool that makes it easy to interact with the OpenAI Chat Completion API through LangChain. It lets you send text prompts and receive AI-generated responses directly from the terminal. You can customize the model, set parameters like temperature and token limits, and save the output to a file.

Version 0.1.1 adds predefined prompts: the user can pick a choice and run a generative-AI task without typing any input text or prompt, as sketched below.
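A minimal sketch of how such predefined prompts could be wired up; the prompt texts and the resolve_prompt helper here are illustrative, not the exact 0.1.1 code:

# Hypothetical mapping from a menu choice to a canned prompt.
PREDEFINED_PROMPTS = {
    "1": "Summarize the following text.",
    "2": "Analyze the following text and list the key points.",
    "3": "Review the following code and suggest improvements.",
}

def resolve_prompt(choice, input_text):
    # Prefer the canned prompt; fall back to free-form input text.
    return PREDEFINED_PROMPTS.get(choice) or input_text or ""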

References: LangChain documentation

Demo: a walkthrough of chatminal

Demo link: chat-minal tutorial

Example of Usage

Figure 1: Code review in the chat completion tool

Figure 2: Converting JSON to CSV using the tool

Figure 3: Generating Markdown from text

Figure 4: Summarizing text in the chat completion tool

Overview

This tool allows you to interact with ChatOpenAI from LangChain via a command-line interface (CLI). You can provide input…

What is Chatminal?

Chatminal is a command-line interface (CLI) tool designed to integrate seamlessly into your daily workflow. By leveraging Groq's free models, Chatminal can handle a range of tasks, from generating code snippets and writing documentation to translating source code between languages and even suggesting fixes for merge conflicts.

Imagine typing a simple command in your terminal to generate a full-fledged unit test suite or receive suggestions on optimizing your latest script. That’s the kind of power Chatminal puts at your fingertips.

How I Built Chatminal

Building Chatminal involved integrating the Groq API with Python, parsing command-line arguments, and leveraging asynchronous programming with asyncio to handle multiple tasks efficiently. The async design also prepares for future development, where the tool could grow from a CLI into an API.

Instead of using the argparse library, I created a custom function, generic_set_argv, to dynamically parse command-line arguments. This function processes the arguments by iterating through the provided keywords, identifying their position in sys.argv, and storing their associated values or flags.

import sys
import json
import logging
import pprint

logger = logging.getLogger(__name__)

def generic_set_argv(*args):
    """
    Parse the command line arguments passed to the script.

    Args:
    args: The argument keywords (e.g. '--model', '-m') to look for.
    """
    parsed_args = {}
    for key in args:
        try:
            index = sys.argv.index(key)
            # A flag followed by a value keeps the value; a bare flag becomes True.
            if len(sys.argv) > index + 1 and not sys.argv[index + 1].startswith("-"):
                parsed_args[key] = sys.argv[index + 1]
            else:
                parsed_args[key] = True
        except ValueError:
            # sys.argv.index raises ValueError when the key is absent.
            parsed_args[key] = None
    logger.info(f"Command line arguments: {parsed_args}")
    return parsed_args

# Example of how to use it inside the CLI entry point:
    # Parse command-line arguments ('--models' must appear in this list
    # so the lookup below can find it)
    arguments = generic_set_argv(
        '--version', '-v', '--help', '-h', '--howto',
        '--input_text', '-i', '--output', '-o',
        '--temperature', '-t', '--max_tokens',
        '--api_key', '-a', '--model', '-m', '--models',
        '--base-url', '-u'
    )

    # Handle the --models command
    if arguments.get('--models'):
        api_key = arguments.get('--api_key') or arguments.get('-a')
        if not api_key:
            logger.error("API Key is missing")
            return
        get_models_from_open_ai = get_available_models(api_key=api_key)
        if get_models_from_open_ai:
            logger.info("Available models from OpenAI:")
            pprint.pprint(get_models_from_open_ai)
            return

    # Display version information
    if arguments.get('--version') or arguments.get('-v'):
        print(f"{TOOL_NAME} version: {VERSION}")
        logger.info(f"{TOOL_NAME} version: {VERSION}")
        return

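For example, given the command line python app.py -i "Hello" -t 0.7 -v, the parser yields the following (app.py and the simulated sys.argv are illustrative):

import sys

# Simulate a command line for demonstration purposes.
sys.argv = ["app.py", "-i", "Hello", "-t", "0.7", "-v"]
print(generic_set_argv("-i", "-t", "-v", "-o"))
# {'-i': 'Hello', '-t': '0.7', '-v': True, '-o': None}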
  • Command-Line Interface (CLI): I built a custom function that collects arguments as keyword/value pairs, giving a flexible and user-friendly CLI that lets users specify tasks like code translation, documentation generation, and more, all from the terminal.

  • Asynchronous Requests: By utilizing Python’s asyncio, Chatminal can handle API requests concurrently, speeding up processing when multiple tasks run at once, and enabling streaming via LangChain’s stream or astream (see the sketch after this list).

  • Error Handling and Logging: Robust error handling ensures that users receive meaningful error messages, while a logging system tracks activity for debugging purposes.

  • Dynamic File Handling: Chatminal can accept one or multiple files as input, read their content, and process them based on user commands.
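As a minimal sketch of that asynchronous streaming path, assuming the langchain-openai package; the Groq endpoint, environment variable, and helper name are assumptions for this sketch, not Chatminal's exact code:

import asyncio
import os
from langchain_openai import ChatOpenAI

async def stream_completion(prompt):
    # Point ChatOpenAI at Groq's OpenAI-compatible endpoint (an assumption
    # for this sketch); defaults mirror the ones used later in this post.
    llm = ChatOpenAI(
        base_url="https://api.groq.com/openai/v1",
        api_key=os.environ["GROQ_API_KEY"],
        model="llama-3.1-8b-instant",
        temperature=0.5,
        max_tokens=100,
    )
    chunks = []
    # astream yields message chunks as the API produces them.
    async for chunk in llm.astream(prompt):
        print(chunk.content, end="", flush=True)  # real-time feedback
        chunks.append(chunk.content)
    return "".join(chunks)

if __name__ == "__main__":
    asyncio.run(stream_completion("Summarize asyncio in one sentence."))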

Example: Using Chatminal to Generate Completions

# generic_set_argv (shown above) takes care of the CLI arguments.
# Next, a helper reads the content of the input files:
def get_file_content(file_path):
    """
    Read the content of the provided file.

    Args:
    file_path (str): Path to the file.

    Returns:
    str: Content of the file as a string.
    """
    try:
        if file_path.endswith(".json"):
            logger.info(f"Reading context from JSON file: {file_path}")
            with open(file_path, "r") as f:
                json_content = json.load(f)
                return json.dumps(json_content, indent=4)  # Convert JSON to a formatted string
        else:
            logger.info(f"Reading context from text file: {file_path}")
            with open(file_path, "r") as f:
                return f.read()
    except Exception as e:
        logger.error(f"Error reading file {file_path} at line {e.__traceback__.tb_lineno}: {e}")
        return None
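For example, combining several input files into one context string (the file names here are illustrative):

# Hypothetical input files passed on the command line.
paths = ["notes.txt", "data.json"]
contents = [get_file_content(p) for p in paths]
context = "\n\n".join(c for c in contents if c is not None)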

With these two functions, we can accept dynamic arguments based on what we define and gather the content we need to pass into the chat completion.

  • This function takes a list of argument keywords (like --help, --version, --input_text) and checks whether they were provided on the command line. If an argument exists, it stores the value in a dictionary for easy access.
  • Read File Content Dynamically

Chatminal can take one or more files as input and process them accordingly; get_file_content reads the content, whether JSON or plain text. The completion function then wires everything into a LangChain chat model:


# The original snippet starts mid-function; the signature below is
# reconstructed from the docstring, and LangChainOpenAI is assumed to be
# an alias for LangChain's ChatOpenAI class.
def get_completion(input_text, output_file, temperature, max_tokens,
                   api_key, model, context, base_url=None):
    """
    Generate a chat completion.

    Parameters:
    input_text (str): The input text to generate the completion.
    output_file (str): The output file to save the generated completion.
    temperature (str): The temperature for the completion.
    max_tokens (str): The maximum tokens for the completion.
    api_key (str): The OpenAI API key.
    model (str): The model for the completion.
    context (str): The context for the completion.

    Returns:
    str: The generated completion or None if an error occurs.
    """
    try:
        response = LangChainOpenAI(
            base_url=base_url,
            api_key=api_key,
            model=model if model else "llama-3.1-8b-instant",
            temperature=temperature if temperature else 0.5,
            max_tokens=max_tokens if max_tokens else 100,
            max_retries=2,
        )

In the snippets below, the message list defines the conversation’s context and the user’s input, and the loop streams the output from the Groq API directly to the terminal, allowing real-time feedback. The flush=True argument ensures the output is written to the terminal immediately, without buffering.


        message = [
            (
                "system",
                f"You are a professional analyst working on different tasks; you will use {context} as context and then provide an answer based on the user's question.",
            ),
            (
                "human", f"{input_text}"
            )
        ]

In the snippet below, the streaming response from the LLM is printed token by token; end="" keeps the tokens on one line, and flush=True flushes each token as soon as it is printed.

        answer = []
        print("\n")
        print("*" * 100)
        for chunk in response.stream(message):
            print(chunk.content, end="", flush=True)
            answer.append(chunk.content)
        print("\n")
        print("*" * 100)

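The original snippet ends before the save step; a plausible continuation (an assumption, not the repository's exact code) collects the streamed tokens, honors the output_file parameter from the docstring, and closes the try block:

        result = "".join(answer)
        if output_file:
            # Persist the completion when the user passed --output / -o.
            with open(output_file, "w") as f:
                f.write(result)
        return result
    except Exception as e:
        logger.error(f"Error generating completion: {e}")
        return None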

Features available via command-line arguments

Option               Description
-h, --help, --howto  Show the help message.
-v, --version        Show the version of the tool.
--input_text, -i     Input text to generate the completion.
--output, -o         Output file to save the generated completion.
--temperature, -t    Temperature for the completion.
--max_tokens         Maximum tokens for the completion.
--api_key, -a        OpenAI API key.
--model, -m          Model for the completion.
--models             List all available models on OpenAI.
--select_choices     Use a predefined prompt.
--token-usage        Show token usage for the completion.
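Putting it together, a typical invocation might look like this (the script name chat_minal.py is illustrative; the flags come from the table above):

python chat_minal.py --input_text "Review this code" --api_key $API_KEY --model llama-3.1-8b-instant --temperature 0.7 --output review.md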

Conclusion

By integrating Large Language Models into tools like Chatminal, we can significantly improve productivity and streamline software development processes, for example by generating a README.md from source code, performing code review, and more.
