How Large Language Models are Transforming Software Development
In today’s fast-paced digital world, Large Language Models (LLMs) are revolutionizing the way we work, communicate, and solve problems. Inspired by the wave of generative AI, I developed an open-source chat-completion project built on LangChain and OpenAI, and released version 0.1.0 in this GitHub repository. Feel free to contribute and make it better!
Chat Minal is a chat-completion tool that also ships predefined prompts for common user tasks such as summarizing, analyzing, and more.
chat-minal
chat-minal is a command-line tool that makes it easy to interact with the OpenAI Chat Completion API using LangChain, right from the terminal. It lets you send text prompts and receive AI-generated responses directly from the command line. You can customize the model, set parameters like temperature and token limits, and save the output to a file.
Version 0.1.1 adds predefined prompts: users can pick a task from a list of choices and have the model run it without typing any input text or prompt themselves.
Chatminal is a command-line interface (CLI) tool designed to integrate seamlessly into your daily workflow. By leveraging Groq’s free models, Chatminal can handle a range of tasks, from generating code snippets and writing documentation to translating source code between languages and even suggesting fixes for merge conflicts.
Imagine typing a simple command in your terminal to generate a full-fledged unit test suite or receive suggestions on optimizing your latest script. That’s the kind of power Chatminal puts at your fingertips.
How I Built Chatminal
Building Chatminal involved integrating the Groq API with Python, handling command-line arguments, and leveraging asynchronous programming with asyncio to process multiple tasks efficiently. The async design also prepares the project for future development, where the command-line CLI may be replaced by an API.
Instead of using the argparse library, I created a custom function, generic_set_argv, to dynamically parse command-line arguments. This function processes the arguments by iterating through the provided keywords, identifying their position in sys.argv, and storing their associated values or flags.
defgeneric_set_argv(*args):"""
Set the command line arguments passed to the script.
Args:
argv (list): The command line arguments to set.
"""parsed_args={}forkeyinargs:try:index=sys.argv.index(key)print(f"Index: {index}")iflen(sys.argv)>index+1andnotsys.argv[index+1].startswith("-"):parsed_args[key]=sys.argv[index+1]else:parsed_args[key]=TrueexceptValueError:print(f"Error: {key} must be an integer.")parsed_args[key]=Nonelogger.info(f"Command line arguments: {parsed_args}")returnparsed_args# Example how to use it
# Parse command-line arguments
arguments=generic_set_argv('--version','-v','--help','-h','--howto','--input_text','-i','--output','-o','--temperature','-t','--max_tokens','--api_key','-a','--model','-m','--base-url','-u')# Handle specific commands
ifarguments.get('--models'):api_key=arguments.get('--api_key')orarguments.get('-a')ifnotapi_key:logger.error("API Key is missing")returnget_models_from_open_ai=get_available_models(api_key=api_key)ifget_models_from_open_ai:logger.info("Available models from OpenAI:")pprint.pprint(get_models_from_open_ai)return# Display version information
ifarguments.get('--version')orarguments.get('-v'):print(f"{TOOL_NAME} version: {VERSION}")logger.info(f"{TOOL_NAME} version: {VERSION}")return
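To make the parsing behavior concrete, here is a quick, self-contained sketch of what generic_set_argv returns; the simulated command line and its values are hypothetical:

import sys

# Simulate a command line (hypothetical values, for illustration only).
sys.argv = ["chat_minal.py", "--input_text", "Summarize this file", "-t", "0.7", "-v"]

args = generic_set_argv("--input_text", "-t", "-v", "--output")
# Keywords followed by a value map to that value, bare flags map to True,
# and absent keywords map to None:
# {"--input_text": "Summarize this file", "-t": "0.7", "-v": True, "--output": None}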
Command-Line Interface (CLI): I built a custom function that parses arguments from keyword-value pairs, creating a flexible and user-friendly CLI that lets users specify tasks like code translation, documentation generation, and more, all from the terminal.
Asynchronous Requests: By utilizing Python’s asyncio, Chatminal can handle API requests concurrently, speeding up processing when multiple tasks are queued, and supports streaming via LangChain’s stream and astream methods (see the sketch after this list).
Error Handling and Logging: Robust error handling ensures that users receive meaningful error messages, while a logging system tracks activity for debugging purposes.
Dynamic File Handling: Chatminal can accept one or multiple files as input, reading their content and processing them based on user commands.
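To illustrate the concurrency point above, here is a minimal sketch, not Chatminal’s actual implementation, of fanning out several prompts at once with asyncio and LangChain’s astream; the model name and prompts are placeholders:

import asyncio
from langchain_openai import ChatOpenAI

async def run_task(llm: ChatOpenAI, prompt: str) -> str:
    # astream yields response chunks as the model produces them.
    parts = []
    async for chunk in llm.astream(prompt):
        parts.append(chunk.content)
    return "".join(parts)

async def main() -> None:
    # Assumes OPENAI_API_KEY (or a compatible base_url/api_key) is configured.
    llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model name
    prompts = [
        "Translate this function to Go: ...",
        "Write a docstring for this function: ...",
    ]
    # gather runs both requests concurrently instead of one after another.
    results = await asyncio.gather(*(run_task(llm, p) for p in prompts))
    for result in results:
        print(result)

asyncio.run(main())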
Example: Using Chatminal to Generate Completions
# First, set up argument parsing from the CLI.
# generic_set_argv is the same function shown above; alongside it,
# a helper reads file content so it can be passed to the chat completion.
import json

def get_file_content(file_path):
    """
    Read the content of the provided file.

    Args:
        file_path (str): Path to the file.

    Returns:
        str: Content of the file as a string.
    """
    try:
        if file_path.endswith(".json"):
            logger.info(f"Reading context from JSON file: {file_path}")
            with open(file_path, "r") as f:
                json_content = json.load(f)
            return json.dumps(json_content, indent=4)  # Convert JSON to a formatted string
        else:
            logger.info(f"Reading context from text file: {file_path}")
            with open(file_path, "r") as f:
                return f.read()
    except Exception as e:
        logger.error(f"Error reading file {file_path} at line {e.__traceback__.tb_lineno}: {e}")
        return None
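A quick usage sketch (the file paths are hypothetical):

# Read a JSON file (returned pretty-printed) and a plain-text source file.
context = get_file_content("examples/context.json")
source = get_file_content("examples/main.py")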
With these two functions, we can accept dynamic arguments based on what we define and feed the file contents we need into the chat completion.
generic_set_argv takes a list of argument keywords (like --help, --version, --input) and checks whether they are provided on the command line. If an argument exists, it is stored in a dictionary for easy access.
Read File Content Dynamically
Chatminal can take one or more files as input and process them accordingly. The get_file_content function above handles reading content from files, whether JSON or plain text. The completion function below then sends that content to the model:
# The function's name and signature were not shown in the original post;
# the name below is illustrative, and the parameters follow the docstring.
def generate_completion(input_text, output_file, temperature, max_tokens,
                        api_key, model, context, base_url=None):
    """
    Parameters:
        input_text (str): The input text to generate the completion.
        output_file (str): The output file to save the generated completion.
        temperature (str): The temperature for the completion.
        max_tokens (str): The maximum tokens for the completion.
        api_key (str): The OpenAI API key.
        model (str): The model for the completion.
        context (str): The context for the completion.

    Returns:
        str: The generated completion or None if an error occurs.
    """
    try:
        # LangChainOpenAI is assumed to be an alias for LangChain's ChatOpenAI,
        # pointed at an OpenAI-compatible endpoint (e.g. Groq) via base_url.
        response = LangChainOpenAI(
            base_url=base_url,
            api_key=api_key,
            # Fall back to sensible defaults when the user omits a parameter.
            model=model if model else "llama-3.1-8b-instant",
            temperature=temperature if temperature else 0.5,
            max_tokens=max_tokens if max_tokens else 100,
            max_retries=2,
        )
In the snippet above, the client falls back to sensible defaults whenever the user omits a parameter. Next, the message array defines the conversation’s context and the user’s input:
message=[("system",f"You are a professional analyst who is working on a different and you will use {context} as context and then provide answer based on user question.",),("human",f"{input_text}")]
As this shows, the loop streams the response from the LLM to the terminal token by token: end="" suppresses the newline after each token, and flush=True writes every token immediately, without buffering, so the user gets real-time feedback.
By integrating Large Language Models into tools like Chatminal, we can significantly improve productivity and streamline software development workflows, for example by generating a README.md from source code or assisting with code review.
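As a closing illustration, here is a hypothetical end-to-end sketch that combines the helpers from this post to draft a README from a source file; generate_completion is the illustrative name used above, and the file path and parameter values are invented:

source = get_file_content("app/main.py")  # hypothetical path
readme = generate_completion(
    input_text="Write a README.md that documents this project.",
    output_file="README.md",
    temperature=0.3,
    max_tokens=800,
    api_key="YOUR_API_KEY",
    model="llama-3.1-8b-instant",
    context=source,
)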