GENEREADME is a command-line tool that takes in a source code file and generates a README.md file that explains the code in the file by utilizing an LLM.
Contributions
Contributions to GENEREADME are welcome! Please check out CONTRIBUTING.md for guidelines on setting up the environment, running and testing the tool, and submitting changes.
GENEREADME
GENEREADME is a command-line tool that takes in a file, processes it, and generates a README file with an explanation or documentation of the contents of the file. The tool utilizes OpenAI chat completion to analyze the file and generate content.
Usage
Install the tool by running the following command:
npm i -g genereadme
The tool currently supports Groq and OpenRouter, using Groq by default. A valid API key for the chosen provider must be provided.
Provide a valid API key either by creating a .env file or through the -a or --api-key flag when using the command:
API_KEY=API_KEY
or
genereadme <files> -a API_KEY
genereadme <files> --api-key API_KEY
Run the tool with the existing sample files or start using your own:
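For example (the sample file names below are hypothetical, for illustration only):

```shell
# Generate a README from one of the sample files
genereadme examples/sample.js

# Or pass several of your own files at once
genereadme src/index.js src/utils.js
```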
Publishing the tool itself was no trouble at all; however, there were some extra steps I had to take to ensure a proper release.
Research
Before I focused on the release process, I took some time to research best practices and steps for publishing a package to the npm registry. Here's what I learned:
1. How to publish a package to npm
To understand the basic process, I referred to the official npm documentation. This guide provided an overview of essential steps, including setting up a package.json, running npm publish, and managing versions.
2. Using Semantic Versioning
Versioning plays a crucial role in signaling changes to users. I looked at the principles of Semantic Versioning, which uses the MAJOR.MINOR.PATCH format to describe breaking changes, new features, and bug fixes. This ensured my tool would have a meaningful version number for each release.
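To make the MAJOR.MINOR.PATCH format concrete, here is a small sketch (a hypothetical helper, not part of GENEREADME) showing how each level of bump maps to the kind of change being shipped:

```javascript
// Bump a MAJOR.MINOR.PATCH version string one level.
// MAJOR = breaking change, MINOR = new feature, PATCH = bug fix.
function bump(version, level) {
  let [major, minor, patch] = version.split(".").map(Number);
  if (level === "major") { major += 1; minor = 0; patch = 0; }
  else if (level === "minor") { minor += 1; patch = 0; }
  else { patch += 1; }
  return `${major}.${minor}.${patch}`;
}

console.log(bump("1.0.0", "patch")); // 1.0.1 — bug fix
console.log(bump("1.0.1", "minor")); // 1.1.0 — new feature
console.log(bump("1.1.0", "major")); // 2.0.0 — breaking change
```

In practice `npm version patch|minor|major` performs the same bump directly on package.json.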
3. Managing .npmignore
I researched how to effectively use the .npmignore file to ensure my npm package includes only the necessary files for end-users. By carefully creating this file, I was able to exclude development-specific files like configuration files, tests, and documentation that aren't required in the published package. This not only reduced the size of the package but also made it more professional by focusing solely on what the users actually need to run the tool. Properly managing .npmignore is a critical step in preparing a polished release.
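As a rough sketch, the kinds of entries such a file might contain look like this (illustrative only, not my exact .npmignore):

```
# Development-only files excluded from the published package
tests/
outputs/
.env
.github/
CONTRIBUTING.md
```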
Release
After doing my research, I double-checked the requirements, looked for possible bugs, and made sure no tests were failing.
Once everything looked good to me, I proceeded to publish the package by running:
npm publish
NOTE: Running this command requires being logged in to an npm account, which can be done with:
npm login
Testing
Now that I had published v1.0.0 of GENEREADME, it was time to ask some end users to test whether the package works.
As expected, a couple of bugs made it through: one that does not affect how the tool performs, and one that actually breaks it.
Adjustment
Version command bug
The simple bug involves the genereadme -v command, which should print the tool's name and current version. However, I had coded it to retrieve the name and version from the package.json in the current working directory. This means that when end users run the command inside their own project, it displays the name and version of THEIR project instead of mine. The fix was simple: make sure the tool always retrieves them from its own package.json.
Outputs directory bug
Now this one is a bug that breaks the tool. It technically worked in my local testing, but I had missed a simple test case.
The project's folder structure only had developers and contributors in mind, so having the outputs folder in the project was expected; I had even pushed one to the main repo containing a sample output.
I had to keep in mind that things would be slightly different for the end user.
Previously, the code simply wrote to the outputs/ directory without checking whether it existed, let alone creating it if it didn't. This caused the manual testing of the published package to fail: since end users did not have an outputs/ directory, the tool would just fail instead of creating one.
After this discovery, I thought the fix was pretty simple, right?
I pushed the changes thinking, "Okay, that fixes it!" but to my surprise, my CI failed!
The culprit:
if (!fs.existsSync("./outputs")) {
  fs.mkdirSync("./outputs");
}
So this is how I check for and create the outputs/ directory. However, in my end-to-end tests I was mocking fs.existsSync() to return false, for this reason:
export function readConfigFile() {
  const homeDir = os.homedir();
  const configFilePath = path.join(homeDir, "./genereadme-config.toml");
  if (fs.existsSync(configFilePath)) {
    try {
      const configFileContent = fs.readFileSync(configFilePath, "utf-8");
      return toml.parse(configFileContent);
    } catch (error) {
      console.error("Error parsing the config file:", error.message);
      process.exit(1);
    }
  }
  return {};
}
This function checks for a TOML config file using fs.existsSync(). Since I did not want a TOML config file involved in my end-to-end testing, I had mocked this method to return false, which now conflicted with the bugfix I made.
I have yet to master mocking and find ways to return different mock values for the same method under different conditions. Until I learn that procedure, I made a temporary fix: ensure the outputs/ directory gets removed before every test case.