Building a Web Crawler in Node.js to Discover AI-Powered JavaScript Repos on GitHub

GitHub is a treasure trove of innovative projects, especially in the ever-evolving world of artificial intelligence. But sifting through the countless repositories to find those that combine AI and JavaScript? That’s like finding gems in a vast sea of code. Enter our Node.js web crawler—a script that automates the search, extracting repository details like name, URL, and description.

In this tutorial, we’ll build a crawler that taps into GitHub, hunting down repositories that work with AI and JavaScript. Let’s dive into the code and start mining those gems.


Part 1: Setting Up the Project

Initialize the Node.js Project

Begin by creating a new directory for your project and initializing it with npm:

mkdir github-ai-crawler
cd github-ai-crawler
npm init -y


Next, install the necessary dependencies:

npm install axios cheerio

  • axios: For making HTTP requests to GitHub.
  • cheerio: For parsing and manipulating HTML, similar to jQuery.
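
If you haven't used cheerio before, here's a minimal sketch of its jQuery-like API (the HTML string is just an illustration):

const cheerio = require('cheerio');

// Load an HTML string, then query it with CSS selectors, jQuery-style
const $ = cheerio.load('<ul><li class="repo">alpha</li><li class="repo">beta</li></ul>');
$('.repo').each((_, el) => {
    console.log($(el).text()); // prints "alpha", then "beta"
});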

Part 2: Understanding GitHub’s Search

GitHub provides a powerful search feature accessible via URL queries. For example, you can search for JavaScript repositories related to AI with this query:

https://github.com/search?q=ai+language:javascript&type=repositories


Our crawler will mimic this search, parse the results, and extract relevant details.
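
Since the query string is just URL-encoded parameters, you can also build it programmatically rather than hard-coding it. Here's a small sketch using Node's built-in URLSearchParams (the query term is only an example):

// Build the search URL from parameters instead of hard-coding the string
const params = new URLSearchParams({
    q: 'ai language:javascript', // search term plus a language qualifier
    type: 'repositories',
});
console.log(`https://github.com/search?${params}`);
// https://github.com/search?q=ai+language%3Ajavascript&type=repositories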


Part 3: Writing the Crawler Script

Create a file named crawler.js in your project directory and start coding.

Step 1: Import Dependencies

const axios = require('axios');
const cheerio = require('cheerio');


We’re using axios to fetch GitHub’s search results and cheerio to parse the HTML.


Step 2: Define the Search URL

const SEARCH_URL = 'https://github.com/search?q=ai+language:javascript&type=repositories';


This URL targets repositories related to AI and written in JavaScript.


Step 3: Fetch and Parse the HTML

const fetchRepositories = async () => {
    try {
        // Fetch the search results page
        const { data } = await axios.get(SEARCH_URL);
        const $ = cheerio.load(data); // Load the HTML into cheerio

        // Extract repository details.
        // Note: these selectors depend on GitHub's current markup, which changes
        // over time - if you get no results, inspect the page and adjust them.
        const repositories = [];
        $('.repo-list-item').each((_, element) => {
            const link = $(element).find('a').first(); // first link is the repo link
            const repoName = link.text().trim();
            const repoUrl = `https://github.com${link.attr('href')}`;
            const repoDescription = $(element).find('.mb-1').text().trim();

            repositories.push({
                name: repoName,
                url: repoUrl,
                description: repoDescription,
            });
        });

        if (repositories.length === 0) {
            console.warn('No repositories matched - GitHub may have changed its markup.');
        }

        return repositories;
    } catch (error) {
        console.error('Error fetching repositories:', error.message);
        return [];
    }
};


Here’s what’s happening:

  • Fetching HTML: The axios.get method retrieves the search results page.
  • Parsing with cheerio: We use cheerio to navigate the DOM, targeting elements with classes like .repo-list-item.
  • Extracting details: For each repository, we pull out the name, URL, and description. GitHub's class names do change over time, so re-inspect the page and update the selectors if the crawler comes back empty.

Step 4: Display the Results

Finally, call the function and log the results:

(async () => {
    const repositories = await fetchRepositories();
    console.log('AI-Powered JavaScript Repositories Found:', repositories);
})();


Part 4: Running the Crawler

Save your script and run it with Node.js:

node crawler.js


Assuming the selectors still match GitHub's markup, you'll see a list of AI-related JavaScript repositories, each with its name, URL, and description, neatly displayed in your terminal.
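
The output is an array of plain objects, shaped something like this (the repository below is a made-up placeholder, not real output):

AI-Powered JavaScript Repositories Found: [
  {
    name: 'example-user/example-ai-repo',
    url: 'https://github.com/example-user/example-ai-repo',
    description: 'A hypothetical AI toolkit written in JavaScript.'
  },
  ...
]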


Part 5: Enhancing the Crawler

Want to take it further? Here are some ideas:

  1. Pagination: Add support for fetching multiple pages of search results by modifying the URL with &p=2, &p=3, etc. (see the sketch right after this list).
  2. Filtering: Filter repositories by stars or forks to prioritize popular projects.
  3. Saving Data: Save the results to a file or database for further analysis.
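
For pagination, here's a minimal sketch that reuses the extraction logic from fetchRepositories and walks the first few result pages via GitHub's p query parameter (the maxPages default is an arbitrary choice for illustration):

const fetchAllPages = async (maxPages = 3) => {
    const all = [];
    for (let page = 1; page <= maxPages; page++) {
        // GitHub's search UI paginates with the p query parameter
        const { data } = await axios.get(`${SEARCH_URL}&p=${page}`);
        const $ = cheerio.load(data);
        $('.repo-list-item').each((_, element) => {
            const link = $(element).find('a').first();
            all.push({
                name: link.text().trim(),
                url: `https://github.com${link.attr('href')}`,
                description: $(element).find('.mb-1').text().trim(),
            });
        });
    }
    return all;
};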

Example for saving to a JSON file:

const fs = require('fs');
const saveToFile = (data) => {
    fs.writeFileSync('repositories.json', JSON.stringify(data, null, 2));
    console.log('Data saved to repositories.json');
};

// Call saveToFile after fetching repositories
(async () => {
    const repositories = await fetchRepositories();
    saveToFile(repositories);
})();


The Beauty of Automation

With this crawler, you’ve automated the tedious task of finding relevant repositories on GitHub. No more manual browsing or endless clicking—your script does the hard work, presenting the results in seconds.


For more tips on web development, check out DailySandbox and sign up for our free newsletter to stay ahead of the curve!
