šŸ¦– APIs, fetch and Deno, or how I made a rate limiter (part 1)

sevapp

Vsevolod

Posted on September 25, 2023


Hello, my name is Vsevolod! And you know what? I've grown very fond of Deno! And you know what else? I really love TypeScript! These two tools faithfully serve me on the challenging path of developing various integrations, automations, and all sorts of tools.

ā¤ļø And please, support the project by starring it on GitHub or subscribing to my Telegram channel IT NIGILIZM!
This motivates me to continue developing this topic, writing new articles, and improving this tool!

Of course, during our work, we often have to interact with various APIs. For example, these could be the Telegram Bot API or the Notion API. Do you know what they have in common? Well, all these APIs come with certain rate limits. For instance, when using the Notion API, it's recommended not to exceed an average of 3 requests per second. But what do you do if your application generates far more requests than the limits of a specific API allow? What if the minimum time intervals between requests can vary dynamically?
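A published limit can be turned into a minimum interval between requests with simple arithmetic. Here is a tiny helper of my own (not part of any API) that does the conversion:

```typescript
// Hypothetical helper: turn an advertised rate limit (requests per second)
// into the minimum delay between consecutive requests, in milliseconds.
// Notion's recommended ~3 requests per second becomes roughly 334 ms.
const intervalFor = (requestsPerSecond: number): number =>
  Math.ceil(1000 / requestsPerSecond);

console.log(intervalFor(3)); // 334
```

Rounding up with `Math.ceil` errs on the safe side: we'd rather wait a millisecond longer than trip the limit.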


šŸ’­ Let's explore:

The first thing that came to my mind was to create a queue of requests. The idea is simple: instead of calling fetch directly, we push all our requests into a queue (which could be a regular Array). Then, in a loop, we gradually process these requests, calling our beloved fetch for each one and storing the received data somewhere. Of course, the processing loop should respect the intervals between requests dictated by the limits of the specific API. Isn't it beautiful? Let's take a look:

import { delay } from "https://deno.land/std@0.202.0/async/delay.ts";

interface IMyRequest {
  id: number;
  url: string;
}

let id = 0;
const queue: IMyRequest[] = [];
const responses: { [id: number]: Response | Error } = {};

const loop = async (interval: number) => {
  while (true) {
    await delay(interval);
    const req = queue.shift();
    if (!req) continue;

    try {
      const response = await fetch(req.url);
      responses[req.id] = response;
    } catch (error) {
      // `error` is typed `unknown` in modern TypeScript, so normalize it
      responses[req.id] = error instanceof Error
        ? error
        : new Error(String(error));
    }
  }
};

const myFetch = (url: string): number => {
  id += 1;
  queue.push({
    id,
    url,
  });
  return id;
};

Excellent! Now we can create a single request or a series of requests to be executed sequentially with a 100ms interval. Yes, we'll have to wait for a while, and then retrieve the results from responses by their id:

loop(100);
myFetch(`https://jsonplaceholder.typicode.com/todos/1`);
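To actually retrieve a result by its id, one option is to poll the responses store until the entry appears. This `waitFor` helper is my own sketch, not part of the listing above (a local `delay` is inlined so the snippet runs on its own):

```typescript
// Inline stand-in for std's delay, so this snippet is self-contained.
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Hypothetical helper: poll a { id -> result } store until the entry
// with the given id shows up, then return it.
const waitFor = async <T>(
  store: { [id: number]: T },
  id: number,
  pollMs = 50,
): Promise<T> => {
  while (!(id in store)) {
    await delay(pollMs);
  }
  return store[id];
};
```

With it, `const id = myFetch(url); const res = await waitFor(responses, id);` gets us back to await-style code, at the cost of busy polling.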

Of course, tracking errors and responses in this example is not very convenient. We could add a callback to IMyRequest that would be invoked after the data arrives, but that would bring us right back to callback hell))) Still, this code already partially solves the problem!

šŸ’” Idea:

Yes, the plain fetch is much more convenient. Why? Because fetch returns a promise, and we can simply wait on it with either await or then (whichever you prefer). I think it would be great to preserve this mechanism. In other words, we need a fetch that lets us await a response but sends requests through a queue, in accordance with the specified limits. What if myFetch returned a Promise that resolves once the request makes it through the queue and either the data comes back or an error occurs? Then we could use async/await to wait for the data, even though the requests go through a queue. Let's give it a try:

import { delay } from "https://deno.land/std@0.202.0/async/delay.ts";

interface IMyRequest {
  url: string;
  resolve: (value: Response | Error) => void;
}

const queue: IMyRequest[] = [];

const myFetch = (url: string): Promise<Response | Error> => {
  const promise = new Promise<Response | Error>((resolve) => {
    queue.push({
      url,
      resolve,
    });
  });

  return promise;
};

const loop = async (interval: number) => {
  while (true) {
    await delay(interval);
    const req = queue.shift();
    if (!req) continue;

    try {
      const response = await fetch(req.url);
      req.resolve(response);
    } catch (error) {
      // normalize the `unknown` error before resolving with it
      req.resolve(error instanceof Error ? error : new Error(String(error)));
    }
  }
};

Amazing! Now we can create a large number of requests and consume their responses with await or then:

loop(100);

for (let i = 100; i--;) {
  myFetch(`https://jsonplaceholder.typicode.com/todos/${i}`).then(
    async (response) => {
      if (response instanceof Error) {
        console.log(response);
      } else {
        console.log(await response.json());
      }
    },
  );
}

šŸ Final result:

Not bad already)
But that was just an example, not "production" code. Using it as-is would be awkward: myFetch is far from perfect, and its interface needs to be brought much closer to fetch's.
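One way to close that gap (a sketch of my own, not necessarily what ends up in the package) is to give myFetch the same `(input, init?)` signature as fetch, and to reject the promise on errors instead of resolving with an Error:

```typescript
// Sketch: a queue entry that carries fetch-compatible arguments plus the
// promise callbacks, so myFetch can act as a drop-in fetch replacement.
interface IMyRequest {
  input: string | URL | Request;
  init?: RequestInit;
  resolve: (value: Response) => void;
  reject: (reason: unknown) => void;
}

const queue: IMyRequest[] = [];

// Same signature as fetch. The processing loop (omitted here, same shape
// as above) would call fetch(req.input, req.init) and invoke
// req.resolve / req.reject accordingly.
const myFetch = (
  input: string | URL | Request,
  init?: RequestInit,
): Promise<Response> =>
  new Promise((resolve, reject) => {
    queue.push({ input, init, resolve, reject });
  });
```

Rejecting instead of resolving with an Error means an ordinary try/catch around `await myFetch(...)` behaves exactly like it does with fetch.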

Well, I decided to create a basic version and publish it as a separate package on deno.land/x. Try it out:

import fetchify from "https://deno.land/x/fetchify@0.0.2/mod.ts";

const limiter = new fetchify.HTTPLimiter({
  interval: 100,
});

for (let i = 10; i--;) {
  console.log(i);
  await limiter.fetch(
    `https://jsonplaceholder.typicode.com/todos/${i}`,
  );
}

šŸ„° Conclusion!

I hope that these thoughts will be helpful to someone. I understand that there may already be existing modules for Deno or Node.js that partially solve this problem, but I was interested in figuring it out on my own. And don't let my imperfect English deter you. Thank you for your attention!
