Architecture planning for a filtering layer on top of multiple public search APIs?

fetchworkglenn

Glenn

Posted on April 14, 2020


I'm in the beginning stages of creating a service that gives users more filtering options on top of multiple public APIs.

I think I've gathered the basic pros and cons of setting something like this up with different stacks, and I wanted to see if I'm missing anything big or have anything wrong. (Or if you have any other recommendations.)

Option 1:
Normal front end + back end for auth and config saves to DB.

Front end > User requests search > Load user search configs > Request from individual APIs > Filter > Display
Back end > Handles user auth and config saves
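
To make that flow concrete, here's a minimal client-side sketch in TypeScript; the two endpoints, the result shape, and the config fields are all placeholders, not real APIs:

```typescript
// Minimal client-side sketch of Option 1: fan out to the public APIs in parallel,
// merge whatever comes back, then apply the user's saved filters in the browser.
// The endpoint URLs, result shape, and config fields below are placeholders.
interface SearchResult { title: string; price: number; source: string; }
interface UserConfig { maxPrice: number; keywords: string[]; }

async function searchAll(query: string, config: UserConfig): Promise<SearchResult[]> {
  const sources = [
    `https://api.example-one.com/search?q=${encodeURIComponent(query)}`,
    `https://api.example-two.com/v2/items?query=${encodeURIComponent(query)}`,
  ];

  // Fire all requests in parallel and tolerate individual source failures.
  const settled = await Promise.allSettled(
    sources.map(async (url) => (await fetch(url)).json() as Promise<SearchResult[]>)
  );
  const merged = settled.flatMap((r) => (r.status === "fulfilled" ? r.value : []));

  // The extra filtering layer the public APIs don't offer.
  return merged.filter(
    (item) =>
      item.price <= config.maxPrice &&
      config.keywords.every((kw) => item.title.toLowerCase().includes(kw.toLowerCase()))
  );
}
```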


Pros:

  • Simple to set up.
  • Public API endpoint requests come from the client, so there's less cost for my server.
  • No chance of being blacklisted for too many API calls from one IP address.

Cons:

  • All filtering is done client side, so the code is accessible to anyone who wants to copy it.
  • UX speed depends on the client machine/connection: the client must wait for multiple API requests from different sources and then filter.
  • Speed also depends on the 3rd-party API servers and whether they're available at the time of the request.

Indifferent:

  • Will still need to implement an auth server and save user settings.

Option 2:
Normal front end + REST back end API + auth and config saves to DB.

Front end > User saves configs > User requests search
Back end > Use user search configs > Call 3rd-party APIs > Format and filter results on the server > Return to front end > Display results
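
A rough sketch of the same idea moved server-side, assuming Express; the config lookup and fan-out helpers here are hypothetical stand-ins for the real DB read and 3rd-party calls:

```typescript
// Sketch of the Option 2 flow as a single Express route. The helpers below are
// placeholders: loadUserConfig would read saved filters from the DB, and
// fetchAndFilter would mirror the client-side fan-out sketch, but on the server.
import express from "express";

interface UserConfig { maxPrice: number; keywords: string[]; }

// Placeholder: would read the user's saved filters from the DB.
async function loadUserConfig(userId: string): Promise<UserConfig> {
  return { maxPrice: 100, keywords: [] };
}

// Placeholder: would call the 3rd-party APIs and apply the user's filters.
async function fetchAndFilter(query: string, config: UserConfig): Promise<unknown[]> {
  return [];
}

const app = express();

app.get("/api/search", async (req, res) => {
  const query = String(req.query.q ?? "");
  const userId = String(req.query.userId ?? ""); // real auth would use a session/JWT
  try {
    const results = await fetchAndFilter(query, await loadUserConfig(userId));
    res.json(results); // the client only renders; the filtering stays server-side
  } catch {
    res.status(502).json({ error: "Upstream API failure" });
  }
});

app.listen(3000);
```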


Pros:

  • Simple to set up.
  • Don’t have to worry about bogging down client UX.
  • Hides the filtering logic from anyone who just wants to copy the code and business model.

Cons:

  • Increased cost due to the many API requests coming from my server.
  • Risk of being blacklisted if there is a high number of users.

Option 3:
Normal front end + REST back end API + Redis cache + auth and config saves to DB.

Front end > User saves configs > User requests search
Back end > Use user search configs > Check whether the endpoints are in the DB or the endpoint results are in the Redis cache

  • If false: Call 3rd-party APIs > Format results > Save results to Redis cache > Filter results > Return to front end
  • If true: Get results from Redis cache > Filter results based on user configs > Return to front end (rough sketch of this check below)
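
A rough cache-aside sketch of that check, assuming ioredis and Node 18+'s built-in fetch; the cache key scheme, TTL, and "format" step are placeholders:

```typescript
// Cache-aside sketch of the Option 3 branch above. Assumes ioredis and Node 18+
// fetch; the cache key scheme, TTL, and "format results" step are placeholders.
import Redis from "ioredis";

const redis = new Redis(); // defaults to localhost:6379
const CACHE_TTL_SECONDS = 6 * 60 * 60; // e.g. expire after 6 hours

async function getEndpointResults(endpointUrl: string): Promise<unknown[]> {
  const cacheKey = `apiresults:${endpointUrl}`;

  // If true: serve the already-formatted results straight from Redis.
  const cached = await redis.get(cacheKey);
  if (cached !== null) return JSON.parse(cached);

  // If false: call the 3rd-party API, format, and cache with a TTL.
  const raw = await (await fetch(endpointUrl)).json();
  const formatted = Array.isArray(raw) ? raw : [raw]; // placeholder "format results" step
  await redis.set(cacheKey, JSON.stringify(formatted), "EX", CACHE_TTL_SECONDS);
  return formatted;
}

// Per-user filtering then runs on whatever this returns, before sending to the front end.
```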

Pros:

  • Fewer requests made from my server than in Option 2.
  • Less risk of being blacklisted.
  • Requests are only made when needed or when a cached result has expired.
  • If there are a lot of users with the same base API search requests, the number of requests stays roughly constant within a given cache-timeout window.

Cons:

  • Same speed as Option 2 if a user searches after a cache item has expired.
  • Increased setup complexity.

Option 4:
Normal front end + REST back end API + Redis cache + auto-rolling API calls + auth and config saves to DB.
Same as Option 3, but instead of fetching results on demand when a Redis cache item times out, the server automatically fetches the results for each unique API endpoint on a schedule (e.g. 4 times a day it makes all the unique API calls and caches the results).
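
A minimal sketch of that rolling refresh, again assuming ioredis and Node 18+ fetch; the endpoint list and refresh frequency are placeholders for whatever the saved user configs would actually produce:

```typescript
// Rolling-refresh sketch for Option 4: re-fetch every unique endpoint on a fixed
// schedule instead of on demand. Endpoint list and frequency are placeholders.
import Redis from "ioredis";

const redis = new Redis();
const REFRESHES_PER_DAY = 4;
const REFRESH_INTERVAL_MS = (24 * 60 * 60 * 1000) / REFRESHES_PER_DAY;

// In practice this list would be derived from the unique saved search configs.
const uniqueEndpoints = [
  "https://api.example-one.com/search?q=widgets",
  "https://api.example-two.com/v2/items?query=widgets",
];

async function refreshAll(): Promise<void> {
  for (const url of uniqueEndpoints) {
    try {
      const results = await (await fetch(url)).json();
      // No TTL: the next scheduled refresh simply overwrites the key.
      await redis.set(`apiresults:${url}`, JSON.stringify(results));
    } catch (err) {
      console.error(`Refresh failed for ${url}`, err); // keep serving the stale cache
    }
  }
}

refreshAll(); // warm the cache on startup
setInterval(refreshAll, REFRESH_INTERVAL_MS); // then x times a day
```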


Pros:

  • Same as Option 3.
  • Constant number of 3rd-party API calls: an x:1 ratio of calls to unique API endpoints, where x is how many times a day I want my server to auto-fetch results (e.g. 50 unique endpoints refreshed 4 times a day is 200 calls a day, regardless of user count).
  • Decreases costs substantially as user numbers grow; scales effectively.

Cons:

  • Not as cost-effective as Option 3 in the beginning, since it may make calls unnecessarily while user numbers are small (I could start by halving the refreshes to 2 instead of 4 times a day and increase that as users grow).

Option 5:
Front end with GraphQL + Apollo GraphQL backend + Redis cache + auto rolling API calls + auth and config saves to DB

Same setup as Option 4, but implemented as a GraphQL API.
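
A minimal Apollo Server sketch of what that GraphQL layer could look like, assuming @apollo/server v4; the schema fields and the resolver's sample data are placeholders for the Redis-backed, filtered search from the earlier options:

```typescript
// Minimal Apollo Server sketch for Option 5. The schema and resolver data are
// placeholders; the real resolver would call the cached search from Options 3/4.
import { ApolloServer } from "@apollo/server";
import { startStandaloneServer } from "@apollo/server/standalone";

const typeDefs = `#graphql
  type Result {
    title: String!
    price: Float
    source: String!
  }
  type Query {
    search(query: String!, maxPrice: Float): [Result!]!
  }
`;

const resolvers = {
  Query: {
    search: async (_parent: unknown, args: { query: string; maxPrice?: number }) => {
      // Placeholder data standing in for the Redis-backed lookup.
      const results = [{ title: `Sample result for ${args.query}`, price: 42, source: "example-one" }];
      return results.filter((r) => args.maxPrice == null || r.price <= args.maxPrice);
    },
  },
};

const server = new ApolloServer({ typeDefs, resolvers });
startStandaloneServer(server, { listen: { port: 4000 } }).then(({ url }) => {
  console.log(`GraphQL endpoint ready at ${url}`);
});
```

With something like this in place, the front end can request exactly the fields it needs in a single round trip.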


Pros:

  • Reduces the number of calls between the client and my server for relational queries.
  • Decreases costs on my end when implemented for a large number of users.
  • Would be fun to learn and useful for other projects in the future.

Cons:

  • Takes time to learn how to set up GraphQL.
  • Not really worth it if I don't have a large number of users accessing relational data.