Taming network with redux-requests, part 7 - Caching
Konrad Lisiczyński
Posted on July 29, 2020
In the previous part of this series we discussed optimistic updates and how to avoid some traps when using them.
In this part we will cover caching.
What is caching?
Caching is a way to improve the performance of an operation by saving its result somewhere, so that later the result can be retrieved when needed instead of repeating the same operation again. One such operation is an AJAX request, which is worth caching when possible, because caching reduces communication with the server. This not only can make our app much more responsive, especially on mobile devices, but also decreases our server load.
When to cache?
Of course, you usually cannot cache everything. Sometimes you cannot afford to, because you might need fresh data from your server all the time. But if the data is static, or relatively static, and caching it even for a short period of time is acceptable, it might be worth it, especially for slow networks or endpoints, or when a request could be repeated many times within a short period.
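To make the idea concrete, here is a minimal, hypothetical sketch of time-based caching in plain JavaScript (not part of the library): a wrapper that keeps a function's result for a fixed number of milliseconds and only recomputes after that window expires.

```javascript
// Hypothetical helper: cache a function's result for `ttlMs` milliseconds.
function cacheFor(fn, ttlMs) {
  let cached; // last stored result
  let expiresAt = 0; // timestamp after which the cache is stale
  return (...args) => {
    const now = Date.now();
    if (now < expiresAt) return cached; // cache hit: skip the expensive call
    cached = fn(...args); // cache miss: recompute and store
    expiresAt = now + ttlMs;
    return cached;
  };
}

// Usage: the wrapped function runs at most once per 10 seconds.
let calls = 0;
const fetchBooksOnce = cacheFor(() => { calls += 1; return ['book']; }, 10000);
fetchBooksOnce();
fetchBooksOnce();
console.log(calls); // 1
```

This is essentially what a caching layer for AJAX requests does, just with server responses instead of local computations.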
Caching with redux-requests
To activate it, just pass cache: true to handleRequests:
import { handleRequests } from '@redux-requests/core';
handleRequests({
  ...otherOptions,
  cache: true,
});
After this, you can use meta.cache in request actions:
const fetchBooks = () => ({
  type: FETCH_BOOKS,
  request: { url: '/books' },
  meta: {
    cache: 10, // in seconds, or true to cache forever
  },
});
What will happen now is that after a successful book fetch (to be specific, after FETCH_BOOKS_SUCCESS is dispatched), any FETCH_BOOKS action dispatched within the next 10 seconds won't trigger an AJAX call, and the resulting FETCH_BOOKS_SUCCESS will contain the previously cached server response. You could also use cache: true to cache forever.
Cache with cacheKey
Sometimes you would like to invalidate your cache based on a key, so that when the key changes, the cache is bypassed and the network is hit. You can use meta.cacheKey for that:
const fetchBooks = language => ({
  type: FETCH_BOOKS,
  request: { url: '/books', params: { language } },
  meta: {
    cache: 10,
    cacheKey: language, // if language changes, cache won't be hit and request will be made
  },
});
Cache with requestKey
Another use case is that you might want to keep a separate cache for the same request action based on a key. Then, just as for ordinary non-cached queries, you would use meta.requestKey. For example:
const fetchBook = id => ({
  type: FETCH_BOOK,
  request: { url: `/books/${id}` },
  meta: {
    cache: true,
    requestKey: id,
  },
});
/* then, you will achieve the following behaviour:
- GET /books/1 - make request, cache /books/1
- GET /books/1 - cache hit
- GET /books/2 - make request, cache /books/2
- GET /books/2 - cache hit
- GET /books/1 - cache hit
*/
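The per-key behaviour in the trace above can be modelled with a plain JavaScript sketch (a hypothetical helper, not the library's implementation): one cache entry per key, so each key triggers the underlying call exactly once.

```javascript
// Hypothetical helper: keep a separate cache entry per key,
// like one query state per requestKey.
function cacheByKey(fn) {
  const cache = new Map();
  return key => {
    if (cache.has(key)) return cache.get(key); // cache hit for this key
    const result = fn(key); // miss: run and store under the key
    cache.set(key, result);
    return result;
  };
}

let requests = 0; // count actual "network" calls
const fetchBook = cacheByKey(id => { requests += 1; return `book ${id}`; });
['1', '1', '2', '2', '1'].forEach(fetchBook);
console.log(requests); // 2 — matches the trace above: two requests, three cache hits
```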
cacheKey and requestKey together
You can also use cacheKey and requestKey at the same time; then a different cacheKey will be able to invalidate the cache for each requestKey individually, like:
const fetchBook = (id, language) => ({
  type: FETCH_BOOK,
  request: { url: `/books/${id}`, params: { language } },
  meta: {
    cache: true,
    cacheKey: language,
    requestKey: id,
  },
});
/* then, you will achieve the following behaviour:
- GET /books/1?language=en - make request, cache /books/1
- GET /books/1?language=en - cache hit
- GET /books/2?language=de - make request, cache /books/2
- GET /books/2?language=en - make request, cache /books/2 again due to changed language
- GET /books/2?language=en - cache hit
*/
There is an interesting relation between requestKey and cacheKey. Passing the same value as both requestKey and cacheKey is the same as passing only requestKey, because requests are stored separately for each requestKey, so cache invalidation with the same cacheKey could never happen.
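The combined behaviour from the trace above can be sketched in plain JavaScript (again a hypothetical model, not the library's code): a cache entry per request key, where each entry remembers the cache key it was stored under and is invalidated when a different cache key arrives.

```javascript
// Hypothetical helper: one entry per requestKey, invalidated when its
// cacheKey changes — e.g. a new language for the same book id.
function cacheByKeys(fn) {
  const cache = new Map(); // requestKey -> { cacheKey, value }
  return (requestKey, cacheKey) => {
    const entry = cache.get(requestKey);
    if (entry && entry.cacheKey === cacheKey) return entry.value; // cache hit
    const value = fn(requestKey, cacheKey); // miss, or invalidated by a new cacheKey
    cache.set(requestKey, { cacheKey, value });
    return value;
  };
}

let networkCalls = 0;
const fetchBook = cacheByKeys((id, lang) => { networkCalls += 1; return `${id}:${lang}`; });
fetchBook('1', 'en'); // request
fetchBook('1', 'en'); // cache hit
fetchBook('2', 'de'); // request
fetchBook('2', 'en'); // request again: language changed for the same id
fetchBook('2', 'en'); // cache hit
console.log(networkCalls); // 3
```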
Cache with requestsCapacity
When you use cache together with requestKey, just as without caching, you might worry about storing too many queries in state. You can use requestsCapacity to prevent that:
const fetchBook = id => ({
  type: FETCH_BOOK,
  request: { url: `/books/${id}` },
  meta: {
    cache: true,
    requestKey: id,
    requestsCapacity: 2,
  },
});
/* then, you will achieve the following behaviour:
- GET /books/1 - make request, cache /books/1
- GET /books/1 - cache hit
- GET /books/2 - make request, cache /books/2
- GET /books/2 - cache hit
- GET /books/1 - cache hit
- GET /books/3 - make request, cache /books/3, invalidate /books/1 cache
- GET /books/1 - make request, cache /books/1, invalidate /books/2 cache
*/
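The eviction in the trace above can be sketched with a bounded cache in plain JavaScript (a hypothetical model, assuming oldest-entry eviction, which matches the trace): once the capacity is reached, caching a new key drops the oldest cached key.

```javascript
// Hypothetical helper: per-key cache holding at most `capacity` entries;
// a new entry evicts the oldest one (Map preserves insertion order).
function cacheWithCapacity(fn, capacity) {
  const cache = new Map();
  return key => {
    if (cache.has(key)) return cache.get(key); // cache hit
    const value = fn(key);
    if (cache.size >= capacity) {
      cache.delete(cache.keys().next().value); // evict the oldest entry
    }
    cache.set(key, value);
    return value;
  };
}

let networkCalls = 0;
const fetchBook = cacheWithCapacity(id => { networkCalls += 1; return `book ${id}`; }, 2);
['1', '1', '2', '2', '1', '3', '1'].forEach(fetchBook);
console.log(networkCalls); // 4 — same four requests as in the trace above
```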
Manual cache clearing
If you need to clear the cache manually for some reason, you can use clearRequestsCache
action:
import { clearRequestsCache } from '@redux-requests/core';
// clear the whole cache
dispatch(clearRequestsCache());
// clear only FETCH_BOOKS cache
dispatch(clearRequestsCache([FETCH_BOOKS]));
// clear only FETCH_BOOKS and FETCH_AUTHORS cache
dispatch(clearRequestsCache([FETCH_BOOKS, FETCH_AUTHORS]));
Note, however, that clearRequestsCache won't remove any query state; it will just remove the cache timeout, so that the next time a request of a given type is dispatched, the AJAX request will hit your server. So it is like a cache invalidation operation. To remove the data as well, you can use the resetRequests action.
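The difference between the two operations can be modelled with a small sketch (a hypothetical shape for the query state, not the library's actual internals): invalidation drops only the cache timeout, while a reset drops the data too.

```javascript
// Hypothetical query state: cached data plus a cache expiration timestamp.
const queryState = { data: ['book'], cacheExpiresAt: Date.now() + 10000 };

// clearRequestsCache-like: drop only the timeout; next dispatch hits the server,
// but the already-fetched data stays in state meanwhile.
function clearCacheTimeout(state) {
  return { ...state, cacheExpiresAt: null };
}

// resetRequests-like: remove the data as well.
function resetQuery(state) {
  return { ...state, data: null, cacheExpiresAt: null };
}

const invalidated = clearCacheTimeout(queryState);
console.log(invalidated.data); // ['book'] — data survives cache invalidation
console.log(resetQuery(queryState).data); // null — data is removed too
```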
What's next?
In the next part we will touch on the imperfect world we live in, where a frontend developer is expected to start working on a feature even though the backend is not ready yet.