Counter-intuitive web dev mistakes


Václav Gröhling

Posted on December 28, 2023


Throughout my career as a web developer I have seen a few mistakes that are very common; people make them again and again because they think they are good practice. Even experienced developers do it.

The devil is in the details: at first glance your intuition tells you that you are right, but when you look deeper, you realize the opposite is true.

1. .gitignore-ing IDE directories

This is not limited to web projects. Since different contributors use different IDEs, you might be tempted to ignore /.vscode/ and /.idea/ so your git repository is cleaner, and that is a valid point. On the other hand, these folders contain a lot of useful settings which are worth sharing. For example, IntelliJ IDEs are known for their slow indexing, which in return enables instant search through the repo. Certain files or folders need to be excluded from indexing, otherwise it noticeably slows the IDE down. These exclusion rules are stored in the .idea folder, and if you don't commit this folder to the repository, you will end up sharing the settings either verbally with each new contributor who runs IntelliJ, or in a new paragraph in the contributors' guide. All of this is unnecessary if you simply commit the .idea folder into the repo.

The official IntelliJ documentation states:

What needs to be shared ... all files under the .idea directory in the project root except the items that store user-specific settings
How to manage projects under Version Control Systems

The .idea folder contains its own .gitignore which excludes the user-specific stuff, so no action is needed on your side: simply don't ignore the .idea folder and you are good to go!
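
For illustration, the .gitignore that recent IntelliJ versions generate inside .idea looks roughly like this (the exact entries may differ between IDE versions):

# Default ignored files
/shelf/
/workspace.xml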

With .vscode it is not that simple, as there is no .gitignore in that folder. It might happen that your personal settings sneak into the repo, so you need to be careful not to change other people's keymaps and to keep your secrets private. Check this Stack Overflow answer which contains good guidance, or just commit the whole folder and watch out for any suspicious content.
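
One common pattern, similar to what the linked answer suggests, is to ignore the folder in your root .gitignore and whitelist only the files worth sharing:

.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json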

The settings shared in .vscode include recommended extensions, so each contributor sees a curated list of extensions and can install them with a single click. Launch configurations are shared as well, which is very useful for debugging. Again, if you decide to ignore the .vscode folder, you will most likely end up paraphrasing the settings somewhere in the documentation anyway, which is a less convenient way of sharing.
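
For example, a minimal .vscode/extensions.json might look like this (the extension IDs here are just examples, pick whatever fits your project):

{
  "recommendations": [
    "dbaeumer.vscode-eslint",
    "esbenp.prettier-vscode"
  ]
}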

2. Meaningless alt attribute of the <img> tag

In the old days, most web devs ignored the alt attribute; then the a11y ESLint rules became standard, and now everybody fills it with nonsense. Countless times I have seen a site logo with alt="logo" or an illustrative photo with alt="photo" 🤦‍♂️.

The purpose of this attribute is twofold:

  • to display a textual description of the image when it fails to load,
  • to provide a textual description of the image for sightless users.

If you omit the alt attribute, the path (URL) to the image is displayed/voiced instead, which is mostly useless. So the ESLint rule is correct: you should add the alt attribute to every <img> tag. But only a few people know that an empty string is a valid value. If you add alt="" to the img tag, the image will be ignored by screen readers, which is the right thing to do when the image is only decorative or not important for understanding the page.

Take the site logo as an example. If you added alt="logo" to it, a sightless user would hear

image, logo, ...

which is totally useless. I'm sightless: I don't care that there is an image, I cannot see the logo, so why should I care? You just slow me down. You should either describe the content of the logo or skip the image completely. Does it make sense to describe the logo? Imagine you write a web page for Microsoft, do you want to hear

image, four colored squares - red, green, blue and yellow - arranged in a larger square formation resembling a split window

on every page? Sometimes it does make sense, but I would say only in an article about company logos, so sightless users can learn what the logo looks like. In the case of a site logo, which is displayed on every page, it is too verbose. The site logo most probably acts as a link to the home page, so it is wrapped in <a href="/">, and you should rather describe the link, not the image. So something like

<a href="/" title="Home page"><img src="/logo.png" alt="" /></a>

is more appropriate. Here I use the title attribute to describe the link, because the anchor otherwise contains no textual content. The image has an empty alt, so the screen reader skips it.

What about other images? Let's say you have a funny photo of a cat scared by a cucumber; you can write

image, funny photo of a cat scared by a cucumber

If the page is all about funny photos, the description is appropriate. Some people are not completely blind, so they can use a screen reader to read the text and then a magnifier to see the image. But if the photo is just illustrative and not important for understanding the page, you can skip it with an empty alt, as I did in this example.
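
To put both cases side by side, here is a hypothetical markup sketch:

<!-- on a page about funny photos, the image is content: describe it -->
<img src="/cat-cucumber.jpg" alt="Funny photo of a cat scared by a cucumber" />

<!-- purely illustrative image: hide it from screen readers -->
<img src="/cat-cucumber.jpg" alt="" />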

If you have an e-shop, you should describe the product images, e.g. for a computer there might be images described as "front look", "detail of keyboard", "back-plate with connectors", etc.

In many systems, the API which provides you the data will not contain descriptions of the images. You should design the system in a way that makes it possible to add them manually, and then it is a valid option to use alt="product photo 1..n" as a fallback.
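
A minimal React sketch of that fallback (the product shape here is hypothetical):

function ProductGallery({ product }) {
  // prefer the manually curated description, fall back to a numbered label
  return product.images.map((image, index) => (
    <img
      key={image.id}
      src={image.url}
      alt={image.description ?? `Product photo ${index + 1}`}
    />
  ))
}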

But alt="logo" is always wrong, don't do that.

3. The /login route et al.

Web applications started as server-generated content, e.g. in PHP. If there was a need for a secured section of the site, the developer authored a "Login" page where they put all the login-specific logic (yes, database calls inside the markup, that's how I started 😈). Today, many web applications are single page applications with client-only routes, and we still author /login routes. Why? It does not make sense! When you have a client-side router, you can wrap all your secured pages in logic that shows a login form to an unauthenticated user while the URL in the browser remains intact, as the sketch after the list below shows. This has many benefits:

  • no more "return-to" parameter where you remember where to redirect the user after a successful login,
  • the Sign-in button on the public pages leads directly to the secured page, e.g. /dashboard. If users are not authenticated, they see the login form; if they are authenticated, they see the dashboard. You don't need any skipping logic on /login that redirects an already authenticated user to the secured page.
  • You don't need the /forgotten-password or /register URLs either; all these pages can be part of the login form and driven by ephemeral state (e.g. useState in React).
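
A minimal sketch of the idea with React Router (the useCurrentUser hook and the page components are hypothetical):

import { Routes, Route } from 'react-router-dom'
import { useCurrentUser } from './auth' // hypothetical auth hook
import { Dashboard, LoginForm } from './pages' // hypothetical components

// shows the login form in place of any secured page; the URL
// in the browser stays e.g. /dashboard the whole time
function RequireAuth({ children }) {
  const user = useCurrentUser()
  return user ? children : <LoginForm />
}

export function App() {
  return (
    <Routes>
      <Route
        path="/dashboard"
        element={<RequireAuth><Dashboard /></RequireAuth>}
      />
    </Routes>
  )
}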

Think about URLs: where do you need them?

  1. You want your users to create a bookmark to a specific page - it is very unlikely your users would want to bookmark /login, /registration or /forgotten-password, right? They want to bookmark /dashboard, /settings or /orders.
  2. You want to share a link to a certain part of the site. Did you ever need to share a link to /login? If so, why not share a link directly to the private /dashboard instead? Maybe you have a marketing campaign which points to the /registration page, but what should happen when an already authenticated user clicks on it? The /registration should be skipped then, right? So why not skip it always? For marketing campaigns you can create a dedicated landing page or a special query parameter like /dashboard?source=foo, which the login form can parse and handle in case the user completes the registration.

If you have a web application with client-side routing, don't create a dedicated /login route; your app will be simpler. You can achieve this with server-side routing as well, but the implementation might be a little more complicated, e.g. think about logging. Either way, the secured pages should not be indexed by web crawlers, so don't add them to the sitemap.xml and mark them with a noindex robots meta tag (robots.txt can only disallow crawling; it cannot mark a page as noindex).
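
For example, in the <head> of every secured page:

<meta name="robots" content="noindex" />

(or send the equivalent X-Robots-Tag: noindex HTTP header from the server).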

4. Using the fetch API naively

The browser fetch API is low-level and streaming, and most web devs are unaware of that. Many times I have heard

we don't use Axios because we have fetch which has promise-like API and is built-in

which is correct, but fetch drags along an illusion of simplicity. Read 10 technical articles about React that use fetch, and 9 of them will use fetch in useEffect ⛔️ and will not test for response.ok ⛔️. Even outside of the React community, the typical example of fetch usage is

fetch('/api/users')
  .then(response => response.json())
  .then(data => console.log(data))

Or wrapped in an async function with the await keyword, which gives it a sprinkle of modern practices. Sometimes the authors of the article are aware of the code's deficiencies, but they don't want to bloat the example with 20 more lines of code, because the message of the article is tangential to fetch. Yet they practically never disclose the deficiencies, so readers copy-paste the example into their code and everything works OK, for the time being...

The semi-correct way to use fetch is

fetch('/api/users')
  .then(response => {
    if (!response.ok) {
      // we obtained a response with a status code other than 2xx,
      // the response might be JSON but it can also be an HTML
      // page from some middleware. BEFORE READING THE BODY, CHECK
      // THE `Content-Type` HEADER! AND HANDLE THE ERROR IN SOME
      // WAY TYPICAL FOR YOUR APPLICATION!
    }
    // we obtained a response with a 2xx status code, and let's
    // say our API always returns JSON, so we can read the stream
    // and parse it with the `json()` call. If your API can return
    // a non-JSON response, don't parse the body blindly!
    return response.json()
  }, (error) => {
    // there was an error before we even got the response,
    // e.g. network is down or the server is unreachable,
    // HANDLE THIS CASE SOMEHOW!
  })
  .then(result => {
    // What we get here depends on the logic
    // in the previous `then` block.
    console.log(result)
  })

You see, there can be at least two types of errors:

  1. network error - the service you are calling is not reachable,
  2. service error - the service is reachable, but it responds with an error.

When fetch detects a network error, it rejects. We handle that with the second argument of the Promise#then method¹, but how do we deal with the error? The implementation is specific to your application: you can log it to the console, send it to an external logging service (which will be unavailable if the network is down), re-throw the error to reject the Promise chain, or just as well resolve the promise with an { error } object and later test for 'error' in result. It is up to you.
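
For illustration, the last option might look like this (a sketch which assumes the API returns JSON objects; the response handling is simplified):

const result = await fetch('/api/users').then(
  (response) => response.json(),
  (error) => ({ error }) // network failure: resolve instead of rejecting
)
if (result && 'error' in result) {
  // handle the network failure in one place
}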

In the first Promise#then callback, fetch reached the service and obtained a response. We check the response.ok property, which is true for HTTP status codes 2xx. The fetch is OK with merely reaching the service and is agnostic to the service's behaviour; you must handle the status code yourself. Again! Some services might respond with a 2xx HTTP status all the time and disclose success: boolean in the response body. In that case even response.ok is misleading. Only you know your service, and you must handle the response carefully.

In many APIs it is guaranteed that the successful response will always be JSON, but error responses might be HTML pages from a proxy or load-balancer. Don't call Response#json() blindly. What it does behind the scenes is consume the stream of the response body and parse it into JSON. Since the response body is a readable stream which can be read only once, you cannot re-read it with Response#text() after a failed Response#json(): if the response body is not valid JSON, the stream will be consumed, parsing will fail, and you will lose the whole response body in your code. Either check that response.headers.get('Content-Type')?.includes('application/json') to verify that the response contains JSON, or read the body as text() and parse it to JSON afterwards in a try ... catch block, which is less efficient, but at least you will not lose the textual response body, so you can log it somewhere or read it in a debugging session.
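
The second option might look roughly like this:

const text = await response.text()
let data
try {
  data = JSON.parse(text)
} catch {
  // the body was not valid JSON (e.g. an HTML error page from a proxy),
  // but we still have the raw text, so we can log it
  console.error('Non-JSON response body:', text)
}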

Once you start to handle all the corner cases of fetch, you will find that you don't want to repeat the boilerplate each time you call the network, so you will write some wrapper around fetch, or use the redaxios library from Jason Miller, which provides an axios-like API on top of fetch and weighs only 800 bytes, which is nice. But then you might need the axios interceptors, which redaxios does not implement. And if your application uploads files and you want to track the upload progress with ProgressEvent, fetch does not support that; only XMLHttpRequest does, on which the original axios is based. After you write all your custom wrappers around fetch and upload wrappers around XMLHttpRequest, you might reconsider the original statement that the axios library is obsolete.
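
Such a wrapper might start like this (a minimal sketch; the error handling strategy is up to your application):

async function fetchJson(url, options) {
  let response
  try {
    response = await fetch(url, options)
  } catch (error) {
    // network error: there is no response at all
    throw new Error(`Network error while calling ${url}`, { cause: error })
  }
  // read the body only once, as text, so nothing is lost on invalid JSON
  let body = await response.text()
  if (response.headers.get('Content-Type')?.includes('application/json')) {
    try {
      body = JSON.parse(body)
    } catch {
      // the Content-Type header lied; keep the raw text for logging
    }
  }
  if (!response.ok) {
    // service error: the service responded, but not with 2xx
    throw new Error(`HTTP ${response.status} from ${url}`)
  }
  return body
}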

But don't get me wrong, I use fetch in most cases, except for file uploads. I only think it is a little overhyped, and I see people calling response.json() without testing response.ok or response.headers.get('Content-Type') way too often.


  1. You might think that Promise.then(onResolve).catch(onReject) is identical to the binary (2-argument) Promise.then(onResolve, onReject), but it is not. In the former case, onReject is attached to the promise returned by then, so it runs in another micro-task and also catches errors thrown inside onResolve, which might be a source of bugs; in the latter case, onResolve and onReject are attached to the same promise, run in a single micro-task, and are exclusive. As a rule of thumb, you should use the binary Promise#then unless there is a good reason not to. Async is always hard, more on it some other time.
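
A small demonstration of the difference:

Promise.resolve('ok')
  .then(() => { throw new Error('bug in onResolve') })
  .catch(() => {
    // runs: catch also handles the error thrown by onResolve
  })

Promise.resolve('ok')
  .then(
    () => { throw new Error('bug in onResolve') },
    () => {
      // does NOT run: onReject only handles rejections of the
      // original promise, not errors thrown by onResolve
    }
  )
  // (this chain therefore ends with an unhandled rejection)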
