What frontend engineers should know about backend
March 2, 2020
by Alexander
The vast majority of what a frontend engineer needs to do can be done without knowing anything about the backend beyond its API, the interface it exposes for communication. If you work on different parts of the frontend for long enough, though, you'll probably run into parts that do require certain bits of backend knowledge. Here's a short list of backend topics a frontend engineer should know about.
Request rates
The backend has finite resources and can only handle a certain rate of requests. In general, you should not care — the frontend should do what it needs to do to create a great user experience and the backend can optimize and scale. Still, network requests to the backend are not free, and they are not all equally expensive (in terms of resources used).
How do you measure how expensive a call is? A rule of thumb is that writes are more expensive than reads: the more data is changed, the more expensive the call. An example of when this becomes prohibitive: let's say you're implementing Google Docs. You want the user to never lose their work if they exit, so you save often. Can you save on every single character insertion and deletion, though? Your backend servers would probably not be able to handle it, or the cost of infrastructure to handle it would be unnecessarily high. Debouncing, so that you only save after the user stops typing, can achieve 99% of the intended effect without the huge cost of the extra 1%.
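The save-after-typing-stops behavior is a classic debounce. Here's a minimal sketch in plain JavaScript; `saveDocument` and the two-second delay are illustrative, not a prescribed API:

```javascript
// A minimal debounce: the wrapped function only fires after calls have
// stopped for `delayMs` milliseconds, collapsing a burst of keystrokes
// into a single save request.
function debounce(fn, delayMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Usage sketch: wire it to an editor's input events.
const saveDocument = (text) => console.log(`saving ${text.length} chars`);
const debouncedSave = debounce(saveDocument, 2000);
// editor.addEventListener("input", (e) => debouncedSave(e.target.value));
```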
Let's say you want to poll your backend for changes. You want the most up-to-date version of a doc, so how often should you make a request? Reads are much cheaper than writes, so you can poll more often than you would save, but there's still a limit.
The limit depends on many factors, like the maximum number of active clients at any given time, backend infrastructure, and budget. If you think a change you're making might approach the limit, talk to the backend team. Otherwise, you might end up DDoSing your own company.
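A polling loop with a deliberately modest, agreed-upon interval might look like this sketch; the endpoint path and 15-second interval are assumptions, not recommendations:

```javascript
// Poll the backend for the latest document version at a fixed interval.
// The interval is something to agree on with the backend team.
const POLL_INTERVAL_MS = 15000; // illustrative value

function startPolling(docId, onUpdate) {
  const tick = async () => {
    try {
      const res = await fetch(`/api/docs/${docId}`); // hypothetical endpoint
      if (res.ok) onUpdate(await res.json());
    } catch (err) {
      // occasional network failures are expected; skip this tick
    }
  };
  const id = setInterval(tick, POLL_INTERVAL_MS);
  return () => clearInterval(id); // caller can stop polling
}
```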
Downtime
You should expect and prepare for every backend request to fail at some point for some users. It's inevitable that even the most robust of servers will go down, or that specific endpoints will fail while the rest still work. You should distinguish which calls in your app are critical, where a failure warrants an app-wide error screen with a message to try again later, from those that can be handled with graceful degradation (e.g., grey out that feature's button with a hover message saying it's currently unavailable).
If your backend is split into multiple microservices, the likelihood of a subset of endpoints failing is higher. If your backend is a single server, one failure can take down every endpoint. Either way, a good frontend always wraps backend calls in a try-catch and has error paths prepared. JavaScript has no panic recovery: if you don't handle the error, the app will crash.
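One way to guarantee every call has an error path is to route all backend requests through a single wrapper. A sketch, assuming a `fetch`-based client and illustrative endpoint names:

```javascript
// A defensive wrapper: network failures and non-2xx responses both come
// back as a uniform result object, so callers always have an error path.
async function callBackend(path, options = {}) {
  try {
    const res = await fetch(path, options);
    if (!res.ok) return { ok: false, status: res.status, data: null };
    return { ok: true, status: res.status, data: await res.json() };
  } catch (err) {
    // fetch throws on network-level failures (server unreachable, DNS, etc.)
    return { ok: false, status: 0, data: null };
  }
}

// A non-critical feature degrades gracefully instead of crashing the app:
// const result = await callBackend("/api/recommendations");
// if (!result.ok) disableRecommendationsButton();
```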
HTTP
The backend and frontend should use the appropriate HTTP status codes (to an extent). Hopefully your backend doesn't treat every error as a 400, but some will for simplicity. The frontend should know every status the backend plans to return. Don't parse error messages to detect that a sign-in failed; a 401 is more consistent. Don't retry the exact same request on a 400, because it probably won't work again, but a 500 might mean the server is just rebooting and a retry would succeed.
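That retry logic can be sketched as a status-aware fetch wrapper; the retry count and backoff schedule here are arbitrary choices, not a standard:

```javascript
// Retry only on 5xx: a 4xx means the request itself is bad and will fail
// again, while a 5xx may be transient (e.g., a server mid-restart).
async function fetchWithRetry(url, options = {}, retries = 3) {
  let res;
  for (let attempt = 0; attempt <= retries; attempt++) {
    res = await fetch(url, options);
    if (res.status < 500) return res; // 2xx/3xx/4xx: retrying won't help
    // exponential backoff before trying again: 500ms, 1s, 2s, ...
    await new Promise((r) => setTimeout(r, 500 * 2 ** attempt));
  }
  return res; // still 5xx after all retries; let the caller decide
}
```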
Other properties of HTTP worth knowing:
- HTTP requests can be closed by the server if they take too long to finish. If you think some task might hit that limit (~20 seconds is a good rule of thumb), you should switch from a single request-response to a request followed by polling for the result, or a different mechanism like WebSockets.
- If you're sending large amounts of data to the server (e.g. a video), you should use a multipart HTTP request, which splits the payload into parts to be sent.
- Something that occasionally comes up unexpectedly is that URLs have a length limit. Some frontends pass data to the server in query parameters, but past 2048 characters you'll have to switch to encoding it in the HTTP body.
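For the upload case, here's a minimal multipart sketch using the standard `FormData` API; the endpoint and field names are hypothetical:

```javascript
// Upload a large file as multipart/form-data. fetch sets the multipart
// boundary itself, so don't set the Content-Type header manually.
async function uploadVideo(file, filename) {
  const form = new FormData();
  form.append("video", file, filename); // the large binary part
  form.append("title", "My clip");      // ordinary text fields can ride along
  const res = await fetch("/api/videos", { method: "POST", body: form });
  return res.ok;
}
```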
Delegate business logic
If some business logic for a feature you're building can be done on both frontend and backend, where should you encode it? In general, you should do it on the backend. Reasons:
- The backend can be changed much faster: one deployment to all the servers and the stale business logic is gone. Frontend clients, though, are in the hands of users, and a deployment doesn't mean old clients with the broken business logic won't still be running in production.
- If the business logic requires computing power, it's hard to test the spectrum of machines your client might run on. If you're only testing with your company-provided top-of-the-line MacBook, you won't realize how much slower the computation might be on a $100 Chromebook.
- It's more secure to lock business logic on the backend. Let's say you have a feature that only pro users can access. If you only encode the restriction on the frontend, someone could reverse engineer your client via the API calls it makes and access the feature anyway. This happens in the wild (e.g., music players that bypass limits).
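On the server side, the pro-only restriction might look like this Express-style handler sketch; the route shape, `req.user`, and the response payload are assumptions about your stack:

```javascript
// Enforce a pro-only feature on the backend. Even if the frontend hides
// the button, this check cannot be bypassed by a modified client.
function exportHandler(req, res) {
  const user = req.user; // assumed to be populated by auth middleware
  if (!user || user.plan !== "pro") {
    return res.status(403).json({ error: "Pro subscription required" });
  }
  // illustrative success payload
  res.json({ url: "/downloads/export.zip" });
}
```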
Cross origin requests
As a security measure, browsers enforce the "Same-Origin Policy": if a request to the backend comes from a page on a different origin, it will be blocked as a "cross-origin request" unless the server explicitly allows it. This trips people up in development, because the port counts as part of the origin, and people usually run an npm/Yarn dev server for the frontend on one port and the backend on another, making every request a cross-origin request.
Solutions:
- Map your server domains to some hostname in your dev environment's host config.
- Enable cross origin requests on your server conditional on an environment variable that's true in development and false in production.
- Whitelist your development domain as an exception.
Cross-site request forgery (CSRF) is an attack that makes an unauthorized request on behalf of a user, initiated from another site. E.g., you click a button on some website and it executes JavaScript that tries to make your browser send a request to your banking website. To prevent this, the server issues a one-time token for every session, so the forged attempt fails for lack of the token. This is called a CSRF token; attach it to the headers of authorized requests.
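Attaching the token might look like this sketch; reading it from a server-rendered meta tag and sending it as `X-CSRF-Token` are common conventions, but your backend may expect a different name or source:

```javascript
// Read the CSRF token the server rendered into the page.
function getCsrfToken() {
  const meta = document.querySelector('meta[name="csrf-token"]');
  return meta ? meta.content : "";
}

// Attach the token to every state-changing request.
function postWithCsrf(url, body) {
  return fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-CSRF-Token": getCsrfToken(),
    },
    body: JSON.stringify(body),
  });
}
```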
Cache busting
Every request goes through multiple caches on the way to the backend. If you visit a website for the first time, wait for it to load, and then reload the page, the web app loads faster the second time because your browser has cached assets like favoritewebsite.com/static/script.js. What if you want to make a change to script.js? You change the URL. Let's say you switch the reference in index.html from script.js to script.js?v=2. The cached script.js becomes irrelevant, since there will never be another request to it (unless index.html is itself cached! The request for index.html needs to be invalidated on the backend).

Modern build pipelines include cache busting in every build, which is why most JavaScript output files look like script.4e885f13.js. Usually this is applied only to stylesheets and scripts, but you can apply it to images and other assets too. Assets change very infrequently, though, and it's worth leaving them out of automated cache busting for performance reasons and just manually updating them when needed.