Replacing Express with uWebSockets
Matt Krick
Posted on January 6, 2020
One of the best parts of running an enterprise SaaS is that our traffic takes a nosedive at the end of the year while clients universally take vacation. The low traffic is a great excuse for larger refactors, and with our crazy growth this year, we've been considering scaling our server horizontally. Before we do, I figured it'd be smart to squeeze out as much performance as possible. So, after 4 years, we ditched Express for something faster: uWebSockets.
uWebSockets is lightning fast because it is so low level. Saying it is like Express without the training wheels is an understatement. It's more like taking off the training wheels, reflectors, mud guards, hand grips, seat, and then stripping the paint because, well, speed. While I appreciate the speed and low memory footprint, I also don't want to run the risk of my sloppy code crashing the server, so my goal is to make a couple reasonable performance sacrifices to make it as safe as Express. In other words, I'll take the bike, just give me a darn helmet.
Practically, that means I don't want to worry about a call to Redis somehow failing, causing an unhandled promise rejection that hangs the response and, in turn, the server. To save myself from myself, I came up with a few reasonable patterns to avoid both rewriting my sloppy code and a crash. Hopefully, you find them useful, too.
Response Handling
At all costs, we must close the HttpResponse or it will hang and bring the server to its knees. There are 2 ways the response can close: calling a terminating method (end, tryEnd, or close) or being hung up on by the client (onAborted fires). Unfortunately, once the response has been closed, you cannot attempt to close it again. That restriction creates a race condition. Imagine the scenario where a request comes in to read a record from the DB:
const handler = async (res) => {
  const doc = await db.get('user')
  res.end(doc)
}
In a perfect world, the doc from the DB returns and gets sent as the response. In the real world, the client disconnects just after the call to the DB is made. When that happens, the socket is closed, onAborted fires, and by the time res.end is called, the response has already been invalidated, which produces an error.
To tackle this problem, I need to guarantee 3 things:
1. A terminating method is not called after onAborted fires
2. A terminating method is not called after a terminating method was already called
3. There is only 1 onAborted handler for each response
To accomplish #1 & #2 without rewriting all my route handlers, I decided to monkeypatch the response with some safety checks. For example, I put a done one-way flag on the response, and if a terminating method is called after the response is already done, it is ignored:
res._end = res.end
res.end = (body) => {
  if (res.done) {
    console.log(`uWS DEBUG: Called end after done`)
    return res
  }
  res.done = true
  return res._end(body)
}
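The same guard extends to the other terminating methods. Here's a minimal sketch for close, following the same pattern (tryEnd works the same way, though it returns a tuple):

res._close = res.close
res.close = () => {
  if (res.done) {
    console.log(`uWS DEBUG: Called close after done`)
    return res
  }
  res.done = true
  return res._close()
}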
Guaranteeing a single onAborted handler was necessary because there were some cases where the thing I was trying to clean up (e.g. a ReadStream or setInterval id) was created after the initial onAborted handler was already registered. To keep my code modular, I again monkeypatched onAborted to support multiple handlers:
res.onAborted(() => {
  res.done = true
  if (res.abortEvents) {
    res.abortEvents.forEach((f) => f())
  }
})

res.onAborted = (handler) => {
  res.abortEvents = res.abortEvents || []
  res.abortEvents.push(handler)
  return res
}
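With that in place, each piece of a handler can register its own cleanup without clobbering the others. A hypothetical example (keepAlive and the file path are stand-ins, not from our codebase):

import fs from 'fs'

const handler = async (res) => {
  const intervalId = setInterval(keepAlive, 10000) // keepAlive is hypothetical
  res.onAborted(() => clearInterval(intervalId))

  const stream = fs.createReadStream('./bigFile.csv')
  res.onAborted(() => stream.destroy())
  // both cleanups run if the client hangs up
}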
Async Handler Wrapping
With uWebSockets, async HTTP handlers also require extra care. Aside from having to cork response methods to achieve maximum performance, errors can creep in from various sources:
- Attempting to close an already closed response (as discussed above)
- An unplanned error (uncaught exception, unhandled promise rejection)
- Returning without closing the response
Since I'm not sure where these errors may live, the safest bet is to apply the guards as soon as possible, at the beginning of the handler. To keep my code DRY, I wrapped each async handler in a higher order function that catches the 3 error types:
const uWSAsyncHandler = (handler: uWSHandler) => async (res: HttpResponse, req: HttpRequest) => {
  monkeyPatchRes(res)
  try {
    await handler(res, req)
    if (!res.done) {
      throw new Error('Async handler did not respond')
    }
  } catch (e) {
    res.writeStatus('500').end()
    sendToReportingService(e)
  }
}
It's a pretty simple function. First, it monkeypatches the res so we get free done tracking (Type #1). Then, it tries to execute the handler. If the handler throws an error (Type #2), or it returns without closing the response (Type #3), it gracefully closes the connection and reports the error to our monitoring service. With very little computational overhead, I can keep on writing sloppy code and not worry about crashing the server. Success!
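The monkeyPatchRes helper isn't shown in this post, but a minimal sketch combining the patches from the previous section might look like this:

const monkeyPatchRes = (res: HttpResponse) => {
  // one-way flag: once done, terminating methods become no-ops
  res._end = res.end
  res.end = (body) => {
    if (res.done) {
      console.log(`uWS DEBUG: Called end after done`)
      return res
    }
    res.done = true
    return res._end(body)
  }
  // fan a single abort event out to every registered handler
  res.onAborted(() => {
    res.done = true
    if (res.abortEvents) {
      res.abortEvents.forEach((f) => f())
    }
  })
  res.onAborted = (handler) => {
    res.abortEvents = res.abortEvents || []
    res.abortEvents.push(handler)
    return res
  }
}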
Body Parsing
The code example in the uWebSockets repo does a great job of showing how to parse an incoming body. Written as a promise, it can be quite elegant:
const parseBody = (res: HttpResponse) => {
  return new Promise<string>((resolve) => {
    let buffer: Buffer
    res.onData((chunk, isLast) => {
      const curBuf = Buffer.from(chunk)
      buffer = buffer ? Buffer.concat([buffer, curBuf]) :
        isLast ? curBuf : Buffer.concat([curBuf])
      if (isLast) {
        resolve(buffer.toString())
      }
    })
  })
}
The funny buffer ternary is necessary because onData reuses the same memory allocation for the following chunk. That means we'll need to clone the buffer by calling concat or toString before yielding. I like to return the stringified JSON instead of parsed JSON because sometimes I need the string itself (e.g. SAML response processing or verifying a Stripe webhook payload).
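Combined with the async wrapper from earlier, a route handler can then look something like this (a hypothetical endpoint; db.insert is made up):

const createUser = uWSAsyncHandler(async (res: HttpResponse) => {
  const body = await parseBody(res)
  const { email } = JSON.parse(body) // parse only when you need the object
  const user = await db.insert('user', { email }) // hypothetical DB call
  res.end(JSON.stringify(user))
})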
It's worth noting that uWebSocket's onData handler doesn't play well with breakpoints when using Node's built-in debugger: Issue #191. To mitigate that issue, you can simply clone the chunk and resolve inside a setImmediate call. Since that has a nontrivial amount of overhead, I only do it when Node is in debugging mode (process.execArgv.join().includes('inspect')).
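That mitigation only changes the onData callback inside parseBody. A sketch, assuming the same buffer and resolve variables as above:

const isDebug = process.execArgv.join().includes('inspect')
res.onData((chunk, isLast) => {
  // clone immediately; uWS reuses this memory for the next chunk
  const curBuf = Buffer.concat([Buffer.from(chunk)])
  buffer = buffer ? Buffer.concat([buffer, curBuf]) : curBuf
  if (isLast) {
    if (isDebug) {
      // deferring the resolve keeps breakpoints working (see Issue #191)
      setImmediate(() => resolve(buffer.toString()))
    } else {
      resolve(buffer.toString())
    }
  }
})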
Serve Static
Almost all of our assets are served from our CDN in production. However, there are a few exceptions: index.html, serviceWorker.js, and everything in development mode. So, I needed something like Express' serve-static that did the following:
- Serve whitelisted items from memory to reduce disk reads
- Serve those whitelisted items in a compressed format, if supported (both sketched after this list)
- Support webpack-dev-middleware by serving webpack assets in development
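A rough sketch of those first two requirements, with a hypothetical whitelist and project root (the real implementation is linked below):

import fs from 'fs'
import path from 'path'
import zlib from 'zlib'

const whitelist = ['index.html', 'serviceWorker.js'] // hypothetical list
const cache = {} as Record<string, { raw: Buffer; br: Buffer }>
whitelist.forEach((fileName) => {
  // read once at boot; PROJECT_ROOT is an assumed constant
  const raw = fs.readFileSync(path.join(PROJECT_ROOT, fileName))
  cache[fileName] = { raw, br: zlib.brotliCompressSync(raw) }
})

const serveStatic = (res: HttpResponse, fileName: string, acceptsBrotli: boolean) => {
  const entry = cache[fileName]
  if (!entry) return false
  if (acceptsBrotli) {
    res.writeHeader('content-encoding', 'br')
    res.end(entry.br)
  } else {
    res.end(entry.raw)
  }
  return true
}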
While the first two were rather simple to implement (actual code here), supporting webpack-dev-middleware is a bit more interesting. Since performance in development isn't an issue and I wasn't trying to rewrite webpack-dev-middleware from scratch, I decided to simply pass it something that looked like an Express handler:
const makeExpressHandlers = (res: HttpResponse, req: HttpRequest) => {
  res.setHeader = (key: string, value: unknown) => {
    res.writeHeader(key.toLowerCase(), String(value))
  }
  const headers = {}
  req.forEach((key, value) => {
    headers[key] = value
  })
  return {
    req: {
      url: req.getUrl(),
      method: req.getMethod().toUpperCase(),
      headers
    },
    res,
    next: () => {}
  }
}
Since the middleware thinks it's getting a standard Express response, checking the result is as easy as checking the res.statusCode, as seen here.
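Wiring it up is then just a question of handing those faked objects to the middleware. A hedged sketch, where devMiddleware is an already-configured webpack-dev-middleware instance and the exact miss condition is an assumption:

const serveFromWebpack = (res: HttpResponse, req: HttpRequest) => {
  const { req: expressReq, res: expressRes, next } = makeExpressHandlers(res, req)
  devMiddleware(expressReq, expressRes, next)
  // assumption: a hit sets statusCode on the faked response, a miss leaves it unset
  return Boolean(expressRes.statusCode)
}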
WebSocket Performance
The biggest benefit to moving to uWebSockets is, of course, the fast and memory-efficient WebSocket server. While most WebSocket messages are tiny, frequent ones like pongs and peer updates, some initial payloads could get rather large, up to 45KB in our case. Assuming an MTU of 1500 bytes, that's 30 packets! Since WebSockets are built on top of TCP, which guarantees that packets arrive in order, users with less-than-great connectivity could experience significant lag. Combating this was easy: reduce the number of packets via compression. Using uWebSocket's SHARED_COMPRESSOR and monitoring packet size with Wireshark, I could reduce the 45KB payload down to 6KB with no additional memory overhead, but I was left wondering if I could still do better. Deduplicating JSON objects and using msgpack only yielded savings of an extra 100 bytes each, hardly worth the extra computational overhead. So, I decided to look deeper.
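Before going deeper, for reference: enabling the shared compressor is a one-line change in the WebSocket behavior config:

import uWS from 'uWebSockets.js'

uWS.App().ws('/*', {
  compression: uWS.SHARED_COMPRESSOR
  // ...open, message, close handlers go here
})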
First, WebSocket extensions only support the DEFLATE compression algorithm, which yields results about 30% bigger than Brotli compression. Second, there's no way to selectively compress messages, which means CPU cycles were being wasted compressing messages from the browser as well as single-packet messages from the server. So, I brought compression to the application layer.

Since most browser messages to the server were very small, it made no sense compressing them, which means the client only needed a decompressor. I wrapped a Brotli decompressor written in Rust into a WASM package. I chose WASM over JS because in my tests (using Chrome 79), it was over 10x faster at decompression. On the server, I only compressed messages larger than 1400 bytes (100 bytes smaller than the MTU limit to account for headers) to guarantee compression would result in at least 1 less packet; a sketch of that send path follows below. The end result is best-in-class compression where you need it, and no compression where you don't. Best of both worlds!

The only drawback is the size: the WASM decompressor compiles to about 300KB. To get around this, I compress it and persist it with a service worker to make sure it doesn't affect returning users. This works for us because we only use WebSockets for users who are logged in. However, your business logic may be different, and it's very likely that the added complexity of custom compression isn't worth the marginal savings. The only way to know is to measure, so I'll be testing that over the coming months.
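Here's that send-path sketch, using Node's built-in Brotli bindings; the threshold constant and function name are my own:

import zlib from 'zlib'

const MAX_UNCOMPRESSED = 1400 // MTU minus ~100 bytes of header room, per above

const sendEncoded = (ws, message: string) => {
  if (Buffer.byteLength(message) > MAX_UNCOMPRESSED) {
    // a binary frame tells the client to run its WASM Brotli decompressor
    ws.send(zlib.brotliCompressSync(message), true)
  } else {
    ws.send(message)
  }
}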
Conclusion
Overall, I'm thrilled with uWebSockets. Not simply for the performance boost, but because it forced me to refactor a server that had become a little unwieldy as we've grown from nothing to a seed-funded startup with over 16,000 users. If this stuff sounds like fun to you, get paid to work on it! We're a remote team, our codebase is open source, and if you're reading articles like this one, chances are we already like you. Reach out to me directly or apply at https://www.parabol.co/join.