As websocket communication makes large real-time web experiences more common, we are increasingly faced with the problem of how to build apps that work just as well with many concurrent users as they do with a few. The problem touches both the technical limits of the server infrastructure and the design of the user experience. This post argues that imposing an artificial constraint on concurrent users and then designing around that constraint will lead to better real-time apps. It then describes a scalable waiting list implementation (with a working demonstration project) built around Redis sorted sets.
We’re going to do what?
Your users are coming in droves, and I’m about to suggest you should prevent some (in rare cases most) of them from getting in. Intentionally denying service to your users is, and should be, an uncomfortable idea, but real-time web applications sometimes require counterintuitive thinking. If your application allows users to broadcast messages to each other within rooms (or is just one big room), intentionally limiting the size of the room can create a better experience for everyone. This is partly because real-time apps tend to consume bandwidth and CPU cycles as the square of the number of occupants in a room (i.e., an app that consumes x with 10 occupants will tend to consume 10^6*x with 10,000). No matter how beefy your infrastructure, there is going to be some room size at which your system will either fail or become too expensive to maintain (and, sadly, that number isn’t high enough to buy you a new server farm and a bigger engineering department).
Technical constraints aside, limiting room sizes is often still a good idea. When a real-time app really works, the participants all feel some sense of immediacy and connection to each other. As the number of connections grows, the sense of togetherness fades. There is a big difference between a coffee shop chat, a dinner party, a town hall meeting, and a panel discussion at a large conference. The same is true of live web experiences. At each stage, the setting becomes a little less intimate and the discourse less personal. Sometimes it is better to create a great experience for a few than a chaotic, empty experience for many.
So the size of a real-time web experience that is both worthwhile and won’t break your servers or the bank is effectively constrained no matter what we do. The goal is to manage that constraint so that some (hopefully most) people get in and have a great experience, and the experience for the rest is fair and unbroken. Let’s start by assuming there is some number, N, that is the maximum reasonable room capacity for your app. N can be 30, 100, 1000, 20000, or whatever; it really depends on what you’re trying to do. The important thing is that it is a number low enough to prevent a bad experience for the people in the room. Once we’ve agreed that there’s a fixed maximum room capacity, we can start talking about design goals for the people that don’t get in immediately.
First come, first served: As long as the room size hasn’t reached N, everyone is allowed in.
First in, first out: After N connections, newcomers are told they are waiting rather than being connected to the room. As some people drop out of the room, the people that have been waiting the longest are automatically connected to the room.
No holding places: People on the waiting list have to wait. You can’t leave the site, then come back in 30 minutes and expect to keep your place in line.
Transparency: People on the waiting list need to know how many people are in front of them, and this number needs to stay up-to-date so they can make good decisions about whether to keep waiting.
Ins and outs (limited): People in the room should be allowed to disconnect briefly without going to the back of the waiting list. This eases the pain of temporary network glitches or page refreshes. People can only be allowed to leave the room for a short time, however, before their slot will be filled by someone on the waiting list.
A waiting list implementation on a single process would be relatively easy, but would put severe limits on how many rooms we can operate at a time. A multi-process (multiple cores and/or multiple servers) application is more challenging because all the processes must agree on who is in and who is out. Fortunately, Redis takes care of most of the heavy lifting, and its hash and sorted set data structures let us efficiently perform exactly the tasks we need. The outline of a solution looks like this:
We use a Redis hash to track who is currently in the room. We add users to the hash when they join the room, and remove them when their socket disconnects. This gives us constant-time queries to look up a particular user or to determine the current occupancy of the room. We need to do something like this whether or not we decide to cap the room size.
When a user makes a request to enter a room, we check the current room size. If it’s at the limit, we add the user to the waiting list and send them to a ‘waiting room’ page instead. A Redis sorted set makes the perfect data structure for the waiting list. This gives us the ability to quickly add or remove the user, and it provides a quick way to determine how many people are ahead of the user in line.
The people on the waiting list will make requests at regular intervals to check their status. If a space has opened in the room and the user is at the front of the list, the client will refresh and be accepted into the room.
Tracking the people that drop out of the waiting list is just as important as tracking the people that remain within it. We can use the length of time between check-ins to determine when a person has stopped waiting, but we can’t feed this into our existing data structures to efficiently query the dropouts. We’ll need to introduce a new sorted set scored by last check-in time. On each check-in, we’ll look up all the users that have fallen out of date, remove them from both sets, and then recalculate the current user’s rank. Alternatively, we could run this check at regular intervals for all the rooms. This would probably improve overall performance, but at the cost of some additional system complexity.
To support ins and outs, we simply add the users leaving the room to the waiting list with a score of 0 when their socket disconnects. If they return quickly, they will be immediately accepted back into the room. After a short period of time, their check-in time will expire and they will be struck from the list. Although not the easiest solution to implement, it covers all the design goals and it is relatively efficient and fault tolerant.
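Before getting into the Redis commands, the state the outline calls for can be sketched in plain Python. In this toy, single-process model, a dict stands in for the participant hash and member-to-score dicts stand in for the two sorted sets; all of the names here are illustrative, not taken from the demonstration project:

```python
# Plain-dict stand-ins for the Redis structures described above:
# a hash for participants, and two "sorted sets" (member -> score)
# for the waiting list and the check-in times.
class Room:
    def __init__(self, max_occupancy):
        self.max_occupancy = max_occupancy
        self.participants = {}  # userId -> userData      ([room]:participants)
        self.waiting = {}       # userId -> line score    ([room]:waiting)
        self.checkin = {}       # userId -> last check-in ([room]:checkin)

    def rank(self, user_id):
        """Mimic ZRANK: 0-based position, ordered by score then member."""
        ordered = sorted(self.waiting, key=lambda u: (self.waiting[u], u))
        return ordered.index(user_id) if user_id in ordered else None

    def dropouts(self, now, dropout_time):
        """Mimic ZRANGEBYSCORE on the checkin set: users whose last
        check-in is at least dropout_time seconds old."""
        return [u for u, t in self.checkin.items() if t <= now - dropout_time]

room = Room(max_occupancy=3)
room.waiting = {"alice": 1, "bob": 2}
room.checkin = {"alice": 100.0, "bob": 400.0}
print(room.rank("bob"))             # 1 (second in line)
print(room.dropouts(500.0, 300.0))  # ['alice']
```

The real implementation pays Redis the network round trips precisely so that many processes can share this state; the model above only exists to make the logic easy to follow.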
Connecting to a room
When a user successfully enters a room, we call connect(), which executes the following Redis commands using Redis pipelining:
HSET [room]:participants [userId] [userData]
EXPIRE [room]:participants [PARTICIPANT_TTL]
ZREM [room]:waiting [userId]
The first command adds the user to the hash. The second ensures that the hash will expire some time in the future (so we don’t have to worry about leaking stale data). The third command possibly removes the user from the waiting list (more on this later).
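In the plain-dict model (the TTL bookkeeping drops out, since there is no real Redis key to expire), connect() reduces to two operations:

```python
def connect(participants, waiting, user_id, user_data):
    # HSET [room]:participants [userId] [userData]
    participants[user_id] = user_data
    # ZREM [room]:waiting [userId] -- remove the user from the waiting
    # list if they were on it (a no-op otherwise).
    waiting.pop(user_id, None)

participants, waiting = {}, {"carol": 3}
connect(participants, waiting, "carol", {"name": "Carol"})
print(participants)  # {'carol': {'name': 'Carol'}}
print(waiting)       # {}
```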
With all connected users added to the participant hash, we can query for the active participants:

HGETALL [room]:participants
or count the active participants:

HLEN [room]:participants
Checking the waiting list
Whenever a user attempts to enter a room or a user on the waiting list updates their waiting status, we call the check() command. Depending on the status of the user and room, the check command can follow several different paths. The following pseudocode gives the rough idea:
response = checkQuery(roomId, userId)
participantCount = response.participantCount
waitingCount = response.waitingCount
participant = response.participant
dropouts = response.dropouts
rank = response.rank

// If we have dropouts, our rank is incorrect.
// Remove them and recalculate.
if exists(dropouts)
    waitingCount, rank = removeDropouts(roomId, dropouts)

// If the user is already a participant or we don't need
// a waiting list, just let the user through.
if participant or participantCount + waitingCount < MAX_OCCUPANCY
    return "READY"

// If there are room openings and the user is close enough to
// the front of the line, give them a "ticket" and let them through.
if participantCount + rank < MAX_OCCUPANCY
    addTicket(roomId, userId)
    return "READY"

// If the user is not currently in the waiting list, find the
// score of the last user and add one to it to get the user's score.
// In either case make sure the user's checkin status is up to date.
if not exists(rank)
    lastUser = findLastUser()
    rank = addToWaitingList(roomId, userId, lastUser.score + 1)
else
    updateCheckinStatus(roomId, userId)

// The user is still waiting. Notify them of their place in line.
return "WAITING", rank
The demonstration implementation has a few extra wrinkles, but this covers all the basic ideas.
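To make the pseudocode concrete, here is a runnable single-process rendering in Python. Plain dicts stand in for the Redis hash and sorted sets, time is passed in explicitly, and every name is our own sketch rather than the demonstration project’s code:

```python
def check(state, user_id, now, max_occupancy=3, dropout_time=300.0):
    """state holds 'participants' (userId -> data), 'waiting'
    (userId -> score) and 'checkin' (userId -> last check-in time)."""
    participants = state["participants"]
    waiting = state["waiting"]
    checkin = state["checkin"]

    # Drop users whose last check-in is too old (removeDropouts()).
    for u in [u for u, t in checkin.items() if t <= now - dropout_time]:
        checkin.pop(u, None)
        waiting.pop(u, None)

    ordered = sorted(waiting, key=lambda u: (waiting[u], u))
    rank = ordered.index(user_id) if user_id in ordered else None

    # Already a participant, or no waiting list needed: let them through.
    if user_id in participants or len(participants) + len(waiting) < max_occupancy:
        return "READY", None

    # Close enough to the front to claim an open slot (addTicket()).
    if rank is not None and len(participants) + rank < max_occupancy:
        waiting[user_id] = 0
        checkin[user_id] = now
        return "READY", None

    # Otherwise join the end of the line, or just refresh the check-in.
    if rank is None:
        waiting[user_id] = max(waiting.values(), default=0) + 1
        rank = sorted(waiting, key=lambda u: (waiting[u], u)).index(user_id)
    checkin[user_id] = now
    return "WAITING", rank

state = {"participants": {"a": 1, "b": 1, "c": 1}, "waiting": {}, "checkin": {}}
print(check(state, "dave", now=1000.0))  # room is full -> ('WAITING', 0)
print(check(state, "erin", now=1001.0))  # behind dave  -> ('WAITING', 1)
state["participants"].pop("a")           # a slot opens up
print(check(state, "dave", now=1002.0))  # ('READY', None)
```

The Redis version below does the same work, but splits it into a handful of pipelined round trips so that any process in the cluster can run it against the shared state.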
The initial checkQuery() queries all of the room state relative to the user at once. It looks like this:
HLEN [room]:participants
ZCARD [room]:waiting
ZRANK [room]:waiting [userId]
HGET [room]:participants [userId]
ZRANGEBYSCORE [room]:checkin 0 [now() - DROPOUT_TIME]
That is, we check the participant hash for the room size, we check the waiting list set for the waiting list size, we check the waiting list set for the current rank of the user, we check the participant hash for the user, and we check the checkin set for the ids of users who have dropped out of the waiting list.
If we find dropouts, we can remove them and recalculate the rank with a single request:
ZREM [room]:checkin dropout1 dropout2 [etc...]
ZREM [room]:waiting dropout1 dropout2 [etc...]
ZCARD [room]:waiting
ZRANK [room]:waiting [userId]
That is, remove the dropouts from the checkin set, remove them from the waiting set, recalculate the size of the waiting list and then recalculate the user’s rank (position in line).
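In the plain-dict model, that request is equivalent to this small function (names illustrative):

```python
def remove_dropouts(waiting, checkin, dropouts, user_id):
    # ZREM on both the checkin set and the waiting set.
    for u in dropouts:
        checkin.pop(u, None)
        waiting.pop(u, None)
    # ZCARD and ZRANK: the new line length and the user's new position.
    ordered = sorted(waiting, key=lambda u: (waiting[u], u))
    rank = ordered.index(user_id) if user_id in ordered else None
    return len(waiting), rank

waiting = {"alice": 1, "bob": 2, "carol": 3}
checkin = {"alice": 100.0, "bob": 400.0, "carol": 410.0}
print(remove_dropouts(waiting, checkin, ["alice"], "carol"))  # (2, 1)
```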
If the user is coming off the waiting list, we still need to update the checkin status to make sure they don’t get dropped out between now and the next time they are checked. We’re not entirely sure they are on the waiting list at this point, so we go ahead and set their waiting score to 0 to give them a slightly better chance of getting in. This is called addTicket() and it’s implemented like this:
ZADD [room]:waiting 0 [userId]
EXPIRE [room]:waiting [WAITING_TTL]
ZADD [room]:checkin [now()] [userId]
EXPIRE [room]:checkin [CHECKIN_TTL]
That is, we add (re-add) the user with 0 score, and we add the user to the checkin set with the current time. Whenever we add something to a Redis data structure, we go ahead and set the expiration time for that data structure just to make sure it happens.
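Minus the expirations, the dict-based sketch of addTicket() is two writes:

```python
def add_ticket(waiting, checkin, user_id, now):
    # ZADD [room]:waiting 0 [userId] -- score 0 puts (or re-puts)
    # the user at the front of the line.
    waiting[user_id] = 0
    # ZADD [room]:checkin [now()] [userId] -- a fresh check-in, so the
    # user isn't treated as a dropout before they actually connect.
    checkin[user_id] = now

waiting, checkin = {"bob": 2}, {"bob": 350.0}
add_ticket(waiting, checkin, "alice", now=400.0)
print(waiting)  # {'bob': 2, 'alice': 0}
```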
If the user needs to wait and is not currently on the waiting list, we need to determine the end of the line (i.e., the current highest waiting score). findLastUser() looks like this:
ZRANGE [room]:waiting -1 -1 WITHSCORES
This finds the last entry in the waiting set (including its score). We then add 1 to it (or just use 1 if the waiting set was empty), add the user to the waiting set, make the first checkin, and calculate the user’s new rank:
ZADD [room]:waiting [score] [userId]
EXPIRE [room]:waiting [WAITING_TTL]
ZADD [room]:checkin [now()] [userId]
EXPIRE [room]:checkin [CHECKIN_TTL]
ZRANK [room]:waiting [userId]
If the user is already in the waiting list (already has a rank), we just update the checkin status:
ZADD [room]:checkin [now()] [userId]
EXPIRE [room]:checkin [CHECKIN_TTL]
And return the rank.
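The whole end-of-the-line step collapses to a few lines in the dict-based sketch (again, the names and the in-memory shortcut are ours):

```python
def add_to_waiting_list(waiting, checkin, user_id, now):
    # ZRANGE [room]:waiting -1 -1 WITHSCORES -- the score at the end of
    # the line; an empty list means the new user gets score 1.
    score = max(waiting.values(), default=0) + 1
    waiting[user_id] = score  # ZADD [room]:waiting [score] [userId]
    checkin[user_id] = now    # ZADD [room]:checkin [now()] [userId]
    # ZRANK [room]:waiting [userId] -- the user's place in line.
    return sorted(waiting, key=lambda u: (waiting[u], u)).index(user_id)

waiting, checkin = {"alice": 0, "bob": 4}, {}
print(add_to_waiting_list(waiting, checkin, "carol", now=500.0))  # 2
```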
Disconnecting from a room
When a user leaves the room by disconnecting the web socket (intentionally or otherwise), we need to remove the user from the participant hash. This keeps the hash accurate and paves the way for waiting list users to enter the room. At the moment the user disconnects, however, we don’t know if it was a network glitch, a page reload, or if the user really wants to move on. We don’t want anyone who intends to come back immediately to be relegated to the back of the waiting list. The solution is to remove the user from the participant hash and then immediately add the user to the waiting set at the front of the line (i.e. with score 0). This means the user will pass the gate check if they do return in a short amount of time, and they will drop out of the waiting list – letting someone else in – if they don’t. The disconnect command looks like this:
HDEL [room]:participants [userId]
ZADD [room]:waiting 0 [userId]
EXPIRE [room]:waiting [WAITING_TTL]
ZADD [room]:checkin [now()] [userId]
EXPIRE [room]:checkin [CHECKIN_TTL]
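In the dict-based sketch, disconnect() is the mirror image of connect():

```python
def disconnect(participants, waiting, checkin, user_id, now):
    # HDEL [room]:participants [userId]
    participants.pop(user_id, None)
    # ZADD [room]:waiting 0 [userId] -- front of the line, so a quick
    # return passes the gate check without waiting.
    waiting[user_id] = 0
    # ZADD [room]:checkin [now()] [userId] -- if the user never checks
    # in again, this entry ages out and the slot is freed.
    checkin[user_id] = now

participants = {"alice": {"name": "Alice"}}
waiting, checkin = {"bob": 1}, {"bob": 400.0}
disconnect(participants, waiting, checkin, "alice", now=450.0)
# alice now heads the line, ahead of bob.
print(sorted(waiting, key=lambda u: (waiting[u], u)))  # ['alice', 'bob']
```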
And that, plus a few [server] and [client side] hooks, is a complete multi-process, real-time waiting list implementation.
Your real-time web app is going to have an upper limit on concurrent users no matter what. If you don’t manage that limit, the experience for all users is going to degrade as you approach it. A better design caps the number of users at the highest number that still supports a great experience. An online waiting list will help manage the experience of people outside the cap. The Redis-based waiting list described above, and implemented in the demonstration project, provides a reasonable user experience without imposing any (known) scaling constraints.