On my way to work this morning my successful network connections had a latency of between 544 and 6937 milliseconds. That’s for connections over an active interface; if the interface had been inactive to save power, it might have needed an additional 10 seconds before it was available again. Apps need to be able to account for multi-second latency (at a minimum) in order to provide a good user experience.
If my app blocks the user until they successfully authenticate against some web backend then it might take 7 seconds before they get past the login screen, and 14 seconds if I have to make a second request to fetch the data to display once they log in. Wherever possible I want to avoid those kinds of sequential requests.
Instead I try to allow several requests to be in flight at once, especially requests I expect to have small payloads, where I will be limited by latency rather than bandwidth. Depending on the application this might be possible through HTTP pipelining or by allowing the queue that manages my network requests to keep several requests in flight at a time.
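To illustrate why overlapping requests matters, here is a minimal sketch in Python. A `time.sleep` stands in for round-trip latency, and a `ThreadPoolExecutor` stands in for a request queue that allows several requests in flight at once; none of these names come from the original post.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for a small, latency-bound network request:
    # the 0.2 s sleep simulates one round trip.
    time.sleep(0.2)
    return f"response for {url}"

urls = [f"https://example.com/item/{i}" for i in range(5)]

# Sequential: total time is roughly the sum of the round trips.
start = time.monotonic()
sequential = [fetch(u) for u in urls]
sequential_time = time.monotonic() - start

# Several requests in flight at once: total time is roughly a
# single round trip, because the waits overlap.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=5) as pool:
    concurrent = list(pool.map(fetch, urls))
concurrent_time = time.monotonic() - start

assert sequential == concurrent
print(f"sequential: {sequential_time:.1f}s, concurrent: {concurrent_time:.1f}s")
```

With 5 requests at 200 ms each, the sequential loop pays roughly a second of pure latency while the overlapped version pays roughly one round trip; the gap only grows as real-world latencies reach the multi-second range described above.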
With multiple requests in flight I cannot guarantee the order in which those requests will finish, so my server needs to be able to handle receiving a request to delete an object before the request to create it. On a recent application we included a sequence number with each request and maintained a server-side queue of received requests; processing paused whenever a request in the middle of the sequence was missing.
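A server-side queue like that can be sketched as a small reorder buffer. This is an illustrative Python sketch, not the actual implementation from that application: requests are held in a dictionary keyed by sequence number, and processing advances only while the next expected number has arrived.

```python
class SequencedProcessor:
    """Buffers client requests and processes them strictly in
    sequence-number order, pausing whenever a number is missing."""

    def __init__(self):
        self.next_seq = 0   # next sequence number we may process
        self.pending = {}   # seq -> request, held until its turn
        self.processed = [] # requests handled, in order

    def receive(self, seq, request):
        self.pending[seq] = request
        # Drain every consecutive request that is now available.
        while self.next_seq in self.pending:
            self.processed.append(self.pending.pop(self.next_seq))
            self.next_seq += 1

proc = SequencedProcessor()
proc.receive(0, "create object 42")
proc.receive(2, "delete object 42")  # held: seq 1 is still missing
proc.receive(1, "update object 42")  # gap filled; 1 and 2 both drain
print(proc.processed)
# → ['create object 42', 'update object 42', 'delete object 42']
```

Even though the delete arrived before the update, nothing is applied out of order: the buffer simply waits for the gap to fill.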
A final concern when dealing with high latency is that responses to my requests may arrive long after the user has moved on to another task within the app. A common pattern seems to be to allow view controllers to manage network requests related to their views and to cancel any pending requests when a view controller is destroyed or disappears. This works well in some cases but can lead to users triggering many requests for new data but never remaining on a single view controller long enough to see the results. Instead I prefer to have a (non-view) controller responsible for managing requests which update my app’s model.
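The difference between the two ownership models can be sketched as follows. This is a hedged Python illustration of the idea, not iOS code from the post; `ModelStore`, `DataController`, and `refresh` are names invented here for the example.

```python
class ModelStore:
    """Shared app model, updated by a non-view controller so that
    responses are applied even after the requesting screen is gone."""
    def __init__(self):
        self.items = {}

class DataController:
    """Owns in-flight requests independently of any view controller.
    Views may appear and disappear; completed responses still land
    in the shared model instead of being cancelled with the view."""
    def __init__(self, store):
        self.store = store
        self.in_flight = set()

    def refresh(self, item_id, fetch):
        if item_id in self.in_flight:
            return  # a request for this item is already pending
        self.in_flight.add(item_id)
        response = fetch(item_id)  # would be asynchronous in a real app
        self.in_flight.discard(item_id)
        # The model is updated regardless of which view is visible.
        self.store.items[item_id] = response

store = ModelStore()
controller = DataController(store)
controller.refresh(7, lambda i: {"id": i, "name": "example"})
print(store.items[7])
# → {'id': 7, 'name': 'example'}
```

Because the controller rather than a view owns the request, a user who taps through several screens still warms the model for each of them, instead of cancelling every request the moment they navigate away.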
Testing network latency
In fallacy #1 I mentioned using Charles to run a local proxy so I can test different network conditions on my development devices. Charles can also be configured to adjust network latency and introduce significant delays between test devices and servers.