‘Latency’ is a term you’d generally expect to hear from a network engineer, especially one working with real-time applications that are critical to the delivery of a service or product.
If you take a second to consider that statement, however, you’ll realize that it applies just as much to businesses that put applications out for customer use. The roles change slightly, but essentially there’s someone using your application, and if performance is poor, they’re going to have a diminished experience.
Now, when you think about that in more detail, it becomes more apparent why latency should be at the heart of what an app developer is doing. If you create an application for a business end-user, they’re going to interact with it, whether or not that involves apologizing to a customer for slow progress. Customers, on the other hand, don’t have the constraints of a system to work within; if they don’t like the app experience, they’re going to walk away.
Here, we’ll take a look at what latency is, and at the end-user realities that demand you take it into consideration when you next think about how you’ll deliver a project…
What Is Latency?
When data is transmitted over any connection, it’s divided into packets of information. The communication protocol your devices and programs use dictates exactly what those packets look like, but essentially, the speedier their delivery, the better the end-user experience is going to be.
High latency means a slower process and a diminished user experience, whereas low latency means a quicker, more user-friendly one.
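To put some illustrative numbers on that, here’s a minimal sketch (in Python, with made-up figures rather than measurements) of how round-trip latency compounds when an app makes several requests one after another:

```python
# Rough illustration: total wait time for a screen that needs
# several sequential request/response round trips.
# The 20 ms / 200 ms figures are illustrative, not measurements.

round_trips = 10  # e.g. auth, profile, feed, images fetched one after another

for rtt_ms in (20, 200):  # low-latency vs high-latency connection
    total_ms = round_trips * rtt_ms
    print(f"{rtt_ms} ms RTT -> user waits ~{total_ms / 1000:.1f} s")

# 20 ms RTT -> user waits ~0.2 s
# 200 ms RTT -> user waits ~2.0 s
```

The same app, doing the same work, feels ten times slower purely because each round trip costs more.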
Why Does It Happen?
Generally speaking, an internet communication protocol (for instance, TCP/IP) will test any connection between a service provider and a service user. When an installed app seeks information from your database, it sends a small packet of data to the receiving service, which then returns an ‘acknowledgement’, essentially telling the application that it’s looking in the right place.
The speed of this back-and-forth interaction is determined by the latency across the connections between the two points on a network (usually the internet), and it sets the pace for the subsequent transfer of information.
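You can get a rough feel for this yourself. The sketch below uses Python’s standard socket module to time TCP connection setup, which is essentially the packet-and-acknowledgement exchange just described; ‘example.com’ is a placeholder, so substitute your own service’s host:

```python
import socket
import time

def connection_setup_ms(host: str, port: int = 443) -> float:
    """Time the TCP connection setup (the handshake) to a host.

    This exchange is the 'acknowledgement' round trip described
    above, so its duration approximates the latency to the host.
    """
    start = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=5)
    elapsed_ms = (time.perf_counter() - start) * 1000
    sock.close()
    return elapsed_ms

# "example.com" is a placeholder host used purely for illustration.
print(f"Connection setup took {connection_setup_ms('example.com'):.1f} ms")
```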
So, what happens during that process?
Well, this is where ‘bandwidth’ and ‘throughput’ come into play. A connection’s bandwidth is the maximum amount of information it can carry, whereas throughput is the amount of information that’s actually being carried.
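A quick worked example makes the distinction concrete (all figures here are illustrative):

```python
# Bandwidth vs throughput, with illustrative numbers.
# Bandwidth: what the link *could* carry; throughput: what it *did* carry.

link_bandwidth_mbps = 100          # advertised capacity of the connection
bytes_transferred = 30_000_000     # what actually arrived
elapsed_seconds = 4.0

throughput_mbps = (bytes_transferred * 8) / elapsed_seconds / 1_000_000
utilization = throughput_mbps / link_bandwidth_mbps

print(f"Throughput: {throughput_mbps:.0f} Mbps "
      f"({utilization:.0%} of available bandwidth)")
# Throughput: 60 Mbps (60% of available bandwidth)
```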
If you’re expecting to send a lot of information (for instance, real-time video, VoIP, etc.), then you’re likely to put a strain on some of the connections you’re relying on.
‘Some’ is an important distinction here. It’s highly unlikely you’re going to strain internet service provider infrastructure, but you may well put a heavy strain on the connections closer to your end-user, especially if they’re accessing your application through home networking hardware.
It’s at these points that available bandwidth shrinks, and when a certain throughput is required to keep a service running, bottlenecks can occur. In essence, a bottleneck is a part of the connection that cannot keep up with the throughput demand, so data packets begin to be dropped.
From an end-user point of view, this will generally look and feel like a slow-running service, or, in some cases (especially where real-time information is being transmitted), a deterioration in the quality of service. Pushed to the extreme, too much data is lost and the service breaks down completely, either dropping the connection to your service or crashing the application.
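The ‘slowest hop wins’ logic is simple enough to sketch. The capacities below are illustrative, not measured, but they show why the home router is so often the culprit:

```python
# A path is only as fast as its slowest hop. Capacities are illustrative.

path_capacity_mbps = {
    "ISP backbone": 10_000,
    "local exchange": 1_000,
    "home router (Wi-Fi)": 40,   # often the weak link near the end user
}

required_mbps = 60               # e.g. a high-quality real-time video stream

bottleneck = min(path_capacity_mbps, key=path_capacity_mbps.get)
available = path_capacity_mbps[bottleneck]

if required_mbps > available:
    print(f"Bottleneck at '{bottleneck}': need {required_mbps} Mbps, "
          f"only {available} Mbps available -> expect dropped packets")
```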
Why Does It Matter?
Since we’ve already talked about reduced quality, or complete loss, of service, it’s probably already evident why latency has a big impact on the use of applications. Generally, high latency and packet loss go hand in hand, and packet loss causes errors. In the majority of cases, apps will time out waiting for information to be delivered.
Now, unless you’ve got some ingenious way of dressing this up for the user (perhaps if your application is running more than one task at a time), this is going to cause you problems, as generally your user is going to be staring at a spinning or crashing app. If the data never lands, the application never responds.
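One practical defence is to cap how long you’ll wait and fail gracefully rather than spin forever. A minimal sketch, assuming the third-party Python requests library and a hypothetical endpoint URL:

```python
import requests

# Hypothetical endpoint; the point is the pattern, not the URL.
FEED_URL = "https://api.example.com/feed"

def fetch_feed():
    try:
        # Never wait forever: cap connection setup and read time so the
        # app can show a useful message instead of spinning indefinitely.
        response = requests.get(FEED_URL, timeout=(3, 5))  # (connect, read) seconds
        response.raise_for_status()
        return response.json()
    except requests.Timeout:
        return None   # caller shows a "slow connection, retry?" UI instead of hanging
    except requests.RequestException:
        return None   # dropped connection, server error, etc.
```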
So, how do end users react?
Well, 50% of users will stop using an app if it runs slowly. What’s more, 55% of users will uninstall or stop using an app if it crashes or shows errors.
One to two seconds doesn’t feel like a long time when you’re sitting sipping your coffee, but in actual fact, it’s exactly this kind of time frame that end users are concerned with. Application load and delivery time expectations are very similar to website loading time expectations, so these are both instances where fractions of a second count. Got an app that spins for 5-10 seconds before access? If so, you can kiss goodbye to around 70% of your audience.
What Can You Do About It?
It might seem grossly unfair to think about what you can do about latency when the problem may lie outside your hands, but in the world of application development, it’s often not facts that matter; it’s appearances.
Who do you think gets the blame when an application crashes due to poor networking hardware? The ISP? The company that made the router cheaply? No, it’s the app developer, and the feedback is generally there for all to see on your chosen marketplace.
So, the answer is simply to account for the slowest runner on the team. If you know what latency figures you’re likely to be up against, you can use a tool like this one to calculate the maximum throughput you can expect without any deterioration of service.
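If you’d rather do the arithmetic yourself, the classic rule of thumb for a single TCP connection is that maximum throughput is roughly the window size divided by the round-trip time. A quick sketch with illustrative numbers:

```python
# Rule-of-thumb ceiling for a single TCP connection:
#   max throughput = window size / round-trip time
# The figures below are illustrative.

window_bytes = 65_535        # default TCP receive window without window scaling
rtt_seconds = 0.100          # 100 ms round trip

max_throughput_mbps = (window_bytes * 8) / rtt_seconds / 1_000_000
print(f"~{max_throughput_mbps:.1f} Mbps ceiling at 100 ms RTT")
# ~5.2 Mbps ceiling at 100 ms RTT
```

In other words, the higher the latency you’re designing for, the less throughput you can count on, no matter how much bandwidth is advertised.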
Unfortunately, you can’t expect people to adjust their online behavior or equipment to accommodate your service, however high its quality. In reality, if your app is being let down by issues beyond your control, you simply have to account for them, because if you don’t, someone else will, and even if their service doesn’t come close to yours on paper, a less-than-perfect service is far better than a non-existent one…