Make Sure Your App Is Always Fast
In today’s hyperconnected world, people want results fast. As mobile developers, we’re more aware of this than most. Our users don’t sit down in front of a desk. They’re on the go, running our apps while trying to walk, talk, and drive, so they expect snappy experiences. Get in, get out.
Multiple studies from Akamai, Google, and others have correlated website speed with user retention. And so far, evidence suggests people are at least as demanding of native apps. In a survey of mobile users conducted by Apigee, the top complaint about mobile applications was freezes, and over 44% of the users surveyed said they would immediately delete a slow-performing app.
Ask Facebook about the importance of fast mobile apps. When their stock bottomed out in the high teens, Mark Zuckerberg said that basing their app on HTML5 was the biggest mistake they made as a company. Why? Because it was slow. Within three weeks after the release of Facebook's new, faster native app, the application's rating had climbed from 1.5 stars to 4. Slow applications cause significant business pain. Lost users. Lost dollars.
What Could Possibly Go Wrong?
When we first started talking to developers about monitoring their apps' performance in production, the most common response was "My app is already fast."
The trouble is, as the world of mobile fragments, it's hard to deliver a consistently fast experience. How does your app perform in China on an old phone and slow network? I’m willing to bet you have no idea. It’s certainly not the same as it performs on your brand new iPhone connected to your office Wi-Fi.
Performance is completely dependent on the context in which your application runs. Here’s a quick—but certainly not complete—list of performance gotchas:
Network Latency
We're used to thinking of internet issues in terms of bandwidth limitations, but in cellular networks, latency is often the dominant factor. On a 3G network, it can take around 2.5 seconds to go from idle to connected before a single byte is transmitted. And Sprint says the average latency on their 3G network is 400ms. It doesn’t matter how fast your server processes a request if the response is slow getting to the phone.
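To see why latency dominates, here is a back-of-the-envelope sketch. The 2.5-second radio wake-up and 400ms latency are the figures quoted above; the 1 Mbps throughput and all names are illustrative assumptions, not measured values.

```java
// Rough model of a single small HTTP fetch on a 3G connection.
// 2500 ms idle-to-connected and 400 ms latency come from the article;
// 1 Mbps throughput is an illustrative assumption.
class ThreeGFetch {
    static long fetchTimeMs(long payloadBytes, boolean radioIdle) {
        long connectMs = radioIdle ? 2500 : 0;                  // radio state promotion
        long latencyMs = 400;                                   // request/response round trip
        long transferMs = payloadBytes * 8 * 1000 / 1_000_000;  // at 1 Mbps
        return connectMs + latencyMs + transferMs;
    }

    public static void main(String[] args) {
        // A 20 KB response: transfer is only 160 ms of a ~3 s total.
        System.out.println(fetchTimeMs(20_000, true)); // 3060
    }
}
```

Under these assumptions, shaving server time does almost nothing: the connection setup and round trip account for roughly 95% of the wait.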
Slow CPUs
As geeks, we often develop using the latest and greatest, but most of the world, including massive markets you would like to penetrate, gives up speed in order to achieve affordability. Our tests show that CPU-bound code on an iPod 4G takes roughly four times longer than on an iPhone 5S. On Android, the disparity is even more significant.
Memory Limits
If your app uses too much memory, it’s killed by the operating system. To the user, this looks the same as a null pointer exception. Even if your code is squeaky clean without a single memory leak, your memory high-water mark may lead to crashes on less powerful, but popular, phones in important markets.
Battery Drain
Batteries are one of the first things to get downsized when manufacturers are trying to save space and money. But that won’t make users more understanding when your app drains all their power.
Built for Mobile
Let's say for a moment you're convinced that you need a fast application, and it should be fast everywhere, not just for you when you’re running your app through Apple’s Instruments CPU profiler. What is a developer to do? Right now, you have two basic options:
Option 1: Monitor Your Servers
A fast API means a fast app. Right? This is a web developer's mentality, and if you’re a mobile developer, it’s wrong.
Web apps are thin clients: nearly all of the work happens on the server, so server metrics tell most of the story. Native mobile apps, on the other hand, are thick clients. They have large, multi-threaded code bases. They maintain state. And they have to perform on a huge variety of handsets, operating systems, and networks. Your server team can still screw up the user's experience, but there's a whole new set of issues that aren't going to show up in your server alerts.
Option 2: QA the Hell Out of Your App
Fine. You get it. You need to make sure you test your apps in a bunch of real-world scenarios. So you’re going to build a fancy QA lab with 100 different handsets. Then you’re going to enclose them in a Faraday cage so you can simulate adverse network conditions, and hire an army of QA folks to run each new release through every possible action in your application.
I’ll admit, if you can afford it, this isn’t a bad idea. But the combinations quickly become overwhelming. Imagine you care about the top 100 phones, 10 network speeds, 20 different foreign markets with different latencies, and 5 different OS versions. Now imagine you have 50 distinct actions in your app. Ignoring the interdependency between the actions and varying user data, you have 5 million combinations to test. Ouch!
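Multiplying out those hypothetical dimensions makes the scale of the test matrix concrete (the numbers are the illustrative ones above, not real market data):

```java
// Size of the full test matrix: the product of every dimension you
// care about. Dimensions here are the hypothetical ones from the text.
class TestMatrix {
    static long combinations(long... dims) {
        long total = 1;
        for (long d : dims) total *= d;
        return total;
    }

    public static void main(String[] args) {
        // phones x network speeds x markets x OS versions x app actions
        System.out.println(combinations(100, 10, 20, 5, 50)); // 5000000
    }
}
```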
This is a classic QA problem. Quality assurance means doing your best to test the most common use cases. But it’s never meant to be a replacement for monitoring. It’s simply impossible to stay on top of all the possible failure cases.
A New Type of Tool
We need a new toolset, built from the ground up to specifically measure the performance issues of mobile apps. What metrics should these new tools capture?
Frame Rate
Nothing annoys a user more than a frozen screen. By capturing each time your app takes longer than a threshold to render a frame, you can get an idea of how often users see a noticeable freeze.
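A minimal sketch of that idea: feed successive frame timestamps into a counter (on iOS they might come from a CADisplayLink, on Android from a Choreographer callback) and count gaps above a threshold. All names here are illustrative, not any SDK's real API.

```java
// Counts frame gaps longer than a threshold as "freezes".
// Timestamps would come from the platform's per-frame callback.
class FreezeCounter {
    private final long thresholdMs;
    private long lastFrameMs = -1;
    private int freezes = 0;

    FreezeCounter(long thresholdMs) { this.thresholdMs = thresholdMs; }

    void onFrame(long nowMs) {
        if (lastFrameMs >= 0 && nowMs - lastFrameMs > thresholdMs) {
            freezes++; // the user saw a visible stall before this frame
        }
        lastFrameMs = nowMs;
    }

    int freezeCount() { return freezes; }

    public static void main(String[] args) {
        FreezeCounter c = new FreezeCounter(100); // 100 ms == visible stall
        long[] frames = {0, 16, 33, 450, 466};    // one 417 ms gap
        for (long t : frames) c.onFrame(t);
        System.out.println(c.freezeCount()); // 1
    }
}
```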
Spinner Time
If you follow good UI/UX practices, anytime you need to do work that’s going to take more than a few milliseconds, you should do it in the background and throw up a spinner. But even if you are on top of your threading, users still have limited patience.
After 1 second, users have a mental context switch, and after 10 seconds, users abandon their task. If you capture each time you show a spinner, you have a good generic indicator of how long the typical user is waiting on your app.
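A sketch of what recording spinner time could look like, using the 1-second threshold mentioned above. The class and method names are invented for illustration; you would call the hooks wherever your app shows and hides its loading indicator.

```java
import java.util.ArrayList;
import java.util.List;

// Records how long each loading spinner stays on screen and reports
// what fraction of waits were long enough to break the user's flow.
class SpinnerTimer {
    private final List<Long> durationsMs = new ArrayList<>();
    private long shownAtMs = -1;

    void onSpinnerShown(long nowMs) { shownAtMs = nowMs; }

    void onSpinnerHidden(long nowMs) {
        if (shownAtMs >= 0) {
            durationsMs.add(nowMs - shownAtMs);
            shownAtMs = -1;
        }
    }

    // Share of waits over 1 s, the point where users mentally context-switch.
    double slowWaitRatio() {
        if (durationsMs.isEmpty()) return 0;
        long slow = durationsMs.stream().filter(d -> d > 1000).count();
        return (double) slow / durationsMs.size();
    }

    public static void main(String[] args) {
        SpinnerTimer t = new SpinnerTimer();
        t.onSpinnerShown(0);    t.onSpinnerHidden(300);  // 300 ms, fine
        t.onSpinnerShown(1000); t.onSpinnerHidden(2500); // 1.5 s, slow
        System.out.println(t.slowWaitRatio()); // 0.5
    }
}
```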
Memory Use
Memory bugs are one of the hardest things to track down, especially since the Out of Memory killer on iOS doesn’t result in a stack trace. It’s too expensive to track every allocation, but recording resident memory on iOS or VM Heap use on Android are good, low overhead measurements.
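The low-overhead approach amounts to periodic sampling of a high-water mark. The sketch below uses the JVM heap purely as a stand-in for the real platform calls (resident size via `task_info()` on iOS, `Debug.getMemoryInfo()` on Android); the sampling pattern is the point, not the specific API.

```java
// Tracks the highest memory use seen so far. On a real device the
// sample() body would read resident memory from a platform API; the
// JVM heap here is an illustrative stand-in.
class MemoryWatermark {
    private long highWaterBytes = 0;

    void sample() {
        Runtime rt = Runtime.getRuntime();
        long usedBytes = rt.totalMemory() - rt.freeMemory();
        if (usedBytes > highWaterBytes) highWaterBytes = usedBytes;
    }

    long highWaterMark() { return highWaterBytes; }

    public static void main(String[] args) {
        MemoryWatermark w = new MemoryWatermark();
        w.sample(); // call this on a timer, e.g. once per second
        System.out.println(w.highWaterMark() + " bytes");
    }
}
```

Sampling once a second costs almost nothing, yet the resulting high-water mark tells you whether your app is flirting with the OS kill threshold on low-memory devices.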
Network Latency, Download Time, and Bandwidth
Latency and bandwidth are both highly variable on cellular networks, and play a key role in the user experience. For each API request, you can record how long it takes to get the initial response (latency), how long it takes to get the full response (download time), and bytes downloaded (bytes/download time equals bandwidth).
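Those three measurements fall directly out of three timestamps and a byte count. A minimal sketch, with invented names, matching the definitions above:

```java
// Derives latency, download time, and bandwidth for one API request
// from the timestamps the networking layer can observe.
class RequestMetrics {
    final long latencyMs;       // request sent -> first byte received
    final long downloadMs;      // request sent -> last byte received
    final double bandwidthKBps; // bytes per ms == kilobytes per second

    RequestMetrics(long sentMs, long firstByteMs, long lastByteMs, long bytes) {
        latencyMs = firstByteMs - sentMs;
        downloadMs = lastByteMs - sentMs;
        bandwidthKBps = downloadMs > 0 ? bytes / (double) downloadMs : 0;
    }

    public static void main(String[] args) {
        // 50 KB response: first byte after 400 ms, last byte after 1400 ms.
        RequestMetrics m = new RequestMetrics(0, 400, 1400, 50_000);
        System.out.println(m.latencyMs + " ms latency, "
                + m.downloadMs + " ms download, "
                + m.bandwidthKBps + " KB/s");
    }
}
```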
Battery Use
One of the few reasons I uninstall apps is high battery use. There are obvious battery sucks, like using the device's GPS, but there are other unexpected gotchas, like activating the wireless antenna too often. Both iOS and Android offer APIs for monitoring battery charge levels.
Context
In mobile, context is everything. When something goes wrong, you should, at a minimum, know the application version, location, carrier network, operating system version, and device.
Introducing the Pulse.io SDK
If you're ambitious, you may have some homegrown performance instrumentation in your application. You probably have some basic timers for key actions in your app, then phone home the data via either a log or a specialized packet of JSON.
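A homegrown setup of that kind usually boils down to something like the sketch below: time a named action, then phone home a small JSON payload. Everything here (class, field names, the idea of an endpoint) is invented for illustration.

```java
// The kind of basic homegrown instrumentation described above: wrap a
// key action in a timer and produce a JSON payload to send home.
class ActionTimer {
    private final String action;
    private final long startMs;

    ActionTimer(String action, long startMs) {
        this.action = action;
        this.startMs = startMs;
    }

    // In a real app you would POST the returned payload to your
    // collection endpoint instead of just returning it.
    String stop(long endMs) {
        long elapsed = endMs - startMs;
        return String.format("{\"action\":\"%s\",\"elapsed_ms\":%d}",
                action, elapsed);
    }

    public static void main(String[] args) {
        ActionTimer t = new ActionTimer("login", 100);
        System.out.println(t.stop(350)); // {"action":"login","elapsed_ms":250}
    }
}
```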
If so, pat yourself on the back. You’ve done far more than most. But there are many drawbacks to this approach. What if you have performance problems in unexpected places in your app? If you have a problem, how do you know what caused it? Was it a long method call, a slow API request, or too much data on the wire?
And once you get the raw performance data, how do you analyze and visualize it? If you write a one-off script, how often do you run it? And, God forbid, what happens if your performance instrumentation causes performance issues?
At Pulse.io, we've been hard at work for the past year building an SDK chock-full of monitoring goodness. We capture all of the metrics listed above while maintaining a very light footprint. We consume less than 3% of CPU, batch send our data to avoid turning on the radio, and limit our memory use by discarding low priority information.
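Batching is the key trick for keeping network overhead down: buffer events and flush them in bulk so the radio wakes up once per batch instead of once per event. The sketch below is a generic illustration of that pattern, not Pulse.io's implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Buffers events and flushes them in batches, so the cellular radio is
// woken once per batch rather than once per event.
class EventBatcher {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();
    private int flushes = 0;

    EventBatcher(int batchSize) { this.batchSize = batchSize; }

    void record(String event) {
        buffer.add(event);
        if (buffer.size() >= batchSize) flush();
    }

    private void flush() {
        // Real code would send the buffered events over the network here.
        buffer.clear();
        flushes++;
    }

    int flushCount() { return flushes; }

    public static void main(String[] args) {
        EventBatcher b = new EventBatcher(10);
        for (int i = 0; i < 25; i++) b.record("event-" + i);
        System.out.println(b.flushCount() + " network wakeups for 25 events");
    }
}
```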
The best part about Pulse.io is that it captures all of this stuff automagically. It’s one thing to manually instrument your app with your homegrown solution. It’s another thing entirely to convince every engineer on your team to do so, and to apply the same instrumentation methodology consistently over time.
With Pulse.io, you just drop in the SDK and it automatically finds all the user interactions within your app and records when those interactions cause bad behavior like screen freezes or long asynchronous tasks.
Getting Started Monitoring Performance
Installing Pulse.io will take you less time than reading this article. We're currently in private beta, but if you shoot us an email at beta[at]pulse[dot]io and mention you read about us on Tuts+, we'll set you up with an account.
Once you’ve downloaded the SDK, installation is super simple. Drop the SDK into your app, add a few dependencies, and call [PulseSDK monitor:@"YOUR_APP_KEY"] within your app's main.m. You're done.
Hopefully I've convinced you of three things:
- Slow apps lose users and therefore dollars.
- Fast apps in development can be slow apps in production.
- Existing tools don't do a good job monitoring real world app performance.
I encourage you to investigate your own app's real world performance. Give Pulse.io a try. There's not much to lose and a whole lot of performance to gain.