
Performance optimisation for the mobile Web

Deepak Pathania
28th Dec, 2017

We are all too aware that interacting with web-based content on a phone or tablet ranges from barely acceptable to horrendous. Pages often load slowly, render erratically and behave unpredictably.

Ads and related tracking techniques not only add bulk and additional requests to an already bandwidth- and CPU-constrained device; pages often behave as though they're possessed, convulsing around multiple document.write() calls. While most responsively designed websites look fine on screens of all sizes, they often carry a lot of the baggage of desktop websites when viewed on mobile.

As a result, the bounce rate is extremely high, driven by long load times and render-blocking resources. A lot of work has gone into studying user behaviour to devise countermeasures and improve the user experience, and today I would like to discuss some of these ongoing efforts and strategies.


The first thing we need to understand is that generic optimisation strategies that work well for desktops may be completely useless on the mobile web, due to the huge difference in context and user expectations.

A mobile user expects to get some information quickly and doesn't really care about the fancy animations that 'woo' desktop users, for lack of a better term. Understanding the intent of your users plays a major role in devising the ideal optimisation strategy for your website.

A lot of people believe that responsive = optimised, which is a misconception. When a website simply resizes based on screen size it certainly becomes more usable, but we are not taking into consideration the possible changes in the user's environment. For example, a grid layout for text looks very readable on desktop, but when it stacks vertically on mobile it may become too much to scroll through.

Similarly, differences in engagement, trade-offs in device capabilities, prioritising CTAs, handling navigation and providing app-like functionality all lead to the need for a separate mobile optimisation strategy.

One obvious solution that comes to most people's minds is to build a separate mobile website, or to port to modern options like PWAs and AMP. While these are excellent options (I'll cover them in detail in a separate article), they might not suit every use case, so it is only natural to talk about some strategies that can help you achieve better results without porting.

Let’s start by discussing a few of them.

  1. Libraries - know the tradeoffs. - Understand the cost of using libraries; the trade-off between performance and a faster development process is crucial to weigh. There are tons of resources available that help you eliminate heavy libraries from your workflow.
    For example, jQuery is something almost all of us use a lot, but there is an awesome website called You Might Not Need jQuery that shows how common use cases can be handled in vanilla JS, saving you an additional network request.
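    A couple of the swaps it suggests, sketched in plain JS (the selectors and URL below are hypothetical):

    // jQuery: $('.card').addClass('active');
    document.querySelectorAll('.card').forEach(function (el) { // NodeList.forEach works in modern browsers
      el.classList.add('active');
    });

    // jQuery: $.getJSON('/api/items', callback);
    fetch('/api/items')
      .then(function (response) { return response.json(); })
      .then(function (items) { console.log(items); });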

  2. Concat, minify, uglify, automate - I believe a lot of us take these for granted without realising how important they are. To briefly summarise: concatenation appends all of your static files into one large file; minification removes unnecessary whitespace and redundant/optional tokens like curly braces and semicolons, and can largely be undone with a code formatter; uglification transforms the code into an "unreadable" form by renaming variables and functions to hide the original intent.
    A lot of module bundlers come with these features built in, webpack being one of my favourites.
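    A minimal webpack config sketch (webpack 3 era, matching the time of writing; newer versions minify by default in production mode):

    // webpack.config.js
    const path = require('path');
    const webpack = require('webpack');

    module.exports = {
      entry: './src/index.js',                // everything reachable from here is concatenated into one bundle
      output: {
        filename: 'bundle.js',
        path: path.resolve(__dirname, 'dist')
      },
      plugins: [
        new webpack.optimize.UglifyJsPlugin() // minifies and mangles (uglifies) the bundle
      ]
    };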

  3. Fonts matter, but not that much ¯\_(ツ)_/¯ - By deferring font loading, the browser displays copy in whatever font it has available to begin with. This means the user always gets the content first.
    Deferring font loading can be achieved by separating out the part of the CSS that links to the font files and loading it after the rest of the page has rendered.
    Note, however, that the text may briefly flash and change when the web font loads.
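    A minimal sketch, assuming the @font-face rules live in a separate stylesheet at a hypothetical path /css/fonts.css:

    // Inject the font stylesheet only after the page has finished loading,
    // so text is painted immediately in a fallback font.
    window.addEventListener('load', function () {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '/css/fonts.css';
      document.head.appendChild(link);
    });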

  4. Prioritise above the fold. - Separate out the CSS used to render the visible part of the page (above the fold) and deliver it first; defer the rest of the styles until after the page has rendered.
    Inlining this top CSS directly into the page head does the job, but bear in mind it will not be cached like the rest of the CSS file, so it must be restricted to key content.
    A variety of tools can help you determine which CSS to separate, including Scott Jehl's Critical CSS and Paul Kinlan's bookmarklet.
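    A minimal sketch (the stylesheet path is a placeholder; the preload/onload trick is one common way to defer the full stylesheet):

    <head>
      <style>
        /* Critical, above-the-fold rules extracted with one of the tools above */
        header { background: #fff; height: 56px; }
      </style>
      <link rel="preload" href="/css/main.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
      <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
    </head>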

  5. Optimise images - Photos with rich colour work better as JPEG files, whereas flat-colour graphics should be PNG8. Gradients and more complex icons work best as PNG24/32.
    Always compress your images - tools like TinyPNG or ImageOptim do the job well.
    You could also make use of the HTML5 <picture> element and the srcset and sizes attributes for images. These additions let you define responsive images directly in the HTML, so the browser only downloads the image that matches the given conditions.
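    For example (file names and breakpoints below are placeholders):

    <picture>
      <source media="(max-width: 600px)" srcset="hero-small.jpg">
      <img src="hero-large.jpg"
           srcset="hero-small.jpg 400w, hero-medium.jpg 800w, hero-large.jpg 1200w"
           sizes="(max-width: 600px) 100vw, 50vw"
           alt="Hero">
    </picture>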

    Data URLs - Instead of linking to an external image file, image data can be converted into a base64 (or ASCII) encoded string and embedded directly into the CSS or HTML file. Plenty of simple online conversion tools are available. Data URLs are helpful because they save HTTP requests and can deliver small files more quickly.
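    For example (the base64 string below is truncated):

    .icon-search {
      /* Tiny icon embedded directly in the stylesheet, saving one HTTP request */
      background-image: url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...");
    }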

  6. Load as they scroll - Identify what's absolutely required to render the page initially; the rest of the content and components can wait. JavaScript is an ideal candidate for splitting around the onload event. Hidden content, images below the fold, and interactions that only matter after the initial render are other good candidates. Post-loaded scripts should be treated as a progressive enhancement - without them the page should still work.
    Code splitting allows you to split your code into various bundles which can then be loaded on demand or in parallel. It can be used to achieve smaller bundles and control resource load prioritization which, if used correctly, can have a major impact on load time.
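    A minimal sketch using dynamic import(), which bundlers like webpack split into a separate chunk (the element id and module name are hypothetical):

    // charts.js only gets fetched when the user actually asks for it.
    document.querySelector('#show-charts').addEventListener('click', function () {
      import('./charts.js')
        .then(function (charts) { charts.render(); })
        .catch(function (err) { console.error('Chunk failed to load', err); });
    });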

  7. Enhancements, not requirements - This idea is part of progressive enhancement, where web technologies are layered to provide the best experience across environments.
    Here, define a basic set of features that must work on the least capable browsers, and only add further complexity after testing whether browsers can handle it.
    Detecting whether the browser can support HTML5 and CSS features helps us write conditional code to cover all eventualities: enhancing and adding features when supported while staying safe and simple for devices and browsers that do not.
    More recently, CSS has gained its own native feature detection mechanism: the @supports at-rule. It works in a similar manner to media queries, except that instead of selectively applying CSS depending on a media feature like resolution, screen width or aspect ratio, it selectively applies CSS depending on whether a CSS feature is supported.
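    A minimal sketch (class names are placeholders):

    /* Safe baseline for every browser */
    .gallery-item { float: left; width: 50%; }

    /* Enhanced layout only where grid is supported */
    @supports (display: grid) {
      .gallery { display: grid; grid-template-columns: repeat(2, 1fr); }
      .gallery-item { float: none; width: auto; }
    }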

  8. Tailor, whenever possible - Requiring an entire library just to use a single feature is bad practice; if you only need a certain function, pull in just that part to reduce network load.
    Tree shaking is the term commonly used in the JavaScript world for dead-code elimination. It relies on the static structure of ES2015 module syntax, i.e. import and export.
    webpack's UglifyJSPlugin also supports dead-code removal and can be integrated into the workflow quite easily.
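    A minimal sketch, using lodash-es as the example library:

    // import _ from 'lodash';             // pulls in the whole library
    import { debounce } from 'lodash-es';  // lets the bundler tree-shake everything else away

    window.addEventListener('resize', debounce(function () {
      // recalculate layout here
    }, 200));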

While we said at the beginning that generic optimisation strategies might not work well in a mobile context, some of them don't really hurt and can also help you achieve better performance. Let us quickly look at some of them as well.

  1. Caching - back to basics. - Dynamic web pages require multiple database queries and valuable time to process and format data before rendering it to browser-legible HTML. It's recommended to cache content previously rendered for a given device: for returning visitors, instead of processing everything from scratch, the server checks the cache and only sends updates. Use server configuration (like an .htaccess file) to instruct the browser which types of content to store and how long to keep copies.
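    A minimal .htaccess sketch using Apache's mod_expires (the lifetimes are examples; pick values that match how often your assets change):

    <IfModule mod_expires.c>
      ExpiresActive On
      ExpiresByType image/jpeg "access plus 1 month"
      ExpiresByType image/png "access plus 1 month"
      ExpiresByType text/css "access plus 1 week"
      ExpiresByType application/javascript "access plus 1 week"
    </IfModule>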

  2. KPI goals and load impact testing - Define key performance indicator (KPI) goals: the milestone metrics that indicate project success, based on business objectives. Given their importance, performance-related goals should feature among them. Setting and adhering to a strict performance budget, a target for the final website's speed and size, is always a good idea. Load test your site frequently to see what might be causing bottlenecks.

  3. Enable gzip compression. - Gzip compresses web pages, CSS and JavaScript at the server level before sending them over to the browser.

    1. Apache - You can enable compression by modifying your .htaccess file.

    2. Nginx - You can enable compression by modifying your nginx.conf file.
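    Minimal sketches for both (module availability may vary by host):

    # Apache (.htaccess), using mod_deflate:
    <IfModule mod_deflate.c>
      AddOutputFilterByType DEFLATE text/html text/css application/javascript
    </IfModule>

    # Nginx (nginx.conf):
    gzip on;
    gzip_types text/css application/javascript;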

  4. Redirects - don't. - Redirects are performance killers; avoid them whenever possible. A redirect adds extra round trip time (RTT) and can therefore quickly double the time required to load the initial HTML document before the browser even starts to load other assets.

  5. CDNs - Yay or nay?  - You can improve asset loading by using a CDN like CloudFlare alongside your usual hosting service.
    Here, static content (like images, fonts and CSS) is stored on a network of global servers.
    Every time a user requests this content, the CDN detects their location and delivers assets from the nearest server, which reduces latency. It increases speed by allowing the main server to focus on delivering the application instead of serving static files.

  6. Hotlink protection - Hotlink protection means restricting requests based on the HTTP referrer in order to prevent others from embedding your assets on their websites. It will save you bandwidth by prohibiting other sites from displaying your images.

Example: your site URL is www.domain.com. To stop other sites from hotlinking your images and to serve a replacement image called donotsteal.jpg from an image host instead, modify your .htaccess file.
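A minimal sketch using mod_rewrite (the replacement image's host URL below is a placeholder):

RewriteEngine On
# Allow empty referrers (direct requests, some proxies) and your own domain
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?domain\.com [NC]
# Everything else gets the replacement image instead
RewriteRule \.(jpe?g|png|gif)$ https://imagehost.example/donotsteal.jpg [R,L]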

  7. Prefetching and preconnect. - Domain name prefetching is a good way to resolve domain names before a user actually follows a link. Here is an example of how to implement it in the head of your HTML:
    <link rel="dns-prefetch" href="//www.example.com">

    Preconnect allows the browser to set up early connections before an HTTP request is actually sent to the server. Steps such as the DNS lookup, TCP handshake and TLS negotiation can be completed beforehand, eliminating round-trip latency for those connections and saving time for users.
    The example below shows what it looks like to enable preconnect for the zone alias link for KeyCDN.
    <link href='https://cdn.keycdn.com' rel='preconnect' crossorigin>

  8. Minimise render blocking resources. - When analysing the speed of your web pages, you always need to consider what might be blocking the DOM and causing delays in your page load times. These render-blocking resources include HTML, CSS (which can include web fonts) and JavaScript.
    Async allows the script to be downloaded in the background without blocking. Then, the moment it finishes downloading, rendering is blocked and that script executes. Render resumes when the script has executed.
    In a loading waterfall, the blue DOMContentLoaded line is what separates render-blocking resources from non-blocking ones.
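    A minimal sketch (the script path is a placeholder):

    <!-- Blocks parsing while it downloads and executes: -->
    <script src="/js/widgets.js"></script>
    <!-- Downloads in the background; only its execution blocks rendering: -->
    <script async src="/js/widgets.js"></script>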

Well, congratulations if you have made it this far. That was a lot of information to digest in a single article, but I really hope you learnt something valuable from it. Feel free to reach out to me on my social handles if you would like to discuss anything or have any doubts. Cheers.


Deepak Pathania

Blog Author
Product Engineer at Postman. Seasoned hacker, mentor, and tech speaker. A full stack web developer with a focus on writing clean and scalable code. Open source enthusiast, loves everything javascript.
