NPM report on the State of Major JavaScript Frontend Frameworks

On January 9, npm (Node Package Manager) released a report on the state of JavaScript frameworks in 2017. "The JavaScript community is growing at a rate unprecedented in the history of programming languages, and the npm registry is growing right along with it," said Laurie Voss, npm co-founder and COO. "As the central hub where developers distribute and discover JavaScript code, we're able to see trends in the applications developers write and the tools they use." The report analyses download trends from November 2016 to November 2017.

According to the report, React leads the JavaScript framework pack for web development and has gained massive adoption across desktop, web, and mobile development. React usage grew 500% relative to all other npm registry downloads. Preact, a smaller and faster alternative to React, also saw growth of 145 percent.

Angular is not expected to grow much, though it continues to gain popularity: it accounted for only 0.008% of downloads from the npm registry. "A package with this growth curve might not ordinarily seem a good choice for a new project, but Google's enormous resources and continued backing mean it can be depended upon to stick around," according to the report.

Though Vue is not as popular as Angular and Ember, the report states that its popularity is expected to grow faster than any other tool's.

Finally, the report shows that Webpack has become the dominant way to build web applications: it bundles React code and transforms JavaScript into code the browser can consume. Since Webpack usage grew faster than React's in 2017, npm believes developers are now using Webpack to build other kinds of apps too.

Source: NPM Official Blog

Understanding the “this” Keyword in JavaScript

In this article, we are going to learn about the JavaScript keyword "this" and how its value is assigned in different scenarios. The best way to digest the content of this article is to quickly execute the code snippets in your browser's console. Follow these steps to launch the console in Chrome:

Open a new tab in Chrome

Right-click on the page and select "Inspect element" from the context menu

Go to the Console panel

Start executing JavaScript code

Objects are the basic building blocks of JavaScript, and there is one special object available: the "this" object. The value of "this" is visible at every line of JavaScript execution, and it is decided based on how the code is being executed. Before getting started with "this", we need to understand a little about the JavaScript runtime environment and how JavaScript code is executed.

JavaScript Interpreter and Execution Context

JavaScript is a scripting language, which means there is no compilation step in code execution: the interpreter reads the code and executes it line by line. The environment (or scope) in which a line is being executed is known as its "Execution Context". The JavaScript runtime maintains a stack of these execution contexts, and the execution context at the top of the stack is the one currently being executed. The object that "this" refers to changes every time the execution context changes.

"this" refers to the global object

By default, the execution context is global, which means that if code is being executed as part of a simple function call, "this" refers to the global object. The "window" object is the global object in a browser; in a Node.js environment, a special object, "global", is the value of "this". The same applies inside an Immediately Invoked Function Expression (IIFE).

Note: if strict mode is enabled for a function, the value of "this" will be "undefined", because in strict mode the global object is not substituted for a missing receiver.

"this" refers to a new instance

When a function is invoked with the "new" keyword, the function is known as a constructor function and returns a new instance. In such cases, the value of "this" refers to the newly created instance. In the case of person.displayName(), "this" refers to the new instance person; in the case of person2.displayName(), "this" refers to person2, which is a different instance of Person.

"this" refers to the invoker object (parent object)

In JavaScript, a property of an object can be a method or a simple value. When an object's method is invoked, "this" refers to the object that contains the method being invoked. This is where the value of "this" can get confusing: the function definition can be identical, but when it is called as a simple function call, "this" refers to the global object, and when the same definition is invoked as an object's method, "this" refers to the parent object. So the value of "this" depends on how a function is invoked as well.

"this" with the call and apply methods

A function in JavaScript is also a special type of object, and every function has call, bind, and apply methods. These methods can be used to set a custom value of "this" in the execution context of the function. The only difference between call and apply is the way arguments are passed: in the case of apply, the second argument is an array of arguments, whereas in the case of call, arguments are passed individually.
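A combined sketch of all of these cases, runnable in the browser console. The names Person, displayName, person, person2, and foo mirror the examples referenced above; the exact bodies are my reconstruction:

```javascript
// 1. Simple function call: "this" is the global object (window in a browser).
function foo() {
  console.log(this === window); // true in non-strict browser code
}
foo();

// In strict mode, "this" is undefined for a plain function call.
function strictFoo() {
  'use strict';
  console.log(this); // undefined
}
strictFoo();

// 2. Constructor call with "new": "this" is the freshly created instance.
function Person(fn, ln) {
  this.first_name = fn;
  this.last_name = ln;
  this.displayName = function() {
    console.log(`Name: ${this.first_name} ${this.last_name}`);
  };
}
const person = new Person('John', 'Reed');
person.displayName();  // Name: John Reed  -- "this" is person
const person2 = new Person('Paul', 'Adams');
person2.displayName(); // Name: Paul Adams -- "this" is person2

// 3. Method call: "this" is the object the method is called on.
const user = {
  count: 10,
  foo: function() { console.log(this === window); }
};
user.foo();            // false -- "this" is user
const fn = user.foo;
fn();                  // true  -- same function, but a plain call again

// 4. call/apply: "this" is whatever you pass as the first argument.
function greet(greeting, punctuation) {
  console.log(`${greeting}, ${this.first_name}${punctuation}`);
}
greet.call(person, 'Hello', '!');   // arguments passed individually
greet.apply(person2, ['Hi', '?']);  // arguments passed as an array
```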
"this" with the bind method

The bind method returns a new function whose "this" refers to the first argument passed.

"this" with fat arrow functions

ES6 introduced a new way to define a function:

```javascript
let displayName = (fn, ln) => {
  console.log(`Name: ${fn} ${ln}`);
};
```

When a fat arrow is used, it doesn't create a new value for "this": "this" keeps referring to the same object it refers to outside the function.

Let's look at one more example to test our knowledge of "this": when a callback is passed to another function and invoked as a simple function call inside it, "this" refers to the global window object inside the callback's execution context.

Summary

So now you can figure out the value of "this" by following these simple rules:

By default, "this" refers to the global object: "global" in Node.js and the "window" object in a browser.

When a method is called as a property of an object, "this" refers to the parent object.

When a function is called with the "new" operator, "this" refers to the newly created instance.

When a function is called using the call or apply method, "this" refers to the value passed as the first argument.

As you have seen, the value of "this" can sometimes be confusing, but the rules above can help you reason it out. A short sketch of the bind and arrow-function cases follows.
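A minimal sketch of the two remaining cases, continuing the Person example from above (the timer object is my own illustrative placeholder):

```javascript
// bind returns a NEW function permanently tied to the given "this".
function Person(fn, ln) {
  this.first_name = fn;
  this.last_name = ln;
}
function displayName() {
  console.log(`Name: ${this.first_name} ${this.last_name}`);
}
const person = new Person('John', 'Reed');
const boundDisplay = displayName.bind(person);
boundDisplay(); // Name: John Reed -- "this" is locked to person

// A fat arrow function does not create its own "this"; it keeps the
// "this" of the scope it was defined in.
const timer = {
  seconds: 0,
  start: function() {
    setInterval(() => {
      // "this" here is still `timer`, because the arrow function
      // inherited it from start(). A plain function callback would
      // have had "this" pointing at the global object instead.
      this.seconds++;
    }, 1000);
  }
};
timer.start();
```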

Netatmo's Smart Home Bot now uses Artificial Intelligence to help manage connected devices

Recently, the French company Netatmo announced that it is adding one more way to control the smart objects in your house: using a chatbot in Messenger, users can now talk to and control all their connected devices. The feature is currently available in English, with more languages coming later this year.

The idea of the Smart Home Bot evolved from Netatmo's new "with Netatmo" program. Netatmo devices are already compatible with various virtual assistants, including Alexa, Siri, and Google Assistant. At CES, Netatmo unveiled a few of the newest "with Netatmo" devices, including smart lights, blinds, and radiators set to debut in 2018. Netatmo's own products, or any of the "with Netatmo" devices, can be controlled through the Smart Home Bot via Messenger. The bot uses artificial intelligence algorithms and natural language processing to decipher the text commands users send to their smart home devices. Users can type simple queries such as "turn on the lights in the living room" or "adjust the temperature in the bedroom to 72 degrees", but Netatmo has also designed the chatbot to handle more complex queries, such as "who's at home right now" and "what's the weather like right now".

The company is also set to launch connected radiators with Groupe Muller, called Intuiv with Netatmo. This smart heating device automatically adjusts the temperature based on user habits; Netatmo has developed a connected module compatible with many different Groupe Muller radiators.

Source: Netatmo Official Blog

Samsung Delivers Vision for Open and Intelligent IoT Experiences to Simplify Everyday Life

At the Consumer Electronics Show (CES) 2018, Samsung Electronics America, Inc. showcased its vision and strategy for IoT experiences. Samsung has partnered with the Open Connectivity Foundation (OCF), the largest IoT standardization body in the world, which has already certified Samsung's ARTIK chip, air conditioner, and Family Hub refrigerator against the interoperability criteria required for IoT.

In spring 2018, Samsung will move all its IoT applications into the SmartThings app so that any SmartThings-enabled device can be connected to, or controlled directly from, the phone. The company also announced that it will connect HARMAN Ignite to the SmartThings Cloud, taking the IoT experience to the car. This will let consumers control their connected home from the car and vice versa. "At Samsung, we believe IoT should be as easy as flipping a switch. With the new products and services announced today, we're making IoT easier and more seamless," said Hyunsuk (HS) Kim, President, Head of Samsung's Consumer Electronics Division and Samsung Research.

Samsung also unveiled its increased investment in breakthrough technologies: the company has spent more than $14 billion on R&D and increased investments in the form of Samsung NEXT. As part of the newly integrated Samsung Research unit, the company has also created a new AI Center. With Bixby, Samsung is working to bring its personalized intelligence service to more devices, and to make everyday tasks easier it is introducing Smart TVs and new Family Hub refrigerators with voice control via Bixby. Samsung has also integrated its Knox security technology into its connected devices, including Smart TVs, mobile products, and smart appliances; Knox is integrated with a hardware security system and firmware updates so that devices remain protected.

Source: Samsung Official Blog

An Introduction to Progressive Web Apps (PWAs)

Mobile usage is increasing continuously, and companies have started focusing more on the development of mobile-friendly sites to ensure an optimal user experience. It was in 2015 that developers at Google created Progressive Web Apps (PWAs). At Google I/O 2016, Alex Russell, a software engineer at Google, stated that Progressive Web Apps "blur the line between web content and apps, but they keep the strengths of the web".

Developers are continuously searching for easier, more conventional ways to develop and deploy apps across the web and mobile. Progressive Web Apps offer a seamless, intuitive, and flexible user experience, and they make things easier for developers too: there is no longer any need to worry about maintaining different versions of the same code for various platforms. Getting started with Progressive Web Apps is easier than it seems; it is entirely possible to take an existing website and convert it into a PWA.

The infographic takes you on a journey through every aspect of Progressive Web Apps and summarizes nearly everything they entail. PWAs, however, are not limited to the features and benefits detailed here. Exploring the end-to-end Progressive Web App spectrum requires you to delve deeper and understand the technologies behind them. Zeolearn's comprehensive courseware can help you make the most of these apps. Set off on your journey to a unique mobile browser experience today!

How to create error boundaries in your ReactJS application

React 16 introduced a lot of interesting features into the React ecosystem. One of them is error handling via boundaries around individual components. This feature prevents the whole app from crashing when there is an error in just one of the child components, by creating a boundary around that component. More information about the announcement here.

Introduction

Earlier, whenever there was an error in a React component, the whole app would crash and there was no way to prevent that. With React 16, it's possible to create boundaries in your app so that if anything within a boundary errors out, you can show a fallback interface in its place and send a report to an online client. Error boundaries catch errors during rendering, in lifecycle methods, and in constructors of the whole tree below them.

Example

Suppose we have a component called Ability:

```jsx
<Ability ability={ability} />
```

Now we want to create an error boundary around that component. We can simply wrap it:

```jsx
<ErrorBoundary>
  <Ability ability={ability} />
</ErrorBoundary>
```

The ErrorBoundary component looks like the following:

```jsx
class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      hasError: false
    };
  }

  componentDidCatch(error, info) {
    // Display fallback UI
    this.setState({ hasError: true });
    // You can also log the error to an error reporting service.
    // Just logging it in the console for demo purposes.
    console.error(error, info);
  }

  render() {
    if (this.state.hasError) {
      // You can render any custom fallback UI
      return (
        <div className="error">
          <div className="error__title">
            Something went wrong! Please refresh the page.
          </div>
        </div>
      );
    }
    return this.props.children;
  }
}
```

Let's take a look at this component and understand what it does. The componentDidCatch lifecycle hook is responsible for catching errors thrown by the tree below it during rendering, in lifecycle methods, and in constructors. In that hook, we set a flag called hasError to true; this is also where you could send the error to an online reporting service. In the render method, we use the hasError flag to show the fallback interface when there is an error; otherwise, we just render the children, which here is the Ability component.

Demo

Consider the following as the usual interface of our very simple app: see the Pen "React 16 Error Boundary Working App" by Nirmalya Ghosh (@nirmalyaghosh) on CodePen: https://codepen.io/nirmalyaghosh/pen/opYNmz/

Now, let's look at the situation when we encounter an error in our app.
If we don't create an ErrorBoundary, our app will look something like the following when it crashes: see the Pen "React 16 Error Boundary Not Working App with no Error Boundary" by Nirmalya Ghosh (@nirmalyaghosh) on CodePen: https://codepen.io/nirmalyaghosh/pen/LebYaR/

There is no going back from this state, and our users should never encounter a page like this. Whereas if we use an error boundary, it will look something like the following: see the Pen "React 16 Error Boundary Working App with Error Boundary" by Nirmalya Ghosh (@nirmalyaghosh) on CodePen: https://codepen.io/nirmalyaghosh/pen/jYVOdQ/

This is much better than the previous situation. Here we could also add some navigation to take the user to the home page or any other relevant page. And if we are sure that our app will work fine on refresh, we could show a timer and then refresh the page. This fallback interface is much better because it handles the error case in our app, and we can send a report to any online client and track it there.

The example app is available on GitHub. Feel free to check it out; the readme of the repository explains how to make the example app work on your system.
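As a variation on the idea above, here is a minimal sketch of my own (not part of the original demo) of a fallback that counts down and then reloads the page, as suggested:

```jsx
class RefreshFallback extends React.Component {
  constructor(props) {
    super(props);
    this.state = { secondsLeft: 5 };
  }

  componentDidMount() {
    // Tick once per second; reload when the countdown reaches zero.
    this.timer = setInterval(() => {
      const { secondsLeft } = this.state;
      if (secondsLeft <= 1) {
        clearInterval(this.timer);
        window.location.reload();
      } else {
        this.setState({ secondsLeft: secondsLeft - 1 });
      }
    }, 1000);
  }

  componentWillUnmount() {
    clearInterval(this.timer);
  }

  render() {
    return (
      <div className="error">
        Something went wrong! Refreshing in {this.state.secondsLeft}s...
      </div>
    );
  }
}

// Inside ErrorBoundary's render():
//   if (this.state.hasError) return <RefreshFallback />;
```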

Git tips and tricks/common use-cases faced by developers

A few days ago, I was asked to refactor the codebase of a project where the name of a package and all its references had to be modified. At first, the task seems to have a pretty simple solution: rename the files and directories, then

```shell
git add --all
git commit -m "[refactor] Rename package XYZ to ABC"
git push origin master
```

And you're done! (you think)

But what happens when you head over to GitHub and look at the commit histories of the renamed files and folders? The histories of the files have suddenly vanished, and to a person who hasn't been working on the project since its inception, it looks like the renamed files were newly added in the last [refactor] commit.

Background

When you rename a file in a git repository, git log <filename> only shows the changes made to the file after the rename operation. If you use the --follow argument, however, it continues the history from before the rename (that is, it searches for similar content using heuristics and ignores file names). Unfortunately, there is no way to see the pre-rename history of a file on GitHub, since GitHub shows logs similar to the output of the git log command without --follow. In short, git implicitly hides the commit histories of files after they have been renamed.

A lot of developers I've worked with face such challenges with git and waste precious time banging their heads around looking for solutions, so I decided to put together a list of some commonly faced git blockers, their quick fixes, and some tips and techniques. I'll be using this sample project repository on GitHub throughout the rest of the article as the remote repository for my local setup.

1. Rename a file/folder and preserve its history

While looking for a solution to preserve and persist the commit histories of renamed files and folders to GitHub, I found this gist. The script is a lifesaver when you have over 4000 commits with more than 200 files to refactor/move! Its usage and inner workings are very well documented in its commented sections. Learn to use the script to rename files while persisting their pre-rename histories to GitHub.

2. Branch off a set of recent commits

Let's say you started working on a new feature for an existing project, made some changes, pushed a few commits, and then realized you should have created a new branch for the feature. So how do you move those changes from, let's say, master, to a new feature branch? You simply want to keep these changes (commits) in the feature branch and roll master back to the commit before the new feature's code was added (a minimal sequence is sketched below).

You need to be extra careful with 2 steps:

Resetting to the commit you want: make sure you correctly enter the commit hash (from where your feature branch branches off) to ensure none of your commits are lost while migrating them to a new branch.

Updating the remote forcefully: other developers in your team might have already pushed to your remote (while you were working on the new feature), and if you want to keep those changes, doing a git pull before pushing would be the way to go.

[WARN] Force pushing commits to the remote may lead to code loss!
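A minimal sketch of the sequence, assuming master holds the stray commits and abc1234 is a placeholder for the last commit before your feature work started:

```shell
# Create the feature branch at the current tip, keeping your commits.
git branch feature/new-feature

# Roll master back to the commit before the feature work started.
git reset --hard abc1234

# Overwrite the remote master (dangerous: see the warning above).
git push -f origin master

# Continue working on the feature branch.
git checkout feature/new-feature
git push origin feature/new-feature
```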
3. Stage all (only) deleted files for your next commit

If you have additions, modifications, and deletions all at once, ready to be committed, it is generally a better idea to put the three kinds of operations in separate commits for ease of debugging, in case the changes break something in production. So how do you select only the deleted files and stage them for a commit? A simple one-liner works like magic for exactly that (sketched in the combined block at the end of this article). The magic words add (technically, track the removals of) all the deleted files to the stage; they can then be committed and pushed while keeping the additions and modifications for the next commits. Unlike git add *, which stages all the additions, git rm * doesn't merely stage the deletions: it deletes all the files in the repository and stages those deletions for a commit. Use git rm very cautiously; you have been warned!

4. Git Resets: Soft, Mixed, Hard

Ever wonder why there are three types of resets? The fundamental difference between them is what happens to the changes made after the commit you are resetting to.

git reset --hard <commit-id> resets HEAD to <commit-id>, and all the changes after <commit-id> are discarded. If you are absolutely sure you don't want any changes after <commit-id>, do a hard reset.

git reset --mixed <commit-id> resets HEAD to <commit-id>, and all the changes after <commit-id> are added to your working tree, i.e. under "Changes not staged for commit". This preserves all the changes made after <commit-id> so you can continue working on them.

git reset --soft <commit-id> resets HEAD to <commit-id>, and all the changes after <commit-id> are added to your staging area, i.e. under "Changes to be committed", so you can immediately commit them with an updated commit message. A soft reset is essentially a mixed reset plus the additional step of adding the changes to the staging area, i.e. doing a git add <files> after resetting.

5. Forgot to add a few changes in the last commit?

If you've ever staged (added) some changes to be committed, modified them again, and forgotten to re-add them before committing, you will know what I'm talking about. You forget to add that one-line change to the last commit, and now you have to put it in the next commit, which may not describe the context of that one-line change. Instead of creating two (or more) commits for a single context, you can fold the change into the last commit: use the --amend flag with git commit. Note that if you have already pushed the latest commit to your remote, you will need to do a forced update, since amending a commit changes its hash, and pushing the amended commit would otherwise throw an error. Fix some stuff, add it to the stage, amend the latest commit, and push to the remote!

6. Delete a commit / undo a remote push!

The good old use case: you were too excited to commit your changes and pushed them to your remote without realizing you had introduced a bug, and now you want to roll back. So how do you delete the commit once it's pushed to your remote? The gist of the solution is to roll back to an earlier commit in your local repository and then do a forced push to overwrite your remote. The -f flag pushes the commits forcefully when your local and remote branches have diverged, and syncs the remote up with your local branch. Be extra careful, as there is a chance it will overwrite some of the remote commits. (The commands for sections 3, 5, and 6 are collected in the sketch below.)

Hopefully I've made your life a little easier. If you frequently find yourself looking for git commands or procedures, do share your experiences; until then, happy coding!
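A minimal sketch of the commands behind sections 3, 5, and 6 (file names and commit references are placeholders):

```shell
# 3. Stage only the deleted files.
git ls-files --deleted -z | xargs -0 git rm
git commit -m "Remove obsolete files"

# 5. Fold a forgotten change into the last commit.
git add forgotten-file.js
git commit --amend --no-edit
git push -f origin master   # forced update needed if already pushed

# 6. Delete a pushed commit: roll back locally, then force push.
git reset --hard HEAD~1     # or: git reset --hard <commit-id>
git push -f origin master
```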

How to manage API secrets/tokens?

You might find many tutorials on the internet for learning coding and development: you start with the basics, and soon you find yourself overwhelmed by features and complexity. You realize that if you try to build every feature on your own, it is going to be a humongous task. So you turn to service providers for things like emailing, text messaging, monitoring, source code management, and so on. To use these services, you sign up for their APIs, and with that access you receive secret auth tokens, API secrets, API keys, and the like, which you have to incorporate into your code. The following are some methods of managing them as environment variables:

1. The naive approach

You just hard-code them into your codebase. This is the naive and insecure approach most of us use while learning and working on example projects. For example, if you are using Twilio to send an SMS to newly signed-up users to verify their phone numbers, you might end up hard-coding your secrets directly in the source (see the sketch at the end of this article). This is a really bad idea, as you expose all your keys and tokens to everyone who has access to your codebase (even with just read permissions). And if the auth parameters are changed or renewed, you have to update them everywhere yourself via the tedious hunt, seek, and change method.

2. Export all secrets to your shell

Alternatively, you can store the values as shell variables that are available to your running code; or, better, put all your secrets/tokens in your .bash_profile/.bashrc and source that file, making them available to all your processes as environment variables. This solves the DRY (Don't Repeat Yourself) problem. For example:

```shell
$ PORT=9999 node server.js
```

The above command makes a variable PORT with the value 9999 available to server.js; for the Twilio example, you would put your secrets in your .bash_profile/.bashrc in the same way. The catch is that two different projects may use the same API secret/token names with different values, and then you have to change them again and again depending on the project you are working on.

3. Maintain a .env file for each project

The best approach to maintaining auth tokens, API secrets, API keys, and the like is to keep an environment file for each of your projects: make a .env file (you may name it differently) and put all the secrets in it. Make sure to add the .env file to .gitignore so that it is never committed to your codebase, and maintain it independently on your server.

In your .gitignore file:

```
.env
```

To load the variables before running the process, just source your .env file:

```shell
$ source .env
```

In this way, you can keep all your API secrets/tokens and other keys in your environment, safe and secure.
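To make the contrast concrete, here is a minimal sketch, assuming hypothetical Twilio-style credentials (the account SID, token, and phone numbers are placeholders):

```javascript
// BAD: hard-coded secrets live in the codebase forever.
// const client = require('twilio')('ACxxxxxxxx', 'your_auth_token');

// BETTER: read them from the environment instead.
const client = require('twilio')(
  process.env.TWILIO_ACCOUNT_SID,
  process.env.TWILIO_AUTH_TOKEN
);

client.messages.create({
  to: '+15558675309',          // placeholder recipient
  from: '+15017250604',        // placeholder Twilio number
  body: 'Your verification code is 123456'
});
```

And a matching .env file, kept out of version control:

```shell
# .env -- never commit this file
TWILIO_ACCOUNT_SID=ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TWILIO_AUTH_TOKEN=your_auth_token_here
```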

Adding an authentication system to your website

INTRODUCTION

Firebase is a mobile and web application development platform by Google. It aims at easing the entire web development process by providing real-time solutions to the various challenges we face. Authentication is the Firebase product we will be looking at in this article.

WHY FIREBASE?

Firebase Authentication enables simple, multi-platform sign-in in our application. It is secure and stable and manages its own database of users. Firebase is very quick to implement, and since authentication is part of the Firebase suite, we can integrate other Firebase tools as well.

IN THIS ARTICLE

We will:

Integrate Firebase auth into our web app.

Create some simple web pages* with support for authentication using Google.

Enable email verification and mobile verification in our application.

*We have used a basic http-server for serving files. Any server will do. The entire code used in this article is available on GitHub: /FirebaseWebAuthentication

GETTING STARTED

The first thing we need to do is create a Firebase project. Go to the Firebase console and create a new project by clicking the Add project button. Go to Authentication and enable Email/Password and Google verification under Sign-in method. Below, under the Advanced section, you can manage features like the sign-up quota and the one-account-per-email-address rule.

SETTING UP

Include the necessary Firebase scripts in your page, then add an initialization snippet to your JS code. This initializes the firebase object, which is used later for the various tasks (a combined sketch of the snippets in this article follows at the end).

SIGN UP

Let's write some code to get started with sign-up: some basic form code** plus a call to the Firebase sign-up API. **HTML is out of scope for this article. Fill in the form, and you receive a verification mail. Go to the console under Authentication to see the list of users who have signed up for your application.

SIGN IN/SIGN OUT

Just as we wrote some code for signing up, we write similar code for signing in. All sign-in and sign-out events lead to a change in the firebase auth object's auth state. We catch this using a simple change listener and implement actions post sign-in or sign-out. One of the major advantages of Firebase is that the input text boxes are automatically sanitized.

USING GOOGLE FOR AUTH

Integrating the Google authentication system is very easy with Firebase: we set up the provider service and wire it to a sign-in button. On clicking the button, the user gets a pop-up list for choosing a Google account and can then proceed with it. A few permissions and you're all set!

ADVANCED FIREBASE CONCEPTS

"The best way to learn something is to study its docs." The Firebase Web Documentation is the best place to start exploring and learning more about Firebase. Firebase can also be used with iOS or Android apps. And there is a lot more to Firebase than just authentication; explore more on the official Firebase website.
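The snippets referenced above boil down to a handful of Firebase Web SDK calls. A minimal sketch, assuming the 2018-era SDK loaded via script tags, with placeholder config values copied from your own Firebase console:

```javascript
// Initialize the firebase object (all config values are placeholders).
firebase.initializeApp({
  apiKey: 'YOUR_API_KEY',
  authDomain: 'your-project.firebaseapp.com',
  projectId: 'your-project'
});

// SIGN UP with email/password, then trigger the verification mail.
function signUp(email, password) {
  firebase.auth().createUserWithEmailAndPassword(email, password)
    .then(user => user.sendEmailVerification())
    .catch(err => console.error('Sign-up failed:', err.message));
}

// SIGN IN / SIGN OUT.
function signIn(email, password) {
  firebase.auth().signInWithEmailAndPassword(email, password)
    .catch(err => console.error('Sign-in failed:', err.message));
}
function signOut() {
  firebase.auth().signOut();
}

// React to every auth-state change (fires after sign-in and sign-out).
firebase.auth().onAuthStateChanged(user => {
  if (user) {
    console.log('Signed in as', user.email);
  } else {
    console.log('Signed out');
  }
});

// USING GOOGLE FOR AUTH: set up the provider and open the popup.
function signInWithGoogle() {
  const provider = new firebase.auth.GoogleAuthProvider();
  firebase.auth().signInWithPopup(provider)
    .catch(err => console.error('Google sign-in failed:', err.message));
}
```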

Build a Progressive Web App using React

Progressive Web App with React! When I read this, I thought, why not build one ourselves? If you are familiar with React and a bit of its ecosystem, such as the Create React App utility, this guide is for you.

If you spend at least three-quarters of your day on the internet, then you have seen or read about progressive web apps here and there. No? PWAs are performance-focused web applications that are especially streamlined for mobile devices. They can be saved to a device's home screen and tend to have a native app look and feel. The first PWA I used on my mobile device was the Twitter Lite one, which got released a few months back. Here is the link if you want to try it: https://lite.twitter.com/. They even support push notifications and offline use these days.

Getting Started

Let us create a basic React app using the Create React App generator, the official scaffolding tool for generating React apps, released and maintained by Facebook. We install it from the command line, create an empty project in the desired directory, and add one more dependency, React Router. Go ahead and take a look at the directory structure and the package.json file to see which dependencies come with this scaffolding tool. CRA, or Create React App, is one of the lowest-hassle tools I currently use to build apps and prototypes with React: it runs all the Babel and Webpack machinery behind the scenes. If you want more information, or want to customize the process, read the CRA docs.

Building the PWA App

Since the sole purpose of this guide is to make you familiar with the build process, I am not going to work out a complex application. For the sake of simplicity and your precious time, we will build a simple app. We include two pages using react-router-dom and define Home and About components in the src/components/ directory; it is always best practice to keep React components short and readable. A combined sketch of the setup commands and the three files follows.
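A minimal sketch, assuming the CRA and react-router-dom (v4) versions current at the time of writing; the component contents are my own placeholders:

```shell
npm install -g create-react-app   # install the generator
create-react-app react-pwa        # scaffold an empty project
cd react-pwa
npm install react-router-dom --save
npm start                         # run the dev server
```

src/App.js wires up the two routes under a common header:

```jsx
import React from 'react';
import { BrowserRouter as Router, Route, Link } from 'react-router-dom';
import Home from './components/Home';
import About from './components/About';

const App = () => (
  <Router>
    <div>
      {/* Common header, shared by every page */}
      <nav>
        <Link to="/">Home</Link> | <Link to="/about">About</Link>
      </nav>
      <Route exact path="/" component={Home} />
      <Route path="/about" component={About} />
    </div>
  </Router>
);

export default App;
```

And the two tiny components:

```jsx
// src/components/Home.js
import React from 'react';
const Home = () => <h2>Home</h2>;
export default Home;

// src/components/About.js
import React from 'react';
const About = () => <h2>About</h2>;
export default About;
```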
Now run npm start from your terminal window and check that everything is working: the Home page renders, and clicking the About button/hyperlink makes react-router-dom render the About page without changing the common header defined in App.js. This is a bare-minimum single-page application; our main job is still to be done. Let's convert this bare-minimum React application into a PWA.

Installing Lighthouse

Lighthouse is a free tool from Google that evaluates your app against their PWA checklist. Add it to your Chrome browser from the Chrome Web Store. Once it is installed as an extension, we can start the auditing process by clicking on Lighthouse at the top right corner, where your other extensions live. Click the icon, then make sure you are on the right tab by checking the URL shown in the Lighthouse popup, and make sure the create-react-app development server is running in the terminal; otherwise Lighthouse won't be able to generate a report. The report is based on a publicly available checklist. Click the Generate Report button.

After the process is completed, a new window opens where Lighthouse has generated its report, and by the looks of it, our app does not yet please Lighthouse as a Progressive Web App. We will solve these issues one by one.

Setting up a Service Worker

Let's set up a service worker first; that is the first thing Lighthouse audited. What is a service worker, you ask? It is a proxy server that sits between web applications, browsers, and the network. We can use it to make React apps work offline (remember the earlier point we discussed: Progressive Web Apps are focused on performance). You can definitely read the details in Google's Web Fundamentals docs.

It is a two-step process. First we create a service-worker.js file (a service worker is, after all, JavaScript code), and then we register that worker in index.html. In the public directory of our app structure, create a file called service-worker.js. I am going to use Addy Osmani's service worker configuration, and I recommend you do too, at least for this walkthrough; you can find the complete thing, in much more detail, in his write-up. Then register the worker by loading it from a small script placed before the closing </body> tag in index.html (both pieces are sketched below). Make sure you restart the dev server by running npm start from the terminal; you should then see a registration confirmation line if you open Chrome's DevTools > Console.
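A deliberately simplified stand-in for that configuration (Addy Osmani's full version does precache-manifest handling that is omitted here; the cache name and file list are placeholders):

```javascript
// public/service-worker.js
const CACHE_NAME = 'react-pwa-cache-v1';
const urlsToCache = ['/', '/index.html'];

self.addEventListener('install', event => {
  // Pre-cache the app shell at install time.
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(urlsToCache))
  );
});

self.addEventListener('fetch', event => {
  // Cache-first: serve a cached response when we have one,
  // otherwise fall through to the network.
  event.respondWith(
    caches.match(event.request).then(hit => hit || fetch(event.request))
  );
});
```

And the registration snippet, placed before the closing </body> tag in public/index.html:

```html
<script>
  if ('serviceWorker' in navigator) {
    window.addEventListener('load', function () {
      navigator.serviceWorker
        .register('service-worker.js')
        .then(function () { console.log('Service Worker Registered'); });
    });
  }
</script>
```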
If we run the Lighthouse audit process again, we get a better result: comparing with our previous audit, the first issue now appears under Passed Audits. Now let's move on and add some enhancement.

Adding Progressive Enhancement

Progressive enhancement is a way to improve the app/site so that it shows something even before any JavaScript loads. We want to display a loading message, and optionally some CSS, before the React app initializes the DOM. So let's add the required CSS and a loading message to index.html. To increase performance, I am also inlining all our CSS (the contents of App.css and index.css) into index.html. We can then delete App.css and index.css from the project directory and remove their import references from App.js and index.js. This improves the performance score of our app by 10 points, though the overall PWA score stays the same.

Adding it to the Device's Home Screen

The creators of create-react-app are so good to us that they already include a manifest.json file in the public directory with some basic configuration. The feature we are adding now allows a user to save our PWA's page to their device's home screen; if the user later wishes to open the app, they can open it just like a normal application, and it will launch in their phone's default browser. For this purpose, we edit public/manifest.json.

Let's talk about this file a bit. short_name is the name of the app that will appear on the device's home screen. name will appear on the splash screen. icons is important: it is the main icon of our app, appearing alongside short_name on the home screen, just like a native mobile application, and its size must be 192x192 (I haven't played around with other image formats, but you can). Here is the link to a dummy logo for this walkthrough; add it to the public directory. The 512 entry is for the splash screen and is a requirement of the auditing process, so here is the link to download that as well. Next is start_url, which notes that the app was started from the home screen. Below it there are three more properties: display controls the appearance of the app, and I am making theme_color and background_color the same because I want the application to match the header background. This resolves one more of the issues from the previous audit; we are left with only a few to go.

Deployment

First, let us turn caching on: in service-worker.js, edit the first line and change the existing boolean value to true (this flag is part of Addy Osmani's configuration). I will be using Firebase for deployment, since it is easy to connect to a web/mobile application for prototyping, IMO. First, in the Firebase console, create a new project, pwa-example-1. Then install firebase-tools, the CLI we need to deploy our PWA app, as a global module. During firebase init, the CLI prompts with some questions; for a CRA app you typically choose Hosting, set build as the public directory, and answer yes to configuring it as a single-page app. Press Enter for the final time, and you will get a success message plus two Firebase config files generated in your project directory: .firebaserc and firebase.json.

Now it is time to deploy our app. From the terminal, run the build and deploy steps (npm run build followed by firebase deploy). The build command tells create-react-app to build our project into the build/ folder, which the Firebase CLI tool then deploys, giving you back a hosting URL. Save it, open it in Chrome, and run our Lighthouse audit one last time. Serving over HTTPS rather than HTTP solves the issue we had from the start, and with that, all of our issues are solved and our PWA app gets a 100/100 score.

The score looks good to me for a first application. The performance bar of our application can still be improved, and there are a few ways to do that, but I will not get into them, since the scope of this application is learning. You can find the complete code in this GitHub repository. Go ahead and clone the repo, don't forget to npm install once inside the project directory, and then head off and try out the aforementioned PWA tips and techniques. Say 👋 to me on Twitter and Patreon, or drop a word or two about what you read above in the article.

Performance optimisation for the mobile Web

We are already all too aware that interacting with web-based content on a phone or tablet ranges from barely acceptable to horrendous. Pages often load slowly, render erratically, and behave unpredictably. Ads and related tracking techniques not only add bulk and additional requests to an already bandwidth- and CPU-constrained device, but pages often behave as though they're possessed as they convulse around multiple document.write() calls. While most responsively designed websites look fine on screens of all sizes, they often carry a lot of desktop-website baggage when viewed on mobile. As a result, the bounce rate is extremely high, due to load time and content-blocking resources. A lot of work has gone into identifying user behaviour in order to devise countermeasures and improve user experience, and today I would like to discuss some of these ongoing efforts and strategies.

The first thing we need to understand is that generic optimisation strategies that work well for desktops may be completely useless on the mobile web, due to the huge difference in context and user expectations. A mobile user expects to get some information quickly and doesn't really care about the fancy animations that 'woo' desktop users, for lack of a better term. Understanding the intent of your user plays a major role in devising the ideal optimisation strategy for your website.

A lot of people believe that responsive = optimised, which is a wrong notion. When our website simply resizes based on screen size, sure, it becomes more usable, but we're not taking possible changes in the environment into consideration. For example, a grid layout for text is very readable on desktop, but when it stacks on mobile it may become too much to scroll through. Similarly, engagement differentiators, capability tradeoffs, prioritising CTAs, handling navigation, and providing app-like functionality all lead to the need for a separate mobile optimisation strategy. One obvious solution that comes to mind is to have a separate mobile website, or to port to modern options like PWAs and AMP. While these are excellent options (they deserve a separate article covering them in detail), they might not suit every use case, so it is only natural to talk about some strategies that can achieve better results without porting. Let's start by discussing a few of them.

Libraries - know the tradeoffs. - Understand the cost of using libraries; the trade-off between performance and a faster development process is crucial to grasp. There are tons of resources available that let you eliminate heavy libraries from your workflow. For example, jQuery is something we all use a lot, but there is an awesome website called You Might Not Need jQuery that shows how common use cases can be handled in plain vanilla JS, saving you an additional network call.

Concat, minify, uglify, automate - I believe a lot of us take these for granted despite how important they are. To briefly summarize: concatenation appends all the static files into one large file; minification removes unnecessary whitespace and redundant or optional tokens like curly braces and semicolons, and can be reversed with a linter; and uglification transforms the code into an "unreadable" form by renaming variables and functions to hide the original intent. A lot of module bundlers come with these functionalities built in, webpack being one of my favourites (a minimal config is sketched below).
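As an illustration, a minimal webpack-3-style config wiring in minification; the plugin name is as it existed at the time of writing (it was later moved out of webpack core), and paths are placeholders, so treat this as a sketch rather than a drop-in build:

```javascript
// webpack.config.js -- bundle everything, then uglify the output.
const webpack = require('webpack');

module.exports = {
  entry: './src/index.js',
  output: {
    filename: 'bundle.min.js',
    path: __dirname + '/dist'
  },
  plugins: [
    // Minifies and uglifies the concatenated bundle in one pass.
    new webpack.optimize.UglifyJsPlugin({
      compress: { warnings: false }
    })
  ]
};
```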
Fonts matter but not that much ¯\_(ツ)_/¯ - By deferring font loading, the browser will display copy in whatever font it has available to begin with, which means the user always gets the content first. Deferring font loading can be achieved by separating out the part of the CSS that links to the font files and loading it after the rest of the page has been rendered. Note, however, that the text may briefly flash and change when the web font loads.

Prioritize above the fold. - Separate out the CSS used to render the visible part of the page (above the fold) first, and defer the rest of the styles to load after the page has been rendered. Adding the top CSS directly into the page header accomplishes this; but bear in mind it will not be cached like the rest of the CSS file, so it must be restricted to key content. A variety of tools can help you determine which CSS to separate, including Scott Jehl's Critical CSS and Paul Kinlan's bookmarklet tool.

Optimise images - Heavily coloured photos work better as JPEG files, whereas flat-colour graphics should be PNG8; gradients and more complex icons work best as PNG24/32. Always use compressed images: TinyPNG or ImageOptim will compress them for you. You can also make use of the HTML5 <picture> element and the srcset and sizes attributes for images. These two additions to the language let you define responsive images directly in the HTML, so the browser only downloads the image that matches the given condition.

Data URLs - Instead of linking to an external image file, image data can be converted into a base64 (or ASCII) encoded string and embedded directly into the CSS or HTML file. Simple online conversion tools are available. Data URLs are helpful because they save HTTP requests and can transfer small files more quickly.

Load as they scroll - Identify what is absolutely required to render the page initially; the rest of the content and components can wait. JavaScript is an ideal candidate for splitting before and after the onload event; hidden content, images below the fold, and interactions that come after initial page rendering are other ideal candidates. Post-loaded scripts should be viewed as a progressive enhancement: without them, the page should still work. Code splitting allows you to split your code into various bundles which can then be loaded on demand or in parallel. It can be used to achieve smaller bundles and to control resource-load prioritization which, used correctly, can have a major impact on load time.

Enhancements, not requirements - This idea is part of progressive enhancement, where web technologies are layered to provide the best experience across environments. Define a basic set of features that must work on the least capable browsers, and only add further complexity after testing whether browsers can handle it. Detecting whether the browser supports HTML5 and CSS features helps us write conditional code to cover all eventualities: enhancing and adding features when supported, while staying safe and simple for devices and browsers that do not. In recent times, CSS has gained its own native feature-detection mechanism, the @supports at-rule. This works in a similar manner to media queries, except that instead of selectively applying CSS depending on a media feature like resolution, screen width, or aspect ratio, it selectively applies CSS depending on whether a CSS feature is supported.
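Two tiny illustrations of the ideas above, with placeholder file and class names: a responsive image that lets the browser pick the smallest adequate file, and an @supports rule that layers on a grid only where it works.

```html
<!-- The browser downloads only the candidate that matches. -->
<picture>
  <source media="(max-width: 600px)" srcset="hero-small.jpg">
  <img src="hero-large.jpg"
       srcset="hero-large.jpg 1200w, hero-medium.jpg 800w"
       sizes="(max-width: 900px) 100vw, 900px"
       alt="Hero image">
</picture>
```

```css
/* Simple stacked layout everywhere; grid only where supported. */
.gallery { display: block; }
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
  }
}
```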
Tailor, whenever possible - Requiring an entire library just to use a single feature is bad practice; tailor your imports if you're using only a certain function out of an entire library, to reduce network load. Tree shaking is the term commonly used in the JavaScript context for dead-code elimination; it relies on the static structure of ES2015 module syntax, i.e. import and export. UglifyJSPlugin in webpack also supports dead-code removal and can be integrated into the workflow quite easily.

While we said at the beginning that generic optimisation strategies might not work well in the mobile context, some of them don't hurt and can also help you achieve better performance. Let us quickly look at some of them as well.

Caching - back to basics. - Dynamic web pages require multiple database queries, taking valuable time to process output, format data, and render browser-legible HTML. It's recommended to cache content previously rendered for a device: for returning visitors, instead of processing from scratch, the server checks the cache and only sends updates. Use a server handler (like an .htaccess file) to instruct the browser on which types of content to store and how long to keep copies.

KPI goals and load-impact testers - Define key performance indicator (KPI) goals. These are the milestone metrics that indicate project success, based on business objectives; given their importance, performance-related goals should appear among them. Setting and adhering to a strict performance budget, i.e. establishing a target for the final website's speed and size, is always a good idea. Load-test your site frequently to see what might be causing bottlenecks.

Enable gzip compression. - Gzip is another form of compression, which compresses web pages, CSS, and JavaScript at the server level before sending them over to the browser. On Apache, you can enable it by modifying your .htaccess file; on Nginx, by modifying your nginx.conf file.

Redirects - don't. - Redirects are performance killers; avoid them whenever possible. A redirect generates additional round-trip times (RTT) and therefore quickly doubles the time required to load the initial HTML document, before the browser even starts loading other assets.

CDNs - yay or nay? - You can improve asset loading by using a CDN like Cloudflare alongside your usual hosting service. Static content (like images, fonts, and CSS) is stored on a network of global servers; every time a user requests this content, the CDN detects their location and delivers the assets from the nearest server, which reduces latency. It also increases speed by letting the main server focus on delivering the application instead of serving static files.

Hot link protection - Hotlink protection means restricting HTTP referrers in order to prevent others from embedding your assets on their websites; it saves you bandwidth by prohibiting other sites from displaying your images. Example: your site URL is www.domain.com; to stop other sites from hotlinking your images and to display a replacement image called donotsteal.jpg from an image host, modify your .htaccess file along the lines of the sketch below.
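A sketch of the kind of rules involved, assuming Apache with mod_rewrite enabled and the placeholder names from the example above:

```apacheconf
RewriteEngine on
# Allow empty referers (direct visits, some proxies).
RewriteCond %{HTTP_REFERER} !^$
# Allow requests coming from your own site.
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?domain\.com [NC]
# Everyone else gets the replacement image instead.
RewriteRule \.(jpe?g|png|gif)$ https://imagehost.example/donotsteal.jpg [NC,R,L]
```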
Prefetching and preconnect. - Domain-name prefetching is a good solution for resolving domain names before a user actually follows a link. Here is an example of how to implement it in the HEAD section of your HTML:

```html
<link rel="dns-prefetch" href="//www.example.com">
```

Preconnect allows the browser to set up early connections before an HTTP request is actually sent to the server. Connections such as the DNS lookup, TCP handshake, and TLS negotiation can be initiated beforehand, eliminating round-trip latency for those connections and saving time for users. The example below shows what it looks like to enable preconnect for the zone alias link for KeyCDN:

```html
<link href='https://cdn.keycdn.com' rel='preconnect' crossorigin>
```

Minimise render-blocking resources. - When analyzing the speed of your web pages, you always need to consider what might be blocking the DOM and causing delays in your page load times. These are referred to as render-blocking resources: HTML, CSS (this can include web fonts), and JavaScript. The async attribute (e.g. <script async src="app.js"></script>) allows a script to be downloaded in the background without blocking; the moment it finishes downloading, rendering is blocked while that script executes, and rendering resumes once it has finished.

Well, congratulations if you have made it this far; that was a lot of information to digest in a single article, but I really hope you learnt something valuable from it. Feel free to reach out to me on my social handles if you would like to discuss something or have any doubts. Cheers.