Connecting ReactJS Frontend with NodeJs Backend

Uploading files can seem like a daunting task in web development. In this tutorial, we will build a simple AJAX-based file upload using React on the front end and Node.js on the back end. This is straightforward to accomplish with these technologies, since the whole source code is in one language: JavaScript. To show you how to combine a Node.js back end with a React front end, we will work through a simple file upload example. The topics we will cover are:

- Setting up the back end of the app using express-generator
- Using create-react-app to scaffold a front-end React app
- Using axios for cross-origin API calls
- Handling POST requests on our server
- Using express-fileupload, a promise-based library
- Lastly, connecting the React front end and the Node.js back end

Getting Started

We will start with the back end. We will write a server application with the configuration necessary to accept cross-origin requests and upload files. First, we need to install express-generator, which is the official and quickest way to start an Express back-end application:

```shell
npm install -g express-generator
```

We install this module globally from our terminal. After installing it, we have an express command available to generate our project structure:

```shell
mkdir fileupload-example
cd fileupload-example
express server
cd server
```

Changing into the directory the express command just scaffolded, we can observe the generated structure and files. To run this back-end server with the default configuration, we first have to install the dependencies listed in package.json:

```shell
npm install
npm start
```

Express-generator comes with the following dependencies.
Some of them, such as morgan and body-parser, are essential; others we can ignore for this project.

```json
"dependencies": {
  "body-parser": "~1.18.2",
  "cookie-parser": "~1.4.3",
  "debug": "~2.6.9",
  "express": "~4.15.5",
  "jade": "~1.11.0",
  "morgan": "~1.9.0",
  "serve-favicon": "~2.4.5"
}
```

I will add two more packages so our back-end application behaves the way we want:

```shell
npm install --save cors express-fileupload
```

cors provides a middleware function for Express applications that enables various Cross-Origin Resource Sharing options. CORS is a mechanism that allows restricted resources (in our case, API or AJAX requests) on a web page to be requested from another domain. It lets a browser and a server communicate even when they are hosted on separate domains. You will understand it better when you see it in action.

The other module, express-fileupload, is a bare-minimum Express middleware function for uploading files. Its advantages are that it supports Promises and can handle multiple file uploads.

With these two important packages added as dependencies in our project, we can now start modifying the default Express back end in the app.js file:

```javascript
const express = require('express');
const path = require('path');
const favicon = require('serve-favicon');
const logger = require('morgan');
const cookieParser = require('cookie-parser');
const bodyParser = require('body-parser');
const cors = require('cors'); // addition we make
const fileUpload = require('express-fileupload'); // addition we make

const index = require('./routes/index');
const users = require('./routes/users');

const app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

// uncomment after placing your favicon in /public
//app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(cookieParser());

// Use CORS and File Upload modules here
app.use(cors());
app.use(fileUpload());

app.use('/public', express.static(__dirname + '/public'));

app.use('/', index);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
  const err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// error handler
app.use(function(err, req, res, next) {
  // set locals, only providing error in development
  res.locals.message = err.message;
  res.locals.error = req.app.get('env') === 'development' ? err : {};

  // render the error page
  res.status(err.status || 500);
  res.render('error');
});

module.exports = app;
```

In the code above, you will notice that we made some additions. First, we import the cors and express-fileupload packages after the other dependencies are loaded:

```javascript
const cors = require('cors'); // addition we make
const fileUpload = require('express-fileupload'); // addition we make
```

Then, just after the other middleware functions, we instantiate these two newly imported packages:

```javascript
// Use CORS and File Upload modules here
app.use(cors());
app.use(fileUpload());
```

We also need to accept data coming from a form. For this, we enable the urlencoded option of the body-parser module and specify a path for serving the image files coming from the client:

```javascript
app.use(bodyParser.urlencoded({ extended: true }));

// below, also change this to
app.use('/public', express.static(__dirname + '/public'));
```

With this, we can check that our server works correctly by running:

```shell
npm start
```

If you see the default Express welcome page when navigating to http://localhost:3000, the server is running correctly. Before we generate the front-end application, we need to change the port for our back end, since a front-end application generated with create-react-app also runs on port 3000 by default. Open the bin/www file and edit:

```javascript
/**
 * Get port from environment and store in Express.
 */

// 3000 by default, we change it to 4000
var port = normalizePort(process.env.PORT || '4000');
app.set('port', port);
```

Setting up the Front End

create-react-app is another command-line utility, used to generate a default React front-end application:

```shell
create-react-app node-react-fileupload-front-end
cd node-react-fileupload-front-end
```

We will also install the library we are going to use for making API calls to our back-end server:

```shell
yarn add axios
```

index.js is the entry point of the application in the src/ directory. It registers the render function using ReactDOM.render() by mounting the App component. Components are the building blocks of any React application. The App component comes from src/App.js, which is the file we will be editing in our front-end source code.

File Upload Form

We will use an HTML form element with an input, and access the value (the file) using refs. A ref is a special attribute that can be attached to any component in React. It takes a callback function, which is executed immediately after the component is mounted. It can also be used on an HTML element, in which case the callback receives the underlying DOM element as its argument. This way, the ref can be used to store a reference to that DOM element, which is exactly what we are going to do:

```javascript
class App extends Component {
  // We will add this part later

  render() {
    return (
      <div className="App">
        <h1>FileUpload</h1>
        <form onSubmit={this.handleUploadImage}>
          <div>
            <input
              ref={ref => {
                this.uploadInput = ref;
              }}
              type="file"
            />
          </div>
          <br />
          <div>
            <input
              ref={ref => {
                this.fileName = ref;
              }}
              type="text"
              placeholder="Enter the desired name of file"
            />
          </div>
          <br />
          <div>
            <button>Upload</button>
          </div>
          <hr />
          <p>Uploaded Image:</p>
          <img src={this.state.imageURL} alt="img" />
        </form>
      </div>
    );
  }
}
```

The input element must have type="file"; otherwise it would not recognize what type we are using it for.
It is similar to values like email, password, etc.

The handleUploadImage method will take care of the API call we need to make to the server. If that call succeeds, the local state of our React application is updated to let the user know the upload was successful. Note that although we installed axios when setting up the front-end app, the snippet below uses the browser's built-in fetch API, which works just as well for this request:

```javascript
constructor(props) {
  super(props);

  this.state = {
    imageURL: ''
  };

  this.handleUploadImage = this.handleUploadImage.bind(this);
}

handleUploadImage(ev) {
  ev.preventDefault();

  const data = new FormData();
  data.append('file', this.uploadInput.files[0]);
  data.append('filename', this.fileName.value);

  fetch('http://localhost:4000/upload', {
    method: 'POST',
    body: data
  }).then(response => {
    response.json().then(body => {
      this.setState({ imageURL: `http://localhost:4000/${body.file}` });
    });
  });
}
```

The FormData object lets you compile a set of key/value pairs to send using XMLHttpRequest. It is primarily intended for sending form data, but it can be used independently of forms to transmit keyed data. To build a FormData object, instantiate it and then append fields to it by calling its append() method, as we did above.

Since we are not applying any styling, our form looks bare. You can go ahead and make it look more professional; for brevity, I am keeping things simple here. I recommend always entering a file name, otherwise the file will be stored with the name undefined.jpg.

Updating the Server to Handle AJAX Requests

Right now, our server has no code to handle the POST request the React app makes.
We will add the route to app.js in our Express application, where the default route is defined:

```javascript
app.post('/upload', (req, res, next) => {
  // console.log(req);
  let imageFile = req.files.file;

  imageFile.mv(`${__dirname}/public/${req.body.filename}.jpg`, err => {
    if (err) {
      return res.status(500).send(err);
    }

    res.json({ file: `public/${req.body.filename}.jpg` });
  });
});
```

Restart the server with npm start. This route is triggered when a request is made to /upload. The callback associated with the route receives the req and res objects and access to next, the standard way of defining a middleware function in an Express application. The req object contains the file and the filename that were uploaded during form submission from the client application. If an error occurs, we return a 500 server error. Otherwise, we return the path to the uploaded file.

The .mv method is promise-based and is provided by the express-fileupload package we installed earlier. Try uploading an image file from the client now; make sure both the client and the server are running, in separate terminal tabs, at this point. You should see a success message like this in your server terminal:

```
POST /upload 200 98.487 ms - 25
GET /public/abc.jpg 200 6.231 ms - 60775
```

At the same time, the client requests the file with a GET to display it on the front end. That means the /upload route was successfully called from the browser and everything is working fine. Once the file is uploaded to the server, its path is sent back to the client so the page can reflect that the user successfully uploaded the file.

You can find the complete code for this example in the FileUpload-Example GitHub repository.
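To make the relationship between where the file is saved on disk and the path the client receives explicit, the path logic inside the upload route can be factored into a small pure helper. This is only an illustration; buildPaths is a hypothetical name and is not part of the tutorial's code.

```javascript
// Hypothetical helper (not in the original tutorial) illustrating the path
// logic of the /upload route: the destination imageFile.mv() writes to on disk,
// and the relative path res.json() returns to the client as body.file.
function buildPaths(dirname, filename) {
  return {
    diskPath: `${dirname}/public/${filename}.jpg`, // destination for imageFile.mv()
    clientPath: `public/${filename}.jpg`           // value sent back as body.file
  };
}

const paths = buildPaths('/srv/server', 'abc');
console.log(paths.diskPath);   // '/srv/server/public/abc.jpg'
console.log(paths.clientPath); // 'public/abc.jpg'
```

The client then prefixes the server origin, yielding http://localhost:4000/public/abc.jpg, which matches the GET request seen in the server log output.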
By Aman Mittal, 20 Jun 2018

How to Deploy a Node.js Application to Amazon Web Services Using Docker

Table of Contents
1. Introduction
2. Prerequisites
3. A quick primer on Docker and AWS
4. What we'll be deploying
5. Creating a Dockerfile
6. Building a docker image
7. Running a docker container
8. Creating the Registry (ECR) and uploading the app image to it
9. Creating a new task definition
10. Creating a cluster
11. Creating a service to run it
12. Conclusion

1. Introduction

Writing code that does stuff is something most developers are familiar with. Sometimes, we need to take on the responsibilities of a sysadmin or DevOps engineer and deploy our codebase to production, where it will help a business solve problems for its customers. In this AWS Docker Node.js tutorial, I'll show you how to dockerize a Node.js application and deploy it to Amazon Web Services (AWS) using Amazon ECR (Elastic Container Registry) and ECS (Elastic Container Service).

2. Prerequisites

To deploy Node.js applications to AWS using Docker, you'll need the following:

1. Node and npm: follow this link to install the latest versions.
2. Basic knowledge of Node.js.
3. Docker: the installation provides the Docker Engine, the Docker CLI client, and other tools. Follow the instructions for your operating system. To check that the installation worked, run this in a terminal:

```shell
docker --version
```

The command above should display a version number. If it doesn't, the installation didn't complete properly.

4. An AWS account: sign up for the free tier. There is a waiting period to verify your phone number and bank card; after this, you will have access to the console.
5. The AWS CLI: follow the instructions for your OS. You need Python installed.

3. A quick primer on Docker and AWS

Docker is open-source software that allows you to pack an application together with its required dependencies and environment in a "container" that you can ship and run anywhere.
It is independent of platform and hardware, so the containerized application can run in any environment in an isolated fashion. Docker containers solve many issues, such as when an app works on a coworker's computer but doesn't run on yours, or when it works in the local development environment but breaks once you deploy it to a server.

Amazon Web Services (AWS) offers reliable, scalable, and inexpensive cloud computing services for businesses. As mentioned before, this tutorial focuses on the ECR and ECS services.

4. What we'll be deploying

Let's quickly build a sample Node.js app that we'll use for the purposes of this tutorial. It is going to be a very simple app. Enter the following in your terminal:

```shell
# create a new directory
mkdir sample-nodejs-app

# change to the new directory
cd sample-nodejs-app

# initialize npm
npm init -y

# install express
npm install express

# create a server.js file
touch server.js
```

Open server.js and add a minimal Express server that responds with "Hello world from a Node.js app!". Start the app with:

```shell
node server.js
```

Access it at http://localhost:3000. You should see "Hello world from a Node.js app!" displayed in your browser. The complete code is available on GitHub.

Now let's take our very important app to production 😄.

5. Creating a Dockerfile

We start dockerizing the app by creating a single file called a Dockerfile in the base of our project directory. The Dockerfile is the blueprint from which our images are built; images in turn become containers, in which we run our apps. Every Dockerfile starts from a base image as its foundation. There are two ways to create your Dockerfile:

- Use a plain OS base image (for example, Ubuntu, Debian, or CentOS) and install an application environment such as Node.js in it, or
- Use an environment-ready base image that ships with the application environment (the OS plus Node.js) already installed.

We will proceed with the second approach.
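The Dockerfile listing itself did not survive in the source, but it can be reconstructed with reasonable confidence from the seven build steps shown in section 6 below (FROM node:6-alpine through CMD [ "node", "server.js" ]):

```dockerfile
# Base image: Node.js 6 on Alpine Linux (matches Step 1/7 of the build output)
FROM node:6-alpine

# Create the app directory inside the image (Step 2/7)
RUN mkdir -p /usr/src/app

# Make it the working directory for the instructions that follow (Step 3/7)
WORKDIR /usr/src/app

# Copy the application code (server.js and package.json) into the image (Step 4/7)
COPY . .

# Install dependencies inside the image (Step 5/7)
RUN npm install

# The container listens on port 3000 at runtime (Step 6/7)
EXPOSE 3000

# Command that starts the app (Step 7/7)
CMD [ "node", "server.js" ]
```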
We can use the official Node.js image hosted on Docker Hub, which is based on Alpine Linux. Write the Dockerfile, then let's walk through it line by line to see what is happening, and why. We are building our Docker image using the official Node.js image from Docker Hub (a repository of base images).

- Start the Dockerfile with a FROM statement. This is where we specify our base image.
- The RUN statement allows us to execute a command for anything we want to do. We create a subdirectory, /usr/src/app, that will hold our application code within the Docker image.
- The WORKDIR instruction establishes the subdirectory we just created as the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile. /usr/src/app is our working directory.
- COPY lets us copy files from a source to a destination. We copy the contents of our Node application code (server.js and package.json) from our current directory to the working directory in our Docker image.
- The EXPOSE instruction informs Docker that the container listens on the specified network port at runtime. We specify port 3000.
- Last but not least, the CMD statement specifies the command to start our application. This tells Docker how to run your application; here it is node server.js, which is typically how files are run in Node.js.

With this file completed, we are now ready to build a new Docker image.

6. Building a docker image

Make sure that Docker is up and running. Now that we have defined our Dockerfile, let's build the image, giving it a title with -t:

```shell
docker build -t sample-nodejs-app .
```

This will output hashes and alphanumeric strings that identify containers and images, ending with "Successfully built" on the last line:

```
Sending build context to Docker daemon  1.966MB
Step 1/7 : FROM node:6-alpine
 ---> 998971a692ca
Step 2/7 : RUN mkdir -p /usr/src/app
 ---> Using cache
 ---> f1aa1c112188
Step 3/7 : WORKDIR /usr/src/app
 ---> Using cache
 ---> b4421b83357b
Step 4/7 : COPY . .
 ---> 836112e1d526
Step 5/7 : RUN npm install
 ---> Running in 1c6b36b5381c
npm WARN sample-nodejs-app@1.0.0 No description
npm WARN sample-nodejs-app@1.0.0 No repository field.
Removing intermediate container 1c6b36b5381c
 ---> 93999e6c807f
Step 6/7 : EXPOSE 3000
 ---> Running in 7419020927f1
Removing intermediate container 7419020927f1
 ---> ed4ac8a31f83
Step 7/7 : CMD [ "node", "server.js" ]
 ---> Running in c77d34f4c873
Removing intermediate container c77d34f4c873
 ---> eaf97859f909
Successfully built eaf97859f909
```

Don't expect the same hash values in your terminal.

7. Running a docker container

We've built the Docker image. To see previously created images, run:

```shell
docker images
```

You should see the image we just created as the most recent, based on time. Copy the image ID. To run the container, enter in the terminal:

```shell
docker run -p 80:3000 {image-id}   # fill in your image id
```

By default, Docker containers can make connections to the outside world, but the outside world cannot connect to containers. -p publishes exposed ports to the host interface; here we publish container port 3000 to host port 80. Because we are running Docker locally, go to http://localhost to view the app.

At any moment, you can check running Docker containers by typing:

```shell
docker container ls
```

Finally, you can stop the container (using the container ID shown by the command above) with:

```shell
docker stop {container-id}
```

Leave the Docker daemon running.

8. Creating the Registry (ECR) and uploading the app image to it

Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon Elastic Container Service (ECS), simplifying your development-to-production workflow. The keyword "Elastic" means you can scale the capacity up or down as desired.

Steps:

1. Go to the AWS console and sign in.
2. Select the EC2 Container Service and choose Get started.
3. When the first-run page shows, scroll down, click Cancel, and enter the ECS dashboard.
4. To ensure your CLI can connect to your AWS account, run in the terminal:

```shell
aws configure
```

If the AWS CLI was properly installed, aws configure will ask for the following:

```
$ aws configure
AWS Access Key ID [None]: accesskey
AWS Secret Access Key [None]: secretkey
Default region name [None]: us-west-2
Default output format [None]:
```

Get the security credentials from your AWS account under your username > Access keys, then run aws configure again and fill them in correctly.

5. Create a new repository and enter a name (preferably the same container name as in your local dev environment, for consistency). For example, use sample-nodejs-app.

Follow the five instructions from the AWS console for building, tagging, and pushing Docker images. Note: the arguments below are mine and will differ from yours, so follow the steps outlined on your own console.

1. Retrieve the Docker login command that you can use to authenticate your Docker client to your registry. Note: if you receive an "Unknown options: --no-include-email" error, install the latest version of the AWS CLI.

```shell
aws ecr get-login --no-include-email --region us-east-2
```

2. Run the docker login command that was returned in the previous step (just copy and paste). Note: if you are using Windows PowerShell, run the following command instead:

```shell
Invoke-Expression -Command (aws ecr get-login --no-include-email --region us-east-2)
```

It should output: Login Succeeded.

3. Build your Docker image. You can skip this step since our image is already built:

```shell
docker build -t sample-nodejs-app .
```

4. With a completed build, tag your image with a keyword (for example, latest) so you can push it to this repository:

```shell
docker tag sample-nodejs-app:latest 559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app:latest
```

5. Run the following command to push the image to your newly created AWS repository:

```shell
docker push 559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app:latest
```

9. Creating a new task definition

Tasks function like the docker run command of the Docker CLI, but for multiple containers. They define:

- Container images (to use)
- Volumes (if any)
- Networks
- Environment variables
- Port mappings

From Task Definitions in the ECS dashboard, press the Create new Task Definition (ECS) button. Set a task name and use the following settings:

- Add Container: sample-nodejs-app (the one we pushed)
- Image: the URL to your container. Mine is 559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app
- Soft limit: 512
- Port mappings: map 80 (host) to 3000 (container) for sample-nodejs-app
- Env variables: NODE_ENV: production

10. Creating a cluster

A cluster is the place where AWS containers run. It uses configurations similar to EC2 instances. Define the following:

- Cluster name: demo-nodejs-app-cluster
- EC2 instance type: m4.large
- Number of instances: 1
- EBS storage: 22
- Key pair: None
- VPC: New

When the process is complete, you may choose to click "View cluster."

11. Creating a service to run it

Go to Task Definitions > click demo-nodejs-app > click the latest revision. Inside the task definition, click the Actions drop-down and select Create service. Use the following:

- Launch type: EC2
- Service name: demo-nodejs-app-service
- Number of tasks: 1

Skip through the remaining options and click Create service, then View service. You'll see its status as PENDING; give it a little time and it will change to RUNNING.

Go to the cluster (through the link from the service we just created) > EC2 instances > click on the container instance to reveal the public DNS. Visit the public DNS to view our app! Mine is ec2-18-219-113-111.us-east-2.compute.amazonaws.com.

12. Conclusion

Congrats on finishing this post! Grab the code for the Docker part from GitHub.
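As an appendix to section 9: the container settings chosen through the console correspond roughly to the following ECS task definition JSON. This is a sketch assembled from the values listed above, not an export from the console; note that memoryReservation is ECS's name for the soft memory limit.

```json
{
  "family": "demo-nodejs-app",
  "containerDefinitions": [
    {
      "name": "sample-nodejs-app",
      "image": "559908478199.dkr.ecr.us-east-2.amazonaws.com/sample-nodejs-app:latest",
      "memoryReservation": 512,
      "portMappings": [
        { "hostPort": 80, "containerPort": 3000, "protocol": "tcp" }
      ],
      "environment": [
        { "name": "NODE_ENV", "value": "production" }
      ],
      "essential": true
    }
  ]
}
```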

Understanding The Human Process in Machine Learning

What Is Machine Learning?

There's probably no definition that the whole world would agree on, but there are certainly some core concepts. The core thing machine learning does is find patterns in data; it then uses those patterns to predict the future. For example, we could use machine learning to detect credit card fraud if we have data about previous credit card transactions: by finding patterns in that data, we may be able to detect when a new credit card transaction is likely to be fraudulent. Or maybe we want to determine whether a customer is likely to switch to a competitor. There are lots more examples, but the core idea is that machine learning lets us find patterns in data, then use those patterns to predict the future.

Finding the Patterns

How did we learn to read? In reading, we identify letters, and then the patterns of letters that together form words, and we had to recognize those patterns when we saw them again. That's what learning means, and that's what machine learning does with the data we provide. So, suppose I have data about credit card transactions: only four records, each with three fields — the customer's name, the amount of the transaction, and whether it was fraudulent. What pattern does this data suggest for fraudulent transactions? That if the name starts with T, they're a criminal? Well, probably not. The problem with having so little data is that it's easy to find patterns, but hard to find patterns that are correct, i.e., predictive patterns that help us understand whether a new transaction is likely to be fraudulent. Suppose I have more data, meaning more records and more fields in each one, and I know where the card was issued, where it was used, and the age of the user. Now what's the pattern for fraudulent transactions? Well, if we look, there really is a pattern in this data.
The pattern is that a transaction is fraudulent if the cardholder is in their 20s, the card was issued in the USA and used in Russia, and the amount is more than $1,000. We could find that pattern if we looked at this data for a little while. But once again, do we know that this pattern is truly predictive? Probably not; we still don't have enough data. To do this well, we should have so much data that people simply can't find the patterns by hand. For this, we have to use software. That's where machine learning comes in for humans.

Why Machine Learning?

Well, there are several reasons. A big one is that doing machine learning well requires lots of data, and we live in the big data era. It requires lots of compute power, which we have: we live in the cloud era. And it requires effective machine learning algorithms, which we have because researchers have spent years, even decades, in this space learning what works. All of these things are now more available than ever, and that's a big reason why machine learning is popular today.

Who's interested in machine learning? Mainly three groups of people. The first is business leaders: they want solutions to business problems, and good solutions have real business value. Second are software developers, because they want to build better and smarter applications; as we saw, applications can rely on models created via machine learning to make better predictions. The third group of people really involved in this space is data scientists, who know statistics and want powerful, easy-to-use tools to help them make good predictions.

The Role of R

There's a machine learning technology worth mentioning called R. R is an open-source programming language and environment; it's not just a language. It supports machine learning, various kinds of statistical computing, and more. R has lots of available packages that address machine learning problems and all sorts of other things.
Many commercial machine learning offerings support R. In fact, R has been around for a long time; its roots are in the 90s. But it's not the only choice in this area. Python is also increasingly popular as an open-source technology for doing machine learning, and there are now a number of machine learning libraries and packages for Python as well. So R is no longer alone as the only open-source choice in this area, but it's still fair to say it's the most popular.

The Machine Learning Process

Finally, machine learning in a nutshell looks like this. We start with data that contains patterns. We then feed that data into a machine learning algorithm (it could be more than one) that finds patterns in the data. The algorithm generates something called a model. A model is functionality, typically code, that's able to recognize patterns when presented with new data. Applications can then use that model by supplying new data to see whether it matches known patterns, such as supplying data about a new transaction; the model can return the probability that the transaction is fraudulent. Machine learning lets us find patterns in existing data, then create and use a model that recognizes those patterns in new data.

Understanding machine learning means understanding the machine learning process, and the machine learning process is iterative: we repeat things over and over, in both big and small ways. The process is also challenging, because we're often working with large amounts of potentially complex data, and we're trying to find meaningful, predictive patterns in it.

A Closer Look at the Process

Let's look at machine learning concepts in a more detailed way, along with the terminology used in machine learning. The first thing to do is walk through some of that terminology; like most fields, machine learning has its own unique jargon. Let's start with the idea of training data.
Training data just means the prepared data that's used to create a model. We say training data rather than prepared data because, in the jargon of machine learning, creating a model is called training a model. So, training data is the data used to train — that is, to create — a model.

There are two big, broad categories of machine learning. One is called supervised learning, which means the value we want to predict is actually in the training data. For instance, in the data for predicting credit card fraud, whether or not a given transaction was fraudulent is actually contained in each record. In the jargon of machine learning, that data is labeled, and we're doing what's called supervised learning when we try to predict whether a new transaction is fraudulent. The alternative, unsurprisingly, is called unsupervised learning, where the value we want to predict is not in the training data; the data is unlabeled. Both approaches are used, but it's fair to say the most common approach is supervised learning.

Data Preprocessing

The machine learning process starts with data. It might be relational data, it might come from a NoSQL database, it might be binary data. Wherever it comes from, we need to read this raw data into data preprocessing modules, typically chosen from the things our machine learning technology provides. We have to do this because raw data is very rarely in the right shape to be processed by machine learning algorithms. For example, maybe there are holes in your data — missing values or duplicates — or maybe there's redundant data, where the same thing is expressed in two different ways in different fields, or maybe there's information we know will not be predictive and won't help us create a good model. We want to deal with all of these issues. The goal is to create training data. The training data, as we discussed in the earlier example, commonly has columns, and those columns are called features.
So, for example, in the credit card fraud data, there were columns containing the country where the card was issued, the country where it was used, and the amount of the transaction; those are all features in the jargon of machine learning. In supervised learning, the value we're trying to predict — such as whether a given transaction is fraudulent — is also in the training data. In the jargon of machine learning, we call that the target value.

Categorizing Machine Learning Problems

It's common to group machine learning problems into categories. There are three main categories, discussed below.

The first category is called regression. The problem here is that we have data, and we'd like to find a line or a curve that best fits it. Regression problems are typically supervised learning scenarios, and an example question would be: how many units of this product will we sell next month?

The second category of machine learning problems is called classification. Here we have data that we want to group into classes — at least two, sometimes more. When new data comes in, we want to determine which class it belongs to. This is commonly used with supervised learning, and an example question would be: is this credit card transaction fraudulent? When a new transaction comes in, we want to predict which class it's in, fraudulent or not fraudulent; and often what we get back is not a plain yes or no, but a probability.

The third category of machine learning problems is commonly called clustering. Here we have data, and we want to find clusters in it. This is a good example of where unsupervised learning applies, because we don't have labeled data and don't necessarily know what we're looking for. An example question here would be: what are our customer segments?
We might not know these things up front, but we can use unsupervised machine learning to help us figure that out.

The kinds of problems machine learning addresses aren't the only thing that can be categorized. It's also useful to think about the styles of machine learning algorithms used to solve those problems. For example, there are decision tree algorithms. There are algorithms that use neural networks, which in some ways emulate how the brain works. There are Bayesian algorithms that use Bayes' theorem to work out probabilities. There are K-means algorithms used for clustering, and there are lots more. Having some broad sense of what the styles are is certainly useful.

Training and Testing a Model

Models are the end product of this process, and they are what applications actually use. An application can call a model, providing values for the features the model requires. Models make predictions based on the features that were chosen when the model was trained. The model can then return a predicted value. That value might be whether or not a transaction is fraudulent, estimated revenue, a list of movie recommendations, or something else.

Let's take a closer look at the process of creating and training a model, starting with our training data. Because we're using supervised learning, the target value is part of the training data. In the credit card example, that target value is whether a transaction is fraudulent or not. Our first problem is to choose the features we think will be most predictive of that target value. For example, in the credit card case, maybe we decide that the country in which the card was issued, the country it's used in, and the age of the user are the features most likely to help us predict whether a transaction is fraudulent. We've chosen, let's say, features 1, 3, and 6 in our training data. We then input that training data into our chosen learning algorithm.
But we only send in, say, 75% of all the data for the features we've chosen. How do we decide which features are most predictive, and how do we choose a learning algorithm? There are lots of options, as we've seen. If it's a simple problem, or our machine learning technology is simple, the choices can be limited and it's not too hard. If we have a more complex problem, though, with lots of data and a powerful machine learning technology with lots of algorithms, this can be hard. What if our training data has 100 features? Which ones are predictive? How many should we use: 5, 10, 50? This is what data scientists are for. People who have knowledge of and facility with these technologies, as well as domain knowledge about some particular problem, are so valuable precisely because they can help us answer these questions. It can be a hard problem. In any case, the result of this step is a candidate model.

The next problem is to work out whether or not this model is good, and in supervised learning we do that like this. We input test data to the candidate model. That test data is the remaining 25%, the data we held back, for the features we're using, in this case 1, 3, and 6. Our candidate model can now generate target values from that test data, and we know what those target values should be, because they are in the labeled data we held back. All we have to do is compare the target values produced by the candidate model from the test data with the real target values. That's how we figure out whether or not our model is predictive when we're doing supervised learning.

Suppose our model's just not very good. How can we improve it? One possibility is that we've chosen the wrong features, so this time we choose different ones, say 1, 2, and 5. We may also have bad data, so we can get some new data.
The problem might be the algorithm, so we can modify some parameters in our algorithm or choose a new one. Whatever we do, we'll generate another candidate model, test it, and repeat: the process iterates and evolves. This process is called machine learning, but notice how much of it people do. People make decisions about features, about algorithms, about parameters. The process is very human, even though it's called machine learning.

Machine learning has grown up. It is no longer a technology just for researchers in faraway labs, and it isn't hard to understand, although, as we have seen, it can be hard to do well. Machine learning can help people build better applications and contribute a great deal to society. I hope this tutorial gave you all the information needed to clearly understand the human process in machine learning.
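The train-and-evaluate loop described above can be sketched in a few lines of JavaScript. This is an illustrative toy, not a real learning algorithm: the "model" is just a threshold on a single feature, and the records, the feature choice, and the 75/25 split are all invented for the example.

```javascript
// Toy supervised-learning loop: split labeled data 75/25, "train" a
// trivial threshold model on the training portion, then score the
// candidate model by comparing its predictions with the known target
// values in the held-back test portion. All data here is invented.
const records = [
  { amount: 12, fraud: false }, { amount: 950, fraud: true },
  { amount: 30, fraud: false }, { amount: 870, fraud: true },
  { amount: 25, fraud: false }, { amount: 990, fraud: true },
  { amount: 40, fraud: false }, { amount: 15, fraud: false },
];

const cut = Math.floor(records.length * 0.75);
const training = records.slice(0, cut); // 75% used to train
const test = records.slice(cut);        // 25% held back for evaluation

// "Training": pick the midpoint between the average fraudulent amount
// and the average legitimate amount as the decision threshold.
const avg = (rows) => rows.reduce((sum, r) => sum + r.amount, 0) / rows.length;
const threshold =
  (avg(training.filter(r => r.fraud)) + avg(training.filter(r => !r.fraud))) / 2;
const model = (record) => record.amount > threshold;

// Evaluation: the fraction of test records where the candidate model's
// prediction matches the real target value.
const accuracy = test.filter(r => model(r) === r.fraud).length / test.length;
```

If the accuracy is poor, the loop repeats with different features, different data, or a different algorithm, exactly as described above.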

SAP Expands The Global Adoption Of Blockchain Across The Intelligent Enterprise

On June 6, 2018, SAP announced the expansion of its SAP Leonardo Blockchain technologies to generate business value for customers instantly. It also announced the general availability of SAP Cloud Platform Blockchain, a Blockchain-as-a-Service that empowers every customer, developer, and partner to make blockchain beneficial and actionable for their business.

"We have introduced a blockchain co-innovation program with 65 customers participating in it, where we leverage the ecosystem to identify certain interests they have - the sweet spots - and try adding value to it," said Zube.

Customers can now adopt blockchain capabilities as an addition to their enterprise systems with SAP. The newly released SAP HANA Blockchain service makes data management simple and easy with a unified view of all transactional enterprise data, regardless of whether it lives in the blockchain or in the core systems. This enables businesses to build apps for decentralized architectures such as blockchain, developed on top of the database technology, without needing to deal with the underlying complexities that come with developing multiple blockchain-based networks and systems.

The company is now offering its customers the advanced blockchain technologies MultiChain and Hyperledger Fabric, which allow customers to "tailor their blockchain solutions to their specific business needs," according to Zube. "We deeply believe in a technology-agnostic approach and therefore plan to provide even more blockchain technologies on SAP Cloud Platform Blockchain in the future to ensure our offering always meets our customers' changing requirements."

To enhance cross-company collaboration, SAP has launched a global blockchain consortium to unite the business expertise of partners, customers, and other similarly invested industry players.
Companies such as Hewlett Packard Enterprise, Amkor, A³ by Airbus, UPS, FLEX, SAP, and Intel Corporation have joined the consortium as initial founding members.

Source: SAP official blog

Introduction To Higher Order Components (HOC) In React

This tutorial is intended to give you a step-by-step understanding of how higher order components work, and when and why to use them. We will keep it beginner friendly, to help you get a better understanding of the concept and why it exists.

Higher order components are, in general, a functional programming technique. This article does not require any functional programming knowledge, but it does require some basic knowledge of React. We assume that you are already familiar with React, React components, React props, lifecycle methods, and how to build a basic app using React. We will cover some functional programming concepts that will help beginners understand higher order components in React better.

Let's begin with a quick introduction to React's higher order components: a higher-order component is a function that takes a component and returns a new component. A HOC is not a feature of React or any other programming language, but a pattern that evolved from the compositional (made of components) nature of React.

Functional programming and higher-order functions

A higher order function is a function that accepts another function as an argument. You have probably already used the map function, which falls under this category. This is a concept derived from the world of functional programming. But why use a functional programming concept in React? The goal of this pattern is to decompose the logic into simpler and smaller functions that can be reused. A rule of thumb: a function does just one task and does it well.
This also avoids side effects (changing anything that is not owned by the function) and makes debugging and maintenance a whole lot easier.

A classic functional programming example is multiplication:

const multiply = (x) => (y) => x * y
multiply(5)(20)

Similarly, a HOC takes another component as an argument. Let's build a HOC and learn more as we go.

Higher order component in React

Let's look at some code straight away.

const reverse = (PassedComponent) =>
  ({ children, ...props }) => (
    <PassedComponent {...props}>
      {children.split("").reverse().join("")}
    </PassedComponent>
  )

const Name = (props) => <span>{props.children}</span>
const ReversedName = reverse(Name)

<ReversedName>Hello</ReversedName> //=> olleH

The above example takes a component and reverses the content inside it. reverse is a HOC: it takes in a component (Name in the example), finds the content inside the rendered element, reverses it, and spits out an element with reversed content. What is shown above is an extremely simple use case for the purpose of understanding the concept.

Two things happen with a HOC:
- It takes a component as an argument
- It returns something

Let's have a look at a more practical and complex use case. In all the apps that we have created in the past, if we have to load data from an API, there is latency involved. Typically there is a time lag between when the page is rendered and when the actual data is shown. Most apps show a loading animation to make the user experience better. Let us build a loading animation component to demonstrate the concept of HOC.

You can find the entire working code here. We will refer to certain parts of the repo as we progress. This is a React app made using create-react-app. First of all, let's understand how the app works. We use randomuser.me to generate some sample data. Let's assume that we are building a feed of random users. In App.js we make a request to randomuser.me to get some random data.
This request will be created inside the componentDidMount function.

componentDidMount() {
  fetch("https://api.randomuser.me/?results=50")
    .then(response => response.json())
    .then(parsedResponse =>
      parsedResponse.results.map(user => ({
        name: `${user.name.first} ${user.name.last}`,
        email: user.email,
        thumbnail: user.picture.thumbnail
      }))
    )
    .then(contacts => this.setState({ contacts }));
}

The random data from the API is processed: since we are only interested in the name, email, and the image, we filter those out and set them as the app state. Once we have the data, we pass the contacts to our Feed component as a prop:

<Feed contacts={this.state.contacts} />

Here is how our Feed component looks. It simply passes the received contact data into FeedItem, and FeedItem iterates through the data to actually display it.

import React, { Component } from "react";
import FeedItem from "./FeedItem";
import Loading from "./HOC/Loading";
import FeedStyle from "./Feed.css";

class Feed extends Component {
  render() {
    return (
      <FeedItem contacts={this.props.contacts} />
    );
  }
}

export default Loading("contacts")(Feed);

You will have noticed that the export statement is different from the normal case. Instead of Feed, we export the Feed component wrapped in a Loading component. This is because our Loading HOC is a curried function. Currying is the process of breaking down a function into a series of functions that each take a single argument.
Read more about currying here. Let's take a look at our Loading component.

import React, { Component } from "react";

const isEmpty = prop =>
  prop === null ||
  prop === undefined ||
  (prop.hasOwnProperty("length") && prop.length === 0) ||
  (prop.constructor === Object && Object.keys(prop).length === 0);

const Loading = loadingProp => WrappedComponent => {
  return class LoadingHOC extends Component {
    componentDidMount() {
      this.startTimer = Date.now();
    }
    componentWillUpdate(nextProps) {
      if (!isEmpty(nextProps[loadingProp])) {
        this.endTimer = Date.now();
      }
    }
    render() {
      const myProps = {
        loadingTime: ((this.endTimer - this.startTimer) / 1000).toFixed(2)
      };
      return isEmpty(this.props[loadingProp]) ? (
        <div className="loader" />
      ) : (
        <WrappedComponent {...this.props} {...myProps} />
      );
    }
  };
};

export default Loading;

Let's understand how the component works, step by step:
- For ease of understanding, assume the Loading component takes another component (in our case the Feed component) along with a property name, contacts.
- The Loading component checks whether the loadingProp (in our case contacts) is empty; the function isEmpty does this.
- If it is empty, the Loading component returns the loading element. We use the class name loader to add some styles and implement the loader.
- Otherwise it returns the original component with optional additional properties (in this case myProps).

In our example, we have calculated the loading time for demonstration purposes and to show that we can pass data back. So what happens when we wrap any component in the Loading component along with a property name? It checks if the passed property is empty. If it is empty, a loading component is returned; if data is present, the original component is returned. That wraps up the implementation of our HOC, and we now know how to use React higher order components.
Let's understand the when and whys. In a normal case, to implement a loading indicator, we could check whether the corresponding property (contacts in our example) is present inside the component itself and render a Loading component within the original component. However, this leads to redundant code. For example, if we have a Feed component controlled by contacts and a List component controlled by name, we would have to check whether the data is present in two different places and render the loading components in both.

A generic higher order component, as shown in the above example, avoids this. So in case we have to implement loading for the List component, in List.js we can simply do:

export default Loading("name")(List);

This is just one application of HOC; you can use it in any way you wish. Basically, what a HOC does is:
- Take a component as an argument
- Return something; this can be anything, and you can completely disregard the original component and render something completely new

In short, HOC helps you organize your codebase in a much better way and decreases code redundancy. Even Redux uses HOCs: the connect function that you have come across is a HOC that does many things with the original component. If you see the same code written in many places in your codebase, there might be a chance to move it into a HOC and make your codebase a lot cleaner.
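The currying shape behind Loading("contacts")(Feed) can be demonstrated with plain functions, no React required. The names below (withDefault, greet) are invented for this illustration:

```javascript
// A curried higher-order function: withDefault takes a configuration
// value first, then a function, and returns an enhanced function.
// This is the same shape as Loading(loadingProp)(WrappedComponent).
const withDefault = (fallback) => (fn) => (input) =>
  input === null || input === undefined || input === ""
    ? fallback        // "loading" branch: no data yet, show the fallback
    : fn(input);      // data present: delegate to the wrapped function

const greet = (name) => `Hello, ${name}!`;

// Configure once, reuse everywhere, just like Loading("contacts")(Feed)
// and Loading("name")(List) reuse one HOC with different settings.
const safeGreet = withDefault("Hello, stranger!")(greet);
```

Because the configuration argument is applied first, one generic wrapper can be specialized for many different wrapped functions (or components), which is exactly why the tutorial's Loading HOC takes the property name before the component.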

Progressive Web Apps with Ionic and Firebase

Progressive Web Apps (PWAs) are a new way to offer incredible mobile app experiences that are highly optimized, reliable, and accessible completely on the web. PWAs are web applications that are a hybrid between a traditional mobile website and a mobile application. They offer the experience and reliability of a mobile app without requiring the installation of an app. A PWA is accessed just as one would access a normal website or web app: it is served over a URL, but once it loads, it makes the user feel as if they are using a mobile app in the browser. Brands large and small are jumping on Progressive Web Apps to create better user experiences anywhere the web runs. Here are some of our favorites.

Google's checklist of what makes a PWA mentions the following prerequisites:
- The content should be served over HTTPS.
- The app should be completely responsive, so the user's device hardware and screen size are the least of concerns.
- The app works offline, so even if the user is not connected to the internet, some content is still served.
- The app adds an icon to the user's home screen. The service worker mechanism makes this possible; the user can add a shortcut to launch the PWA right from their home screen.
- The app should be blazingly fast; load times should be minimized so it feels like a native app.
- Cross-browser compatibility is required, so the app works on all browsers as intended.
- Transitions should feel snappy as you tap around, even on a slow network; this is key to perceived performance.
- Each page in the application should be deep linkable, so URLs can take the user exactly where they want to go in the app.

Now that we know enough about PWAs, let's have a look at one, and then we will move on to building one ourselves. Pick up your phone and open www.pinterest.com in the browser, preferably Chrome. You will see the app load up in just a few seconds.
The interface feels as if it is an app rather than a website opened in the browser. This is because it is completely responsive and blazingly fast. When you log in, you will get the prompts shown in the picture below. The app asks the user to allow push notifications. This is not a requirement for PWAs, but it is recommended. The second prompt reflects a mandatory requirement for a web app to be a PWA: it asks the user to allow the app to create an icon on the user's home screen for easy accessibility. Once you allow it, the icon is added to the home screen, and it launches the app when the user taps on it. Another amazing example is app.starbucks.com. So yes, PWAs are very popular among the big brands.

So, this is how PWAs differ from a regular web app. Now, let's create one. In this article, we will be creating a PWA with Ionic and Firebase Hosting. The Ionic framework is the most popular hybrid mobile app development framework and has seen tremendous growth in the last few years. It is tightly integrated with Angular. It is intended to be used with Apache Cordova and lets developers build apps for Android, iOS, and Windows using just web technologies like HTML, JS, and CSS.

Transforming an Ionic app into a PWA is very easy, but we will do it step by step. There are just a few steps. First off, make sure that you have installed NodeJS and NPM. Install Ionic with the following command in your terminal or command prompt:

npm install -g ionic

Do not forget the -g flag; Ionic should be installed globally. Once the command finishes, make sure that everything is okay by typing:

ionic -v

This will give you the version of Ionic installed. Now, let's create our first Ionic app. Type:

ionic start PWAIonic

In the above command, replace "PWAIonic" with any name of your choice. This will be the name of your app. You will be prompted to choose a template.
For a real app you should choose the blank template, but for this demo let's select conference, as this gives us a ready-made app to work with. Next, you will be prompted to answer whether you want to integrate Cordova into your app. Since we are interested in building a PWA, we will select No and wait for the process to finish. If you are prompted to link your app to the Ionic Pro SDK, you can select No for now.

Once the app is created, a new folder with the name of your app is created. Change your current directory to that folder and open it in your favorite text editor. We are using Visual Studio Code, but you are free to use the one that you like. You can now modify your app; the source code that you can work on lives inside the src folder.

For now, let's run this and have a look at how the app looks in the browser. To run the app, open the terminal again in the project directory and type:

ionic serve

This command will build your app, start a local development server, and then launch your app in the browser. It will look like this. Try resizing your browser and you will see that the app is completely responsive. Feel free to play around with it. You can stop the server by pressing CTRL + C when you are done.

Now, we need to modify a few files to make this app ready to be built as a PWA. Back in the code editor, open the file src/index.html and uncomment the commented-out service worker registration code. Save the file. Well, now you are pretty much done. Technically, that's all you needed to build a Progressive Web App with Ionic. Uncommenting that code enables the service worker, which allows your app to be cached in the browser's memory so it can be loaded even if the device is offline. Just one more optimization and then we will build this app. Go ahead and comment out the script tag that refers to the cordova.js file in src/index.html.
It is not needed, since we are not using Cordova. Now we are ready to build the app, but before we do, let's examine the src/manifest.json and service-worker.js files.

The manifest.json file contains the metadata about your PWA: the app name, URL, name of the index file, icons, and colors. You can change these if you want to; for now, we will just leave them as they are.

The service-worker.js file is where all the magic happens. This is the default service worker setup that Ionic uses. It will pre-cache all of your static assets, ensuring that your app loads reliably and fast under any network condition. If you ever want to register the user for push notifications, this is where you would put the code for that. For now, let's just see how to make our first PWA with Ionic. Open the terminal and type in the command:

npm run ionic:build --prod

This command will build your Ionic app as a production-ready PWA. The CSS and JS are minified and uglified. You can check the built PWA in the www folder in your app's folder. The contents of the www folder can now be deployed to any web server with an HTTPS connection. Let's deploy it on Firebase Hosting.

To do that, make sure that you have firebase-tools installed and working. Use the commands mentioned here to deploy your app to Firebase Hosting. Assuming that you have firebase-tools installed, type the following commands in order to deploy the www folder. Make sure that you have created a new Firebase project here, and make sure to log in with the same account in the next step.

firebase login

This command will open the browser to authenticate you as a Firebase user. Log in to a Google account. Once you log in, come back to the terminal.

firebase init hosting

This command will initialize a Firebase project in your Ionic project's directory.
You will get a list of the Firebase projects in your Firebase account; select the one that you created in the step above. I created a project on Firebase called PWAIonic, so I will select that. The next steps are very important:
- For the public directory, type "www".
- For configuring as an SPA, answer "Yes".
- For overwriting index.html, answer "No".

Once done, you are ready to execute the final step. Type:

firebase deploy

This command may take some time, about a minute or two. It will deploy the contents of the www directory to Firebase Hosting, and you will get a URL to share with everyone. See, right there at the bottom, we get the hosting URL. Let's go and open it right away on our phone in the browser.

Voila. We get the PWA launched on the phone. In just a few seconds, we also get the prompt to add the PWA to the home screen. Awesome. Here is the URL to the PWA we built: https://pwaionic.firebaseapp.com

So, what are you waiting for? Build your first PWA with reference to this Ionic Progressive Web App with Firebase tutorial and get going. There is a lot to explore. Happy reading.
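As a reference point for the manifest.json file discussed earlier, a web app manifest typically looks something like the sketch below. The name, colors, and icon path here are illustrative values, not the exact contents of the file Ionic generates:

```json
{
  "name": "PWAIonic",
  "short_name": "PWAIonic",
  "start_url": "index.html",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#4e8ef7",
  "icons": [
    {
      "src": "assets/imgs/logo.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}
```

The display value of "standalone" is what makes the launched PWA hide the browser chrome and feel like a native app, and the icons array supplies the home screen icon mentioned in Google's checklist.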

How to Export Firebase Data to Excel In A ReactJS App

What is going to happen

In this five-step tutorial, we are going to start by making a ReactJS app. After that, we will make a proof of concept. Next up is displaying the data in a fancy way. Then we import data from Firebase and finally export it to Excel. If you ever get stuck or just want to see the whole project, here's the link to the code: https://github.com/antonderegt/export-demo

Prerequisites

Before we start, I'm going to assume you have knowledge of, or have installed, the following:
- yarn (npm works as well)
- Basic ReactJS knowledge
- GitHub (not mandatory, but it is used in this tutorial)
- Knowledge about Firebase is a plus
- Any code editor

If you are set on these five easy parts, we can start with the tutorial. Let's see the steps involved in exporting Firebase data to Excel with a ReactJS app.

Step 1: Create a ReactJS app

Do you like simple? I like to keep things simple, so we will start off with a boilerplate ReactJS app. Open your terminal and install create-react-app by typing the following:

$ yarn global add create-react-app

Once this is installed we can create a React app from anywhere on our computer. Open the terminal, go to the folder where you want to store your application, and type this:

$ create-react-app export-demo
$ cd export-demo
$ yarn start

A browser window will open and show a beautiful spinning atom. I will push this code to GitHub so you can follow along; you should do so as well. Here we go:

$ git init
$ git add .
$ git commit -m "Step 1: Init react app"
$ git remote add origin https://github.com/your-user-name/export-demo.git
$ git push -u origin master

Step 2: Rough version

In this step, we are going to make a rough version of what we are going to build.
Let's go ahead and replace the code in src/App.js with the following:

import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div style={style}>
        <h1>Export Demo</h1>
        <button>Export to Excel</button>
        <table>
          <thead>
            <tr>
              <th>Firstname</th>
              <th>Lastname</th>
              <th>Age</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Elon</td>
              <td>Musk</td>
              <td>23</td>
            </tr>
            <tr>
              <td>Donald</td>
              <td>Trump</td>
              <td>94</td>
            </tr>
          </tbody>
        </table>
      </div>
    );
  }
}

const style = {
  display: 'flex',
  justifyContent: 'center',
}

export default App;

In the code above, two imports were deleted, and the div in the return was changed to our own title, a button, and a table. The style object at the bottom just makes everything sit in the center of the screen. Upon saving this file, the page will look like this. You can see three elements here: title, button, table. Somewhere in the future, the table will be populated from Firebase, and the button will export the table to an Excel file. That's it for step 2; let's push our code to GitHub again.

$ git add .
$ git commit -m "Step 2: Rough version"
$ git push origin master

Step 3: React-table

The standard table is okay if you only have two records, like we have right now. When you amass lots of data you want something prettier and easier to use.
React-table is the package we are going to use. Install react-table:

$ yarn add react-table

Replace the code in /src/App.js with the following:

import React, { Component } from 'react';
import ReactTable from 'react-table'
import 'react-table/react-table.css'

class App extends Component {
  constructor(props) {
    super(props)
    this.state = {
      users: [
        {
          firstname: "Elon",
          lastname: "Musk",
          age: 23
        },
        {
          firstname: "Donald",
          lastname: "Trump",
          age: 94
        }
      ]
    }
  }

  render() {
    const userColumns = [
      {
        Header: "Name",
        columns: [
          {
            Header: "First Name",
            id: "firstname",
            accessor: d => d.firstname
          },
          {
            Header: "Last Name",
            id: "lastname",
            accessor: d => d.lastname
          }
        ]
      },
      {
        Header: "Age",
        columns: [
          {
            Header: "Age",
            id: "age",
            accessor: d => d.age
          }
        ]
      }
    ]
    return (
      <div style={style}>
        <h1>Export Demo</h1>
        <button>Export to Excel</button>
        <ReactTable data={this.state.users} columns={userColumns} />
      </div>
    );
  }
}

const style = {
  display: 'flex',
  justifyContent: 'center',
}

export default App;

The code above imports react-table and its style. The state defined in the constructor is what react-table will use as data. The const userColumns initializes the table with the right headers and makes sure the right data is displayed in the right column. The plain table was replaced by the ReactTable component, which takes the users data from the state and the column initialisation as inputs. The result: this immediately looks a lot better! What do you think? We finished step 3 already.

$ git add .
$ git commit -m "Step 3: React-table"
$ git push origin master

Step 4: Connect to Firebase

If you don't know what Firebase is, it's basically a cloud database hosted by Google.
It's free to use for applications that have fewer than 100 concurrent connections, which is enough for small applications. To get started with Firebase, go to https://console.firebase.google.com/u/0/. Click the + to create a new project. Give it a name and a region. We will use the Realtime Database because Cloud Firestore is still in beta. Select the 'Start in test mode' radio button and click 'ENABLE'. These settings can be changed at any time in the Rules tab of the database. For now, we will ignore the error message that says: 'Your security rules are defined as public, anyone can read or write to your database'.

Let's populate it with some random data. You can do this by hovering over the name of your database, in my case export-demo, and clicking the + sign. The next step is to add the Firebase package to our application by pasting the following into your terminal:

$ yarn add firebase

To connect to the database we just created, the firebase package needs credentials. You can find these by clicking the gear icon on the left side of the Firebase console, then clicking 'Project settings'. Make a new file called config.js in the src directory and add this to the file:

import firebase from 'firebase'
import firebaseSecrets from './firebaseSecrets'

const config = {
  apiKey: firebaseSecrets.apiKey,
  authDomain: firebaseSecrets.authDomain,
  databaseURL: firebaseSecrets.databaseURL,
  projectId: firebaseSecrets.projectId
};

const fire = firebase.initializeApp(config);

export default fire;

In the file above, the secret credentials are imported from a file called firebaseSecrets.js. Let's create that file in the src directory and add this to it:

module.exports = {
  apiKey: "your-api-key",
  projectId: "your-project-id",
  authDomain: "your-project-id.firebaseapp.com",
  databaseURL: "https://your-project-id.firebaseio.com"
};

Make sure to change the values in the firebaseSecrets.js file. We want to keep the credentials in this file secret.
To do this, go to the file .gitignore and add the line: firebaseSecrets.js

Time to load the Firebase data in our app. Change /src/App.js to the following:

```javascript
import React, { Component } from 'react';
import ReactTable from 'react-table';
import 'react-table/react-table.css';
import firebase from './config';

class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      users: []
    };
  }

  componentWillMount() {
    this.getUsers();
  }

  getUsers() {
    let users = [];
    firebase.database().ref(`users/`).once('value', snapshot => {
      snapshot.forEach(snap => {
        users.push(snap.val());
      });
      this.setState({
        users
      });
    });
  }

  render() {
    const userColumns = [
      {
        Header: "Name",
        columns: [
          { Header: "First Name", id: "firstname", accessor: d => d.firstname },
          { Header: "Last Name", id: "lastname", accessor: d => d.lastname }
        ]
      },
      {
        Header: "Age",
        columns: [
          { Header: "Age", id: "age", accessor: d => d.age }
        ]
      }
    ];
    return (
      <div style={style}>
        <h1>Export Demo</h1>
        <button>Export to Excel</button>
        <ReactTable data={this.state.users} columns={userColumns} />
      </div>
    );
  }
}

const style = {
  display: 'flex',
  justifyContent: 'center',
};

export default App;
```

```shell
$ git add .
$ git commit -m "Step 4: Connect to Firebase"
$ git push origin master
```

Step 5: Export to Excel

The final step! In this step we are going to add the logic to the export button we have been looking at the whole time. I bet you clicked it 😄.
The package we are going to use to export is called xlsx; let’s install it:

```shell
$ yarn add xlsx
```

Change /src/App.js one more time to include the export function:

```javascript
import React, { Component } from 'react';
import ReactTable from 'react-table';
import 'react-table/react-table.css';
import firebase from './config';
import XLSX from 'xlsx';

class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      users: []
    };
    this.exportFile = this.exportFile.bind(this);
  }

  componentWillMount() {
    this.getUsers();
  }

  getUsers() {
    let users = [];
    firebase.database().ref(`users/`).once('value', snapshot => {
      snapshot.forEach(snap => {
        users.push(snap.val());
      });
      this.setState({
        users
      });
    });
  }

  exportFile() {
    let users = [["First Name", "Last Name", "Age"]];
    this.state.users.forEach((user) => {
      let userArray = [user.firstname, user.lastname, user.age];
      users.push(userArray);
    });
    const wb = XLSX.utils.book_new();
    const wsAll = XLSX.utils.aoa_to_sheet(users);
    XLSX.utils.book_append_sheet(wb, wsAll, "All Users");
    XLSX.writeFile(wb, "export-demo.xlsx");
  }

  render() {
    const userColumns = [
      {
        Header: "Name",
        columns: [
          { Header: "First Name", id: "firstname", accessor: d => d.firstname },
          { Header: "Last Name", id: "lastname", accessor: d => d.lastname }
        ]
      },
      {
        Header: "Age",
        columns: [
          { Header: "Age", id: "age", accessor: d => d.age }
        ]
      }
    ];
    return (
      <div style={style}>
        <h1>Export Demo</h1>
        <button onClick={this.exportFile}>Export to Excel</button>
        <ReactTable data={this.state.users} columns={userColumns} />
      </div>
    );
  }
}

const style = {
  display: 'flex',
  justifyContent: 'center',
};

export default App;
```

In the code above the xlsx package gets imported and the button now calls a function
called exportFile(). On the first line of this function, the header row is made. The users’ data is stored as objects, but xlsx wants arrays, so the code loops through all the users and changes the objects to arrays. Finally, an Excel workbook is created, a sheet is added, the data is added and the download of the file is initiated.

Go ahead, click the Export to Excel button. One more commit and we’re done:

```shell
$ git add .
$ git commit -m "Step 5: Export to Excel"
$ git push origin master
```

Done! The complete working project is here. We just built a ReactJS app, imported data from Firebase and managed to export the Firebase data to an Excel sheet. I hope you liked this ReactJS Firebase tutorial. You can now go ahead and use it in one of your own awesome projects. Tell me if you do!
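The heart of exportFile() is the object-to-array reshaping, since XLSX.utils.aoa_to_sheet consumes an array of arrays with the header row first. That step can be sketched on its own in plain JavaScript (usersToAoa is a helper name of our own, not part of the tutorial’s code):

```javascript
// Reshape user objects into the array-of-arrays form that
// XLSX.utils.aoa_to_sheet expects: a header row followed by data rows.
function usersToAoa(users) {
  const rows = [["First Name", "Last Name", "Age"]];
  users.forEach((user) => {
    rows.push([user.firstname, user.lastname, user.age]);
  });
  return rows;
}

const aoa = usersToAoa([
  { firstname: "Elon", lastname: "Musk", age: 23 },
  { firstname: "Donald", lastname: "Trump", age: 94 }
]);
console.log(aoa); // header row + one row per user
```

Because the transformation is isolated like this, it is easy to unit-test without touching Firebase or the xlsx package at all.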
How to Export Firebase Data to Excel In A ReactJS App
Cache Busting : Why You Shouldn’t Tell Your Clients To Hard Refresh

You can download the sample from GitHub @ Cache-Busting-Sample.

Browser-side caching is awesome: it makes your pages load faster, reduces network usage and improves perceived load times, but it starts to become a real pain when you build an application that frequently rolls out client-side updates. These updates propagate slowly to your end users, with almost no way of being sure whether your bug still exists because you missed a test case or because the end user is simply still using the cached version of your JavaScript. Let’s face it, you do tell your clients to press Ctrl + F5 to see the latest changes you’ve made to JavaScript.

There are a few cache-busting techniques you can use to get around this problem, some of them being:
1. Append a hash to the file name.
2. Append a query string to the file name.

We will be looking at the former.

Gulp Rev To The Rescue!

The gulp plugin gulp-rev appends content hashes to the end of file names. This is great because hashes are only computed for files that have changed. This way, the client only downloads the assets that have changed. You need to install gulp-rev by running the command:

```shell
npm install gulp-rev --save-dev
```

Here, we have installed gulp-rev as a development dependency. It is assumed that you already have gulp installed and a basic gulpfile running. Now, we create a task that handles the job of creating file revisions based on content hashes:

```javascript
const gulp = require('gulp');
const rev = require('gulp-rev');

gulp.task("revision", function () {
  return gulp.src(["./Scripts/dist/**/*.js", "./Scripts/dist/**/*.css"])
        .pipe(rev())
        .pipe(gulp.dest("./Scripts/dist"))
        .pipe(rev.manifest())
        .pipe(gulp.dest("./Scripts"));
});
```

In the above snippet, we select all JavaScript and CSS files, pipe them to the rev command and store the output in the folder named dist. The rev.manifest() function creates a JSON file mapping our original file names to the newly created file names with hashes.
Below is what a manifest looks like:

```json
{
  "assets/css/site.compiled.css": "assets/css/site-1db21d418d.compiled.css",
  "js/site.min.js": "js/site-ff3eec37bc.min.js",
  "vendors/vendors.js": "vendors/vendors-ebd24a3d51.js"
}
```

Serving Assets (ASP.NET)

Once the files are revisioned we can reference them in our HTML or JS files. The problem here is that every time the hash changes you would have to manually update each reference. Quite cumbersome, so let’s make it easy. The below code samples pertain to ASP.NET and a similar approach can be used in the language of your choice.

In the sample below we create a static method named Version which takes in the path and returns the hashed file name. Here we leverage the revision manifest to look up the updated hashed file name. We also use the runtime cache to avoid having to read the manifest for every request.

```csharp
public static class Revision
{
    public static string Version(string path)
    {
        if (HttpRuntime.Cache[path] == null)
        {
            using (StreamReader sr = new StreamReader(HostingEnvironment.MapPath("~/Scripts/rev-manifest.json")))
            {
                Dictionary<string, string> rev = JsonConvert.DeserializeObject<Dictionary<string, string>>(sr.ReadToEnd());
                string revedFile = rev.Where(s => path.Contains(s.Key)).Select(g => g.Value).FirstOrDefault();
                string actualPath = "/Scripts/dist/" + revedFile;
                HttpRuntime.Cache.Insert(path, actualPath);
            }
        }
        return HttpRuntime.Cache[path] as string;
    }
}
```

Now, we can use the above method in our Razor view.

But I have Angular templates! It gets a bit challenging with Angular templates. Let’s solve that problem. Start by installing 2 more gulp packages:

```shell
npm install --save-dev buffer-to-vinyl gulp-ng-config
```

In the gulp task revision-html below, we create a separate revision file for HTML named rev-manifest-html.json.
This task is a dependency of the actual gulp task ng-revision.

```javascript
const gulp = require('gulp');
const rev = require('gulp-rev');
const b2v = require('buffer-to-vinyl');
const ngConfig = require('gulp-ng-config');

gulp.task("revision-html", function () {
  return gulp.src(["./Scripts/dist/**/*.html"])
          .pipe(rev())
          .pipe(gulp.dest("./Scripts/dist"))
          .pipe(rev.manifest("rev-manifest-html.json"))
          .pipe(gulp.dest("./Scripts"));
});

gulp.task("ng-revision", ["revision-html"], function () {
    var json = require('../rev-manifest-html.json');
    var dummy_json = JSON.stringify({});
    return b2v.stream(new Buffer(dummy_json), 'rev-manifest-html.js')
        .pipe(ngConfig('my.constants', {
            createModule: false,
            wrap: true,
            pretty: true,
            constants: {
                HTML: json
            }
        }))
        .pipe(gulp.dest("./Scripts"));
});
```

The ng-revision task reads the output of the revision-html task (rev-manifest-html.json). This is a JSON file with HTML files as keys and their corresponding revisioned file names as values.
We pipe this JSON into gulp-ng-config, which then generates an Angular config file named “rev-manifest-html.js”:

```javascript
(function () {
  return angular.module("my.constants").constant("HTML", {
    "views/modules/applications/applications.html": "views/modules/applications/applications.html",
    "views/modules/home/home.html": "views/modules/home/home.html"
  });
})();
```

We now need to add an HTTP interceptor to Angular which will intercept outgoing requests for HTML files and modify the request to return the correct versioned file:

```javascript
(function (angular) {
  angular.module('my')
    .factory('customHttpInterceptor', ['HTML', function (html) {
      return {
        request: function (config) {
          if (config.url.indexOf('views/') > 0) {
            if (config.url.indexOf('deployments-history/history.html') > 0) {
              console.log("called history");
            }
            var key = config.url.replace("Scripts/dist/", "");
            config.url = "Scripts/dist/" + html[key];
            return config;
          }
          return config;
        }
      };
    }]);
})(window.angular);
```

Summary

In this article, we saw strategies for cache busting that can be integrated into your build pipeline. We saw how we can revision JavaScript, CSS and HTML files, and we can follow a similar approach for other static file types like images, fonts etc. We also saw how an Angular interceptor can help transform outbound requests and how ASP.NET can be used to respond with versioned files. It is crucial to keep HTTP caching enabled for JavaScript and CSS files to reduce the load on the server. Additionally, you can think of replacing the HTTP runtime cache with Redis, which can be much more effective.
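As a recap, the core of both the ASP.NET Version method and the Angular interceptor is the same manifest lookup, which can be sketched in a few lines of plain JavaScript (resolveAsset is a name of our own; the manifest entries are the sample ones shown earlier):

```javascript
const manifest = {
  "assets/css/site.compiled.css": "assets/css/site-1db21d418d.compiled.css",
  "js/site.min.js": "js/site-ff3eec37bc.min.js",
  "vendors/vendors.js": "vendors/vendors-ebd24a3d51.js"
};

// Map an original path to its revisioned name. Fall back to the
// original path so a missing manifest entry never breaks the page.
function resolveAsset(path) {
  return manifest[path] || path;
}

console.log(resolveAsset("js/site.min.js")); // js/site-ff3eec37bc.min.js
console.log(resolveAsset("js/unknown.js"));  // js/unknown.js
```

The fallback branch is a deliberate design choice: serving a stale, unhashed file is better than serving a broken reference.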
Descriptive Statistics for Data Science

Facts are stubborn, but statistics are pliable — Mark Twain

Descriptive statistics consist of methods for organizing and summarizing information (Weiss, 1999). Descriptive statistics include the construction of graphs, charts, and tables, and the calculation of various descriptive measures such as averages, measures of variation, and percentiles.

Let’s consider an example of tossing dice in order to understand statistics for data science. A die is rolled 100 times and the results form the sample data; descriptive statistics is used to group that sample data into a frequency table. It is almost always necessary to use methods of descriptive statistics to organize and summarize the information obtained from a sample before methods of inferential statistics can be used to make a more thorough analysis of the subject under investigation. Sometimes, it is possible to collect the data from the whole population. In that case, it is possible to perform a descriptive study on the population as well as on the sample.

Well, let’s see what descriptive statistics is and how to apply statistics to data science and machine learning. In descriptive statistics, before you summarize the data, you need to get the data first. In most cases data is captured in a setting in which the effect of the variables under study can be observed. But creating an unbiased and proper environment for collecting data is equally important, because ultimately good data leads to good and meaningful results.

So, we’ll start off with design of experiments. In its simplest form, an experiment aims at predicting the outcome by introducing a change in the preconditions, which is reflected in a variable called the predictor (independent) variable. The change in the predictor is generally hypothesized to result in a change in a second variable, hence called the outcome (dependent) variable.
Experimental design involves not only the selection of suitable predictors and outcomes but also planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. Khan Academy has a very good explanation of this topic; I strongly believe it’ll be an exciting 10 minutes of watching the video below.

Now that we have data and have made some analysis using design of experiments, your next important step should be conveying that information visually to make it more effective, and data visualization is the way to do it. Data visualization is the technique of maximizing how quickly and accurately people decode information from graphics. In order to achieve this, data visualization researchers have focused on two areas:

1. Preattentive cognition: concepts which use cognitive understanding to decode information from graphics.
2. Accuracy: concepts which maximize the accuracy with which people interpret visualizations.

While using various visualization techniques you should aim for engagement from the users, understanding of the concepts, memorability of the information and an emotional connection between users and the content.

As we know, descriptive statistics are all about showing a summary and describing the data (descriptive intuition, not generalizing), and there is a lot of mathematical technique revolving around descriptive statistics. Descriptive measures that indicate where the center or the most typical value of the variable lies in a collected set of measurements are called measures of center, or central tendency. Measures of the center are often referred to as averages.
The median and the mean apply only to quantitative data (information about quantities), whereas the mode can be used with either quantitative or qualitative data (information about qualities).

The mean (or average)

The mean of the variable is the sum of observed values in the data divided by the number of observations, where x1, x2, x3, ..., xn are the observed values of the variable and n is the number of observations:

mean = (x1 + x2 + ... + xn) / n

For example, 7 participants in horse riding had the following finishing times in minutes: 27, 21, 25, 23, 21, 28, 24. What is the mean? By using the formula, we take (27 + 21 + 25 + 23 + 21 + 28 + 24) / 7 = 169 / 7 ≈ 24.14 as the mean.

The median

To find the median, arrange the observed values of the variable in increasing order. The sample median of a quantitative variable is that value of the variable in a data set that divides the set of observed values in half, so that the observed values in one half are less than or equal to the median value and the observed values in the other half are greater than or equal to the median value. To obtain the median of the variable, we arrange the observed values in increasing (ascending) order and then determine the middle value in the ordered list.

The mode

To find the mode, obtain the frequency of each observed value of the variable in the data and note the greatest frequency.
1. If the greatest frequency is 1 (no value occurs more than once) then the variable has no mode.
2. If the greatest frequency is 2 or greater, then any value that occurs with that greatest frequency is called a mode of the variable.

The range

The sample range is obtained by computing the difference between the largest observed value of the variable in a data set and the smallest one:
Range = max - min

For example, consider 8 participants in horse riding with the following finishing times in minutes: 28, 22, 26, 29, 21, 23, 24, 50. What is the range? We take 50 - 21 = 29 as the range.

The boxplot

A boxplot is based on the five-number summary (min, max, and the three quartiles, written in increasing order) and can be used to provide a graphical presentation of the center and variation of the observed values of a variable in a data set. But how do you draw a boxplot?
1. Determine the five-number summary first (min, max, three quartiles).
2. Draw a horizontal or vertical axis on which the numbers obtained can be located. Mark the quartiles and the min and max values with horizontal or vertical lines above the axis.
3. Connect the quartiles to each other to make a box, then connect the box to the min and max values with lines.

The standard deviation

The sample standard deviation is the most frequently used measure of variability for a variable x. The sample standard deviation, denoted by s, is

s = sqrt( Σ (xi - x̄)² / (n - 1) )

and the population standard deviation is

σ = sqrt( Σ (xi - μ)² / N )

This is all about how you collect, analyze, summarize and make descriptive intuition from the data using descriptive statistics. Hope this tutorial of statistics for data science helped you to learn the mathematical techniques revolving around descriptive statistics.
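As a recap, the measures described above can be computed in a few lines of JavaScript, using the horse-riding samples from the examples (the function names are our own):

```javascript
// Mean: sum of observations divided by their count.
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Median: middle value of the sorted list (average of the two middle
// values when the count is even).
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Modes: values with the greatest frequency, provided it is at least 2.
function modes(xs) {
  const freq = {};
  xs.forEach(x => { freq[x] = (freq[x] || 0) + 1; });
  const max = Math.max(...Object.values(freq));
  return max < 2 ? [] : Object.keys(freq).filter(k => freq[k] === max).map(Number);
}

// Range: largest observation minus smallest observation.
function range(xs) {
  return Math.max(...xs) - Math.min(...xs);
}

// Sample standard deviation s: squared deviations divided by n - 1.
function sampleStd(xs) {
  const m = mean(xs);
  const ss = xs.reduce((acc, x) => acc + (x - m) ** 2, 0);
  return Math.sqrt(ss / (xs.length - 1));
}

const times = [27, 21, 25, 23, 21, 28, 24];
console.log(mean(times).toFixed(2));                  // 24.14
console.log(median(times));                           // 24
console.log(modes(times));                            // [ 21 ]
console.log(range([28, 22, 26, 29, 21, 23, 24, 50])); // 29
```

Note how the mean (≈ 24.14) and the median (24) of the same sample differ slightly: the median is less affected by the two slow 21-minute riders pulling the average down.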
Understand Routing in Vue.js With Examples

Vue.js is a great JavaScript framework created by Evan You. It’s used to build single-page and modern web applications with really high performance, and it is an important skill in front-end web development; you can learn more about Vue.js here. Vue.js provides a lot of methods to build nice UI and UX components. The router is one of them: it lets the user switch between pages without a page refresh, which makes navigation easy and really nice in your web application. This Vue.js router tutorial will help you play with Vue routers by building a Vue.js template as an example; once you have learned routers you will know how effective they are 😄.

So, let’s get started with our Vue.js router project by installing and creating a new Vue.js project. We will need Node.js installed, and we are using vue-cli to generate a new Vue.js project. In your terminal run:

```shell
vue init webpack vue-router
cd vue-router
npm run dev
```

In your text editor, inside the components folder, open the HelloWorld.vue file. Rename HelloWorld.vue to home.vue, remove all the code and replace it with this:

```vue
<template>
  <div>
    <h1>Home</h1>
  </div>
</template>

<script>
export default {
  name: 'home',
  data () {
    return {
      msg: 'Welcome to Your Vue.js App'
    }
  }
}
</script>
```

And go to index.js inside the router folder and replace HelloWorld with home:

```javascript
import Vue from 'vue'
import Router from 'vue-router'
import home from '@/components/home'

Vue.use(Router)

export default new Router({
  routes: [
    {
      path: '/',
      name: 'home',
      component: home
    }
  ]
})
```

App.vue should look like this:

```vue
<script>
export default {
  name: 'App'
}
</script>

<style>
#app {
}
</style>
```

And now let’s write our code! We are going to add a Bootswatch template; you can choose any template you like, for me I will choose Cosmo. Press Ctrl + U to view the page source and just copy the navbar (we only need the navbar), then paste that code into the App.vue component. Here we are 😃

Next, we are going to create 3 other components: Blog, Services, and Contact. Inside the components folder, create a new file, name it blog.vue and put this code into it:

```vue
<template>
  <div>
    <h1>{{ title }}</h1>
  </div>
</template>

<script>
export default {
  name: 'blog',
  data () {
    return {
      title: 'Blog'
    }
  }
}
</script>
```

If you do the same thing for the services and contact components, you should have these files in your components folder: home.vue, blog.vue, services.vue, contact.vue.

Routers config

Now that we have four components, we have to configure the routers to be able to navigate between them. So, how can we navigate to each component using the routers? Here comes the role of routing. What we have to do is make some modifications inside the router folder; open index.js. First, import your components into index.js using the import method:

```javascript
import home from '@/components/home'
import blog from '@/components/blog'
import services from '@/components/services'
import contact from '@/components/contact'
```

Second, import Vue and Router from the vue-router module:

```javascript
import Vue from 'vue'
import Router from 'vue-router'

// use router
Vue.use(Router)
```

If you installed Vue with vue-cli, you have vue-router imported by default. And finally we have to define and initialize our components inside the Router constructor, which takes an array of route objects:

```javascript
export default new Router({
  routes: [
    {
      path: '/',
      name: 'home',
      component: home
    },
    {
      path: '/blog',
      name: 'blog',
      component: blog
    },
    {
      path: '/services',
      name: 'services',
      component: services
    },
    {
      path: '/contact',
      name: 'contact',
      component: contact
    }
  ]
})
```

path: the path of the component.
name: the name of the component.
component: the view component.

To make a component the default component, set a slash as its path property: path: '/'. In our example, we set the home page as the default page, so when you open the project in the browser the first page that appears is the home page:

```javascript
{
  path: '/',
  name: 'home',
  component: home
}
```

The vue-router has more advanced parameters and methods, but we are not jumping onto them here. This is the list of features that vue-router provides: nested routers, named views, redirect and alias, navigation guards, the router instance. Now you can browse to any component by typing the name of the component!

router-link

Now we are going to make the navigation work through the navbar that we created, using the router-link element. To do that we replace each anchor element with a router-link element, like this:

```vue
<router-link to="/blog">Blog</router-link>
<router-link to="/services">Services</router-link>
<router-link to="/contact">Contact</router-link>
```

The router-link takes a to="" attribute, which takes the path of the component as its value.

router-view

You will find the router-view tag in the App.vue file; it is the view where the components are rendered. It’s like the main div that contains all the components, and it returns the component that matches the current route. We will work with router-view in the transition part.

Router with parameters

In this part, we will use parameters to navigate to specific components. Parameters are really important: they make routing dynamic. To work with parameters we are going to create a list of products (an array of data), so when you click on the link of any product it will take you to the details page through a parameter. In this situation, we are not going to use a database or API to retrieve product data, so we create an array of products that stands in for our database. Inside the home.vue component put the array within the data() method just like this:

```javascript
export default {
  name: 'home',
  data () {
    return {
      title: 'Home',
      products: [
        { productTitle: "ABCN",   image: require('../assets/images/product1.png'), productId: 1 },
        { productTitle: "KARMA",  image: require('../assets/images/product2.png'), productId: 2 },
        { productTitle: "Tino",   image: require('../assets/images/product3.png'), productId: 3 },
        { productTitle: "EFG",    image: require('../assets/images/product4.png'), productId: 4 },
        { productTitle: "MLI",    image: require('../assets/images/product5.png'), productId: 5 },
        { productTitle: "Banans", image: require('../assets/images/product6.png'), productId: 6 }
      ]
    }
  }
}
```

Then fetch and loop through the products array using the v-for directive, rendering {{ data.productTitle }} for each product. To navigate to the details component we first have to add a click event to the product title, and then add the method it calls:

```javascript
methods: {
  goTodetail() {
    this.$router.push({ name: 'details' })
  }
}
```

If you click the title it will return undefined because we didn’t create the details component yet.
So let’s create one. Details.vue:

```vue
<template>
  <div>
    <h1>{{ title }}</h1>
  </div>
</template>

<script>
export default {
  name: 'details',
  data () {
    return {
      title: 'details'
    }
  }
}
</script>
```

Now we can navigate without getting an error 😃

Now, how can we browse to the details page and get the matched data while we don’t have a database? We are going to use the same products array in the details component, so we can match the id that comes from the URL. First, we have to pass the id to the goTodetail() method as a parameter from the product title’s click event. Then add the second parameter to the router method: $router.push takes the name of the component we want to navigate to and the parameters, in our case the id:

```javascript
this.$router.push({ name: 'details', params: { Pid: prodId } })
```

Add Pid as a parameter in index.js inside the router folder:

```javascript
{
  path: '/details/:Pid',
  name: 'details',
  component: details
}
```

home.vue:

```javascript
methods: {
  goTodetail(prodId) {
    this.$router.push({ name: 'details', params: { Pid: prodId } })
  }
}
```

To get the matched parameter use this line of code:

```javascript
this.$route.params.Pid
```

In details.vue we can show it: the product id is {{ this.$route.params.Pid }}. Then loop through the products array in details.vue and check for the object that matches the parameter Pid and return its data. You see, now when we click any product’s link it takes us to that same product’s details! The details.vue component:

```vue
<template>
  <div>
    <h1>{{ title }}</h1>
    <div v-for="product in products" v-if="product.productId == proId">
      <h3>{{ product.productTitle }}</h3>
      <img :src="product.image">
    </div>
  </div>
</template>

<script>
export default {
  name: 'details',
  data () {
    return {
      proId: this.$route.params.Pid,
      title: 'details',
      products: [
        { productTitle: "ABCN",   image: require('../assets/images/product1.png'), productId: 1 },
        { productTitle: "KARMA",  image: require('../assets/images/product2.png'), productId: 2 },
        { productTitle: "Tino",   image: require('../assets/images/product3.png'), productId: 3 },
        { productTitle: "EFG",    image: require('../assets/images/product4.png'), productId: 4 },
        { productTitle: "MLI",    image: require('../assets/images/product5.png'), productId: 5 },
        { productTitle: "Banans", image: require('../assets/images/product6.png'), productId: 6 }
      ]
    }
  }
}
</script>
```

The transition

In this part, we are going to make the transition of the components animated.
We will animate the transition of the components; it makes the navigation awesome and adds some UX and UI polish. To animate the transition, put the router-view inside a transition tag and give it a name, which becomes the class prefix. App.vue:

```vue
<transition name="moveInUp">
  <router-view/>
</transition>
```

To animate the component when it enters the view, add -enter-active to the name given to the transition tag, and for leaving add -leave-active, then give it CSS transition properties just like below:

```css
.moveInUp-enter-active {
  opacity: 0;
  transition: opacity 1s ease-in;
}
```

Using CSS3 animation: now, we are going to animate using @keyframes in CSS3.

A. When the component enters the view, add a fade effect:

```css
.moveInUp-enter-active {
  animation: fadeIn 1s ease-in;
}

@keyframes fadeIn {
  0%   { opacity: 0; }
  50%  { opacity: 0.5; }
  100% { opacity: 1; }
}
```

B. When the component leaves the view, we make it move up:

```css
.moveInUp-leave-active {
  animation: moveInUp .3s ease-in;
}

@keyframes moveInUp {
  0%   { transform: translateY(0); }
  100% { transform: translateY(-400px); }
}
```

As you see, now you can create your own animations and transitions for your components. That’s it, we are done! 😆 You can download the source code here.

Wrapping up and conclusion

Vue.js is a great JavaScript framework for building a great user interface for your web app; it’s really awesome and easy to work with, and I highly recommend it as a most-required front-end skill.
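As a closing illustration of how a pattern like '/details/:Pid' yields this.$route.params, here is a toy matcher in plain JavaScript. This is not vue-router’s real implementation, just a sketch of the idea, and matchRoute is a name of our own:

```javascript
// Match a URL path against a route pattern; ':name' segments capture
// the corresponding path segment into a params object.
function matchRoute(pattern, path) {
  const patternParts = pattern.split('/');
  const pathParts = path.split('/');
  if (patternParts.length !== pathParts.length) return null;
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      // Dynamic segment: capture it under the name after the colon.
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // static segment mismatch: this route doesn't apply
    }
  }
  return params;
}

console.log(matchRoute('/details/:Pid', '/details/3')); // { Pid: '3' }
console.log(matchRoute('/details/:Pid', '/blog'));      // null
```

Note that the captured value is a string ('3', not 3), which is why the details component compares product.productId == proId loosely rather than with ===.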
JavaScript: How Is Callback Execution Strategy For Promises Different Than DOM Events Callback?

As a JavaScript developer, first of all, I thought, why the heck do I need to understand the internals of browsers with respect to JavaScript? Sometimes I have been through situations like “WHAT?? THIS IS STRANGE!!”, but later on, after diving deeper into the subject matter, I realized that it’s important to understand a bit of how the browser and the JS engine work together. I hope this article will provide a bit of guidance to predict the correct behavior of your JS code and minimize strange situations.

Basically, this article covers the following subtopics:
1. Quiz to predict the output of the sample JS code.
2. An overview of the browser, event loop, JS engine and event loop queues.
3. How the browser executes JavaScript DOM events (click, HTTP requests) and its callback mechanism.
4. How is the callback execution strategy for promises different than DOM events callbacks?
5. The different execution strategies for tasks queued in the task queue and the microtask queue.
6. What kind of tasks are queued in the task queue and the microtask queue?
7. Conclusion (answers to the quiz).

1. Quiz to predict the output of the sample JS code

Test 1: What would be the sequence of log messages of the following JS code? A running example can be found here.

```javascript
console.log("Start");

// First settimeout
setTimeout(function CB1() {
  console.log("Settimeout 1");
}, 0);

// Second settimeout
setTimeout(function CB2() {
  console.log("Settimeout 2");
}, 0);

// First promise
Promise.resolve().then(function CB3() {
  for(let i=0; i // (the rest of this snippet is truncated in the source)
```

2. An overview of the browser, event loop, JS engine and event loop queues

The basic event loop can be described with pseudo code like this:

```javascript
// `eventLoop` is an array that acts as a queue (first-in, first-out)
// keep going "forever": perform a "tick" whenever work is queued
while (eventLoop.length > 0) {
  // get the next event in the queue
  event = eventLoop.shift();
  // now, execute the next event
  try {
    event();
  } catch (err) {
    reportError(err);
  }
}
```

Basic event loop pseudo code

As you can see, there is a continuously running loop represented by the while loop, and each iteration of this loop is called a “tick.” For each tick, if an event is waiting on the queue, it’s taken off and executed. These events are your function callbacks.
(Source: You Don’t Know JS — Async and Performance series)Following code shows what standard event loop specification sayseventLoop = {     taskQueues: {         events: [], // UI events from native GUI framework         parser: [], // HTML parser         callbacks: [], // setTimeout, requestIdleTask         resources: [], // image loading         domManipulation[]     },     microtaskQueue: [     ],     nextTask: function() {         // Spec says:         // "Select the oldest task on one of the event loop's task   queues"         // Which gives browser implementers lots of freedom         // Queues can have different priorities, etc.         for (let q of taskQueues)             if (q.length > 0)                 return q.shift();         return null;     },     executeMicrotasks: function() {         if (scriptExecuting)             return;         let microtasks = this.microtaskQueue;         this.microtaskQueue = [];         for (let t of microtasks)             t.execute();     },     needsRendering: function() {         return vSyncTime() && (needsDomRerender() || hasEventLoopEventsToDispatch());     },     render: function() {         dispatchPendingUIEvents();         resizeSteps();         scrollSteps();         mediaQuerySteps();         cssAnimationSteps();         fullscreenRenderingSteps();         animationFrameCallbackSteps();         while (resizeObserverSteps()) {             updateStyle();             updateLayout();         }         intersectionObserverObserves();         paint();     } } while(true) {     task = eventLoop.nextTask();     if (task) {         task.execute();     }     eventLoop.executeMicrotasks();     if (eventLoop.needsRendering())         eventLoop.render(); }References: Event loop explainer , Standard event loop specification 3. 
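The tick loop above can be turned into a minimal runnable sketch. This is a hypothetical stand-in, not the browser's actual implementation: a plain array plays the role of the queue, and "events" are just function callbacks drained in first-in, first-out order.

```javascript
// Hypothetical stand-in for the browser's internal event queue:
// a plain array drained in first-in, first-out order.
const eventQueue = [];
const executed = [];

// "Events" are just function callbacks pushed onto the queue.
eventQueue.push(() => executed.push("first"));
eventQueue.push(() => executed.push("second"));
eventQueue.push(() => executed.push("third"));

// The "tick" loop: one callback per iteration, oldest first.
while (eventQueue.length > 0) {
  const event = eventQueue.shift();
  try {
    event();
  } catch (err) {
    reportError(err);
  }
}

function reportError(err) {
  console.error(err);
}

console.log(executed.join(", ")); // first, second, third
```

The important property this captures is ordering: whatever was queued first runs first, and the loop never executes a callback while another is still running.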
3. How does the browser execute DOM events (click, HTTP requests) and their callbacks?

There are different types of events supported by the browser, such as:
- Keyboard events (keydown, keyup, etc.)
- Mouse events (click, mouseup, mousedown, etc.)
- Network events (online, offline)
- Drag and drop events (dragstart, dragend, etc.)

These events can have a callback handler which is executed whenever the event is fired. Whenever an event fires, its callback (aka task) is queued in the Task queue. As shown in Test 2, when the button is clicked, its callback handler CB1() is queued in the Task queue, and the event loop is responsible for picking it up and executing it on the call stack. The event loop applies several rules before picking up a task from the Task queue and executing it on the call stack: it first checks whether the call stack is empty; if the call stack is empty and there is nothing left to execute in the Microtask queue, then it picks up a task from the Task queue and executes it.

4. How is the callback execution strategy for promises different from DOM event callbacks?

When the JS engine traverses the code within a callback function and encounters web API events such as click, keydown, etc., it delegates the task to the runtime environment, and the runtime decides where the callback handler should be queued (in the Task queue or the Microtask queue). Based on the standard specification, the runtime queues callbacks of DOM/web events in the Task queue, not in the Microtask queue. Similarly, one task (or callback function) can spawn multiple other tasks or microtasks: when the JS engine encounters a promise object, its callback is queued in the Microtask queue, not in the Task queue.

As mentioned before, the event loop picks up a new task from the Task queue only when the call stack is empty and there is nothing to execute in the Microtask queue. Let's assume there are 3 tasks in the Task queue: T1, T2, and T3. Task T1 schedules one task (say setTimeout(T4, 0)) and two microtasks (say promises M1 and M2).
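The T1 scenario can be sketched as a runnable snippet. The names T1, T4, M1, and M2 are kept from the text; Node.js is assumed here, since its event loop also drains the microtask queue between tasks.

```javascript
const trace = [];

// Task T1: schedules one task (T4) and two microtasks (M1, M2)
setTimeout(function T1() {
  trace.push("T1");
  setTimeout(function T4() { trace.push("T4"); }, 0);          // queued in Task queue
  Promise.resolve().then(function M1() { trace.push("M1"); }); // queued in Microtask queue
  Promise.resolve().then(function M2() { trace.push("M2"); }); // queued in Microtask queue
}, 0);

// After T1 finishes, microtasks M1 and M2 run BEFORE the next task T4.
setTimeout(function report() {
  console.log(trace.join(" -> ")); // T1 -> M1 -> M2 -> T4
}, 20);
```

Even though T4's zero-millisecond timer is due immediately, it cannot jump the queue: the microtasks scheduled inside T1 are drained first.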
When task T1 is executed on the call stack, it encounters setTimeout(…) and delegates the callback to the runtime, which queues T4 in the Task queue. When the engine encounters promise 1, it queues its callback (M1) in the Microtask queue; likewise, when it encounters promise 2, it queues that callback (M2) in the Microtask queue. Now the call stack is clear, so before picking up task T2 from the Task queue, the event loop executes all the callbacks (M1, M2) queued in the Microtask queue. Once the microtasks have been executed and the stack is cleared again, it is ready for task T2.

NOTE (Exception): Even though window.requestAnimationFrame(…) is a function of the DOM object window, its callback is queued in a dedicated animation-frame callback queue and follows a different execution strategy, which is not covered in this article.

5. Different execution strategies for tasks queued in the Task queue and the Microtask queue

Callbacks queued in the Task queue are executed in first-come-first-served order, and the browser may render between them (e.g. DOM manipulation, changing HTML styles, etc.). Callbacks queued in the Microtask queue are also executed in first-come-first-served order, but at the end of every task from the Task queue (and only if the call stack is empty). As mentioned in the event loop's pseudo code above, microtasks are processed at the end of each task.

// Popular JS engines (e.g. Google Chrome's V8, Mozilla's SpiderMonkey)
// implement the event loop in C++, which means the code below runs synchronously
while (true) {
    // Each iteration of this loop is called an event loop 'tick'
    task = eventLoop.nextTask();
    if (task) {
        // First: a task from the Task queue is executed
        task.execute();
    }

    // Second: all the tasks in the Microtask queue are executed
    eventLoop.executeMicrotasks();

    // Third: check whether there is something to render, e.g. DOM changes
    // or requestAnimationFrame callbacks, and render in the browser if required
    if (eventLoop.needsRendering())
        eventLoop.render();
}

References: Tasks, microtasks, queues and schedules; In The Loop — JSConf.Asia 2018 — by Jake Archibald

6. What kinds of tasks are queued in the Task queue and the Microtask queue?

Tasks are basically callback functions of promises or DOM/web API events. Because tasks in the Microtask queue and the Task queue are processed differently, the browser has to decide which types of tasks go in which queue. According to the standard specification, callback handlers of the following events are queued in the Task queue:
- DOM/web events (onclick, onkeydown, XMLHttpRequest, etc.)
- Timer events (setTimeout(…), setInterval(…))

Similarly, callback handlers of the following objects are queued in the Microtask queue:
- Promises (resolve(), reject())
- Browser observers (MutationObserver, IntersectionObserver, PerformanceObserver, ResizeObserver)

NOTE: ECMAScript uses the term "Jobs" to represent microtasks. "Execution of a Job can be initiated only when there is no running execution context and the execution context stack is empty." — ECMAScript Jobs and Job Queues

7. Conclusion

Test 1: When the script in Test 1 is executed, console.log("Start") runs first. When the first setTimeout(…) is encountered, the runtime starts a timer, and after 0 ms (or the specified delay), CB1 is queued in the Task queue. CB2 is queued in the Task queue immediately after CB1. When the first promise object is encountered, its callback CB3 is queued in the Microtask queue; likewise, the second promise's callback CB4 is also queued in the Microtask queue. Finally, the last console.log("End") statement executes. According to the standard specification, once the call stack is empty, the engine checks the Microtask queue and finds CB3 and CB4. The call stack executes CB3 (logs "Promise 1") and then CB4 (logs "Promise 2").
Once again, the call stack is emptied after processing the callbacks in the Microtask queue. Finally, the event loop picks up a new task from the Task queue, i.e. CB1 (logs "Settimeout 1"), and executes it. Likewise, the event loop picks up the remaining tasks from the Task queue and executes them on the call stack until the Task queue is empty.

[Figure: Placement of callbacks in the Task and Microtask queues]

For an animated simulation of the above code sample, I recommend reading the blog post by Jake Archibald.

Test 2: For the second test case, when the JS engine encounters the first button.addEventListener(...), it assigns responsibility for handling the click callback to the runtime (the browser); it does the same for the second event listener. Because we have two click listeners on a single button, whenever the button is clicked, two callbacks (CB1, CB2) are queued in the Task queue sequentially.

When the call stack is empty, the event loop picks up CB1 to execute. First, console.log("Listener 1") is executed; then the callback of setTimeout(…) is queued in the Task queue as ST1, and the callback of promise 1 (P1) is queued in the Microtask queue. After "Listener 1" is logged, P1 is executed because it is a microtask. Once P1 is executed, the call stack is emptied and the event loop picks up the next task, CB2, from the Task queue. When CB2 is processed, console.log("Listener 2") is executed first; then the callback of setTimeout(…) is queued in the Task queue as ST2, and the callback of promise 2 (P2) is queued in the Microtask queue. Finally, P2, ST1, and ST2 are executed sequentially, logging "Promise 2", "Settimeout 1", and "Settimeout 2".

NOTE ON THE EVENT LOOP: Until now (April 2018), the event loop has been part of the browser's runtime environment, not of the JavaScript engine. At some point in the future it might become part of the JS engine, as suggested in the book You Don't Know JS — Async & Performance. One main reason for this change is the introduction of ES6 promises, because they require the ability to have direct, fine-grained control over scheduling operations on the event loop queue.
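The Test 1 ordering described in the conclusion can be reproduced in Node.js, whose event loop also drains the microtask queue before picking up the next timer task. This sketch collects the messages into an array instead of logging them one by one, so the full sequence is visible at the end:

```javascript
const log = [];

log.push("Start");

// CB1 and CB2 go to the Task queue (timers)
setTimeout(function CB1() { log.push("Settimeout 1"); }, 0);
setTimeout(function CB2() { log.push("Settimeout 2"); }, 0);

// CB3 and CB4 go to the Microtask queue (promises)
Promise.resolve().then(function CB3() { log.push("Promise 1"); });
Promise.resolve().then(function CB4() { log.push("Promise 2"); });

log.push("End");

// By the time this timer fires, all queued callbacks have run:
setTimeout(function report() {
  console.log(log.join(", "));
  // Start, End, Promise 1, Promise 2, Settimeout 1, Settimeout 2
}, 20);
```

The synchronous statements run first, then both microtasks drain, and only then do the timer tasks get their turn, exactly as the walk-through predicts.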