
Smart Tables Integration and usage with Angular 6

In this tutorial, we are going to learn how to integrate and use ng2-smart-table with Angular 6. The ng2-smart-table library is available to us as an npm package and provides pre-defined components and directives for sorting, searching, filtering and displaying data in the form of a table.

To start, run the following command from your terminal window to install the latest version of angular/cli:

```
npm install -g @angular/cli
```

If you already have angular/cli installed, please upgrade to the latest version. If it does not install, try again in sudo mode on Unix-based machines. angular/cli not only scaffolds a directory structure for a project but also takes care of the pre-configuration an application needs. With the command below we generate our new project:

```
ng new smart-table-example
```

To see if everything is working fine, run `ng serve --open` and visit http://localhost:4200/ to see the newly created project in action. Take a look at the steps below to integrate smart tables with Angular 6.

Installing rxjs-compat

Angular's ecosystem struggles with compatibility issues in third-party libraries whenever a new Angular version is released, and the same is happening with Angular 6, which depends on TypeScript 2.7 and RxJS 6. Not many third-party libraries have been updated yet, so to make them work with the latest Angular version we are going to use another third-party library in our app, called rxjs-compat:

```
npm i rxjs-compat --save
```

Please note that, in the future, third-party libraries may become compatible and no longer need rxjs-compat, so check the documentation of any third-party library you are using with Angular 6.

Installing the ng2-smart-table library

We can install it as a local dependency of our Angular project:

```
npm install --save ng2-smart-table
```

In any Angular application, app.module.ts is the global space for registering a module. Open it and add the following:

```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { Ng2SmartTableModule } from 'ng2-smart-table'; // add

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    Ng2SmartTableModule // add
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule {}
```

If you get no errors at this step and webpack has compiled our app successfully, we can continue to work.

Creating a Table

We will generate a component called table using `ng g c table`. The ng command will not only create a default component but also update app.module.ts for us. You can find the newly generated component inside src/app/. To see if it is working, we will import this table component inside app.component.ts:

```typescript
import { Component } from '@angular/core';

// add the following
import { TableComponent } from './table/table.component';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'smart-table-example';
}
```

Also modify the app.component.html file to render the table component.
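The post does not show that markup itself; a minimal sketch, assuming the default app-table selector that `ng g c table` generates:

```html
<!-- src/app/app.component.html -->
<app-table></app-table>
```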
Building the Backend

To serve data as an API to our Angular app, we are going to create a mock backend server. Install json-server globally from npm:

```
npm install -g json-server
```

Now, in our src/ directory, create a directory called data and inside it a new file called db.json. We will add some mock data to this file in JSON format, which will be easy to render in the Angular app:

```json
{
  "employees": [
    { "id": 1, "name": "Jason Bourne", "employeeId": "us2323", "city": "New York" },
    { "id": 2, "name": "Mary", "employeeId": "us6432", "city": "San Jose" },
    { "id": 3, "name": "Sameer", "employeeId": "in2134", "city": "Mumbai" },
    { "id": 4, "name": "Sam Hiddlestone", "employeeId": "au9090", "city": "Melbourne" }
  ]
}
```

To serve this data to our Angular application, run the following command in a separate terminal window:

```
json-server --watch src/data/db.json --port 4000
```

You will be prompted with a success message, and the data is accessible through the URL http://localhost:4000/employees.

Fetching Data using HTTP Client

Most front-end frameworks do not have a built-in mechanism for fetching data from a remote API or server. Angular, however, provides its HTTP client, which allows us to fetch data from a remote API — in our case, the JSON server data. To make use of it, first append the app.module.ts file:

```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http'; // add
import { Ng2SmartTableModule } from 'ng2-smart-table';

import { AppComponent } from './app.component';
import { TableComponent } from './table/table.component';

@NgModule({
  declarations: [AppComponent, TableComponent],
  imports: [
    BrowserModule,
    Ng2SmartTableModule,
    HttpClientModule // add
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule {}
```

Inside the table folder, create two new files. The first is Table.ts: this file works as a schema design (if you think in terms of a database), or an interface that lets our Angular application know the type of data it is going to receive from the remote API. This will be helpful when creating a service, which is our next step:

```typescript
export interface Table {
  id: number;
  name: string;
  city: string;
  employeeId: string;
}
```

The second is our service file, table.service.ts. It contains an injectable class called TableService, which fetches the data from the remote API using the HTTP client:

```typescript
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable()
export class TableService {
  url = 'http://localhost:4000';

  constructor(private http: HttpClient) { }

  getData() {
    return this.http.get(`${this.url}/employees`);
  }
}
```

Import this service in app.module.ts as a provider, which will enable our Table component to access it:

```typescript
import { TableService } from './table/table.service';

// ...
providers: [TableService],
```
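One small refinement that is not in the original post: HttpClient's get method accepts a type parameter, so the service can return a typed observable and subscribers receive a Table[] without a manual cast. A sketch:

```typescript
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Table } from './Table';

@Injectable()
export class TableService {
  url = 'http://localhost:4000';

  constructor(private http: HttpClient) { }

  // The type parameter tells HttpClient to treat the JSON body as Table[].
  getData() {
    return this.http.get<Table[]>(`${this.url}/employees`);
  }
}
```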
Next, we modify table.component.ts and table.component.html:

```typescript
import { Component, OnInit } from '@angular/core';
import { TableService } from './table.service';
import { Table } from './Table';

@Component({
  selector: 'app-table',
  templateUrl: './table.component.html',
  styles: []
})
export class TableComponent implements OnInit {
  employees: Table[];

  constructor(private tservice: TableService) {}

  ngOnInit() {
    this.tservice.getData().subscribe((data: Table[]) => {
      this.employees = data;
    });
  }
}
```

```html
<table>
  <thead>
    <tr>
      <th>ID</th>
      <th>Name</th>
      <th>City</th>
      <th>Employee id</th>
    </tr>
  </thead>
  <tbody>
    <tr *ngFor="let employee of employees">
      <td>{{ employee.id }}</td>
      <td>{{ employee.name }}</td>
      <td>{{ employee.city }}</td>
      <td>{{ employee.employeeId }}</td>
    </tr>
  </tbody>
</table>
```

If your data now renders as a plain HTML table (however many rows you have), it means you have successfully created the Table service and fetched the data from a remote API using HttpClient.

Making our Table Smart

At this point, our table renders statically and does not have any of the functionality provided by the ng2-smart-table library, such as sorting or searching. To make it dynamic, we need to add some configuration, which is a required step when working with the ng2-smart-table library.

Create an object inside table.component.ts and call it settings. This object describes the columns that are displayed in the table (also available at https://gist.github.com/49aae2db5fc0c7900252004cfd9ffcf0):

```typescript
settings = {
  columns: {
    id: { title: 'ID' },
    name: { title: 'Name' },
    city: { title: 'City' },
    employeeId: { title: 'Employee No.' }
  }
};
```

The column keys must match the fields coming from our service, but the displayed name does not have to be identical to the original key: you can set or modify it using title. The other thing we need to add is called source, which passes the actual data from our service to the ng2-smart-table directive. Yes, the last piece of the puzzle to make our table dynamic is a directive provided by the ng2-smart-table library with the same name. To see it in action, open table.component.html and replace our previous markup:

```html
<ng2-smart-table [settings]="settings" [source]="employees"> </ng2-smart-table>
```

settings is the same object that we defined in table.component.ts. source comes from the Table service, where we fetch the data and make it available to the component as this.employees.

Try playing with the table to explore its functionality. Notice how, by default, it automatically adds functionality to edit, update or delete a field, as well as a search bar over every column. It also adds sorting by default.

There are various advanced configurations that the ng2-smart-table library provides. For example, to add a multi-select checkbox, we can just edit the settings object in our table component and the feature will be added to our existing table. Append table.component.ts:

```typescript
settings = {
  selectMode: 'multi', // just add this
  columns: {
    id: { title: 'ID' },
    name: { title: 'Name' },
    city: { title: 'City' },
    employeeId: { title: 'Employee No.' }
  }
};
```

And see for yourself. I hope this tutorial helps you find your way around ng2-smart-table in Angular 6. You can find the complete code for this tutorial at this Github repo.
Aman Mittal 25 Sep 2018

An In-depth Understanding of Callback Functions in JavaScript

In the previous article, we went over a couple of issues faced by a newbie JavaScript programmer. We also came across what is called a callback function while fetching information using an AJAX request. Let us try to understand callback functions in depth and how to go about using them in our JavaScript code.

What is a callback function?

There is no standard definition of what a callback function is, but callback functions are very common in asynchronous programming. We can define a JavaScript callback function as a function which is:

- passed as an argument to another function, and
- invoked inside the outer function to which it is passed, to complete some action, routine or event.

The outer function in the above reference, to which the callback function is passed as an argument, is the function which actually undertakes the async task, or delegates it to another nested async function. Let us go through two JavaScript callback function examples to understand the above definition.

A network-call-based example

```js
function getUserName(callback) {
    var name;
    $.get('https://randomuser.me/api/', function(data) {
        name = data.results[0].name.first
                + " " + data.results[0].name.last;
        callback(name);
    });
}

var username;
function callback(res) {
    username = res;
    document.write("Name: " + username);
}

getUserName(callback);
```

In the above example, we fetch the user details using an AJAX call and then use the callback function to show the name of the user. The callback function is responsible for using the fetched data and updating it in the DOM; the data itself is fetched in the main function, getUserName.

A DOM-based example

```js
var count = 0;

// callback 1
function updateCount() {
    $("#count").html(count);
}

// callback 2
function incCount() {
    count++;
    updateCount();
}

// callback 3
function resetCount() {
    count = 0;
    updateCount();
}

$(document).ready(updateCount);
$("#inc").click(incCount);
$("#reset").click(resetCount);
```

In this example, we use three different callbacks, namely updateCount, incCount and resetCount, for different events occurring on the page. As soon as the page loads, the count is updated on the page. Then, every time you click the "Increment Me" button, the counter is incremented in the code and updated on the page. Similarly, the "Reset Me" button resets the counter. The main functions here are the jQuery event-handler functions like ready and click, which operate on the DOM elements.

Practice: Can you try to implement a decrement button with the above example?
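One possible solution, as a minimal sketch — the #dec button id is an assumption; use whatever id your markup gives the button:

```js
// callback 4: decrement the counter and refresh the page, mirroring incCount
function decCount() {
    count--;
    updateCount();
}

$("#dec").click(decCount);
```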
Like the above examples, a callback function will be necessary when you are dealing with any of the items on the following list:

- DOM event handling and user input/output
- Databases (e.g. MySQL, PostgreSQL, MongoDB, Redis, CouchDB)
- APIs (e.g. Facebook, Twitter, push notifications)
- HTTP/WebSocket connections
- Files (image resizers, video editors, internet radio)

Why JavaScript callback functions?

JavaScript code executes on a single thread, and asynchronously at that. This means the code execution stack doesn't wait on any I/O operation that happens outside the JavaScript thread, for example a file read operation. Callback functions are a means to drive the execution of code once the I/O operation is completed. This works in our favour, as it does not block the single-threaded JavaScript code execution while waiting on an I/O operation, and the other functions in the stack can execute in the meantime. This is especially important in browsers, where the entire user experience is managed by JavaScript-rich applications. The way callbacks are managed internally is via an event loop; we shall talk about the JavaScript call stack and the event loop in articles to come.

The Pyramid of Doom, or Callback Hell

Imagine the following sequence of actions to be performed in JavaScript:

1. Fetch user information using an id
2. Update the user information on the page
3. Fetch all the posts for the user with that id
4. Update the first post on the page
5. Fetch all the comments for the first post of the user with the mentioned id
6. Update the comments on the page

I have implemented the above using a demo API called JSONPlaceholder; you can check out the link to see its documentation. Let's look at the code implementing this sequence of actions:

```js
function updateUserInfo(data) {
    var out = "";
    for (var key in data) {
        if (key !== 'id' && typeof data[key] !== "object") {
            out += "" + key + ": " + data[key] + "";
        }
    }
    $("#user").html(out);
}

function updateUserPosts(data) {
    var out = "";
    // pulling out just the first post
    for (var key in data[0]) {
        if (key !== 'id' && typeof data[0][key] !== "object") {
            out += "" + key + ": " + data[0][key] + "";
        }
    }
    $("#post").html(out);
}

function updatePostComments(data) {
    var out = "";
    for (var i = 0; i < data.length; i++) {
        out += "" + data[i].name + ": " + data[i].body + "";
    }
    $("#comments").html(out);
}
```
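Wiring the helpers above together, the nested calls might look something like the following sketch (the JSONPlaceholder query parameters come from its documentation; hard-coding user id 1 and the #comments element id are assumptions for illustration):

```js
// Each request depends on the result of the previous one, so the calls nest.
$.get("https://jsonplaceholder.typicode.com/users/1", function(user) {
    updateUserInfo(user);
    $.get("https://jsonplaceholder.typicode.com/posts?userId=" + user.id, function(posts) {
        updateUserPosts(posts);
        $.get("https://jsonplaceholder.typicode.com/comments?postId=" + posts[0].id, function(comments) {
            updatePostComments(comments);
        });
    });
});
```

Each level of nesting pushes the code one step further right — the "pyramid" shape that gives callback hell its name.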

Take A Leap And Boost Your Career Path With Python

Python tops the charts when compared with other programming languages for the most promising career options for technical professionals, and career opportunities for Python developers are growing exceptionally worldwide. Python is not a new player in the programming space, but it has gained huge popularity in the market. It is an open-source programming language with excellent development capabilities and maximum versatility, and it reduces coding effort while supporting better testing and performance. This is why Python professionals are in such demand in today's fast-growing tech market.

If you are starting your career as a Python beginner, most companies will generally offer you a package of INR 3 LPA - INR 5 LPA.

[Infographic: the past, present and future of Python]

To grab a successful job in Python, you should always follow a well-defined learning path. Because supply is limited, Python developers earn attractive incomes compared to otherwise similar resources or skills available in the market. Plenty of institutes offer training programs in Python, but note that trainers who are genuine industry experts remain scarce even today. With the right training you can learn Python faster and become one of the most in-demand resources in the IT space. If you really want to start your career in Python, begin your Python training today with a reputed institute that can help you achieve your dreams.

Companies across the world are looking for highly skilled, qualified Python professionals who can deliver the right solution for client requirements. Python, PHP, Ruby, C++, Java, SQL, Salesforce and Big Data Hadoop are a few of the skills expected to be in demand in 2018 and beyond; Python, however, remains the high-level programming language with a competitive edge over the others.

Become a successful Python leader with the right skills and the right Python certification!

Develop Alexa skills with Alexa SDK using Node.js

OK, so can you first answer these simple yes/no questions?

- Do you want to build voice-based applications?
- Do you see all these Alexa Skills and get amazed and surprised, at the same time, at how these apps can be developed?
- Do you know a little bit of JavaScript?
- Do you have 5 minutes?

Well, if your answer to the above questions is a "Yes", today's your lucky day. Stick around to the end, and you will know how to develop Alexa Skills using Node.js from scratch. In this post I'm covering the Alexa SDK; if you want me to discuss the intent schema of Alexa Skills, let me know in the comments. You can find the complete project on this gist.

Let's start building

To create an Alexa skill with Node.js, you need two things installed:

- Node
- npm

This link will get the job done if the above are not installed. Once you've installed those, choose a directory on your computer; in my case, it's "my-first-skill". Once you're in it via your shell, run the following set of commands:

```
npm install --save alexa-sdk
npm install -g node-lambda
node-lambda setup
```

So, what happened above? We installed two packages from the Node package manager: alexa-sdk, to handle intents, and node-lambda, for testing and deploying your Lambda function.

As you can see, the setup command has created some files for you. Now go into your .env file, paste in the contents below, and replace the values inside curly brackets with your AWS credentials:

```
AWS_ENVIRONMENT=development
AWS_ACCESS_KEY_ID={your-token}
AWS_SECRET_ACCESS_KEY={your-secret-token}
AWS_ROLE_ARN=
AWS_REGION={your-aws-region}
AWS_FUNCTION_NAME={your-function-name}
AWS_HANDLER=index.handler
AWS_MEMORY_SIZE=128
AWS_TIMEOUT=10
AWS_DESCRIPTION=
AWS_RUNTIME=nodejs6.10
PACKAGE_DIRECTORY=build
CONFIG_FILE=deploy.env
DEPLOY_TIMEOUT=3000000
```

Done? You're almost there!!! Now create an index.js file:

```
touch index.js
```

Go into the index file and paste this content:

```js
const Alexa = require('alexa-sdk');

const handlers = {
  'LaunchRequest': function () {
    this.emit('HelloWorldIntent');
  },
  'HelloWorldIntent': function () {
    this.emit(':tell', 'Hello World!');
  }
};

exports.handler = function(event, context, callback) {
  const alexa = Alexa.handler(event, context, callback);
  alexa.registerHandlers(handlers);
  alexa.execute();
};
```

You just created an Alexa Skill which will say "Hello World!" when invoked.
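As an optional aside that isn't in the original walkthrough: node-lambda can also invoke the handler locally, using the event.json file that the setup command generated. You would need to put a valid Alexa LaunchRequest payload in event.json for the skill to respond meaningfully:

```
node-lambda run
```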
Wanna test? Let's deploy it first. Go to your shell and deploy your Lambda function with the following command:

```
node-lambda deploy
```

If you've set up your .env file correctly, you will see a new Lambda function in your Lambda console. Now it's time to go to your developer console. Create a new skill and fill in basic information like the name and description of the skill. You will see in your console that your intent schema already has 4 default intents:

- FallbackIntent
- StopIntent
- CancelIntent
- HelpIntent

Create a new intent, let's say "HelloIntent", and save the skill. Now, in the Lambda endpoints section, copy-paste your Lambda ARN id. If the build passes, go to your test console and invoke your skill by saying "open {your skill's invocation name}". Your skill will respond with "Hello World!"

BANG!!! You just created your first skill.

Let's add some more basic intents:

```js
'AMAZON.StopIntent': function () {
    const card = 'Goodbye'
    const content = 'Thanks for using the skill'
    this.emit(':tellWithCard', content, card, content, null)
},
'AMAZON.CancelIntent': function () {
    // emit the registered handler key, i.e. the AMAZON.-prefixed name
    this.emit('AMAZON.StopIntent')
},
'AMAZON.HelpIntent': function () {
    const card = 'Help'
    const content = 'Help will always be given at Hogwarts to those who ask for it'
    this.emit(':tellWithCard', content, card, content, null)
},
'HelloIntent': function () {
    const card = 'Hello!!!'
    const content = 'Woah! You created your first custom intent'
    this.emit(':tellWithCard', content, card, content, null)
},
```

Deploy your Lambda function again, and now test your skill with these phrases:

- Help
- Stop
- Close
- Cancel
- Hello

You will notice that the Echo says the phrases you've programmed into your Lambda handler. Why not modify those and play with it a bit :)

The more utterances you have, the better your skill will become. So try to add more ways that could possibly invoke your skill, and improve your skill's voice experience. Alexa offers SDKs for multiple languages — Java, Node.js, Python — which makes it easier for developers to develop skills.

There are many resources you can find on Alexa Skills; you can start with the documentation and blogs:

- Alexa Built-in Slots
- Alexa Built-in Intents
- An Alexa-based project for multiple domains (VR, recommendation systems)

This was a basic tutorial on building Alexa Skills using Node.js. If you want to build a really cool Alexa skill with Node.js, you have to go deeper and play with the different options in the Alexa console. If you want to contribute to an open-source Alexa Skill project, you can start with one of my projects: https://github.com/PaytmBuildForIndia/book-engine

Project Github link: https://gist.github.com/myke11j/53c21154b8bdb686a4a8d0bbc235a152

Do let me know your feedback, or if you have any queries!!!

JavaScript- Moving From Callbacks to Promises and Async/Await

JavaScript is an asynchronous language by default. This means that it will not wait for certain actions to complete before it moves on to evaluate the next line of code. That's pretty awesome, because it doesn't block the user interface while something like an HTTP request is being made.

It's also kind of a nightmare. When all you want is for the code to execute in the order you wrote it, JavaScript can bring even the most experienced developer to their knees.

In this article, we're going to look at the myriad of different ways that JavaScript developers can handle asynchronous operations without losing their minds. So, let's begin at the beginning. And at the beginning is something that JavaScript developers refer to affectionately as a "callback".

Callbacks

One of the things that makes JavaScript fun and interesting is that it is functional in nature. This means that it is common to see functions passed around as parameters in JavaScript. The simplest example of this is the setTimeout function.

```js
setTimeout(function () {
  console.log('Done!');
}, 1000);

console.log('Me first');
```

That first parameter — the function — is the "callback". It is executed whenever setTimeout is done, and JavaScript does not wait for it before executing other lines of code. JavaScript has zero patience, and patience is a virtue.

That's a contrived example, and you probably aren't doing a ton of timeout calls. We see asynchronous execution more often in the real world with something like an HTTP request. HTTP requests in JavaScript are asynchronous. The default XMLHTTP object in the browser is kind of verbose (like Charles Dickens), so let's look at it using the request npm package.

```js
const request = require('request');

request('https://jsonplaceholder.typicode.com/posts/1', function myCallback(error, response, body) {
  console.log(body);
});
```

This API returns lorem ipsum text in the form of a fake blog post. The request API takes a URL and then calls a function when the HTTP request completes. In the above case, we are specifying that function (callback) inline. We could pull it out so that it is a named function, which makes our request call a little cleaner.

```js
const request = require('request');

function myCallback(error, response, body) {
  console.log(body);
}

request('https://jsonplaceholder.typicode.com/posts/1', myCallback);
```

Notice that there is an error parameter. If that parameter has a value, we have an error and need to handle it. We can do that by specifying another function specifically to handle the error.

```js
const request = require('request');

function handleError(error) {
  console.log(`Error occurred: ${error}`);
}

function myCallback(error, response, body) {
  if (error) handleError(error);
  console.log(body);
}

request('https://jsonplaceholder.typicode.com/posts/1', myCallback);
```

As is, this isn't so bad. Where this starts to go downhill is when we have a lot of nested callbacks.
Imagine a scenario where we need to get a comment, to get a post id, to get a user id, to get a user object.

```js
const request = require('request');

function handleError(error) {
  console.log(`Error occurred: ${error}`);
}

const baseURL = 'https://jsonplaceholder.typicode.com/';

request(`${baseURL}comments/53`, function (err, res, body) {
  if (err) handleError(err);
  let comment = JSON.parse(body);
  request(`${baseURL}posts/${comment.postId}`, function (err, res, body) {
    if (err) handleError(err);
    let post = JSON.parse(body);
    request(`${baseURL}users/${post.userId}`, function (err, res, body) {
      if (err) handleError(err);
      console.log(body);
    });
  });
});
```

GROSS. That is hard on the old eyeballs. This example is particularly hard to read because the URL is almost identical in all 3 calls. The same variables repeat over and over again, and each time their scope/value is different. This kind of code will make a person question their life choices. This is why we sometimes refer to this as "Callback Hell". Not "Callback Purgatory". Not "Callback Detention". Callback Hell.

To save all of our souls from the above code-path inevitability, the JavaScript gods reached down and gave us promises and async/await as an upgrade over callbacks.

Promises

Promises arrived some years ago in the form of the Promises/A+ specification. We used to implement them with libraries such as Q and RSVP, but as of ES6, promises are native to JavaScript. You'll find them virtually anywhere that you find good ES6 support. This includes the Angular CLI, create-react-app, the Vue CLI, and full support in Node as of 6.14.3.

Promises return an object which will contain the eventual result of an asynchronous operation. To get at that "eventual result", developers call the .then method on the promise object. Let's back up to our original setTimeout example for a moment. How would it look if we move from a callback to promises?

```js
let p = new Promise(function(resolve, reject) {
  setTimeout(function() {
    resolve(`Done!`);
  }, 1000);
});

console.log('Me First');
```

A new promise is created by passing in a function which receives the resolve and reject functions. When you are done with whatever you need to do in your asynchronous task, you call resolve and pass the data you need. If an error is thrown, you call the reject function.

Promises are usually written using lambdas, or arrow functions. This is a shorter way to compose functions without having to type out the word "function". This example here is functionally equivalent to the one above…

```js
let p = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve(`Done!`);
  }, 1000);
});

console.log('Me First');
```

So, what do we get back here? What is the value of p? It's a promise object.
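If you log p right after creating it (an extra line added here for illustration), Node shows the promise still sitting in its initial, unsettled state:

```js
console.log(p); // Promise { <pending> }
```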
What the heck do we do with this p thing, and what does "pending" even mean? Pending just means that you haven't done anything with the result yet; you haven't executed any code to handle the return value of this object. To handle the return of a promise, you need to call the then method on it. Now, we can control the flow of the code by moving our second console statement inside the promise.

```js
let p = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve('Done!');
  }, 1000);
});

p.then(message => {
  console.log(message);
  console.log('Me First');
});
```

OK! That's a simple promise we're rocking, and it's looking pretty good. Now you may think to yourself, "Geez. I'm still passing functions around, and now I feel like I'm writing even more code." And you would be right. But the point of promises isn't "less code" per se — it's "more readable code". Let's do something more real to see how this plays out, rehashing the HTTP example above, but this time with a promise instead of a callback.

```js
const request = require('request');
const baseURL = 'https://jsonplaceholder.typicode.com/posts/';

let getPosts = new Promise((resolve, reject) => {
  request(`${baseURL}1`, (err, res, body) => {
    if (err) reject(err);
    resolve(body);
  });
});

getPosts
  .then(body => {
    console.log(body);
  })
  .catch(error => {
    console.log(`Error occurred: ${error}`);
  });
```

The flow of this code is easier to understand because the callback stays with the promise and we're not going several levels deep. When you move between functions and levels, the brain has to keep up with variables and scope, and that becomes like herding gnats at some point.

Notice that if there is an err value returned by request, I call reject. This means that we can handle any errors in this promise object later by calling catch(), which I do at the very end.

Now, what's really nice is that most HTTP libraries these days already support a promise API, which means you don't need to wrap them — you can just call then and catch. In the case of request, we need to install the request-promise-native package. Look at how nicely built-in promises simplify our code…

```js
const request = require('request-promise-native');
const baseURL = 'https://jsonplaceholder.typicode.com/posts/';

let getPosts = request(`${baseURL}1`);

getPosts
  .then(body => {
    console.log(body);
  })
  .catch(error => {
    console.log(`Error occurred: ${error}`);
  });
```

Really, we don't even need to assign to an object before we resolve the promise. We can simplify this further by doing it all inline.

```js
const request = require('request-promise-native');
const baseURL = 'https://jsonplaceholder.typicode.com/posts/';

request(`${baseURL}1`)
  .then(body => {
    console.log(body);
  })
  .catch(error => {
    console.log(`Error occurred: ${error}`);
  });
```

Now our code is looking TIGHT. It's easy to read, and it might indeed be less code, but no promises there. Get it? HAHAHAHA… No? You're right. That pun was DOA.

Now, let's put our promises to the real test and see what happens when we nest three of these HTTP calls like we did before.

```js
const request = require('request-promise-native');
const baseURL = 'https://jsonplaceholder.typicode.com/';

let handleError = (error) => {
  console.log(`Error occurred: ${error}`);
};

request(`${baseURL}comments/53`)
  .then(body => {
    let comment = JSON.parse(body);
    request(`${baseURL}posts/${comment.postId}`)
      .then(body => {
        let post = JSON.parse(body);
        request(`${baseURL}users/${post.userId}`)
          .then(body => {
            console.log(body);
          })
          .catch(error => {
            handleError(error);
          });
      })
      .catch(error => {
        handleError(error);
      });
  })
  .catch(error => {
    handleError(error);
  });
```

Good grief! That might actually be worse than the callback example. Actually, it is definitely worse. Way, way worse. Why is it worse? Aren't promises supposed to save us from callbacks? Yes, but the lesson here is that promises can also give birth to code monsters that make you afraid to go into your source control after dark.
In all seriousness, though, you will hit scenarios where you need to chain promises like this, so let's look at what to do about it without creating a world-class rat's nest.

First off, as a general rule of thumb, do not ever nest promises. When you do that, you are simply creating a different kind of callback hell — a socially acceptable one, because you used promises to do it, but callback hell all the same. Since you can't nest promises, what you are going to do instead is use Promise.all to resolve everything at once, after it has all finished.

```js
const request = require('request-promise-native');
const baseURL = 'https://jsonplaceholder.typicode.com/';

let handleError = (error) => {
  console.log(`Error occurred: ${error}`);
};

request(`${baseURL}comments/53`)
  .then(body => {
    let comment = JSON.parse(body);
    return Promise.all([body, request(`${baseURL}posts/${comment.postId}`)]);
  })
  .then(results => {
    let post = JSON.parse(results[1]);
    return Promise.all([...results, request(`${baseURL}users/${post.userId}`)]);
  })
  .then(results => {
    console.log(results[2]);
  })
  .catch(error => {
    handleError(error);
  });
```

This is better and easier to read — not because there is less code, but because we aren't going several levels deep, so it's easier for your mind to keep up with the logic of the code.

What we're doing here is calling Promise.all each time we need to return a result, passing in the result(s) from the previous operation and the next promise we want to execute. We can keep chaining then statements until we get to the end of what we need to do. The single catch at the end will handle any errors thrown along the chain. In this way, you can "nest" promises without nesting them. This is particularly useful when one promise depends on the result of another — and so on and so forth. These are the types of real-world scenarios you will find yourself in, and hopefully you will remember this article and my terrible puns.
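One side note that the post doesn't cover: when the requests don't depend on each other, Promise.all is more commonly used to fire them off in parallel and resolve once when all of them finish. A quick sketch, reusing the request, baseURL and handleError definitions from the example above:

```js
// Both requests fire immediately; .then runs once both have resolved.
Promise.all([
  request(`${baseURL}posts/1`),
  request(`${baseURL}posts/2`)
])
  .then(([first, second]) => {
    console.log(JSON.parse(first).title);
    console.log(JSON.parse(second).title);
  })
  .catch(error => handleError(error));
```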
Promises are awesome. They are; I do believe that. But I also kind of believe this: promises are essentially just a way for us to restructure callbacks so that they aren't so soupy when we use a lot of them. We are still passing functions around. I always have this latent feeling that I'm not actually getting away from callbacks at all when I use promises. Like, they're still there, but we've both agreed to pretend they aren't. There is a better way, and it's called async/await.

Async/Await

When working with asynchronous code, what we actually want is for a line of code to execute before the next one does. That's it! That's all we're after. All of these code gymnastics we've been doing are to get JavaScript to do just that. And async/await does just that.

Async/await simplifies the way you work with promises so that your code doesn't contain all the callbacks. That's the first thing to understand about async/await: it is not a replacement for promises, it works with promises.

Let's take a look at how we can use async/await with our HTTP example, starting with a single HTTP call.

```js
const request = require('request-promise-native');
const baseURL = 'https://jsonplaceholder.typicode.com/';

async function main() {
  let response = await request(`${baseURL}posts/1`);
  console.log(response);
}

main();
```

There is some new terminology and structure here. Let's break it down now…

- We use await to call the request function, which returns a promise.
- await gives us the "then" result of that promise and sticks it in the response variable.
- await has to be called inside a function marked async. That's why I wrapped it in async function main().
- The console.log line doesn't execute until the URL request comes back. This code now runs in order.

Pretty neat! Now we can force JavaScript to do exactly what we want, which is to wait until an operation finishes before moving to the next line of code. And how do we handle errors? With a proper try/catch block.

```js
const request = require('request-promise-native');
const baseURL = 'https://jsonplaceholder.typicode.com/';

async function main() {
  try {
    let response = await request(`${baseURL}posts/1`);
    console.log(response);
  } catch (error) {
    console.log(error);
  }
}

main();
```

We're starting to look like real programmers here. We've got the async code and the error handling. All we need now is that job offer from Microsoft or Google or even LEGO. I would take the LEGO job.

Where this really shines is in the complex cases. Look at how much nicer our nested example looks now…

```js
const request = require('request-promise-native');
const baseURL = 'https://jsonplaceholder.typicode.com/';

async function main() {
  try {
    let result = await request(`${baseURL}comments/53`);
    let comment = JSON.parse(result);

    result = await request(`${baseURL}posts/${comment.postId}`);
    let post = JSON.parse(result);

    result = await request(`${baseURL}users/${post.userId}`);
    console.log(result);
  } catch (error) {
    console.log(error);
  }
}

main();
```

This code is much easier on the brain. There are no structures to jump through or weird paths to try to follow like Alice in Wonderland. We could simplify it even further by using a nice, concise HTTP library like axios.

```js
const axios = require('axios');
const baseURL = 'https://jsonplaceholder.typicode.com/';

async function main() {
  try {
    let comment = await axios(`${baseURL}comments/53`);
    let post = await axios(`${baseURL}posts/${comment.data.postId}`);
    let user = await axios(`${baseURL}users/${post.data.userId}`);
    console.log(user.data);
  } catch (error) {
    console.log(error);
  }
}

main();
```

You can use async/await to turn any method that returns a promise into a synchronous-looking line of code. If you want to make something synchronous, just wrap it in a promise and call it with async/await.

Async/await is magical! But there are a few "gotcha!"s. There is one that I want to cover specifically, because there is a 100% chance you will hit it. And if I don't prep you, I'm afraid you might get super frustrated and smash your kid's Lego castle, resulting in years of therapy for both of you. To avoid that, let's take a look at a place where this can go wrong.

Async and the forEach "gotcha!"

Look at this example. Here we are going to make an async call for some data, loop over it, and make an async call inside of the loop for more data. What could go wrong?

```js
const axios = require('axios');
const baseURL = 'https://jsonplaceholder.typicode.com';

async function main() {
  // get all posts
  let posts = await axios(`${baseURL}/posts`);
  posts.data.forEach(async post => {
    // get the user for this post
    let user = await axios(`${baseURL}/users/${post.userId}`);
    console.log(JSON.stringify(user.data.username));
  });
  console.log('All Done');
}

main();
```

If you were to run this, you would see "All Done" printed before any of the usernames. Wait. What?
Aren't forEach loops supposed to be synchronous? Why is "All Done" showing up first? That should have been last. This isn't synchronous at all.

Well, forEach is synchronous by default. But the second you put that async on the function that is executed in the loop, you flip the switch and make it asynchronous. That's why the line outside the loop fires first. This can lead to problems that are super hard to debug, not to mention it is not at all what we want. So what do we do here? To handle this, we are going to use the for...of loop instead of the built-in forEach on the array.

```js
const axios = require('axios');
const baseURL = 'https://jsonplaceholder.typicode.com';

async function main() {
  // get all posts
  let posts = await axios(`${baseURL}/posts`);
  for (let post of posts.data) {
    // get the user for the post
    let user = await axios(`${baseURL}/users/${post.userId}`);
    console.log(JSON.stringify(user.data.username));
  }
  console.log('All Done');
}

main();
```

Now that is what we wanted to begin with. I got bit by this at least half a dozen times before I was able to figure out what was happening, so I wanted to spare you the pain I've already endured.

The Best of Both Worlds

Quite possibly my favorite feature of JavaScript is its asynchronous nature. It makes the language so much fun to work with, because you can execute things without blocking the user's interaction. It's also one of my least favorite things, because when you don't need it, async issues can turn your code into a jacked-up Rubik's Cube. It's fine if you know how to solve a Rubik's Cube, but if you don't, may God have mercy on your soul.

Fortunately, async/await and promises are here to absolve you of your sins so you don't ever have to visit callback hell. Bless you, my child. To understand more clearly how to handle asynchronous JavaScript with async/await and promises, you can try these examples from Github.

Using Kaggle to guide your learning: Why and How should you start?

I often get asked by my friends and college-mates, "How do I start with Machine Learning or Data Science?"

So, here goes my answer. Earlier, I wasn't so sure. I would say something like "do this course" or "read this tutorial" or "learn Python first" (just the things that I did). But now, as I go deeper and deeper into the field, I am beginning to realise the drawbacks of the approach that I took. So, in hindsight, I believe that the best way to "get into" ML or Data Science might be through Kaggle. In this article, I will tell you why I think so, and how you can do that if you are convinced by my reasoning.

(Caution: I am a student. I am not a Data Scientist by profession, and I am definitely not an expert at Kaggle. So take my advice and opinions with a healthy grain of salt.)

But first, let me introduce Kaggle and clear up some misconceptions about it. You have probably heard of the mind-boggling cash prizes that some Kaggle competitions award. Or maybe some argument about how far its competitions are from "real" data science work. Or maybe you have subscribed to Kaggle email alerts and that last competition announcement finally made you want to try to get a piece of the pie. It is this very fame which causes a lot of misconceptions about the platform and makes newcomers more hesitant to start than they should be. (And don't worry if you don't know about Kaggle and can't relate to any of the above points; this article will still make complete sense, and the next section will introduce Kaggle to you.)

Well, I believe that competitions (and their highly lucrative cash prizes) are not the true gems of Kaggle. I am writing this article to tell you how you can use this platform to start your data science journey. If, like me, you are tired of doing yet another online course to learn Data Science/ML, then this Kaggle beginner's guide is right for you.

A few misconceptions that people have about Kaggle:

1. "Kaggle is a website that hosts Machine Learning competitions"

This is such an incomplete description of what Kaggle is! Competitions are just one part of Kaggle. Along with hosting competitions (it has hosted about 300 of them now), Kaggle also hosts three other very important things:

- Datasets, even ones not related to any competition: it houses 9500+ datasets, compared to just 300 competitions (at the time of writing). So you can sharpen your skills by choosing whatever dataset amuses or interests you.
- Kernels: they are just Kaggle's version of Jupyter notebooks, which in turn are a really effective and cool way of sharing code along with lots of visualisations, outputs and explanations. The "Kernels" tab takes you to a list of public kernels which people use to showcase some new tool or share their expertise or insights about particular datasets.
- Learn: this tab contains free, practical, hands-on courses that cover the minimum prerequisites needed to get started in the field quickly. The best thing about them? Everything is done using Kaggle's kernels (described above). This means that you can interact as you learn — no more passive reading through hours of learning material!

All of these together have made Kaggle much more than simply a website that hosts competitions; it has also become a complete project-based learning environment for data science. I will talk about that aspect of Kaggle in detail after this section.
2. "Only experts (PhDs or ML practitioners with years of experience) take part in and win Kaggle competitions"

If you think so, I urge you to read this story (source: mashable.com). TL;DR: a high-school kid became a Kaggle Competitions Master by simply following his curiosity and diving into the competitions.

3. "I should do a few more courses and learn advanced Machine Learning models before participating in Kaggle competitions, so that I have a better chance of winning"

The most important parts of machine learning are exploratory data analysis (EDA) and feature engineering, not model fitting. In fact, many Kaggle masters believe that newcomers move to complex models too soon, when the truth is that simple models can get you very far. Besides, a lot of challenges have structured data, meaning that all the data exists in neat rows and columns — there is no complex text or image data — and simple algorithms (no fancy neural nets) are often the winning algorithms for such datasets. EDA is probably what differentiates a winning solution from the others in such cases.

Now, let's move on to why you should use Kaggle to get started with ML or Data Science.

Why should we use Kaggle?

Reason #1 — Learn exactly what is essential to get started

The Machine Learning course on Kaggle Learn won't teach you the theory and mathematics behind ML algorithms. Instead, it focuses on teaching only those things that are absolutely necessary for analysing and modelling a dataset. Similarly, the Python course there won't make you an expert at Python, but it will ensure that you know just enough to go to the next level. This minimises the time you need to spend in passive learning and makes sure you are ready to take on interesting challenges ASAP.

Reason #2 — Embodies the spirit of building to learn

I believe that doing projects is so effective that it's worth centering your entire learning around completing one. What I mean to say is that instead of searching for a relevant project after you learn something, it might be better to start with a project and learn everything you need to bring that project to life. I believe that learning is more exciting and effective this way. (I wrote an article about this methodology a few weeks ago, called "How (and why) to start building useful, real-world software with no experience", and it seems to be doing well on Medium, so check it out if you can :-) ) I believe this is also true for courses and tutorials.

But there are lots of barriers to doing a side project. I know the benefits of doing them, but every time I try to start one, I just get... stuck. And it is almost always in one of these three areas, which feels frustrating:

a) Finding an interesting project idea. Finding ideas for Data Science projects seems more difficult than in other programming fields because of the added requirement of a suitable dataset. It often feels like all the data being generated is just being hoarded away by the tech companies for their private use.

b) Help in learning the missing prerequisites. Sometimes when I have started a project, it feels like there are just so many things I still don't know — so many things the online courses didn't teach me. Was I supposed to know all that beforehand? Am I just out of my depth? I feel like I don't even know the prerequisites for learning this thing. So, how do I go about learning what I don't know?
And that's when all the motivation starts to wane away.

c) Help during the building process. It seems like I keep hitting one roadblock after another during the building process. And even if I do manage to build the first version, how do I improve it? What's the best way forward? It would be so good if I could talk to a group of people and learn how they would tackle the problem.

And here's how Kaggle seems to be the perfect solution to all those problems:

Solution to (a) → Datasets and Competitions: with around 300 competition challenges, all accompanied by their public datasets, and 8500+ datasets in total (and more being added constantly), there seems to be no shortage of ideas you can get here.

Solution to (b) → Kernels and Learn: all the challenges have public kernels that you can use to get started with that challenge, and a lot of popular challenges have kernels intended to help newcomers who are just getting started. Apart from that, Kaggle seems to be making a real effort to include newcomers in its community: they have recently added the Learn section, which now features on the website's main header and provides courses giving a practical introduction to Data Science using Kaggle's challenges.

Solution to (c) → Kernels and Discussion: along with the public Kernels section, each competition has its own discussion forum, where you will often find some really useful analysis or insights. As written in the article "Learning From the Best" on Kaggle's blog: "During the competitions, many participants write interesting questions which highlight features and quirks in the data set, and some participants even publish well-performing benchmarks with the code on the forums. After the competitions, it is common for the winners to share their winning solutions." All of these can give you ideas about improving your own approach and even guide you by telling you what you need to learn next.

Reason #3 — Real data to solve a real problem => real motivation

The challenges on Kaggle are hosted by real companies looking to solve a real problem that they encounter. The datasets they provide are real. All that prize money is real. This means that you get to learn Data Science/ML and practice your skills by solving (at least what feels like) real-world problems.

If you have tried competitive programming before, you might relate to me when I say that the problems hosted on such websites feel too unrealistic at times. I mean, why should I try to write a program to find the number of Pythagorean triplets in an array? What is that going to accomplish?? I am not trying to assert that such problems are easy; I find them extremely difficult. Nor am I trying to undermine the importance of websites that host such problems; they are a good way to test and improve your knowledge of data structures and algorithms. All I'm saying is that it all feels way too fictional to me. When the problem you are trying to solve is real, you will always want to work on improving your solution. That provides the motivation to learn and grow, and that's what you can get from participating in a Kaggle challenge.

The other side of the debate: "Machine Learning isn't Kaggle competitions"

I would be remiss not to mention the other side of this debate, which argues that machine learning isn't Kaggle competitions. Some people even go as far as to say that Kaggle competitions only represent a "touristy sh*t" version of actual Data Science work, and that the data there is artificially clean. Well, maybe that is true.
Maybe real data science work doesn't resemble the approach one takes in Kaggle competitions. I haven't worked in a professional capacity, so I don't know enough to comment. But what I have done, plenty of times, is use tutorials and courses to learn ML/Data Science. And each of those times, I felt like there was a disconnect between the tutorial or course and my motivation to learn. I would learn something just because it was there in the tutorial, and hope that it would come of use in some distant, mystical future. On the other hand, when I'm doing a Kaggle challenge, I have a stage that allows me to immediately apply what I have learned and see its effects. And that gives me motivation, and the glue that helps make all that knowledge stick. So Kaggle, in itself, is probably not enough to help someone become a Data Scientist, but it seems to be really effective in helping someone start his/her journey to becoming one.

How to get started with Kaggle

Having all those ambitious, real problems has a downside, though: it can be an intimidating place for beginners to get in. I understand this feeling, as I have recently started with Kaggle myself (with the Housing Prices Prediction and the Costa Rican Household Poverty Level Prediction challenges). But once I overcame that initial barrier, I was completely awed by its community and the learning opportunities it has given me. So here I try to lay down how I started my Kaggle journey, and how you can start yours too.

Step 1. Cover the essential basics

Choose a language: Python or R. Once you have done that, head over to Kaggle Learn to quickly understand the basics of that language, machine learning and data visualisation techniques.

Step 2. Find an interesting challenge/dataset

Remember, your goal isn't to win a competition; it is to learn and improve your knowledge of Data Science and ML. It's easy to feel small, lost and out of place if you try to start by competing in an active challenge as a beginner. I would suggest that you choose a competition from the list of competitions sorted by the number of participating teams, highest to lowest. What this means is that you will mostly encounter the introductory and archived competitions before the active ones — and that is a good thing! When you choose such a competition as opposed to an active one, you can draw on the insights and analyses of many more people. Go through the list and choose the competition whose dataset or problem statement seems interesting to you; that interest is what will help sustain your learning.

Step 3. Explore the public kernels and the discussion forums

They will help you understand the general workflow of data exploration -> feature engineering -> modelling, as well as the particular approaches other people are taking in this competition. Get a feel for how things are done by looking at other people's public kernels. Often, these kernels and discussions will tell you what you don't yet know in ML/Data Science. Don't feel discouraged when you encounter a technical term you are unfamiliar with: knowing what you need to know is the first step to knowledge. So it's okay if you don't know the difference between continuous and discrete variables, or what k-fold cross-validation is, or how to produce those compelling graphs and visuals. They are just the things you need to learn to help you grow. But before you do that...
Develop your own kernel
Now go work on your own analysis. Implement whatever you learned from the previous step in your own kernel. Build as much as you can with your current knowledge. Also, remember it is the open-source philosophy to have “the freedom to run it, to study and change it”. So, don’t shy away from using someone else’s approach in your own implementation. It’s not cheating to copy. Just don’t take this as an excuse to slack off. Make sure that you copy only what you understand. Be honest with yourself. (Also, it would be nice of you to credit and link back to the original authors if you use parts of their notebook.)

Step 5. Learn what you need to and go back to step 3
This is where you do the learning. Sometimes it is just a short article, while at other times it can be a meaty tutorial/course. Just remember that you need to go back to step 3 and use what you learn in your kernel. This way you create the cycle needed to “Learn, Leap and Repeat”!

Step 6. Improve your analysis by going back to step 2
You come to this step once you have built an entire prediction model. So, congratulations on that! Now you probably want to improve your analysis. To do that, you can go back to step 2 and look at what other people have done. That can give you ideas about improving your model. Or, if you feel like you have tried everything but have hit a wall, then asking for help on the discussion forums might help.
An example of such a discussion
So — learn, leap and repeat!

A few more resources and links to check out:

Weekly Kernels Award Winner Announcement thread: A while back, Kaggle started an initiative of choosing the best public kernel each week (their decision) and awarding it a cash prize. These kernels include some excellent analysis/visualizations. Also, on some weeks they fix the dataset on which they are awarding. This means that you get lots of excellent public kernels for those datasets, which makes those datasets perfectly apt for the first steps of “How” above!

Data Science glossary on Kaggle: This public kernel uses the Meta Kaggle database to make a glossary of the most famous public kernels, grouped by the tools/techniques that they use. This might be one of the best resources on the internet to understand the practical implementations of ML algorithms. Also, this is one of those winning kernels from the Weekly Kernels Award that I mentioned above (so now you know how useful that thread is).

No Free Hunch — the official blog of Kaggle: Among other things, this blog contains interviews with the top performers of Kaggle competitions. They discuss their strategy and dissect their approach for the respective challenges.

Reflecting back on one year of Kaggle contests: A Kaggle Master shares his year-long experience on how he became good at Kaggle competitions.

A Getting Started discussion thread about How to Become a Data Scientist on Your Own: It contains lots of links to various free resources for learning.

Learning From the Best: This article, published on the Kaggle blog, contains advice on how to do well at Kaggle competitions from some of the top performers on Kaggle. There’s a lot I don’t understand in there, but it all seems really interesting and I have bookmarked it.

And finally, Machine Learning Isn’t Kaggle competitions: This is the second time I am linking to this article.
In this blog post, Julia Evans explains how she found Kaggle competitions vastly different from her day-to-day job as a Machine Learning engineer at Stripe. Things I took away from this post —
* Doing well on the leaderboard isn’t the end of the world
* How little actual ML work is about the fancy algorithms
* What one may expect at an ML/Data Science job

Alright then. Thank you for reading. I hope this basic Kaggle tutorial has been helpful for you. I really believe that learning by building something is a very rewarding experience, but it is difficult. Kaggle makes it easy for you. Kaggle competitions take care of coming up with a task, acquiring the data for you, cleaning it into some usable form, and defining a metric to optimize towards. But as pointed out by other people, that is 80% of a Data Scientist’s work. So, although Kaggle is a great tool to start your journey, it is not enough to take you to the end. You need to do other things to showcase in your Data Science portfolio.

And therefore I am trying to start a community — Build To Learn. It is a place where people can share their project ideas (weird ideas welcome!) or cravings for tools, and build them with the help of other members. It is a community of web developers, mobile app developers, and ML engineers. So, no matter what domain your idea/problem falls in, you can expect to get at least some help from your fellow members.

Let me know your thoughts in the comments section below.
Introduction to Text Mining in WhatsApp Chats using Python- Part 1

Sad or glad? Let’s find out! Not boring you with lengthy and banal introductions: in this post, we are going to analyze WhatsApp chat messages and plot a pie chart visualizing the percentage of texts that have a Positive, Negative, or Neutral connotation or Sentiment to them. Ready to start?

1. Introduction to Text Mining in WhatsApp Chats
Text Mining is just a fancy term for deriving super-awesome patterns and drawing amazing inferences from Textual Data. Just like you can look at an image and infer that it is of a baby or of a 77-year-old woman, we can do the same with texts. Feel free to get on the rocket and dive straight into the code. The code, along with a Jupyter notebook, is available on GitHub, as always!

2. Mining patterns from Chats
If you have ever emailed a WhatsApp chat to yourself (or someone else, if that’s how you roll), you may have noticed that it includes a handful of details that can be used to analyze a group of texts as well as the entire chat history. For those of you who do not know what I am talking about, here’s a brief description: WhatsApp allows you to export/share your chat history with a contact as a .txt file, and it has the following structure:

10/09/16, 6:10 PM - Person 1: Person 2?
10/09/16, 7:10 PM - Person 2: Yes?
10/09/16, 8:10 PM - Person 1: How are you?
10/09/16, 9:10 PM - Person 2: Idk. You?
10/09/16, 10:10 PM - Person 1: I just got a new phone
10/09/16, 11:10 PM - Person 2: Which one?
10/09/16, 12:10 AM - Person 1: I showed you?
10/09/16, 1:10 AM - Person 2: Cool
10/09/16, 2:10 AM - Person 1: Tes
10/09/16, 3:10 AM - Person 1: Yes
10/09/16, 4:10 AM - Person 1: You wanna write?
10/09/16, 5:10 AM - Person 1: If you have time.
10/09/16, 6:10 AM - Person 2: Not right now
10/09/16, 7:10 AM - Person 1: All right
10/09/16, 8:10 AM - Person 1: Do you think I should eat turkey?
10/09/16, 9:10 AM - Person 1: It's been a week
10/09/16, 10:10 AM - Person 1: We had a ten hour gaming session
10/09/16, 11:10 AM - Person 1: And it wasn't easy
10/09/16, 12:10 PM - Person 1: To be honest
10/09/16, 1:10 PM - Person 1: You should come over?
10/09/16, 2:10 PM - Person 2: Yes
10/09/16, 3:10 PM - Person 1: When?
10/09/16, 4:10 PM - Person 2: In an hour, maybe?
10/09/16, 5:10 PM - Person 2: See you!

We are going to take advantage of this data, so let’s dive in!

3. Project Setup and Prerequisites
Like any other project, there are a few dependencies (libraries) that you need in order to analyze WhatsApp chats using Python. If you face any problem installing any of these dependencies, feel free to leave a comment! On my own development system, I use Anaconda (with which I am not associated), as it provides seamless virtual environments in Python and also has pre-compiled libraries in its repositories. Assuming you have pip installed, fire up a terminal and enter the following command:

pip install numpy matplotlib nltk

Or, if you prefer conda:

conda install numpy matplotlib nltk

To install nltk’s data, run:

python -m nltk.downloader all

After you have installed the libraries, we can go ahead and start mining! (There won’t be gold.)

4. Code
Create a new directory — you can call it whatever-you-like, but for me it is whatsapp-chat-analysis — and open it in your favorite code editor.
While you are at it, open WhatsApp on your smartphone and export your chat with any of your contacts (preferably someone you talk to a lot). Detailed instructions on how to do that can be found after an intelligent Google search. Place the .txt file in your project’s root and proceed.

Step 1. Loading chat data into Python, and pre-processing it
Those were just the raw materials, and like any other food recipe, we need the raw materials to make anything worthwhile. Now that we have gathered all the raw materials, let’s load them in the mixer and make something awesome. The first step is to load the chat file in Python, and the easiest way to read a file in Python is to use the open utility and call the read method on the returned File object.

chat = open("chat-with-amfa.txt")
chatText = chat.read() # read its contents
chat.close()

Next comes the task of cleaning the text to remove artifacts that we aren’t going to need. Here’s a function that reads a text file, removes the artifacts, and returns a list of lines extracted from the text file. As you’ll find in any other tutorial that does Text Mining, we make use of regular expressions — or RegExp for short — to find recurring patterns in the text and substitute them in one go. It makes the code more comprehensible and easier to understand, and helps us avoid unnecessary loops. (If you are a beginner, think about how we could do what we want without RegExp.)

import re

mediaPattern = r"(<Media omitted>)" # Because it serves no purpose
regexMedia = re.compile(mediaPattern, flags=re.M)

dateAndTimepattern = r"(\d+\/\d+\/\d+)(,)(\s)(\d+:\d+)(\s)(\w+)(\s)(-)(\s\w+)*(:)"
regexDate = re.compile(dateAndTimepattern, flags=re.M)

Let’s first declare two RegExp patterns that we would like to find and remove from every line of the text file. We then precompile them and save the compiled RegExp in the regexMedia and regexDate variables. These two variables now act as ready-to-use shredders: all you have to do is give them paper, and they’ll make inedible spaghetti in no time! If you are having trouble understanding RegExp, you can go to https://regexr.com/3s301 and try changing the pattern bit by bit to see how the matches change.
https://regexr.com/3s301 — RegExr

Next, we “substitute”, as implied by the sub method (5–6), all occurrences of the pattern with an empty string. You can think of it as a conditional and automated eraser.

"""
Removes the matches and
replaces them with an empty string
"""
chatText = regexMedia.sub("", chatText)
chatText = regexDate.sub("", chatText)

lines = []
for line in chatText.splitlines():
    if line.strip() != "": # If it's empty, we don't need it
        lines.append(line.strip())

Then, we simply split the chat file into separate lines and remove all lines that are empty.
(Can you reason about why the file, at this point, would have empty lines?) The complete code, with some schezwan sauce and decoration, is as below:

import re

mediaPattern = r"(<Media omitted>)" # Because it serves no purpose
regexMedia = re.compile(mediaPattern, flags=re.M)

dateAndTimepattern = r"(\d+\/\d+\/\d+)(,)(\s)(\d+:\d+)(\s)(\w+)(\s)(-)(\s\w+)*(:)"
regexDate = re.compile(dateAndTimepattern, flags=re.M)

def cleanText(filename):
    chat = open(filename)
    chatText = chat.read()
    chat.close()

    # 01/09/17, 11:34 PM - Amfa:
    """
    Removes the matches and
    replaces them with an empty string
    """
    chatText = regexMedia.sub("", chatText)
    chatText = regexDate.sub("", chatText)

    lines = []
    for line in chatText.splitlines():
        if line.strip() != "": # If it's empty, we don't need it
            lines.append(line.strip())

    return lines

You can copy and paste that code into a file named utilities.py, but you’ll learn a lot more about the language if you type it yourself.

Step 2. Sad or Glad — A Novel

import sys
import re
import matplotlib.pyplot as plt
import nltk

from utilities import cleanText
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sentiment_analyzer = SentimentIntensityAnalyzer() # Our Great Sentiment Analyzer

def analyze(name):
    linesList = cleanText(name + '.txt')
    neutral, negative, positive = 0, 0, 0

    for index, sentence in enumerate(linesList):
        print("Processing {0}%".format(str((index * 100) / len(linesList))))

        # Skip lines that don't start with a word character (e.g. emoji-only texts)
        if not re.match(r'^\w', sentence):
            continue

        scores = sentiment_analyzer.polarity_scores(sentence)

        # We don't need the compound component
        scores.pop('compound', None)

        maxAttribute = max(scores, key=lambda k: scores[k])
        if maxAttribute == "neu":
            neutral += 1
        elif maxAttribute == "neg":
            negative += 1
        else:
            positive += 1

    total = neutral + negative + positive
    print("Negative: {0}% | Neutral: {1}% | Positive: {2}%".format(
        negative*100/total, neutral*100/total, positive*100/total))

    # Plot
    #### Code Omitted ####

analyze(sys.argv[1])

nltk comes pre-bundled with a Sentiment Analyzer (VADER) that was pre-trained on state-of-the-art textual datasets, and we can use it to analyze the tone of each sentence and mark it as one of Negative, Positive, or Neutral. (8) As always, we clean the chat text using the function we defined in utilities.py (11), initialize helper variables (12), and start processing (15–16). We then process each line in the chat file, which represents one text, one by one. In (18–19), you can see that we are again using RegExp to get rid of texts that begin with an emoji. In most cases this works, but it can fail when a text with real content merely starts with an emoji, since the whole line gets skipped. I look forward to a more sophisticated hack to get rid of this! PRs will always be welcome. Moving on, we use our GreatSentimentAnalyzer to assign scores to the current text in question (21). Then, we find the field — “neg”, “pos” or “neu” — which has the maximum score and increment the counter keeping track of it. We use Python’s lambda functions to find the field which has the maximum score (26).
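If you are curious about what polarity_scores actually hands back before we drop the compound key, here is a minimal standalone sketch; the sentence is just an example, and the exact scores depend on your nltk version (this assumes you ran the nltk downloader step above):

from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
# Returns a dict with 'neg', 'neu', 'pos' and 'compound' scores
print(analyzer.polarity_scores("I just got a new phone, it's awesome!"))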
There are simpler ways to find that maximum field, so feel free to replace the lambda function with something else, keeping the end goal in mind. After that, we apply some mathematics and boom-shaw-shey-done! The results are logged in the terminal, as well as plotted using matplotlib.
Sentiment Analysis — Chat with Amfa
I have omitted the code for plotting the results here for simplicity, but you can grab the source from the project’s repository on GitHub, along with a Jupyter Notebook.

Conclusion
In this post, we used the GreatSentimentAnalyzer to study and analyze WhatsApp chat messages using Python, and damn, wasn’t it fun? The project is on GitHub for review. In the next post, we are going to use the metadata we deleted and draw awesome inferences from it. Stay tuned! (Hint: Time is important, isn’t it?)
Implementing a Neural Network with Python in 15 Minutes

In the last article, I discussed the fundamental concepts of deep learning and artificial intelligence - Neural Networks. In this article, I will discuss how to implement a neural network to classify Cat and Non-Cat images in Python. Before implementing a neural network model in Python, it is important to understand the working and implementation of the underlying classification model called Logistic Regression. Logistic Regression uses a logit function to classify a set of data into categories. The logit function (or sigmoid function) is defined as 1 / (1 + exp(-z)), where z is the input vector. The output value of a logit function always lies between 0 and 1, as represented in the following graph.

Let us take a dataset of labelled images which contains two types of labels - Cat and Non-Cat images. We need to build a classifier that predicts if a given image is a Cat image or a Non-Cat image. Let us first prepare our dataset for classification. In this step, we will load the dataset into x (predictor) and y (target) variables. We will then perform preprocessing to normalize the data. Since the given data is of image type, the preprocessing step will include flattening the pixel values and dividing by the maximum pixel value.

To implement a Logistic Regression classification model, three important steps are required:
1. Compute the sigmoid activation of the inputs
2. Compute the error term (cost function)
3. Optimize the model weights using the cost function

1. Create a function to compute the sigmoid activation of the input data.

2. Create a function to compute the cost function and obtain the weight derivatives.
To compute the cost function, we first need to compute the loss function. The loss function is the measure of error in the prediction value (activation) computed using the sigmoid activation. The simplest loss function can be defined as the mean of the square of the difference between the predicted value and the actual value. If A is the activation (prediction) and Y is the actual value, this loss function can be defined as:

Loss = (1/m) * (A - Y)^2

We will discuss in the next step that we need to find the global minimum of this function using optimization algorithms. One problem with this loss function is that the optimization problem becomes non-convex. This means that there exist multiple local minima in the cost function, and gradient descent may not converge to the global minimum. A better loss function which overcomes this problem is the log loss, defined as:

Loss = -Y * log(A) - (1 - Y) * log(1 - A)

The cost function is defined as the sum of the loss function values for every input in the training data.

3. Create a function to optimize the model weights (train the model).
Finally, to train the model, an optimization algorithm is used which minimises the cost function and updates the model weights with the optimized values. An optimization algorithm such as gradient descent is used for this purpose. Gradient descent minimises the cost function value by making small adjustments to the model weights. It iterates over the inputs in the training examples and computes the derivatives (small changes) of the model weights so that the cost is minimised. These derivatives are then subtracted from the original model weights to get the updated values. Great, now our core functions are implemented.
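The code listings for these functions did not survive the page formatting here, so below is a minimal numpy sketch of the three pieces just described: sigmoid activation, log-loss cost with weight derivatives, and gradient-descent training. The function names (propagate, optimize, predict) are my own, not necessarily the author's:

import numpy as np

def sigmoid(z):
    # Logistic activation: maps any real input into (0, 1)
    return 1 / (1 + np.exp(-z))

def propagate(w, b, X, Y):
    # Forward pass: activations for all m examples, plus the log-loss cost
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    # Backward pass: derivatives of the cost w.r.t. the weights and bias
    dw = np.dot(X, (A - Y).T) / m
    db = np.sum(A - Y) / m
    return dw, db, cost

def optimize(w, b, X, Y, num_iterations=2000, learning_rate=0.005):
    # Gradient descent: repeatedly nudge the weights against the gradient
    for _ in range(num_iterations):
        dw, db, _ = propagate(w, b, X, Y)
        w = w - learning_rate * dw
        b = b - learning_rate * db
    return w, b

def predict(w, b, X):
    # Label an example as Cat (1) if its activation exceeds 0.5
    A = sigmoid(np.dot(w.T, X) + b)
    return (A > 0.5).astype(int)

Here X has shape (number of features, number of examples), Y has shape (1, number of examples), and w starts as a zero or small random column vector.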
Let's compile the entire code together to get the most optimized values of the model weights, and define a predict function which can be used to make predictions on the test data.

Now, it's time to implement a neural network model along the same lines. The architecture of the neural network is: the first layer is the input layer, which consists of the inputs. The next layer is the hidden layer, which comprises multiple neurons; each neuron computes its activation using the tanh function. The final layer is the output layer, which computes the sigmoid activation of the input it receives from the hidden layer.

The steps to implement a neural network are as follows:
1. Define the neural network structure (# of input units, # of hidden units, etc.)
2. Initialize the model's parameters
3. For a number of epochs:
    - Implement forward propagation
    - Compute the loss
    - Implement backward propagation to get the gradients
    - Update the parameters (gradient descent)

1. Define the neural network architecture
In the first step, we define the architecture of the neural network, which consists of defining the number of nodes in the input layer, the output layer, and the hidden layer.

2. Initialize the parameters
Next, we need to implement a function that initializes the weights and biases of the model to random values. It is important to initialize the weights to small random values to break the symmetry among different neurons: if the weights are all initialized to zeros, every neuron is likely to behave similarly, so the neural network may not compute complex non-linear activation values and will merely be a logistic regression implementation.

3.1 Implement forward propagation
The next step is to compute the forward propagation activations. We will use tanh in the hidden layer and the sigmoid function in the output layer.

3.2 Compute the cost
Now, we need to compute the cost (error term), which can be computed using the log loss function that we discussed in the logistic regression section above. To recap, the cost function is the sum of the loss function values for every input in the training examples:

Loss = -Y * log(A) - (1 - Y) * log(1 - A)

3.3 Compute back propagation
Now, the cost term will be used to obtain the derivatives of the model weights (the small changes in the model weights that compensate for the error term). The derivative values will be used to update the original values of the model weights during the optimization (training) process.

3.4 Update the parameters
In the final step, we need to run an optimization algorithm in which a number of iterations are performed. In every iteration, the cost function and the derivatives of the weights and biases with respect to the cost function are computed. These derivatives are used to update the original values of the weights and biases. The process is repeated for many iterations, with the aim of minimising the cost function value.

The essential components are now implemented. Let's compile all of the above to implement a neural network: generate a synthetic dataset, train the neural network model on it, implement a function that predicts the values, and compute the accuracy on the test set. And with that, we have trained a neural network model which achieves 90% accuracy on the test dataset. Feel free to share your views, thoughts, and queries in the comment section.
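As the listings were again omitted here, the following is a minimal numpy sketch of the described one-hidden-layer network (tanh hidden layer, sigmoid output); the helper names and default hyperparameters are illustrative, not the author's exact code:

import numpy as np

def init_params(n_x, n_h, n_y):
    # Small random weights break symmetry; zero biases are fine
    return {"W1": np.random.randn(n_h, n_x) * 0.01, "b1": np.zeros((n_h, 1)),
            "W2": np.random.randn(n_y, n_h) * 0.01, "b2": np.zeros((n_y, 1))}

def forward(params, X):
    # tanh in the hidden layer, sigmoid at the output
    Z1 = np.dot(params["W1"], X) + params["b1"]
    A1 = np.tanh(Z1)
    Z2 = np.dot(params["W2"], A1) + params["b2"]
    A2 = 1 / (1 + np.exp(-Z2))
    return A1, A2

def compute_cost(A2, Y):
    # Log loss averaged over the m training examples
    m = Y.shape[1]
    return -np.sum(Y * np.log(A2) + (1 - Y) * np.log(1 - A2)) / m

def backward(params, X, Y, A1, A2):
    # Gradients via the chain rule; (1 - A1**2) is the tanh derivative
    m = X.shape[1]
    dZ2 = A2 - Y
    dW2 = np.dot(dZ2, A1.T) / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = np.dot(params["W2"].T, dZ2) * (1 - A1 ** 2)
    dW1 = np.dot(dZ1, X.T) / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    return {"W1": dW1, "b1": db1, "W2": dW2, "b2": db2}

def train(X, Y, n_h=4, epochs=10000, lr=1.2):
    params = init_params(X.shape[0], n_h, Y.shape[0])
    for _ in range(epochs):
        A1, A2 = forward(params, X)
        grads = backward(params, X, Y, A1, A2)
        for key in params:  # gradient-descent update
            params[key] -= lr * grads[key]
    return params

With X of shape (number of features, number of examples) and Y of shape (1, number of examples), train(X, Y) returns the learned parameters, and a prediction is simply forward(params, X)[1] > 0.5.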
Creating VR Experiences Using Angular And A-frame

VR technology is growing rapidly in all aspects, but learning the tech stack for its implementation is still a challenge for many web developers. What if we could have something which makes our work easy, without writing all the boilerplate code for its implementation and setup?

A-Frame is one such open-source web framework out in the market. It is used for building virtual reality experiences with simple HTML and entity components, and those experiences can be deployed to and viewed on any available device without explicitly setting up any configuration. A-Frame makes it easy for web developers to create virtual reality experiences that work across desktop, iPhone, Android, and the Oculus Rift. In this post, you will learn how to create VR experiences using Angular and A-Frame.

What is A-Frame?
Figure 1. A-Frame website (Source: https://aframe.io/)
A-Frame is a framework for building rich 3D experiences on the web. It's built on top of three.js, an advanced 3D JavaScript library that makes working with WebGL extremely fun. The cool part is that A-Frame lets you build WebVR apps without writing a single line of JavaScript (to some extent). You can create a basic scene in a few minutes by writing just a few lines of HTML. It provides an excellent HTML API for you to scaffold out the scene, while still giving you full flexibility by letting you access the rich three.js API that powers it. In my opinion, A-Frame strikes an excellent balance of abstraction this way. The documentation is an excellent place to learn more about it in detail. What A-Frame brings to the game is that it is based on an entity-component system, a pattern used by game engines like Unity which favors composability over inheritance. As we'll see, this makes A-Frame extremely extendable.

Entity Component System
The entity-component system is a pattern in which every entity, or object, in a scene is a general placeholder. Components are used to add appearance, behavior, and functionality. They're bags of logic and data that can be applied to any entity, and they can be defined to do just about anything; anyone can easily develop and share their components. An entity by itself, without components, doesn't render or do anything. A-Frame ships with over 15 basic components. We can add a geometry component to give an entity shape, a material component to give it appearance, or a light component and a sound component to have it emit light or sound. Each component has properties that further define how it modifies the entity. And components can be mixed and matched at will, hence the "composable" word root of "component". In traditional terms, they can be thought of as plugins. And anyone can write them to do anything, even explode an entity. They are expected to become an integral part of the workflow of building advanced scenes.

Writing and Sharing Components
So, at what point does the promise of the ecosystem come in? A component is simply a plain JavaScript object that defines several lifecycle handlers that manage the component's data. Here are some examples of third-party components that I and other people have written:
Explode component
Layout component
Spawner component
Extrude and lathe component

Small components can be as little as a few lines of code. Under the hood, they either perform three.js object manipulations or JavaScript DOM manipulations.
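To make that concrete, here is a minimal sketch of a custom component; the "spin" name and its behavior are invented for illustration, but AFRAME.registerComponent, the schema block, and the tick lifecycle handler are the real A-Frame mechanisms:

<script>
  // A toy component: slowly spins whatever entity it is attached to.
  AFRAME.registerComponent('spin', {
    schema: { speed: { type: 'number', default: 1 } }, // configurable data
    tick: function (time, delta) {
      // delta is the time in milliseconds since the last frame
      this.el.object3D.rotation.y += this.data.speed * (delta / 1000);
    }
  });
</script>

<!-- Attach it to an entity like any built-in component -->
<a-box spin="speed: 2" position="0 1 -3" color="tomato"></a-box>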
I will go into more detail on how to write a component at a later date, but to get started building a shareable component, check out the component boilerplate.

Setting up your Angular project
Step 1:
Figure 2. Adding the script tag in index.html
Add the script tag below between the head tags in index.html.
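The tag itself was stripped by the page formatting; it is normally the CDN build from aframe.io (the version below is an example, so check the releases page for the current one):

<!-- index.html: load A-Frame before the Angular app bootstraps -->
<script src="https://aframe.io/releases/0.8.2/aframe.min.js"></script>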
Learn Extracting an Information from the Regression Tables

Does this look familiar to you? And do you know what this table tells you? You have probably stumbled upon regression tables from time to time. Whether your colleagues are sharing results, you are trying to figure out which features to select for an algorithm, or you are reading scientific papers — chances are high that you will be confronted with regression tables. The amount of information and statistics they contain, however, can seem a little intimidating. This article therefore shows you how to read them and how to extract the information you need.

Three Parts of Regression Tables
Regression tables usually contain three parts:
1. A description of the data and model
2. Detailed results on the independent variables
3. Test statistics that indicate the quality and robustness of the model

1. Description of the data and model
Before you can assess a model, you should get an overview of the model(s) and data. Find out what has been estimated (the dependent variable) using which (independent) variables, and on what kind of dataset. If various models have been estimated, it is common to summarize all models in a single regression table. In this case, each model is reported in its own column. As these models usually differ in their set of independent variables, you will see some white space in the rows of independent variables that have not been considered for specific models.

Information on the dependent variable is often at the top of regression tables. Depending on the tools you use or the domain in which you are working, the dependent variable is either directly specified, is the name of a column, or is mentioned in the caption of the table (Effect of … on ). Pay attention to how your dependent variable is encoded. Is it a binary variable (either 0 or 1), is it a continuous variable (all values possible), or can it take only positive/negative values?

The independent variables are often easily located. They are the ones in the middle of the table, for which coefficient estimates, standard errors, t-statistics, or p-values are given. The independent variables are the variables that are expected to determine the dependent variable. As such, they should be chosen carefully and have a significant effect on the dependent variable. Again, check how they are encoded. Like the dependent variable, they can also be binary or restricted to a certain range of numbers.

Finally, check the size of the dataset. The number of observations is often abbreviated with n = . Make sure it is sufficiently large and meets your expectations. During the data cleaning process, some observations are usually dropped due to missing values, poor data quality, or domain-specific selection rules. Especially if you have not been involved in the data wrangling process, it is always a good idea to check how many observations have been considered in the final model.

Example:
First of all, this regression table reports the results of 4 models (see the 4 grey columns).
The first three models describe the effect on wages, whereas the fourth model describes the effect on working hours (the dependent variable). To estimate wages, the first model only considers the hours worked, IQ, education (measured in years), work experience (measured in years), and age. Additionally, the second model also takes into account whether an individual is married (encoded 1 if married and 0 otherwise), black (encoded 1 if black and 0 otherwise), and how many siblings an individual has. Finally, the third model does not take the number of siblings into account, but instead whether an individual is living in a rural area. To estimate the number of hours worked, the fourth model considers the IQ, education, work experience, age, as well as a dummy variable for being married. At the bottom of the table, we can see that all models include the full dataset of 935 observations, so there are no differences between the samples.

2. Detailed results on the independent variables
After we have got an overall impression of what has been estimated using which variables, the next step is to look at the more detailed results on the independent variables. As mentioned earlier, they are supposed to explain the value of the dependent variable. But do they really help in explaining it? To assess the effect of each independent variable, regression tables provide a coefficient for each one. Note that the interpretation of these coefficients crucially depends on the units and values of the dependent and independent variables. If any of the variables is expressed as a logarithm or as a binary variable, the interpretation changes! Finally, p-values, standard errors, t-statistics, and confidence intervals indicate the significance and precision of the individual coefficient estimates.

Example:
Let's have a look at our example again and see what our estimations suggest. First, let's only consider the models that estimate wages (coefficients with a yellow background): all of them suggest that the number of hours worked, IQ, education, work experience, and age have a positive effect on wages. The second and third models suggest that married people have higher wages on average, but black people receive lower wages. The second model suggests that individuals with siblings have lower wages, but this effect is not significant, as indicated by the missing asterisks. Finally, with the third model, we learn that individuals living in urban areas tend to have higher wages.

With the fourth model, however, we do not find any significant variables that predict the number of hours worked. Having a high IQ, a good education, lots of experience, old age, or being married cannot tell us how many hours an individual works. In this case, we should come up with new independent variables, like occupation, that are better suited to predicting the number of hours worked.

3. Test statistics of the model
Lastly, numerous test statistics allow you to assess the overall quality of a model. Some of them give hints on the robustness and validity of a model, and others check whether the assumptions hold. The most prominent test statistics are the R2 and the adjusted R2. The R2 indicates how much of the variation in the dependent variable is explained by the model. The adjusted R2 does the same, but it also takes the number of variables into account and favors parsimonious models, i.e. models with few independent variables. Please do not assess the quality of a model solely on the R2 or adjusted R2.
Although they have their benefits, it would be naive to pay too much attention to them.

Example:
The R2 and adjusted R2 are around 20% for the three models predicting wages. As we do not find any significant variables for the number of hours worked, the last model hardly explains 1% of the variation in hours worked.

Summary
Regression tables are a great way to communicate the results of linear regression models. Do not be intimidated by the number of statistics they provide, but read them in a systematic way:
First, understand what has been estimated using which variables. Find out how the variables have been encoded and what size the dataset has.
Second, have a closer look at the effects of the individual independent variables. Which variables have a positive or negative effect on the dependent variable, and which of them are significant?
Finally, check the overall quality of the model.
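If you want to produce a table like this yourself and practice reading one, here is a minimal sketch using statsmodels; the dataset and variable names are invented for illustration:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy dataset: wage as a function of education and experience
rng = np.random.default_rng(0)
df = pd.DataFrame({"educ": rng.integers(8, 20, 200),
                   "exper": rng.integers(0, 30, 200)})
df["wage"] = 50 * df["educ"] + 20 * df["exper"] + rng.normal(0, 100, 200)

X = sm.add_constant(df[["educ", "exper"]])  # independent variables + intercept
model = sm.OLS(df["wage"], X).fit()

# Prints coefficients, standard errors, t-statistics, p-values, R2, n, ...
print(model.summary())

The printed summary contains exactly the three parts discussed above: the model description at the top, one row per independent variable in the middle, and the test statistics around them.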
Creating Your First Offline Payment Wallet With Django, Part-2

In part 1 of the series, we set up Django and the databases correctly. Without wasting any further time, let's get our hands dirty by writing some code.

To make the wallet functional and private, we need to provide login functionality so that our users can log in and store their information and details. So we will go on to build the login functionality. For this, let us create an app called accounts in our Django project: get into the project directory and type

python manage.py startapp accounts

The app will be created, and then we have to register this application in Django's settings.py, after which the INSTALLED_APPS section will look like this:
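The original snippet is cut off at this point; a completed version would typically be Django's default apps plus our new accounts app (a sketch, assuming an otherwise unmodified settings.py):

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'accounts',  # our new app, which will hold the login functionality
]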