Angular Best Practices

If you have been around the development field in the last few years, I am sure you have come across the name Angular at least once. In this article, we are not only going to talk about what Angular is, but also about some of the best practices a developer should keep in mind to keep a project efficient and the code easier to manage and debug.

What is Angular?

Angular is a modern MVVM framework and platform that is used to build enterprise single-page web applications (or SPAs) using HTML and TypeScript. Angular itself is written in TypeScript. It implements core and optional functionality as a set of TypeScript libraries that you import into your apps. Angular is an opinionated framework, which means that it specifies a certain style and certain rules that developers need to follow while developing apps with Angular; therefore, you need to learn Angular and the various pieces that make up an Angular app.

Angular vs AngularJS

A lot of beginners get confused by the many different versions out there for what looks like the same framework. You must have heard of AngularJS, Angular 2, Angular 4, Angular 5, Angular 6 and now Angular 7. In reality, there are two different frameworks: AngularJS and Angular.

AngularJS is the JavaScript-based web development framework which was first released in 2010 and is maintained by Google. Later, in September 2016, Angular 2 was announced, a complete rewrite of the whole framework using TypeScript, a superset language of JavaScript. Since modern browsers (as of now) do not understand TypeScript, a TypeScript compiler or transpiler is required to convert the TypeScript code to regular JavaScript code.

Why Angular?

Why do we use Angular, which uses TypeScript as the primary programming language, when it comes with the overhead of transpilation? The answer lies in the list of advantages that TypeScript offers over traditional JavaScript. With TypeScript, developers can use data types, syntax highlighting, code completion and all the other modern features that help them code faster and more efficiently. Since TypeScript is an object-oriented programming language, developers can use classes, objects, inheritance and the other features that object-oriented languages offer.

Angular, therefore, is the framework that uses TypeScript as the primary programming language. Since the Angular team opted for semantic versioning, Angular 2, 4, 5, 6 and 7 are all versions of the same framework, each version being better than the previous one, while AngularJS is a completely different framework that uses JavaScript as the primary programming language.

Recommended Best Practices

Next, we will go through some of the best practices that a developer should follow to keep a project easier to manage and debug. Although there are no hard rules regarding these practices, most developers follow them. You may have your own coding style, and that is completely fine too.

Using the power of TypeScript

Angular is written using TypeScript, which means that all the code you write to build your web app will also be written in TypeScript. One key point to note here is that all JavaScript code is valid TypeScript code. While it is recommended that you use TypeScript all along, you can even use plain JavaScript if you wish.

Using TypeScript offers certain advantages: you can define types on properties and variables, you can define classes and interfaces, and you can utilize the power of intersections and unions.

Specify types for all variables. This helps prevent other developers working on the project from storing the wrong kind of information in a variable. For example, if a variable called age is of the type number, it can only store numerical values. If another developer writes code to store a value of any other type, for example a string, the code editor will warn the developer and the compilation process will also spot the error. These errors won't leak into the runtime.
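As a minimal illustration (a hypothetical snippet, not from the original article), a typed variable lets the compiler reject wrong values before the code ever runs:

// 'age' may only hold numbers.
let age: number = 25;

age = 30;          // fine
// age = 'thirty'; // compile-time error: Type 'string' is not assignable to type 'number'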
Classes in object-oriented programming languages are used to group the data members and member functions related to a particular entity in the application, be it a product, a user or something else. Always create classes to represent these entities in your applications and encapsulate all the functionality related to them within those classes. This helps keep the code consistent and maintainable when a large team is working on the project.

class User {
  first_name: string;
  last_name: string;
  email: string;

  constructor(f_name: string, l_name: string, email: string) {
    this.first_name = f_name;
    this.last_name = l_name;
    this.email = email;
  }

  sendMail() {
    // method to send an email to the user
  }
}

Consistency brings productivity into the picture as well. Developers don't have to worry as much about whether they are doing things the "right way".

Create models to represent objects in your data that are more complex than a simple string or number. Using models lets you lean on the help offered by your code editor: most errors introduced by typos can be prevented, because you do not need to remember the names of the properties inside the model; the editor suggests them as you type.

If you have worked with MVC frameworks before, you know that you declare types in a model which can then be reused throughout the rest of the application. With TypeScript, front-end applications can benefit from strongly typed models too. Consider a simple model for a user as below.

export interface User {
  name: string;
  age: number;
}

According to the Angular Style Guide, models should be stored under a shared/ folder if they will be used in more than one part of your application.

A lot of the time, developers depend on APIs and the data returned by them. They retrieve the data, process it and present it to the user in a nice, clean UI. But APIs are not always perfect: a field may not always contain the expected values, or the values may not arrive in the expected format.

TypeScript allows intersection types. These let you create variables of a type that is a combination of two or more types. Let's have a look at an example.

interface Student {
  roll_number: number;
  name: string;
}

interface Teacher {
  teacher_id: string;
}

type A = Student & Teacher;

const x: A = {
  roll_number: 5,
  name: 'Samarth Agarwal',
  teacher_id: 'ID3241'
};

In the above code, the new type A is a combination of the types Student and Teacher and therefore contains the properties of both.

While an intersection creates a new type that combines the provided types, a union, on the other hand, allows a value to be of any one of several types. Let's look at an example.

age: string | number;

In the code snippet above, the age variable can store a value of either the type string or the type number. So, both the following assignments are fine.

this.age = 12;
this.age = 'twelve';
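When a value can be one of several types, you usually narrow it before performing type-specific operations. A minimal sketch of narrowing a union with a typeof check (the function and its messages are hypothetical, not part of the original article):

// Hypothetical helper: accepts an age given either as a number or as text.
function describeAge(age: string | number): string {
  if (typeof age === 'number') {
    // here the compiler knows 'age' is a number
    return `You are ${age} years old`;
  }
  // here the compiler knows 'age' is a string
  return `You are ${age.toUpperCase()} years old`;
}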
Use the Angular CLI

The Angular CLI is one of the most powerful tools available when developing apps with Angular. It makes it easy to create an application that works right out of the box and that already follows best practices.

The Angular CLI is a command-line interface tool that is used to initialize, develop, scaffold, maintain and even test and debug Angular applications. You can use the tool directly in a command shell.

Instead of creating files and folders manually, try to always use the Angular CLI to generate new components, directives, modules, services, pipes or even classes. The CLI updates the required module files, generates the required files and folders, and also creates the files needed for unit testing components and directives. This maintains a uniform structure across the application and makes the project easier to maintain, update and debug.

The CLI also lets you run a development server to test the application locally, and then build the production version of the application for deployment.

To stay updated with the latest version of the CLI, you can use the following command.

npm install @angular/cli -g

You can also check the version installed on your system using the following command.

ng version

Naming Conventions

According to the Angular style guide, naming conventions are hugely important to maintainability and readability. The following general naming conventions are specified by the style guide.

- Do use consistent names for all symbols.
- Do follow a pattern that describes the symbol's feature, then its type. The recommended pattern is feature.type.ts.
- Do use dashes to separate words in the descriptive name.
- Do use dots to separate the descriptive name from the type.
- Do use conventional type names, including .service, .component, .pipe, .module, and .directive. Invent additional type names if you must, but take care not to create too many.

Why? Naming conventions help provide a consistent way to find content at a glance. Consistency within a project is vital. Consistency with a team is important. Consistency across a company provides tremendous efficiency.

Why? The naming conventions should simply help you find desired code faster and make it easier to understand.

Why? Names of folders and files should clearly convey their intent. For example, app/heroes/hero-list.component.ts may contain a component that manages a list of heroes.

The purpose of the above guidelines is to ensure that, just by looking at the filename, one can infer the purpose and type of the contents of the file. For example, files named hero.component.ts and hero.service.ts can easily be identified as the component and the service for something called a hero in the project, respectively.

Note: If you are using the Angular CLI (which you always should), the file names are taken care of automatically by the CLI.

Single Responsibility Principle

The single responsibility principle is a computer programming principle which states that every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that this responsibility should be entirely encapsulated by it.

Apply the single responsibility principle (SRP) to all components, services, and other symbols.
This helps make the app cleaner, easier to read and maintain, and more testable.

According to the style guide, functions should be limited to 75 lines of code; any method larger than that should be broken down into smaller methods. Each file, in turn, should be limited to 400 lines of code.

Creating one component per file makes it far easier to read and maintain components as the application grows over time. It also helps avoid collisions in source control when a team is working on the project, and it prevents the hidden bugs that often arise when multiple components are combined in one file and end up sharing variables, creating unwanted closures, or coupling unnecessarily with each other's dependencies. A single component can be the default export for its file, which facilitates lazy loading with the router.

The key is to make the code more reusable, easier to read, and less mistake-prone.

Breaking down Components

This is really an extension of the single responsibility principle, applied not just to code files or methods but to components as well. The larger a component is, the harder it becomes to debug, maintain and test. If a component is becoming large, break it down into multiple, more manageable, smaller components, dedicating each one to an atomic task. In such a situation, if something goes wrong, you can easily spot the erroneous code and fix it, spending less time locating the issue.

Look at the following template for a component called PostComponent, which displays the various parts of a post, including the title, body, author information and the comments made by people on the post.

<div>
  <h1>
    <post-title></post-title>
  </h1>
  <post-body></post-body>
  <post-author></post-author>
  <post-comments></post-comments>
</div>

The component is built from several child components, and each of them handles only a small task. We could have had one gigantic component instead of four separate ones, but that would have been much harder to maintain and read.

It is also a good practice to keep minimal code within a component. A component should handle the job of displaying data to the user in a nice, clean way, while the responsibility of data retrieval should be delegated to a service. The component should receive its data either as an input or through such a service.

Change Detection Optimizations

When you scaffold a brand new Angular application, the change detector seems magically fast. As soon as you change the value of a property on the click of a button, the view updates almost in real time, without any delay. But as the application grows, things may start to lag a bit. If you have drag & drop in your interface, you may find that you are no longer getting silky-smooth 60 FPS updates as you drag elements around.

At this point, there are three things you can do, and you should do all of them:

- Use NgIf and not CSS. If DOM elements aren't visible, instead of hiding them with CSS, it is a good practice to remove them from the DOM by using *ngIf.
- Make your expressions faster. Move complex calculations into the ngDoCheck lifecycle hook, and refer to the calculated value in your view. Cache the results of complex calculations for as long as possible.
- Use the OnPush change detection strategy to tell Angular there have been no changes. This lets you skip the entire change detection step for most of your application most of the time and prevents change detection from running when it is not required at all. It saves Angular the trouble of checking every property of every component and directive for changes on each cycle, and therefore improves the performance of the application considerably (a minimal sketch follows below).
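For illustration, switching a component to the OnPush strategy is a one-line change in its decorator; the component is then re-checked only when its input references change, when one of its events fires, or when change detection is triggered explicitly. The component and property names below are assumptions for the sake of the sketch, not code from the original article:

import { Component, ChangeDetectionStrategy, Input } from '@angular/core';

@Component({
  selector: 'post-title',
  template: '<h1>{{ title }}</h1>',
  // Re-render only when the 'title' input reference changes,
  // instead of on every application-wide change detection cycle.
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class PostTitleComponent {
  @Input() title = '';
}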
Build Reusable Components

This is an unwritten rule: build reusable components. If there is a piece of UI that you need in many places in your application, build a component out of it and use that component everywhere.

This will save you a lot of trouble if, for some reason, the UI has to change a little. Instead of updating the UI code in all 100 places, you change the code in the component, and that is it; the change is automatically reflected in every usage of the component throughout the application.

Reusing a component in multiple places may require the component to adapt itself to the context it is used in. For this, you may have to use property and event bindings to pass inputs to the component and receive output events from it, respectively.

API code in a Service

Components consume services. A service is typically a class with a narrow, well-defined purpose: it should do something specific and do it well. "Service" is a broad category encompassing any value, function, or feature that an app needs.

This again connects to the single responsibility principle and to keeping components lean. In Angular, there is a clear line of separation between components and services, and this is done to increase modularity and reusability. When a component's view-related functionality is separated from other kinds of processing, the component classes stay lean and efficient.

Certain tasks, such as fetching data from the server, validating user input, or logging to the console, can be delegated to services. If we define these tasks in a service, we make them reusable across all the components of the application and beyond. Angular doesn't enforce these principles, but it does help you follow them by making it easy to break your application logic down into services and by making those services available to components through dependency injection.

export class APIService {
  get() {
    // code to get the data from the web service or API
  }

  post(data: any) {
    // code to send the data to the web service or API
  }

  update(data: any) {
    // code to update the data
  }
}

The above code is a representation of a service that interacts with an external API on behalf of the application. It can be used to get data from the API, send data to the API and update existing data on the server. Other components use this service to handle the sending and receiving of data.

Using trackBy in NgFor

The NgFor directive is used to loop over a collection (or an array) in your application to render a piece of UI repeatedly. The following snippet is a typical implementation of rendering a collection using the NgFor directive.

<ul>
  <li *ngFor="let item of collection;">{{item.id}}</li>
</ul>

And let's use the following code in the TS class to change the elements in the collection every second.

import { Component } from '@angular/core';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  collection = [{id: 1}, {id: 2}, {id: 3}];

  constructor() {
    setInterval(() => {
      let randomIndex = parseInt(((Math.random() * 10) % 3).toString());
      this.collection[randomIndex] = {
        id: parseInt(((Math.random() * 10) % 10).toString())
      };
    }, 1000);
  }
}

If we change the data in the collection, for example as a result of an API request or some other logic within the application, we have a problem: Angular cannot keep track of the items in the collection (it is not able to identify each individual element) and has no knowledge of which items have been removed, added or changed.

As a result, Angular needs to remove all the DOM elements associated with the data and create them again. That means a lot of DOM manipulation, especially for a big collection, and as we know, DOM manipulation is expensive. This is fine if you have a small app or the collection only has a few elements, but as the application grows, it can cause performance issues.

The solution is to always use trackBy whenever you use the NgFor directive. The trackBy option takes a function that receives the index and the current item as arguments and must return a unique identifier for that item; Angular then uses this identifier to recognize each element in the collection. In the following snippet, we track items by their id through a trackById method defined on the component (a sketch of such a method follows below).

<ul>
  <li *ngFor="let item of collection; trackBy: trackById">{{item.id}}</li>
</ul>

If you run the above code and inspect the DOM using the Google Chrome inspector, you will find that only the list item that has changed is re-rendered; all the other list items are left untouched.
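The trackById method referenced in the template above is not part of the original snippet; a minimal sketch of it, added to the component class, might look like this (the name and the item shape are assumptions):

// Inside the component class (e.g. AppComponent):
// returns a stable, unique identifier for each item so Angular can
// match existing DOM nodes to collection entries across updates.
trackById(index: number, item: { id: number }): number {
  return item.id;
}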
Lazy Loading Modules

Lazy loading is a technique in Angular that allows you to load parts of your application (JavaScript modules) asynchronously, based on the currently activated route. It is a feature that can help a lot with large and heavy applications.

Since lazy loading breaks the application down into multiple feature modules (logical chunks of code) and loads those modules only when the user actually needs them (depending on where the user navigates within the application), it reduces the initial load time of the application: fewer kilobytes are downloaded when the application is first loaded, and more chunks are fetched as and when they are required. The Angular router has full support for lazy loading Angular modules.
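For illustration, a route that lazily loads a feature module might be declared roughly as follows; the module name and path here are hypothetical, and in Angular 8 and later the same thing is usually written with a dynamic import() instead of the string reference shown here:

import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  {
    path: 'admin',
    // The AdminModule chunk is fetched only when the user navigates to /admin.
    loadChildren: './admin/admin.module#AdminModule'
  }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule {}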
Using Async Pipe

Use the async pipe whenever possible and fall back to .subscribe only when the side effect is an absolute necessity.

You must have heard that the AsyncPipe unsubscribes from Observables as soon as the component gets destroyed. But did you know that it also unsubscribes as soon as the reference of the expression changes? That's right: as soon as we assign a new Observable to the property the AsyncPipe is bound to, the AsyncPipe automatically unsubscribes from the previously bound Observable. Not only does this keep our code nice and clean, it also protects us from very subtle memory leaks.

Environment Variables

When we build projects using Angular (or any other technology, for that matter), it is common to have multiple versions of the application that target different environments, i.e. development and production. Each environment will have some unique environment variables, such as API endpoints, app versions, datasets, and so on. Angular provides environment configurations to declare variables unique to each environment.

By default, Angular supports two environments: production and development. Inside the environments directory there are two files, environment.ts and environment.prod.ts. The first file contains the environment configuration and variables for the development environment, while the second one contains the same for production. You can add more environments, or add new variables to the existing environment files.

// environment.ts environment variables
export const environment = {
  production: false,
  api_endpoint_root: 'https://dev.endpoint.com'
};

// environment.prod.ts environment variables
export const environment = {
  production: true,
  api_endpoint_root: 'https://prod.endpoint.com'
};

Maintaining these variables helps a lot when something changes, for example the API endpoint URL: when you build your application for a particular environment, the corresponding values are applied automatically.

Always Document

Last, but not least: always document your code as much as possible.

Writing comments within the code helps the other developers involved in the project understand the purpose and logic of the written code, makes the code easier to manage, and adds to its readability. It is a good practice to document the use and role of each variable and method. For methods, each parameter should be described using multi-line comments, and it should also be stated what task exactly the method accomplishes.

What's new?

We do not know much about the latest version of Angular, 8 (8.0.0-beta.8), yet, but like any other major update this version will be better and faster, and will produce smaller builds than the previous versions. It brings improvements and fixes over the previous version of Angular. The current major version is still Angular 7 (as of March 2019).

The new version, Angular 8, will feature a new renderer called Ivy. However, this support is most likely to be only experimental at first, with full support added in a later major release. Some other major improvements are listed below.

- Added support for TypeScript 3.2
- Added a Navigation Type available during navigation in the Router
- Added pathParamsOrQueryParamsChange mode for runGuardsAndResolvers in the Router
- Allow passing state to routerLink directives in the Router
- Allow passing state to NavigationExtras in the Router
- Restore the whole object when navigating back to a page managed by the Angular Router
- Added support for SASS
- Resolve generated Sass/Less files to .css inputs

The release cycle of Angular provides for a new major release every six months. Thus, Angular 8 should follow in April or May 2019. Until then, development continues first with minor releases of Angular 7 and then with the beta versions of Angular 8.

Best Python Practices

What is a "Best Practice" in any programming language?

Programming is as much an art as it is science and logic. Hence, even though programming style or technique can vary from developer to developer, there are some ground rules that are usually followed across the industry. These rules are laid out with the view of making the programming experience more uniform across a wide variety of developers. Of course, the logic developed by one developer to achieve a certain task might not match the logic developed by another for the same task. But the "format" of writing code should be similar, just as the general format of writing a business email is fixed.

A "best practice" of any programming language refers to the recommended way of writing a program, such that it is uniform across the globe and any developer other than the original author is able to easily understand and modify the program.

Why is it necessary to follow "Best Practices"?

In the industry, more than one person is involved in the development of any given project. In such a case, it is of utmost importance that every member of the current team, or any future employee working on the same project, is able to understand the flow of the program and what the previous developer has done.

Since development is a dynamic process, even if a project is completed for the time being, some other developer might be assigned to make updates or add features to the same project at a later date. That person then has to read through and understand the code written by the previous author in order to modify it. Hence, to make it efficient and easier for developers to read and understand programs written by others, it is necessary to follow a certain set of recommended "best practices".

Best practices for Python

There are a lot of best practices that are followed across the industry. Covering all of them is beyond the scope of this article. However, listed below are a few of the most common practices that any beginner developer should adhere to.

1. Following a proper naming convention

When a developer sees a piece of code, he or she should instantly be able to understand which name refers to what kind of data. A proper naming convention is thus important, as it allows the programmer to easily determine the purpose of a particular user-defined instance or variable.

Rules for a proper naming convention:

- General: Names should not be generic; they should have relevant meaning, and be short yet descriptive. For example, my_variable, x and list_for_storing_word_counts are bad names, while row_dict, product_id and word_counts are good names.
- Instance variables: In Python, variables are named in lower case with an underscore ( _ ) separating each word, similar to the examples shown above. If the variable is not public, its name should start with an underscore, for example: _private_variable.
- Constants: Constants are not commonly used; however, they are supposed to be named in all caps with words separated by an underscore. Example: PRODUCT_ID.
- Functions: Similar to variables, function names should also be lower case with words separated by an underscore.
- Methods: In Python, methods are functions which belong to a class. Their naming scheme is the same as for normal functions, with the exception of private methods, which start with an underscore, just like private instance variables.
- Classes: Since Python is an object-oriented programming language, classes are very commonly used. They are named in upper camel case. Example: NeuralNetworkClass.
- Packages: These are modules that are imported into a program to add functionality. They should be named in lowercase, with an underscore separating words, and are preferably single-worded.

2. PEP 8 Style Guide

The developers of the Python community found that, even when following general best programming practices, the Python code that different developers wrote still varied widely in style. This created inconvenience for the community and for anyone trying to read and/or modify existing code.

Hence, the community came up with a set of guidelines which lay out a particular design flow that developers can adhere to in order to keep code looking as uniform as possible. This set of guidelines, designed specifically for Python, is described in PEP 8 of Python's official documentation.

It is an extensive list of rules and dos and don'ts, most of which are beyond the scope of this article. Here, we have listed a few of the commonly used PEP 8 style conventions.

PEP 8 Rules

Indentation: Traditionally, indentation was used simply to improve code readability. But Python is a language where indentation is used to differentiate between code blocks, and hence indentation is a necessity rather than a luxury. The problem arises when different developers use a different number of spaces or tabs for an indent. PEP 8 therefore defines one indentation level as four spaces and does not recommend the use of tabs at all. Also, a mixture of tabs and spaces in a Python script throws a syntax error.

Additionally, PEP 8 also describes how to lay out long, multi-line function calls and definitions. Some examples are shown below.

Do:

# Aligned with opening delimiter.
foo = long_function_name(var_one, var_two,
                         var_three, var_four)

# Add 4 spaces (an extra level of indentation) to distinguish arguments from the rest.
def long_function_name(
        var_one, var_two, var_three,
        var_four):
    print(var_one)

# Hanging indents should add a level.
foo = long_function_name(
    var_one, var_two,
    var_three, var_four)

Don't:

# Arguments on first line forbidden when not using vertical alignment.
foo = long_function_name(var_one, var_two,
    var_three, var_four)

# Further indentation required as indentation is not distinguishable.
def long_function_name(
    var_one, var_two, var_three,
    var_four):
    print(var_one)

Maximum Line Length: PEP 8 limits the maximum line length to a fixed number of characters. This is done to increase code readability and to keep the program from running off the page, which is inconvenient for the reader.

- For normal program code, it is recommended not to go beyond 79 characters per line.
- For docstrings and comments, it is recommended to keep lines within 72 characters.
- Lines that would go beyond the limit can be wrapped across more than one line, as shown in the "Indentation" examples above.

Import: PEP 8 also covers how to import modules. Even though more than one module can be imported on a single line by separating them with commas, it is not recommended. PEP 8 suggests importing one module per line, as shown in the example below.

Do:

import numpy
import pandas
import matplotlib

Don't:

import numpy, pandas, matplotlib

The original PEP 8 documentation

The guidelines mentioned above only scratch the surface of the entire PEP 8 list. Again, it should be remembered that following the PEP 8 guidelines is not mandatory, but it is strongly recommended.
3. Writing Modular and Non-Repetitive Code

Let us discuss modular and non-repetitive coding practices. Modular code means writing code in short, understandable units. This improves code readability and reduces code repetition. Both can be achieved quite easily through the proper and frequent use of classes and functions.

We have already seen the use of classes in OOP practices. Below is an example of the use of functions.

Example:

# Code example of a Python function
def find_odd_nums(num_list):
    return [num for num in num_list if num % 2 != 0]

random_numbers = [2, 3, 5, 8, 34, 62, 6, 3, 4, 23, 13, 34, 0, 39]
print(find_odd_nums(random_numbers))

Output:

[3, 5, 3, 23, 13, 39]

In the above example, we can see that the function is defined once and called later in the code. In this particular case it is used only once, so the benefit is small. But often a particular operation needs to be performed multiple times in the same program. At such times, it is recommended to define one function for it and then call it as many times as required.

This has many advantages:

- Easier code readability
- Code modularity
- Non-repetitive code
- Efficient use of variables
- Reduced coding and debugging time

Using functions in code is not mandatory, but it is highly recommended. It is one of the most fundamental "good" programming practices.

4. Using Object-Oriented Programming

Let's start with what Object-Oriented Programming, more popularly known as OOP, is. It is a way of programming which revolves around "objects" and the data associated with them. It differs from conventional procedure-oriented programming in that the primary focus is on "objects" instead of "actions". This might sound insignificant, but it provides a lot of advantages in terms of programming methodology, data management, data hiding, code modularity and so on.

Object-Oriented Programming is implemented with the use of classes and their methods. Let's try to understand this in brief. One class can have multiple "objects" or "instances". Methods are class functions which are accessible through an object or instance of that class.

Example of a class with methods, and of creating objects:

# Code example of a Python class
# Create a class
class Employee:
    def __init__(self, name, age, designation):
        self.name = name
        self.age = age
        self.designation = designation
        print("Employee created!")

    def modify_details(self, name, age, designation):
        self.name = name
        self.age = age
        self.designation = designation
        print("Employee details modified!")

    def show_employee(self):
        print("Name:", self.name)
        print("Age:", self.age)
        print("Designation:", self.designation)

# Create two instances of the class
employee_1 = Employee('John Doe', 35, 'Data Scientist')
employee_2 = Employee('Natasha', 29, 'DevOps')

# Show the details of the employees
employee_1.show_employee()
employee_2.show_employee()

# Modify the details of employee 1.
employee_1.modify_details('John Doe', 31, 'Data Scientist')

# Employee 2 details remain unchanged, but employee 1 has been modified
employee_1.show_employee()
employee_2.show_employee()

Output:

Employee created!
Employee created!
Name: John Doe
Age: 35
Designation: Data Scientist
Name: Natasha
Age: 29
Designation: DevOps
Employee details modified!
Name: John Doe
Age: 31
Designation: Data Scientist
Name: Natasha
Age: 29
Designation: DevOps

How does OOP affect the development process?

Let's answer the question by explaining the above code snippet. As shown, a class is created which contains a few methods (functions). Note that we define the class only once, but we create two different instances (objects) of it, named employee_1 and employee_2. These two employees are initialized with their relevant details, yet they are completely independent of each other: even when the details of employee_1 are modified, employee_2 remains unaffected.

This was just a sneak peek into the advantages of using classes. Implementing the above code without OOP would mean keeping a separate set of variables to store the details of each employee, which becomes very difficult to manage as the number of employees increases.

Now that we know what the consequences of not using OOP can be, let's list the advantages of using it:

- Code modularity
- Code reusability
- Polymorphism
- Data encapsulation
- Inheritance

Hence, using OOP concepts in Python is highly recommended when working on larger projects.

5. Proper Commenting and Documentation

Comments are parts of the code which are not executed by the interpreter; they are statements that are ignored during the execution of the program. How comments are declared differs from one programming language to another. In Python, a line starting with a hash ( # ) is considered a comment. The main purpose of comments is to convey information to the reader in a simpler way than the program itself can. Commenting comes in different forms:

- Single-line comments
- Multi-line comments
- Docstrings

Example of a single-line comment:

# Define a List of Brands
brands = ['Apple', 'Google', 'Netflix', 'Amazon', 'Ford']

Example of a multi-line comment:

# Welcome to Python Programming
# This is a dynamically typed object-oriented language
# Let's define some brand names
brands = ['Apple', 'Google', 'Netflix', 'Amazon', 'Ford']

Example of a docstring:

def find_odd_nums(num_list):
    """
    This function is used to find and list out the odd numbers from a given list.
    It takes a list of numbers as argument.
    It returns a list of odd numbers.
    """
    return [num for num in num_list if num % 2 != 0]

As we can see from the above code snippets, single-line comments are used to explain short segments of a program, whereas a multi-line comment describes a bigger segment of code. Finally, docstrings are used in class and function definitions; they explain the overall functionality of the class or function and describe its parameters and return values.

The necessity of documentation:

Documentation may seem unnecessary, but it plays a key role in programming. A well-documented program is easier to read and understand than an undocumented one.

Reading a program line by line in order to understand or modify it is tedious. In the industry, programs often reach thousands of lines. In such a case, if at a later date any developer (even the original author) tries to debug or modify the code, it becomes a challenge to understand the program, since it is often difficult to infer the intended functionality of every part of the code. This is where proper documentation comes in: a well-documented program is easy to read and understand, and hence easier to debug, compared to an undocumented one.

Efficient code documentation:

Documentation does not mean writing down what each individual line does.
For example, this comment adds nothing:

# Print about Python
print('Python is an Object Oriented Language')

Instead, documentation is supposed to give insight into the bigger picture of WHY a certain piece of code is written. For example:

# create numpy array of zeros same as df_log to initialize the output
output_predict = np.zeros((df_log.shape[0] + future_day, df_log.shape[1]))

In any development task, extensive documentation practices are followed by all companies. Good documentation practices are recommended for any programming language and are not limited to Python. Even though documentation is not mandatory, it is highly advisable.

6. Using Virtual Environments

Python is a language which has multiple versions (Python 3 and Python 2) and relies heavily on numerous libraries. Most programs use more than one library at a time, and each library has specific version requirements and interdependencies with others. Moreover, a certain task may require a library which is not compatible with other pre-existing libraries. Additionally, a developer may need more than one version of Python installed on the same machine at a given time. For example, TensorFlow, a deep learning library, does not support Python 3.7 at the time of writing this article; it supports up to Python 3.6. Yet, at the same time, the developer might want to use the newer features of Python 3.7 for other projects.

It is not practically feasible to install and uninstall the interpreter and all the associated libraries repeatedly every time a different version is required. Hence, virtual environments were introduced. These are self-contained environments which are completely independent of the system interpreter. One virtual environment can run Python 2.7 and another can run Python 3.6 with TensorFlow on the same machine at the same time, while both remain completely independent of each other. This is the main reason why virtual environments are used.

How to set up a virtual environment

There are two common ways to set up a virtual environment. One is the "virtualenv" package, which can be installed via pip. The other is the built-in "venv" module that ships with Python 3 (Anaconda users typically rely on conda environments instead). Since Anaconda is not our primary focus here, we will stick with virtualenv.

The following steps assume that you have installed Python successfully and have access to pip (which is installed automatically with Python). If not, please refer to the official Python installation guide first.

- Open your console and type: pip install virtualenv
- Create a directory where you want to make your virtual environment: mkdir folder-name
- Navigate to that folder: cd folder-name
- Create a virtual environment: virtualenv env-name
- To use this virtual environment, navigate to its Scripts folder: cd env-name/Scripts
- Activate the virtual environment: activate

Now the virtual environment is set up, and it can be used as its own independent interpreter. No changes made here will affect the system interpreter.

What is "requirements.txt"?

Since Python depends heavily on external libraries and packages, more often than not the programs a developer creates require a certain set of libraries, with specific versions. One program might depend on numerous libraries. Hence, if any other developer wants to run the same program on a local machine, he or she would have to install all of those libraries manually, which is very tedious.

"requirements.txt" is a text file which contains the list of all the libraries required to run a particular program successfully. A sample file is shown below.
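A typical requirements.txt simply pins one package per line to a specific version; the package names and version numbers below are purely illustrative, not taken from the original article:

numpy==1.16.2
pandas==0.24.2
matplotlib==3.0.3
requests==2.21.0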
The main advantage is that another developer can simply run the command pip install -r requirements.txt, which installs all the required dependencies automatically. This makes the process much more streamlined.

How "requirements.txt" relates to best practices and virtual environments

Adding a "requirements.txt" file to any non-trivial project is highly recommended, since it serves two main functions:

- Any other developer can open the file and get an idea of which libraries have been used in the project.
- As discussed above, it makes it quite easy for a developer to install the dependencies.

Hence, it is considered a "best practice" in Python development.

Generating "requirements.txt" manually is not necessary. It can easily be generated with the command pip freeze > requirements.txt, which writes out a list of dependencies similar to the sample shown above.

However, this command lists all the libraries currently installed in the environment that pip is running from, and therefore includes libraries the project may not actually need. This is where virtual environments become a necessity: when each virtual environment is set up specifically for one project, it contains only the limited set of libraries required for that project. Creating the requirements.txt file from that active environment then produces a precise list of the necessary libraries, leaving out the ones that are not used by the project.

Conclusion

So far we have discussed some of the most common "best practices" in Python that are recommended across the development industry. This list does not include every "best practice", but it covers the most important ones and is therefore a good place to start. As developers gain more experience, they will naturally come to terms with most of the remaining "best practices" on their own.

Understanding DevOps – A Perspective

Over the years, DevOps has brought about a welcome change in the way product development and operations teams work and collaborate, paving the way for faster time-to-market and enhanced customer satisfaction.

Did you know that DevOps is as much about culture as it is about digital transformation and automation? You read that right. The underlying current of a successful DevOps practice is a culture of experimentation, shared learning and collaboration at its best between the product development and operations teams. Interestingly enough, this has paved the way for a new and better way of running the software development process.

So then, how do we define DevOps? Since DevOps has a lot to do with culture and not just technology, there can be many definitions. Here are a few:

- DevOps is the practice that brings together the participation of development engineers and operations admins with an objective to enhance and expedite the application management lifecycle, from design through development, testing, deployment, and monitoring. (Source: The Agile Admin)
- DevOps stands for a change in IT culture where the focus is on achieving rapid IT product/service delivery through the active collaboration of the development and operations teams, using DevOps practices such as Continuous Integration, Continuous Delivery, and Continuous Monitoring. (Source: New Relic)
- DevOps is defined as a set of practices that allows for automation of processes between the software development and operations teams, such that software applications are built, tested and released into the market more reliably and quickly. (Source: Atlassian)

The DevOps model is a continuous loop of active collaboration between the development and operations teams, such that efficiency is achieved even as time and effort are minimized.

Some core DevOps principles:

- Flow, which accelerates the work from development to operations to customers
- Feedback, which forms the backbone of the practice and enables the creation of a better product
- Continuous learning, which helps enhance the skill sets of DevOps engineers and redefines the way teams operate
- Automation, of developmental tasks first and of the entire IT infrastructure landscape over time

Why does DevOps matter?

The first thing about the DevOps practice is the shift in culture. It is a shift from the usual siloed functioning to a more participative and inclusive style, which is a welcome change for organizations. So why does DevOps matter, more than ever?

"Organizations that are best able to leverage agile and DevOps consistently have seen a 60% higher rate of both revenue and profit than their mainstream peers who have not." - CA Technologies survey (nearly 1,300 IT and business leaders)

The world of business is going through rapid change, thanks to the massive proliferation of the internet and the influence of software tools. Initially, software functioned as an accessory to business operations, but that is changing now; it is fast becoming an integral part of a business. Under these circumstances, there is an imminent need not only to develop quality software but also to update it periodically and deploy it in the quickest possible manner. This is where DevOps comes into play: it changes the way software applications are built, tested, deployed and monitored.

Software has become ubiquitous and essential. From bridging customer communication gaps to increasing operational efficiencies and delivering greater customer experiences, it plays a major role in business operations.
Software applications help businesses put their best foot forward, and DevOps helps build quality applications at a faster rate.

DevOps Market Size

As per a MarketsandMarkets report, the global market for DevOps was about $2.9 billion in 2017 and is projected to grow at a CAGR (Compound Annual Growth Rate) of 24.7% over a five-year period to touch $10.3 billion by 2023. Globally, while North America will account for the largest share of the DevOps market, Asia Pacific will see the highest growth rate during that period. These stats only reaffirm that DevOps is fast being embraced as a software development methodology by businesses across sectors.

The DevOps Life-Cycle

The DevOps lifecycle is all about getting two teams to come together and become "one". You might be wondering why we talked earlier about collaboration rather than unification. Well, the core philosophy of DevOps is that the development and operations teams collaborate so well that they function as one big extended team. The lifecycle is all about shared experimentation, learning and quality product development.

"DevOps is not a goal, but a never-ending process of continual improvement." - Jez Humble, co-author of The DevOps Handbook and Lean Enterprise

Let's look at the various stages of this lifecycle:

- Plan: It all starts with planning the kind of application or software that needs to be developed. This involves sketching out the requirements and the development process.
- Code: Coding is the beginning of the execution process towards building the software application, based on the client's requirements and the proposed plan of action.
- Build: The code delivered in the previous stage is used to build the application. Here, application development is broken into multiple "sprints" with shorter development cycles.
- Test: This is an important phase, as it helps developers comprehensively test the application for bugs and fixes. It validates the coding and building processes and allows for the release of the application.
- Release: The tested application is then released into the market and goes live. Sometimes companies release the newly developed application or functionality to a test audience; other times it is released directly to the entire market.
- Deploy: This is the phase where the code is deployed to an on-premise or cloud environment. Deployment is performed in a manner that does not interfere with the functioning of the application.
- Operate: Here, operations are performed on the code as and when needed.
- Monitor: The released application is monitored for its performance. The performance is noted, and key changes to be made are listed as per client requirements. These key changes then translate into further requirements, and we move back to the planning stage, closing a continuous loop between the development and operations teams.

In simple terms, the DevOps lifecycle combines the functions of the development and operations teams to roll out a better product, based on client requirements and feedback, in a quicker time frame.

DevOps Practices

What really differentiates DevOps from other methods of software development is the continuity of learning and the application of that learning to build a better product every time.
The elements, or best practices, of DevOps are what make it stand out as a software development methodology that is finding great acceptance across industries. Let's look at what these best practices are and how they help.

Continuous Integration (CI): It isn't enough for new code to be built; it needs to be integrated into the existing code or application. CI aims at integrating the code built by developers into the central code repository frequently, so that bugs, if any, can be detected more quickly. New code may also represent new functionality for the existing application, in which case it has to be integrated into the existing code as smoothly and as quickly as possible; continuous integration is the way to do this. Another thing to keep in mind is that the changed code should, as far as possible, not introduce any errors in the runtime environment. The ultimate objective of CI is to make code integration repeatable and easy, so as to reduce overall costs and discover integration problems early.

Continuous Development: This is the stage that differentiates the traditional waterfall method from the DevOps method: the software or application is developed continuously. Here, the software development process consists of multiple "sprints", or phases of development cycles, that are short and delivered in a faster time frame. This is the stage where the developers' technical skills are tested, as they focus on coding and building.

Continuous Delivery (CD): CD is an ongoing process in which the code is built, tested, configured and deployed to a production environment. It follows on from the CI stage: all code changes are deployed to the production environment after the build stage. A characteristic feature of CD is that the code is always in a ready-to-deploy state. Central to CD is the continuous delivery pipeline, a set of processes that use tools to compile, test and deploy code for additional or new features.

Microservices: Microservices is a valuable approach to application development in which a single large application is built as a suite of smaller services. Each service is independent of the others while being connected to them through interfaces. Another advantage of microservices is that problems in one service do not impact the other services, thereby reducing the number of failures during development. The commonality between microservices and DevOps is that both advocate decentralization of the work process, allowing small teams to decide on their own delivery. CI and CD can be used effectively to drive microservices, which in turn increase development velocity.

Infrastructure as Code (IaC): In simple words, IaC is the practice of managing the operations environment in the same way as code or an application is managed. Configuration changes are common in infrastructure management; manual configuration consumes a lot of resources and time, not to mention the high possibility of human error. DevOps addresses this by helping teams switch from manual to automated, programmatic configuration with the help of continuous integration, continuous monitoring, and version control. IaC also helps in the code review process: there is much more clarity, teams have a clear idea of the changes being made, and everybody stays on the same page.

Monitoring and Logging: Monitoring and logging form an integral part of quality product delivery and eventual customer satisfaction. They help organizations understand how the performance of an application impacts customer experience.
Data and logs from application usage are captured and analysed to understand how changes impact product quality and customer experience. Active monitoring is a must, especially as changes to IT infrastructure and applications become more frequent.

Communication and Collaboration: No matter how good a technology or methodology is, nothing can be achieved without people. The cultural takeaway from DevOps is that it enables better communication and collaboration between the two teams, without which the DevOps practice collapses. Open communication and collaboration lead to enhanced learning experiences and improve combined productivity. In fact, communication and collaboration often extend to other functional teams such as Sales, Marketing and Finance, bringing a paradigm shift in the way an organization works towards achieving its objectives. Now, isn't that a great advantage of the DevOps practice?

DevOps Tools and their relevance

For DevOps practices to succeed, excellent tools are needed. As we move into 2019, here are a few major DevOps tools:

- Git: The DevOps practice usually starts with a tool like Git. It is a version control system used for tracking changes to the code while developing a software application. It is also used to track changes to sets of files and helps coordination among programmers.
- Jenkins: A popular continuous integration tool, Jenkins is an open-source automation tool written in Java. It helps automate the non-human tasks of development and is easy to install and maintain. Other popular CI tools are Apache Gump, Bamboo by Atlassian, and CircleCI.
- Selenium: A portable, open-source framework used for testing web applications. Selenium is a suite of software, each part of which caters to a different need. There are four components to the tool: Selenium Integrated Development Environment (IDE), Selenium Remote Control, WebDriver, and Selenium Grid.
- Docker: Arguably the most popular container platform, Docker is open source and helps teams build, ship and run distributed applications. It helps users assemble apps from different components and work in collaboration. It also helps to schedule and orchestrate containers on machine clusters.
- Kubernetes: A widely popular container orchestration system and a game-changing tool. It helps run containers in a cluster and helps manage and monitor different applications effectively. Kubernetes is an open-source system originally built by Google and now managed by the Cloud Native Computing Foundation (CNCF).
- Chef and Puppet: As two top configuration management tools, Chef and Puppet have a lot in common and are used for deploying, configuring and managing servers. They are both open source and based on a master-slave architecture. The major difference is that Chef is largely considered a "developer's tool", allowing for more creativity, and is easier to deploy and scale, while Puppet is more of a "safe" tool that allows less liberty. Other popular configuration management tools are Terraform, Ansible, and SaltStack.
Nagios helps identify and resolve IT infrastructure issues and thereby helps avoid critical business process problems.

ELK Stack - A combination of three powerful tools - Elasticsearch, Logstash and Kibana - the ELK Stack provides added value by collecting and analysing logs to deliver useful insights. It is open-source, with free plug-ins available.

You can also learn more about DevOps tools here.

How DevOps Works

In purely technical terms, DevOps is a move away from the "waterfall" model of software development and is a more enhanced process than the "agile" method. We shall talk about the major differences between DevOps and Agile a little later. For now, let's understand how DevOps works.

The communication gap between the development and operations teams is bridged through the DevOps workflow. Silos are broken down, giving rise to greater collaboration, work efficiency and better product delivery. This ultimately leads to faster time-to-market and higher levels of customer satisfaction, giving organizations a much-needed competitive edge. At the core of the DevOps lifecycle is the "continuous" philosophy - a simple yet profound idea that keeps the DevOps lifecycle running and improving at each stage.

Benefits of DevOps and its Business Value

There are several benefits to adopting DevOps. We have observed some of these in the earlier sections; let us now look at the benefits that provide business value to organizations.

Stable Environment: DevOps involves breaking down silos, leading to better communication and collaboration between the two teams. This leads to a more stable operating environment - a much-needed move away from the "chaotic" environment we have been used to.

Increased Efficiency: A stable operating environment automatically brings about increased efficiency of individuals and teams, since it is less about competition and proving the other side wrong and more about achieving set objectives, together.

Higher Productivity: Efficient teams end up building better quality products, leading to lower failure rates, faster fixes and more time to focus on core functions. DevOps teams end up being more productive and less obstructive, which leads to a reduction in CAPEX and OPEX. "There is an average of 19% increase in revenue directly attributed to adoption of DevOps methodologies. Automating software benefits organizations in multiple value areas, financially" - CA Technologies Survey.

Faster time-to-market: Arguably the biggest advantage of DevOps is its ability to deploy software tools or applications with a faster time-to-market. This gives an edge over the competition while ensuring that customer and market feedback is quickly translated into product enhancements, resulting in customer satisfaction.

Better ROI: Better products lead to greater customer satisfaction, which can be the foundation of customer loyalty. Customer loyalty coupled with lower development and operational costs is a blessing for any organization, as it ramps up the ROI. Over time, this can lead to increased revenue and profitability.

Overall, DevOps increases the velocity at which software can be developed, tested and deployed to meet client requirements successfully. This is a great value add, especially in current times where customer demand is high and the need for a competitive edge has gained paramount importance.

DevOps Trends for 2019

As per a Statista report, DevOps adoption showed an increase of 17% in 2018 vs. only 10% in 2017.
The year 2019 will be an interesting one for DevOps as more organizations prepare to embrace the practice in a bid to enhance business operations. In that vein, let's look at some of the top DevOps trends for 2019.

Increased Focus on CD: While continuous integration is an essential component of the DevOps practice, the focus will shift more towards continuous delivery, where organizations look to automate the entire software development process to the extent possible. At a macro level, leadership is looking towards CD as the key component to deliver better products and thereby positively impact business performance and customer engagement in a more comprehensive manner.

Growth in Microservices: Microservices will see compounding growth; there is no doubt about that. As microservices do not create dependencies in software development, they augment the DevOps philosophy; the two go hand in hand. Moving to a microservices architecture helps improve delivery and runtime performance - something every organization looks forward to in a highly competitive market.

Data Science and AI integrate: Applications generate a lot of data, and DevOps is becoming more data-driven these days. With so much data available, it is only prudent for businesses to harness it to enable better outcomes. Going forward, Data Scientists and AI experts will work closely with DevOps professionals to observe, study and analyse trends as users interact with applications. The insights gathered can help enhance the DevOps process and provide a better understanding of customer behaviour patterns.

Kubernetes to evolve and lead: Kubernetes has steadily become one of the fastest growing technologies and, as per reports, the most widely adopted container technology. Container orchestration is used by software teams to manage, control and automate various tasks. Each service broken out of a larger application (via a microservices architecture) can be run in a container, and Kubernetes supports the CI and CD processes well because it is easy to scale, use and manage.

DevSecOps gains focus: Security, in software terms, is usually treated as an afterthought. In fact, this attitude has led to further vulnerabilities in technology adoption and usage. However, 2019 will see a different outlook as security becomes an integral part of software development - integrated into the development process rather than added on top of it. DevSecOps is the practice of putting security first and injecting it into the development lifecycle of an application; it will gain much momentum in 2019. What DevSecOps also does is make everyone involved a stakeholder in maintaining security, which helps reduce confusion about how to implement the DevOps security process.

Is DevOps related to Cloud Computing?

Yes. While DevOps is about process and process enhancement, cloud computing is all about technology and services. The two are distinct, and yet the common binding factor is that both have become essential for organizations moving towards digital transformation. How related are they? First and foremost, the centralized nature of the cloud provides DevOps with the right platform for building, testing, deploying and running code in production. Cloud platforms support DevOps automation; they provide the right environment and tools to ensure continuous integration and continuous delivery.
This allows for lower spend on software development in comparison with on-premise development. Cloud computing allows developers to enhance their efficiency and productivity. It also gives them more control over their work process, resulting in shorter wait times, which is a great advantage. Since the cloud provides application-specific infrastructure, developers can now own more components. The various tools available on the cloud help quicken the development process, ensure repeatability and reduce human error to a large extent. There is also a reduction in infrastructure costs, as DevOps in combination with the cloud allows for scalability in the software development process. The cloud gives developers a self-service method of provisioning infrastructure: they no longer need to depend on IT operations, as they can experiment (build, test) and come out with quality products on their own.

To sum it up, DevOps and the cloud have a synergetic relationship whose combination can provide multiple benefits to organizations and help speed up their digital transformation in sync with their business objectives.

What is the difference between DevOps and Agile?

As interesting as it gets, there have been plenty of debates and discussions on DevOps and Agile with respect to software development and digital transformation. While both DevOps and Agile are involved in the creation of software applications, they have clearly demarcated differences. Let's understand some of them.

Teams involved: Agile is largely restricted to the development team and involves the continuous iteration of development and testing of code in the software development lifecycle. DevOps, as we discussed earlier, involves both the development and operations teams. It is a culture that promotes greater collaboration and work efficiency between the two.

Task Focus: Agile focuses on managing complex projects by breaking them into smaller pieces with the help of frameworks such as Scrum and SAFe. DevOps goes further, managing the entire engineering process involving the development and system administration teams.

Task Timeframe: Agile tasks are carried out in short cycles called "sprints", and the duration of each sprint can be anywhere from 2-4 weeks. DevOps also has a short development lifecycle, the difference being that code is delivered to production on a daily basis.

Feedback Source: While Agile teams obtain feedback from customers, DevOps teams obtain it from Operations.

In other words, DevOps is a continuum of the Agile methodology that extends beyond the software development team to operations and focuses on faster time-to-market and end-to-end business solutions. It is also interesting to note that DevOps enables agile development teams to implement continuous integration and continuous delivery, resulting in faster product releases.

What are the roles and responsibilities of a DevOps Engineer?

A good DevOps engineer is in demand and almost always busy - and yes, good DevOps engineers get paid well. Let's now look at some roles and responsibilities of a DevOps engineer.

First and foremost, a DevOps engineer must understand the requirements of the client.
Not understanding them can set the engineer's entire task journey off in the wrong direction. Beyond that, a DevOps engineer is typically expected to:

- Perform problem-solving across application domains and system troubleshooting
- Increase project visibility through traceability
- Manage projects effectively through open and standards-based platforms
- Design, evaluate and analyse automation scripts and systems
- Ensure critical resolution of system issues via cloud security solutions
- Improve product quality and help reduce development costs
- Collaborate and coordinate to obtain feedback so as to implement changes

DevOps Engineer Skillset - Skills required for DevOps

The demand for DevOps engineers is increasing as organizations embrace DevOps. But a good DevOps engineer must possess certain skills, and these can be divided into a) technical and b) non-technical skills.

Technical Skills

Scripting Languages - DevOps engineers may be required to provision infrastructure, and hence they will need to know some scripting languages in order to automate it. Some of the more popular scripting languages are JavaScript, Perl, Python and Ruby.

Linux Fundamentals - Most organizations have their environments running on Linux. Add to that, configuration management tools such as Puppet, Ansible and Chef also happen to have their master nodes on Linux, making it very important for DevOps engineers to know Linux fundamentals.

Grip on DevOps tools - As a DevOps engineer you will be required to have a good grip on the tools used at various stages of the DevOps cycle - the tools already mentioned earlier in this article.

Usage of Tools - Knowledge of tools alone is not enough. A good DevOps engineer knows which tool to use at which stage of the DevOps process. To facilitate a successful DevOps process, DevOps engineers need to use the relevant tools to ensure continuous integration and continuous delivery.

Cloud and Infrastructure - It would augur well for engineers to have a working understanding of how things work on the cloud and on on-premise infrastructure. Infrastructure skills help engineers develop and deploy applications effectively.

Non-Technical Skills

Collaboration and Coordination - It is imperative for a DevOps engineer to have good coordination and collaboration skills; after all, the underlying current of a DevOps process is collaboration and unity of teams.

Patience and Flexibility - The likelihood of success is much higher if DevOps engineers exhibit patience and flexibility when shifting from one area of software development to another.

Effective Communication - This skill goes a long way in helping engineers perfect the art of being a top DevOps professional. Remember, effective communication is what breaks silos between people and teams. Not just that, it also helps engineers develop quality products, as they will be able to understand and communicate issues and take feedback effectively.

How can I get started with DevOps?

Now that we have a fair understanding of DevOps, its benefits and the roles and responsibilities involved, there is a big question staring at us: how do we get started?

The Learner's Journey

First and foremost, it would augur well to keep in mind what we started off with - that DevOps is very much about culture and collaboration. Approach the learning with this mindset and you are likely to do well in the long run. Now, coming to how you can get technically better at DevOps:

Read - Spend time and go through the host of material available online.
There are several websites and forums that provide material and guidance on DevOps. Start from the basics - and yes, it's easy to get sidetracked with such exhaustive information, so keep a tab on what you need to start with.

Watch - Thank YouTube: it has some very interesting and relevant videos on DevOps. Watch through these and you will understand the concepts better. Browse through any animated series on DevOps basics or fundamentals; they are easy to understand and grasp.

Test - It is important that you test yourself on the basics. There are some free tests available online; make use of them.

Courses - Finally, the courses. Online learning has reached a new peak, and there are plenty of companies teaching DevOps courses with the help of subject matter experts. Research and select the right training partner or ed-tech company who can help you not only with understanding the DevOps basics but also with how you can chart and build a great career.

Prerequisites - It's important to have some prerequisites in order to excel as a DevOps engineer. Here are some:

- A passion to code, design and build
- Great interest in automation
- Linux administration skills
- Good command of one or more languages such as Java, Python, etc.
- Knowledge and usage of various tools - configuration management, monitoring, CI, CD, etc.
- Understanding of containerization
- Good troubleshooting skills
- A lot of patience

Hands-on experience matters

Now that you know what makes a good DevOps engineer, let's look at what DevOps "feels" like. You can do that by stepping into the shoes of a DevOps engineer. One of the easiest ways to gain real-life experience is by interning as a DevOps trainee or engineer. You will see the real picture; you will learn to code, develop, test, deploy and code again. Never lose a chance to get first-hand experience - it will go a long way in helping you do well in your interviews and career.

Remember, when you face an interview or run into a potential employer at a conference or event, you will be expected to explain DevOps in a non-technical way. Know that DevOps is first about culture, collaboration and team unity. Keep in mind that DevOps isn't only about automation and technology - though yes, tools are used to deliver the ultimate objectives of better quality products, customer satisfaction and business growth for your organization. With focus and hard work on the technical aspects and tools you are likely to end up a good DevOps engineer, but great DevOps engineers and future DevOps leaders will be those who look at the larger picture of what DevOps can achieve for both the organization and its customers.

Conclusion

After all of this, where is DevOps headed? Having understood DevOps, we can say with confidence that this practice is here to stay. Why? Because it does what most technology methodologies do not - it breaks silos and creates unity between the stakeholder teams, both technically and culturally. DevOps is a symbiotic practice that gives added advantages and benefits to businesses adopting it. The sheer need to improve application quality, performance, time-to-market and, of course, the end-user experience is what will have businesses embracing DevOps like never before. In more ways than imagined, DevOps is a blessing for businesses out there. Internally, it provides businesses with a culture-rich, collaborative and progressive environment.
Externally, it provides businesses with the required competitive edge to sustain and move ahead. As the world of business moves into the sea of digitalization and automation, DevOps is that ship helping businesses sail smooth, fast and far.

How to Become a Data Scientist

What is a Data Scientist?

A Data Scientist is a professional standing at the confluence of technology, domain knowledge and business, brought in to tackle the data revolution. A Data Scientist needs to be a mathematician, computer programmer, analyst, statistician and effective communicator in order to turn insights into actions. It's not just technical skills that make Data Scientist the most in-demand job of the 21st century; it takes a lot more. A Data Scientist is a professional who uses these new-age tools to manage, analyse and visualize data.

Let us take an example to better understand a day in the life of a Data Scientist. On a typical day, a Data Scientist may be given an open-ended problem such as "We need our customers to stay longer and watch/read more content". The following are a few of the steps they might take:

The Business Hat: The job of the Data Scientist would, first of all, involve translating this problem statement into a quantifiable data science problem. For this, they might first identify the current time being spent by users and discuss with the business teams how to quantify "more".

The Programming Hat: They would then move on to data collection, working with different teams to understand what kind of data is available and what might be required for the analysis. Once clear about the what and where of the data, they would extract and prepare it for analysis.

The Analytical Hat: Here they would use their analytical and statistical powers to ask important questions of the data. This typically involves exploratory analysis, descriptive analysis and so on.

There are additional steps after this, wherein the Data Scientist would head towards building models to actually improve the time spent on the website (by developing recommender engines and so on), sharing results and fine-tuning models with business teams, and eventually taking the work to a production environment where it can actually be tested and used. The above example is an over-simplified version of the tasks a typical Data Scientist performs, yet it should give you a glimpse into how different skill sets are utilized by such a professional.

Data Science vs. Statistics

Data Science can be defined in many ways. One of the most interesting and true definitions marks it as the fourth paradigm (link), the first three being experimental, theoretical and computational science. The fourth paradigm, Dr. Jim Gray explains, is the answer to coping with the tremendous flood of data being collected and generated every day. In simple words, Data Science is a new generation of scientific and computing tools which can help to manage, analyse and visualize such huge amounts of data.

The explanation of the terms Data Scientist and Data Science seems to indicate a completely new field with its own set of techniques and tools. Though this is true to a certain degree, it is not entirely so. Data Science, as mentioned above, sits at the confluence of technology, domain knowledge and business understanding. Thus, it utilizes tools and techniques from various fields to form an encompassing set of methodologies to turn data into insights. Statistics, traditionally, has been the go-to subject for analysing data and testing hypotheses; statistical methods are based on established theories and years of research. Even though Data Science and Statistics have similar goals (and overlapping techniques in certain cases), i.e. to utilize data to reach conclusions and share insights, they are not the same.
Statistics predates the computing era, while Data Science is a new-age amalgamation of interdisciplinary knowledge. There is a never-ending debate on the definitions of Data Science and Statistics. The old school believes Data Science is merely a rebranding of Statistics, while new-age experts grossly differ. Amongst all this, an interesting and somewhat accurate take on the issue was presented in an article on the website of Priceonomics (link):

"Statistics was primarily developed to help people deal with pre-computer data problems like testing the impact of fertilizer in agriculture or figuring out the accuracy of an estimate from a small sample. Data science emphasizes the data problems of the 21st Century, like accessing information from large databases, writing code to manipulate data, and visualizing data."

Educational Qualifications to become a Data Scientist

It is worth reiterating that Data Science is an interdisciplinary field. This makes sense, as Data Science is not limited to just one field of study or industry; it is used across every field that generates, or can generate, data. It is no surprise to see Data Scientists coming from varied academic backgrounds, yet there are a few important and common qualifications such professionals tend to have:

- A graduate degree in a quantitative field of study. Mathematics, computer science, engineering, statistics, physics, social science, economics and related fields are most common.
- Newer options like bootcamps and MOOCs (Massive Open Online Courses) are quite popular with professionals pivoting into Data Science.
- An advanced degree in the form of a Master's or even a PhD certainly helps. Increasingly, many Data Science professionals hold such advanced degrees (link).

Technical Skills required to become a Data Scientist

This is the trickiest part of the whole journey. While being interdisciplinary is good in most respects, it also presents a daunting question for beginners. Data Scientists are storytellers: they turn raw data into actionable insights, all the while leveraging tools and techniques from various fields. General programming skills remain the common denominator. Apart from programming, the following are a few important technical skills a Data Scientist usually has:

- Mathematical background (linear algebra, calculus and probability are important)
- Machine Learning concepts and algorithms
- Statistical concepts (hypothesis testing, sampling techniques and so on)
- Computer Science/Software Engineering skills (data structures, algorithms)
- Data Visualization skills (tools like d3.js, ggplot, matplotlib, etc.)
- Data Handling (RDBMS, Big Data tools like Hive, Spark)

Though there are no hard and fast rules, most Data Scientists rely on programming/scripting languages like Python, R, Scala, Julia, Java or SAS to perform everyday tasks, from raw data all the way to insights.

Learning Path for a Data Scientist - From Fundamentals and Statistics to Problem Solving

Turning data into insights is easier said than done. A typical Data Science project involves a lot of important sub-tasks which need to be performed efficiently and correctly. Let us break the learning path down into milestones and discuss how to go about the journey.

Step 1: Select a Programming Language

R and Python are the most widely accepted and used programming languages in the data science community.
There are other languages like Java, Scala, Julia, MATLAB and, to a certain extent, even SAS. Yet R and Python have huge ecosystems and communities contributing towards making them better every day. Though there is no such thing as the best programming language for Data Science, there are some favourites and popular ones, and when starting off on your Data Science journey it may be confusing which one to choose. The following pointers might be helpful:

R: R is the most popular language when it comes to statistical analysis and time series modeling. It also has a good number of machine learning algorithms and visualization packages. It can have a peculiar learning curve, yet it is good for exploring your data, one-off projects or quick prototypes. It is also usually the go-to language for academic reports and research papers.

Python: Python is one of the most widely used programming languages and is sometimes referred to as a popular scientific language. Its ever-expanding community, ease of writing code, ecosystem and support are the reasons for its popularity. Python packages like numpy, pandas and sklearn enable Data Scientists and researchers to work with matrices and other mathematical concepts with ease.

The Java Family: R and Python are great languages and are of great help when it comes to quick prototyping (though that is changing slowly, with Python being used in production as well). The heavyweights of the industry are still the languages of the Java family. Java in itself is a mature and proven technology with an extensive list of packages for machine learning, natural language processing and so on. Scala derives heavily from Java and is one of the go-to languages for handling big data.

There are a number of courses on platforms like Coursera and Udemy to get you started with these languages. Some of them are:

- Programming for Everybody (Getting Started with Python)
- Applied Data Science with Python Specialization
- R Programming
- Advanced R Programming

Julia and languages of its type are upcoming ones with a special focus on Data Science and Machine Learning. These languages have the advantage of having Data Science as one of their core concerns, unlike traditional languages which have been extended to cater to DS/ML needs. Again, it boils down to personal choice and comfort when deciding which language to choose.

Step 2: Learn Statistics and Mathematics

These are the basic concepts required to understand the intricacies of the more involved ones. The most essential are linear algebra, calculus and probability theory. Having an understanding of these concepts will help you in the long run to understand complex ideas. Probability theory is a must-have, as a lot of machine learning and statistics is based on measuring the likelihood of events, the probability of failures or wins and so on. These concepts can be learnt through a number of classic textbooks like Probability Theory by E. T. Jaynes, Pattern Recognition and Machine Learning by Christopher M. Bishop, and Introduction to Linear Algebra by Gilbert Strang. You could also look for these books, ebooks or videos on YouTube, Khan Academy and so on.

Statistics forms the very foundation of a lot of things you would be doing as a data scientist.
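To make that foundation concrete, here is a minimal sketch (assuming NumPy, pandas and SciPy are installed; the numbers are invented purely for illustration) of the kind of everyday statistical reasoning a Data Scientist does in Python: summarising two samples and checking whether their means differ significantly.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Two made-up samples, e.g. daily time-on-site (minutes) before and after a change
before = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.3, 12.8, 11.7])
after = np.array([13.2, 12.9, 13.8, 12.5, 13.5, 13.1, 14.0, 12.8])

# Descriptive statistics via a pandas DataFrame
df = pd.DataFrame({"before": before, "after": after})
print(df.describe())  # count, mean, std, min, quartiles, max

# Two-sample t-test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is significant at the 5% level")
else:
    print("No significant difference at the 5% level")
```

The same two moves - describe the data, then test a hypothesis - recur in almost every analysis, which is why these fundamentals are worth learning early.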
The following are some popular online resources which can be helpful on this journey:

- The Statsoft Book on Statistics
- Online Statistics Education

Step 3: Power up with Machine Learning

Mathematics and Statistics give you the understanding; Machine Learning gives you the tools and techniques to solve real-world problems. ML techniques expand a Data Scientist's capability to handle different types and sizes of data sets. It is a vast subject in its own right, which can be broadly categorized into:

- Supervised methods, like classification and regression algorithms
- Unsupervised methods, like the different clustering techniques
- Reinforcement Learning, like Q-learning, etc.
- Deep Learning (spanning the above three types, it is slowly emerging as a specialized field of its own)

The following are a few helpful resources to get you started on the subject:

- Python for Data Science and Machine Learning Bootcamp
- R: Complete Machine Learning Solutions
- Data Science and Machine Learning Bootcamp in R
- Deep Learning Specialization
- Data Science Nanodegree
- Programming for Data Science Nanodegree

Step 4: Practice!

All theory and no practice will lead you nowhere. Data Science has an element of art apart from all the science and theory behind it, and a Data Scientist needs practice to hone the skills required to work on real-world problems. Luckily, the Data Science ecosystem and community are a great place for this. To practice Data Science, you need a problem statement and corresponding data. Websites like Kaggle, the UCI Machine Learning Repository and many others are a great resource. Some of the popular datasets are:

- Bike Sharing Demand: given daily bike rental and weather records, predict future daily bike rental demand.
- Iris dataset: given flower measurements in centimeters, predict the species of iris.
- Wine dataset: given a chemical analysis of wines, predict the origin of the wine.
- Car Evaluation dataset: given details about cars, predict the estimated safety of the car.
- Breast Cancer Wisconsin dataset: given the results of a diagnostic test on breast tissue, predict whether the mass is malignant or benign.

There is a detailed list of datasets discussed by Dr. Jason Brownlee on his blog machinelearningmastery.com as well. Apart from these datasets, there are regular Data Science competitions on websites like Kaggle, Analytics Vidhya, KDnuggets and so on. It is worth participating in these competitions to learn the tricks of the trade from some of the seasoned performers.

Step 5: Build a Portfolio

Just like a photographer or a painter, a Data Scientist is as much an artist. While working on different datasets and competitions, you can build a portfolio of your completed work to showcase your findings and learnings. This will not only help you showcase your talent but also give you a view of your progress as you learn new and more complex methods. A machine learning/data science portfolio is a collection of independent projects which utilize machine learning in one way or another. A typical portfolio gives you the following benefits:

- Showcase: of your skill set and technical understanding.
- Reusable code base: as you work on more and more projects, there are certain components which are required time and again. Your portfolio can be a repository of such reusable components.
- Progress map: a portfolio is also a map of your progress over time. With every project, you get better and learn new, more complex concepts.
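As a concrete illustration, a portfolio piece does not have to be elaborate. A minimal, well-documented script like the sketch below (scikit-learn assumed installed) already demonstrates the full load-train-evaluate loop on the Iris dataset mentioned above, and is exactly the kind of artefact a portfolio is built from.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load the classic Iris dataset: flower measurements -> species
X, y = load_iris(return_X_y=True)

# Hold out a test set so the evaluation is honest
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

# Train a simple baseline model
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate on data the model has never seen
predictions = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, predictions):.3f}")
```

Committed to a public repository together with a short README on what was tried and why, even a script this small tells a prospective employer far more than a line on a CV.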
A portfolio is also a great way to keep yourself motivated. Typically, Data Scientists leverage their portfolios along with their CVs so that interviewers and prospective employers have a better understanding of their capabilities. Code repositories can be maintained on websites like GitHub, Bitbucket and so on. Maintaining a blog to share your findings, commentary and research with a broader audience (along with a bit of self-promotion) is also quite common.

Step 6: Job Search / Freelancing

Once the groundwork is done, it's time to reap some benefits. We are living in the age of data, and almost every domain and sphere of commerce is leveraging (or trying to leverage) Data Science. To put your skill set to work in a job search or in freelancing, there are some amazing resources to aid you:

Interview preparation: Machine Learning using Python, Data Science and ML Interview Guide, Deep Learning
Data Science competitions: Kaggle, InnoCentive, TunedIT
Hackathons: HackerEarth, MachineHack

Each of these platforms provides you with an ecosystem of experts and recruiters who can help you land a job or a freelancing project. These platforms also give you an opportunity to fine-tune your skills and make them market-ready.

Top Universities offering a Data Scientist Course

The educational requirements to become a Data Scientist were discussed previously. Apart from the traditional quantitative fields of study, many reputed top universities across the globe now offer specialized Data Science courses for undergraduate, graduate and online audiences. Some of the top US universities offering such courses are:

1. Information Technology and Data Management courses at the Colorado Technical University
Course Name: Professional Master of Science in Computer Science
Course Duration: 2 years
Location: Boulder, Colorado
Courses: Machine Learning, Neural Networks and Deep Learning, Natural Language Processing, Big Data, HCC Big Data Computing and many more
Tracks available: Data Science and Engineering
Credits: 30

2. MS in Data Science, Columbia University
Course Name: Master of Science in Data Science
Course Duration: 1.5 years
Location: New York City, New York
Core courses: Probability Theory, Algorithms for Data Science, Statistical Inference and Modelling, Computer Systems for Data Science, Machine Learning for Data Science, and Exploratory Data Analysis and Visualization
Credits: 30

3. MS in Computational Data Science, Carnegie Mellon University
Course Name: Master of Computational Data Science
Course Duration: 2 years
Location: Pittsburgh, Pennsylvania
Core courses: Machine Learning, Cloud Computing, Interactive Data Science, and Data Science Seminar
Tracks available: Systems, Analytics, and Human-Centered Data Science
Units to complete: 144

4. MS in Data Science, Stanford University
Course Name: M.S. in Statistics: Data Science
Course Duration: 2 years
Location: Stanford, California
Core courses: Numerical Linear Algebra, Discrete Mathematics and Algorithms, Optimization, Stochastic Methods in Engineering or Randomized Algorithms and Probabilistic Analysis, Introduction to Statistical Inference, Introduction to Regression Models and Analysis of Variance or Introduction to Statistical Modeling, Modern Applied Statistics: Learning, and Modern Applied Statistics: Data Mining
Tracks available: The program in itself is a track
Units to complete: 45
5. MS in Analytics, Georgia Institute of Technology
Course Name: Master of Science in Analytics
Course Duration: 1 year
Location: Atlanta, Georgia
Core courses: Big Data Analytics in Business, and Data and Visual Analytics
Tracks available: Analytical Tools, Business Analytics, and Computational Data Analytics
Credits: 36

There are numerous other courses offered by top universities in Europe and Asia as well. MOOCs from platforms like Coursera, Udemy, Khan Academy and others have also gained popularity lately.

Roles and Responsibilities of a Data Scientist - What does a Data Scientist do?

The role and responsibilities of a Data Scientist vary greatly from one organization to another. Since the life cycle of a data science project involves a lot of intricate pieces, each with its own importance, a Data Scientist may be required to perform many different tasks. Typically, a day in a Data Scientist's life comprises one or more of the following:

- Formulate open-ended questions and perform research into different areas
- Extract data from different sources within and outside the organization
- Develop ETL pipelines to prepare data for analysis
- Employ sophisticated statistical and/or machine learning techniques and algorithms to solve the problems at hand
- Perform exploratory and descriptive analysis of data
- Visualize data at different stages of the project
- Tell the story: communicate results and findings to end consumers, IT teams and business teams
- Deploy intelligent solutions to automate tasks

The above list is by no means exhaustive; specific tasks may be required for specific organizations or scenarios. Depending on the set of tasks assigned, or the strengths of a particular individual, the Data Scientist role may have different facets to it. Some organizations divide the above tasks into specific roles like:

- Data Engineer: concentrates on developing ETL pipelines and Big Data infrastructure
- Data Analyst: concentrates on hypothesis testing, A/B testing and so on
- BI Analyst: concentrates on visualizations, BI reporting and so on
- Machine Learning/Data Science Engineer: concentrates on implementing ML solutions in production systems
- Research Scientist: concentrates on researching new techniques, open-ended problems, etc.

Though some organizations separate out the roles and responsibilities, others choose to keep a single Data Scientist title.

Salaries of a Data Scientist

The most-coveted job of the 21st century ought to have an equally tempting salary, and the data confirms this from various angles. Different surveys from across the world have analysed the salaries of Data Scientists, and the results are striking. The Burtch Works study of Data Scientist salaries is one such survey. It points out that, after the peak increases in Data Scientist salaries across different levels in 2015-2016, salaries in 2018 have been more or less steady at the previous year's levels. The median base salary for a starting position is around $95K, rising to about $165K for 9+ years of experience (for individual contributors). The median base salary for managers starts at around $145K and goes up to $250K (for 10+ years of experience).

A survey by PromptCloud along similar lines tried to identify the different skills required across Data Scientist job postings. The results show Python as the topmost required skill, followed by SQL, R and others.
This showcases how important Python and its ecosystem are to Data Science work and the community around it. The Glassdoor 50 Best Jobs in America for 2018 (link) rates Data Scientist as number one, with an average salary of around USD 120K. The study also identifies other related Data Science job titles, like Data Analyst and Quantitative Analyst. Similar results from Payscale, LinkedIn and others reconfirm the fact: Data Scientists really are sought after across the globe.

Top companies hiring Data Scientists

With the advancements in compute and storage, and the corresponding fall in hardware costs, technology is part and parcel of almost every industry. From aerospace to mining, from the internet to farming, every sphere of commerce is generating an immense flood of data - and where there's data, there's data science. Almost every industry today is leveraging the benefits of Data Science. Some of the top companies hiring Data Scientists are: Google, Twitter, GE Healthcare, HP, Microsoft, Airbnb, GE Aviation, IBM, Apple, Uber, UnitedHealth Group, Intel, Facebook, Amazon, Boeing and American Express. These are some of the big names in their respective fields; there are also plenty of start-ups and small-to-medium enterprises leveraging Data Scientists to make an impact in their fields.

How is Data Science different from Artificial Intelligence?

Our discussion so far has revolved around Data Science and related concepts. In the same context, there is another important term: Artificial Intelligence (AI). There are times when terms like AI and Data Science are used interchangeably, while there are also people who perceive them quite differently. To understand each side, let us first try to understand the term Artificial Intelligence.

Artificial Intelligence can be defined in many ways. The most consistent and commonly accepted definition states:

"The designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment"

The above definition comes from AI heavyweights Dr. Peter Norvig and Dr. Stuart Russell. In simple words, it highlights the presence of intelligent agents which act based on stimuli from the environment, and which in turn have an effect on the environment as well - very similar to how we, as humans, function.

The genesis of Artificial Intelligence as a field of study and research is credited to the famous Dartmouth workshop of 1956, held by John McCarthy and Marvin Minsky amongst other prominent personalities from the computer science and AI space. The workshop provided the first glimpse of intelligent systems/agents: programs that learned strategies for the game of checkers, and which by 1959 were reported to play better than average human beings - a remarkable feat in itself. Since then, the field of AI has gone through a great many changes and theoretical and practical advancements.

The field of AI is focused on maximizing an agent's chances of achieving a stated goal. The goal can be simple (if it is only about winning or losing) or complex (take the next steps based on rewards from past moves). Based on these goal categories, AI has focused on solving problems in the following high-level domains over the course of its history:

Knowledge Representation: This is one of the core concepts in classical AI research.
As part of Knowledge Representation, or Knowledge Engineering, we try to capture the world knowledge (where the "world" is some specific narrow domain) possessed by experts. This was the foremost area of research for expert systems, and the field of Ontology is highly associated with it.

Problem Solving and Reasoning Tasks: This is one of the earliest areas of research, in which researchers focused on mimicking human reasoning step by step for tasks such as puzzle solving and logical deduction.

Perception: The ability to utilize input from different sensors such as microphones, cameras, radars, temperature sensors and so on for decision making. This is also termed Machine Perception, with modern-day applications like speech recognition, object detection and so on.

Motion and Manipulation: The ability to move and explore the environment is an important characteristic, heavily utilized in the robotics space. Industrial robots, robotic arms and the amazing machines from groups like Boston Dynamics are prime examples.

Social Intelligence: Considered one of the more far-fetched goals, wherein intelligent systems are expected to understand human emotions and motives when taking decisions. Current-day virtual assistants (the likes of Google Assistant, Alexa, Cortana, etc.) provide a glimpse of this by being able to converse, joke and make small talk.

The domains of learning tasks, characterized as supervised and unsupervised learning, along with Natural Language Processing tasks, have traditionally been associated with AI. Yet, with recent advancements in these fields, they are sometimes seen separately, or as no longer part of AI. This is known as the AI effect, or Tesler's Theorem, which simply states:

"AI is whatever hasn't been done yet"

On the same grounds, OCR (optical character recognition), speech translation and others have become everyday technologies, and this advancement has led to them no longer being considered part of AI research.

Before we move on, there is another important detail about AI. Artificial Intelligence is categorized into two broad categories:

Narrow AI: Also termed weak AI, this category is focused on tractable AI tasks. Most current-day research concentrates on narrow tasks like developing autonomous vehicles, automated speech recognition, machine translation and so on. These areas work towards building intelligent systems which mimic human-level performance but are limited to specific areas only.

Deep AI: Also termed strong AI or, better, Artificial General Intelligence (AGI). If an intelligent agent is capable of performing any intellectual task, it is considered to possess Artificial General Intelligence. AGI is considered to be a summation of knowledge representation, reasoning, planning, learning and communication. Deep AI or AGI may seem like a far-fetched dream, yet advancements like Transfer Learning and Reinforcement Learning techniques are steps in the right direction.

Now that we understand Artificial Intelligence and its history, let us attempt to understand how it is different from Data Science. Data Science, as we know, is an amalgamation of tools and techniques from different fields (similar to AI). From the above discussion, we can see there is a definite overlap between the definition of weak/narrow AI and Data Science tasks. Yet Data Science is considered to be more data-driven and focused on business outcomes and objectives.
It is a more application-oriented study and utilization of tools and techniques. Though there are certain overlaps and similarities in their areas of research and tools, Data Science and AI are certainly not the same, and it would be hard to set them up as subset and superset either. They are best seen as interdisciplinary fields that make the best of uncertainty.

Summary

Data Science has been THE keyword in every industry for quite a few years now. In this article on what a Data Scientist is, we covered a lot of ground in terms of concepts and related aspects. The aim was to help you understand what really makes Data Scientist the "top and trending" job of the 21st century.

The discussion started off with a formal definition of Data Science and how it is ushering in the fourth paradigm to tackle the constant flood of data. We then briefly touched upon the subtle differences between Data Science and Statistics, along with the points of contention between experts from the two fields. We also presented an honest opinion on what it takes, in terms of technical skills and educational qualifications, to become a Data Scientist. Sure, it is cool to be one, but it is not as easy as it seems.

Along with the skills, we walked through the learning path to become a Data Scientist, covering everything from the fundamental concepts one should know to advanced techniques like Reinforcement Learning. The world is facing a deep shortage of Data Scientists, and top universities have taken up the challenge of upskilling the existing and next generation of the workforce. We discussed some of the courses being offered by these universities across the globe, as well as the companies hiring Data Scientists and at what salaries.

In the final leg, we introduced concepts related to Artificial Intelligence - it is imperative to understand how different, yet overlapping, Data Science and AI are.

With this, we hope you are equipped to get started on your journey to becoming a Data Scientist. If you are already working in this space, the article aimed to demystify some commonly used terms and provide a high-level overview of Data Science.

Top 10 Python IDEs

What is an IDE?

IDE stands for Integrated Development Environment. It is a piece of development software which allows the developer to write, run and debug code with relative ease. Even though the ability to write, run and debug source code is the most fundamental feature of an IDE, it is not the only one. It is safe to say that all IDEs perform the fundamental tasks equally well; however, most modern IDEs come with a plethora of other features specifically tuned to make the workflow easier for a particular type of development pipeline. In this article, we will focus on IDEs that support Python as a programming language.

IDEs are usually developed by a community of people (open source) or by a commercial entity, and each comes with its own strengths and weaknesses. Some IDEs, like Jupyter or Spyder, are open source and are developed with the scientific and artificial intelligence research communities in mind. These IDEs have additional features which make it easy and fast to prototype machine learning models and scientific simulations. However, they are not well equipped to sustain the development process of an end-to-end application.

Why is an IDE an important part of development?

Traditionally, text editors like Nano or Vim (Linux/Unix), Notepad (Windows) and TextEdit (macOS) were used to write code. However, they are very good at only one thing: writing text. They lack common functionality like syntax highlighting, auto-indentation and auto code completion.

Next come the dedicated code editors, designed to write and edit code for any programming language. Editors like Sublime Text and Microsoft Visual Studio Code are feature-rich in terms of common functionality like syntax highlighting and auto-indentation, and some even have a version control system built in. However, they still lack a significant chunk of the functionality that IDEs have. Their main advantage over IDEs is that they are fast and easy to use.

Finally, coming to IDEs: these are full-fledged development environments which contain all the features and tools necessary to support the complete development pipeline of any software. The main disadvantage of IDEs is that they are comparatively slower and more taxing on the system than text editors.

Top 10 IDEs used for Python

In this article, we will look at the top 10 Python IDEs used across the industry. We will learn about their features, pros and cons, and finally conclude what makes one special over the other.

1. PyCharm
Category: IDE
Website: https://www.jetbrains.com/pycharm/

PyCharm is a cross-platform Integrated Development Environment developed specifically for Python by the Czech company JetBrains (https://www.jetbrains.com). It primarily comes in two versions: a Professional Edition and a Community Edition. The Professional Edition has additional development features, which the Community Edition lacks, and must be purchased.
The Community Edition is released under the Apache License and is a free-to-use, open-source IDE which is identical to the Professional Edition in most ways, but lacks the additional features.

Features: Listed below are some of the features of this IDE.

Development Process: PyCharm supports the complete development pipeline, and its convenience shows right from the creation of a project, where the developer is given the choice between various interpreters, can create a virtual environment, or can opt for remote development.

Inbuilt VCS: Like many other modern IDEs, version control is baked right into PyCharm. When inside a project which uses a VCS, PyCharm automatically generates a graphical view showing the various branches and the status of the project.

Dedicated Database Tools: PyCharm makes it quite easy to access and modify databases. It allows the developer to access any of the popular SQL databases, such as MySQL, PostgreSQL or Oracle SQL, right from inside the IDE. It also allows the developer to edit SQL, alter schemas, browse data, run SQL queries and analyse schemas with UML diagrams, along with support for SQL syntax highlighting.

Support for IPython Notebooks: As most Data Scientists will swear by, IPython Notebooks are one of the best functionalities available in Python. Even though PyCharm is not as capable at running IPython Notebooks as the more widely used Jupyter Notebook, it lets the developer open notebook files with proper syntax highlighting and auto code completion, and run them as well.

Dedicated Scientific Toolkit: One of the most commonly used features in this toolkit is SciView, which is primarily used for data visualization. It carries forward the functionality of the well-known Spyder IDE, which comes as part of the Anaconda installation. SciView allows the developer to easily view plots and graphs inside the editor without having to deal with pop-up windows. Additionally, one of the best features of SciView is the Variable Explorer (or Data Explorer), which provides a tabular visualization of the data and values contained in a variable.

Pros: PyCharm offers what most other IDEs don't: a complete package that allows it to be used for any kind of end-to-end development or prototyping across almost all development fields.

Cons: Being so packed with features, PyCharm is sluggish and consumes a considerable amount of system resources even while idling. This may create problems on low-end systems and prevent the developer from using the system's full potential for the project.

2. Spyder
Category: IDE
Website: https://www.spyder-ide.org/

Spyder is an open-source Scientific Python Development Environment which comes bundled with Anaconda. Spyder has multiple features developed to aid scientific and data-driven development and hence is an ideal IDE for Data Scientists. It is written in Python itself, with the PyQt5 library, and hence offers some added functionality, described below.

Features: Listed below are some of the features of this IDE.

Variable Explorer: The Variable Explorer is one of the main features of Spyder. It allows the developer to view the contents, data types and values of any variable in the program. This is particularly useful for data scientists, since the Variable Explorer lets them see the format and shape of their data at a glance.
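For instance, running a few lines like the sketch below (pandas and NumPy assumed installed; the names and values are made up for illustration) leaves behind exactly the kinds of objects - arrays, DataFrames, nested structures - that a tool like the Variable Explorer is designed to inspect without any print statements.

```python
import numpy as np
import pandas as pd

# A NumPy array: a variable explorer typically shows its shape, dtype and values
measurements = np.random.rand(100, 4)

# A pandas DataFrame: typically displayed as a sortable, editable grid
df = pd.DataFrame(
    measurements,
    columns=["sepal_len", "sepal_wid", "petal_len", "petal_wid"],
)
summary = df.describe()

# A nested structure: can be expanded level by level
experiment = {
    "name": "iris-baseline",
    "params": {"n_estimators": 100, "test_size": 0.25},
    "scores": [0.93, 0.95, 0.97],
}
```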
The Variable Explorer also allows the developer to plot histograms, visualize time-series data, edit DataFrames and NumPy arrays, sort collections and dig into nested objects.

Live Library Docs: Repeatedly accessing the documentation for a particular class or library object via a separate browser can be tiresome. Spyder has an inbuilt HTML viewer which displays the documentation for that particular object or library directly inside the IDE.

IPython Console: All the lines of code executed in Spyder are run in its IPython console. This console stays open even after program execution has concluded, and the user can write further commands to view, test or modify existing objects while keeping the changes temporary, i.e. outside the main editor.

Debugger: Debugging is an important part of the development process of any software. Spyder supports debugging via its IPython console, which allows the user to step through the code.

Plugins: Spyder, being open-source, supports third-party plugins which let the developer improve the development experience. A few of the most used ones are Spyder Notebook, Spyder Terminal, Spyder UnitTest and Spyder Reports.

Pros: Spyder is developed by scientists for scientists. Hence it contains all the important tools and functionality that a Data Scientist may require during development, and is ideal for that situation.

Cons: Being specifically designed and developed for a certain community of developers (Data Scientists), Spyder lacks most of the end-to-end development tools present in other IDEs like PyCharm.

3. Jupyter Notebook
Category: IPython Notebook Editor
Website: https://jupyter.org/

Jupyter Notebook is one of the most used IPython notebook editors across the Data Science industry. It makes the best use of the fact that Python is an interpreted language, which means that Python code can be run one line at a time and the whole program need not be compiled together as in C/C++. This makes IPython notebooks ideal for writing and prototyping machine learning models: since a significant amount of preprocessing is done initially, followed by repeated hyperparameter tuning and model prototyping, the ability to run a cell (a group of lines) at a time makes it easy for Data Scientists to tune their models.

Features: Listed below are some of the features of this IDE.

Markdown and LaTeX Support: In addition to Python code, Jupyter Notebooks support documentation and commentary with text formatting via a Markdown editor; each cell can be set to Markdown or Code. Additionally, Jupyter Notebook, being a scientific tool, also supports LaTeX commands for writing equations in any cell of the notebook.

Dedicated display for DataFrames and Plots: Since data is the core component of Data Science and Machine Learning, Jupyter Notebook has an inbuilt display for data tables and pandas DataFrames. Data visualization is also an important part of the exploratory data analysis stage of the Data Science pipeline, so Jupyter Notebooks integrate the display of plots and diagrams so that the developer does not have to deal with pop-up plots.

Remote Development Support: Jupyter Notebook is a server-based application which, when run locally, creates a localhost server backend before opening up in a web browser.
But the same can be used for remote development as the Notebook can be run on a remote server which can then be connected to, to run the notebook locally on the web browser, while the processing is done in the server side. Direct Command Prompt or Linux Shell access from inside the notebook: Since the notebook can be used as a remote development tool, the notebook allows the developer to directly access the Linux Shell or Windows Command Prompt directly from the notebook itself without having to open up the shell or command prompt. This is achieved by adding an exclamation mark (“!”) before writing the shell command.Multi-Language Support: Jupyter Notebook supports both Python and R. R is also a programming language popularly used by Data Scientists and Statisticians.Pros: The main advantage is the convenience of using it in R&D and in prototyping for Machine Learning and Scientific problems. It significantly reduces the time required for prototyping and tuning of Machine Learning models in comparison to other IDEs.Cons: The main con is that this IDE does not support the entire development pipeline and is ideal just for prototyping. It does not have the additional tools or functionalities that make other IDEs ideal for deployment of programs and scripts.  4. AtomCategory: Code EditorWebsite: https://atom.io/Atom initially started as an open source, cross-platform, light-weight Node.js based code editor developed by GitHub. It is popularly known as the “Hackable Text Editor for the 21st century” by its developers. Atom is based on Electron, which is a framework which enables cross-platform desktop application using Chromium and Node.js and is written in CoffeeScript and Less.Features: Listed below are some of the features of this IDEPlugins: Atom’s strength is its open-source nature and plugin support. Outside of the usual auto code-completion, syntax highlighting and file browser, it has no such “features’ of its own. However, there are numerous third-party plugins to full up this gap and make it a recommendable IDE. Some of the useful plugins are listed below:git-plus: Git-plus is a feature which allows the developer to use common Git actions without the need to switch terminal.vim-mode: This plugin allows developers who are used to vim, to feel right at home. It adds most of vim’s features to be readily available in Atom.merge-conflicts: Since Atom is developed by GitHub, this plugin provides the developers to find, preview and edit code which has merge-conflicts in a similar view to that of GitHub’s own merge-conflict viewer.Tight Git Integration: One of the advantages of being an IDE developed by GitHub is that it has very tight integration of Git built into it and it is hence quite easy to run git operations directly from inside the code editor.Package Installer: Atom has a user-friendly package installer which allows the developer to install and apply any available package or theme instantly. It does not require any restart of the app post-installation, and hence, avoids the inconvenience. Project Manager: Atom has an inbuilt project manager which allows the developer to easily access and manage all his projects in an organized manner.Pros: Being open source and with plugin support, Atom is one of most functional code editors out there. It checks all the boxes for it to be designated as an IDE. 
Additionally, it is lightweight when compared to other IDEs and is not resource hungry.Cons: Atom, essentially being a code editor, lacks a lot of the integrated tools that developers usually require to carry out a complete end-to-end development pipeline.5. Enthought CanopyCategory: IDEWebsite: https://www.enthought.com/product/canopy/  Canopy is an IDE developed and maintained by Enthought which is specially designed keeping Scientists and Engineers in mind. It contains integrated tools for iterative data analysis, visualization, and Python application development. It has two specialized versions of Canopy: Canopy Enterprise and Canopy Geoscience. Needless to mention that these products contain a specific set of features which is not present in the vanilla Canopy. In this section, we will concentrate on the vanilla version of Canopy, which is free to use.Features: Listed below are some of the features of this IDEIntegrated Package Installer: Canopy provides a self-contained installer which is capable of automatically installing Python and other scientific libraries with minimal effort from the user. It is similar to Anaconda installation.Scientific Tools: Like a few of the IDEs mentioned earlier, Canopy has a set of tools specifically tuned and designed for Scientific and Analytical data exploration and visualization. It has special tools for viewing and interaction with plots. In addition to that, it also contains a “variable browser” which allow the user to view the contents of variables in tabular form and their respective datatypes.Integrated IPython window: Similar to Spyder, Canopy contains an integrated IPython console which allows the developer to execute code line by line or all at once. This results in easier visualization and debugging.Integrated Scientific Documentation: Again, similar to Spyder, Canopy has inbuilt documentation support for scientific articles which allow the user to refer to the documentation for specific libraries directly inside the IDE, without the need to switch to another window and search for the documentation. This, in turn, makes the development process faster.Pros: Since it is specifically designed for Engineers and Scientists, it contains a set of specialized tools which allow the developers from that domain to build prototypes faster and with ease. Being similar to Spyder, it is a good alternative to Spyder IDE for Data Scientists.Cons: Canopy lacks the tools that are essential for deployment, group development, and version control system. This IDE is suitable for prototyping but not the development of deployable code.6. Microsoft Visual StudioCategory: IDEWebsite: https://visualstudio.microsoft.com/   Microsoft Visual Studio IDE is one of the most preferred IDEs across the development industry. It was initially designed for C/C++ development. However, with increasing popularity and adoption of Python in the industry, Microsoft decided to add support for Python Development via an open-source extension called Python Tools. This brought the best of both worlds together under one integrated environment. Visual Studio’s superior development centric features are second to none. With all the features bundled together, it almost comes neck-to-neck with PyCharm.Features: Listed below are some of the features of this IDEIntelliSense: IntelliSense is an auto-code-completion feature that is baked right into Microsoft Visual Studio’s editor. 
This allows the IDE to predict and autocomplete code while being typed by the developer with a high level of precision and accuracy.  Built-in Library Manager: Similar to other IDEs like PyCharm, Visual Studio has a built-in library manager, which allows the developer to easily find and download libraries from PyPI without the need to manually use pip via command line interface.Debugger: Microsoft’s offering is one of the best in the industry. It offers a plethora of debugging tools. Starting from basic debugging, like setting breakpoints, handling exceptions, step-wise execution and inspecting values, it goes all the way to Python Debug Interactive Window, which, in addition to supporting standard REPL commands, also supports special meta commands.Source Control: Again, similar to PyCharm, Visual Studio has a fully integrated Version Control System. It provides a GUI interface to ease the management of Git/TIF projects. Management of branches, merge conflicts and pending changes can be easily achieved by a specialized tool called Team Explorer.Unit Tests: Visual Studio can be used to set specialized test cases called “Unit Tests”, which allows the developer to test the correct working of the code under various input scenarios. It allows to view, edit, run and debug test cases from the Test Window.Pros: Microsoft Visual Studio is a very successful full-fledged IDE on its own, only became better with the added support for Python Development. Similar to PyCharm, it is one of the most complete and feature packed IDEs out there. Unlike PyCharm, it is quite lightweight in terms of System Resource Utilization.Cons: Machine Learning being one of the primary applications of Python, Microsoft Visual Studio lacks any kind of specialized tools for data exploration and visualization.7. Sublime TextCategory: Code EditorWebsite: https://www.sublimetext.com/Similar to Atom, Sublime Text is more of a Code Editor than IDE. However, due to its support for various packages, it packs in enough features to be considered for a full end-to-end development environment. Its support for languages is not limited to any one or two programming languages. It in-turn supports almost all languages that are used across the industry. It has syntax highlighting and auto code completion for almost all languages and hence is quite versatile and flexible. Sublime text has a free trial and post that it is paid. It is a cross-platform Editor, which supports a single license key across all the platforms.Features: Listed below are some of the features of this IDEKeyboard Shortcuts: One of the primary strengths of Sublime Text is its support for keyboard shortcuts for almost all operations. For developers who are familiar with the different shortcut combinations, it becomes quite easy for them to quickly perform certain tasks without having to tinker with the menu.Command Palette: This is a special functionality that can be accessed via keyboard shortcut: Ctrl+Shift+P, which pops up a textbox, where the developer can type to access functions like sorting, changing syntax and even changing the indent settings.  Package and API Ecosystem: Sublime text enjoys a plethora of various package and API support by the community which vastly enhances its functionality. Starting from remote access and development over servers, to packages specifically developed for certain languages; Sublime supports it all.  
Added Editing Functionalities: One of the key features of Sublime that many other editors have drawn inspiration from is its highly customizable code editing interface, which allows the developer to have multiple cursors at once and edit more than one location simultaneously.
Pros: Sublime Text is the fastest and lightest text editor among the competition and yet is functional enough to be used as an IDE. It provides a truly unique combination of versatility and functionality.
Cons: Being a text editor, even though it makes up for its lack of built-in functionality via plugins and add-ons, at the end of the day it is still a text editor and lacks a few key features that dedicated IDEs possess.
8. Eclipse + PyDev
Category: IDE
Website: http://www.pydev.org/
Eclipse is one of the best open-source IDE suites available for Java development. It supports numerous extensions. One such open-source extension is PyDev, which turns Eclipse into a powerful Python IDE.
Features: Listed below are some of the features of this IDE
Django Integration: For backend developers, this IDE makes development easier and faster by having Django integration baked right into it, along with a syntax highlighter and code auto-completer.
Code Debugging and Analysis: Eclipse has a good set of code debugging and analysis tools and supports features like refactoring, code hinting, debugging and code analysis. It also has support for PyLint, which is an open-source bug and code-quality checker for Python.
Package Support: Eclipse with PyDev brings a lot of additional features into the IDE. Support for Jython, Cython, IronPython, MyPy etc. is inherently present in the IDE.
Pros: The main advantage of Eclipse is that it is one of the most used IDEs in the Java development industry, and hence, any Java developer will feel right at home with it. Additionally, the added package support makes it competitive enough to go head to head with the other available native Python IDEs.
Cons: Even though there is good package support with additional functionality that makes it unique, the integration of PyDev with Eclipse feels half-baked. This is primarily noticeable when the IDE slows down while writing long programs with a lot of packages involved.
9. Wing
Category: IDE
Website: https://wingware.com/
Wing is a cross-platform Python IDE packed with the necessary features and decent development support. It is free for personal use but has a fee associated with the pro version, which is targeted towards commercial use. The pro version comes with a 30-day trial for developers to try it out. There is even a specialized version called Wing 101, which is targeted at beginners and is a toned-down version that makes it easier for beginners to get started.
Features: Listed below are some of the features of this IDE
Test-Driven Development: One of the key features of Wing is its test-driven development and debugging tools. It supports unittest, pytest, doctest, nose, and the Django testing framework.
Remote Development: Similar to PyCharm, Wing supports easy-to-set-up remote development which allows the developer to connect to remote servers, containers or VMs and develop remotely with ease.
Powerful Debugger: It has a debugging toolset which allows the developer to perform easy bug-fixes.
Some of the features it provides are conditional breakpoints, recursive debugging, watch value, multi-process, and multi-threaded workload debugging.Intelligent Editor: In addition to supporting the mundane syntax highlighting and auto code completion, Wing’s editor supports refactoring, documentation, invocation assistance, multi-selection, code folding, bookmarks, and customizable inline code snippets. Additionally, it can emulate Eclipse, Visual Studio, XCode, Emacs, and Vim.Pros: As apparent from the above-mentioned features, Wing provides quite a complete package in terms of development tools and flexibility. It can be coined as “ideal” for backend web development using Django.Cons: The commercial version can be quite expensive.10. RodeoCategory: IDEWebsite: https://rodeo.yhat.com/Rodeo is a cross-platform, open-source IDE developed by yhat. It is designed and developed targetting Data Science and thus, has a lot of tools required for Data Analysis and Visualization.Features: Listed below are some of the features of this IDEData Visualization: Similar to other IDEs targeted at Data Scientists, Rodeo also supports specialized data visualization tools and graph visualization.Variable Explorer: Again, similar to Spyder, Rodeo allows the user to explore the contents of the variables in the program along with their respective data types. This is an important tool in Data Science.Python Console: Rodeo has a Python console built into the IDE which allows the user to execute and debug code line by line along with block execution.Documentation Viewer: Rodeo has a built-in documentation viewer, which allows the developer to consult the official documentation of any library on-the-go.Pros: It is lightweight and fast, and hence is ideal for quick code prototyping for Data Science.Cons: The Development of this IDE has been halted for the past two years. It does not receive any new updates and the project is likely dead. Even though it is a good IDE, it may never receive new updates in the future.ConclusionHaving listed out the features, pros, and cons of some of the best IDEs available for Python, it is time to conclude which one is the best.To be honest, there is no clear answer to which IDE is the best since most of them are specifically designed for a given group of developers or scientists. Hence, we will choose the most preferred IDE for each type of use case.General Python/Web Development: This is more like an all-rounder IDE, which can perform any given task with relative ease. In this use case, PyCharm and Microsoft Visual Studio come neck-to-neck in terms of their features and ease of use. However, being natively developed for Python and with added functionality for the Scientific community, PyCharm clearly has an edge over Visual Studio. Hence PyCharm is the most preferred one here.Scientific Development and Prototyping: This use case is mainly targeted at Data Scientists and Machine Learning Engineers who primarily handle data. In this use case, the two most used IDEs are Jupyter Notebooks and Spyder. Spyder is more like an IDE with additional features specifically tailored towards Data Science. Jupyter is an IPython Notebook which cannot be used for development, but us superior in model building and prototyping. Hence, there is no clear winner here, since the usage of the IDE solely depends on the user’s requirements.Code Editors: The final category is simple code editors which perform similarly to full-fledged IDEs due to additional packages and add-ons. 
Sublime Text is a clear winner in this segment, primarily due to its simple and fast interface, great community support, and good Python development support.

What’s New in React 16.8

What is React?React is a library by Facebook that allows you to create super performant user interfaces. React allows you to disintegrate a user interface into components, a functional unit of an interface. By composing components together, you can create UIs that scale well and deliver performance.What sets React apart is its feature set.1. The Virtual DOM - An in-memory representation of the DOM and a reconciliation algorithm that is at the heart of React’s performance.2. Declarative Programming & State - State is the data that describes what your component renders as its content. You simply update the state and React manages the rest of the process that leads to the view getting updated. This is known as declarative programming where you simply describe your views in terms of data that it has to show.3. Components - Everything that you build with React, is known as a component. By breaking down UIs into functional and atomic pieces, you can compose together interfaces that scale well. The image below demonstrates a login interface which has been composed together using three components.4. JSX - The render method inside a class component or the function component itself allows you to use JSX, which is like an XML language that incorporates JavaScript expressions. Internally, JSX is compiled into efficient render functions. 5. Synthetic Events - Browsers handle events differently. React wraps browser specific implementations into Synthetic Events, which are dispatched on user interaction. React takes care of the underlying browser specific implementation internally.6. Props - Components can either fetch data from an API and store in the local state, or they can ingest data using props, which are like inlets into a prop. Components re-render if the data in the props update.The road to React 16.8On 26th September, 2017, React 16.0 was announced with much fanfare. It was a major leap forward in the evolution of React and true to its promise, the 16.x branch has marched on, conquering new heights and setting benchmarks for the library, the developer experience,and performance.So, let’s look back at the 16.0 branch, right from its inception and analyze its evolution, all the way to React 16.8.React 16.0Released: 26th September, 2017React 16.0 marked a major leap forward in the evolution of the library and was a total rewrite. Some of the major features introduced in this release include:A new JavaScript environment: React 16.0 was written with modern JavaScript primitives such as Map and Set in mind. In addition, this version also introduced the use of requestAnimationFrame. As a result, React 16.0 and above are not supported by Internet Explorer < v11 and need a polyfill to work.Fiber: React 16.0 introduced a brand new reconciliation engine known as Fiber. This new engine is a generation leap over the previous generation of React’s core and is also responsible for the many new features that were introduced in this release. Fiber also introduces the concept of async rendering which results in more responsive apps because React prevents blocking the main thread. Fiber incorporates a smart scheduling algorithm that batches updates instead of synchronously re-rendering the component every time. Re-renders are only performed if and when optimally needed.Fragments: Till this release, the only way to render lists of components was by enclosing them in a div or some other enclosing node that would also get rendered in place. 
React 16.0 introduced the concept of fragments, allowing you to render an array of nodes directly without the need for an enclosing element.
Code Example:
import React, { Component } from 'react';
import { render } from 'react-dom';
const FruitsList = props => props.fruits.map((fruit, index) => <li key={index}>{fruit}</li>);
class App extends Component {
  constructor() {
    super();
    this.state = {
      fruits: ["Apple", "Mango", "Kiwi", "Strawberry", "Banana"]
    };
  }
  render() {
    return <FruitsList fruits={this.state.fruits} />;
  }
}
render(<App />, document.getElementById('root'));
The component in the example above simply renders an array of list items with keys and without an enclosing element at the root. This saves an extra and unwanted element from being rendered in the DOM.
Numbers & Strings: Components, in addition to rendering arrays using fragments, were also empowered with the ability to return plain strings, which are rendered as text nodes. This prevents the use of paragraph, span or headline tags, for instance, when rendering text. Likewise, numbers can be rendered directly.
Code Example:
import React from 'react';
import { render } from 'react-dom';
const App = () => 'This is a valid component!';
render(<App />, document.getElementById('root'));
Error Boundaries: Until this release, error management in React was quite painful. Errors arising inside components would often lead to unpredictability and issues with state management, and there was no graceful way of handling these issues. React 16.0 introduced a new lifecycle method called componentDidCatch() which can be used to intercept errors in child components and render a custom error UI. Components that intercept errors in their child components in this way are known as error boundaries. In addition to rendering custom error UIs, error boundary components can also be used to pass data to loggers or monitoring services.
Code Example:
import React, { Component } from 'react';
import { render } from 'react-dom';
class ErrorBoundary extends Component {
  state = {
    error: false
  }
  componentDidCatch(error, info) {
    this.setState({ error: true });
  }
  render() {
    if (this.state.error) {
      // You can render any custom fallback UI
      return <h1>There was an Error!</h1>;
    }
    return this.props.children;
  }
}
class DataBox extends Component {
  state = {
    data: []
  }
  componentDidMount() {
    // Deliberately throwing an error for demo purposes
    throw Error("I threw an error!");
  }
  render() {
    return 'This App is working!';
  }
}
const App = () => <ErrorBoundary><DataBox /></ErrorBoundary>;
render(<App />, document.getElementById('root'));
Portals: Using portals, components get the ability to render content outside the parent component's DOM node and into another DOM node on the page. This is an incredible feature as it allows components mounted inside a given node to render content elsewhere on the UI, without explicitly bringing it inside the hierarchy of the parent node.
This is made possible using the createPortal(component, DOMNode) function from the react-dom package.
Code Example:
import React, { Component } from 'react';
import { render, createPortal } from 'react-dom';
import "./style.css";
const Notice = () => createPortal('This renders outside the parent DOM node', document.getElementById("portal"));
class App extends Component {
  render() {
    return ['Renders in the root div', <Notice key="notice" />];
  }
}
render(<App />, document.getElementById('root'));
Improved Server-Side Rendering: Single-page apps such as the ones React delivers are great for performance because they execute in the client's browser, but are terrible in terms of search engine optimization (SEO). Additionally, the client has to wait for the application package to load before the app renders. Server-side rendering solves these problems by rendering the page on the server, so the user sees the content right away before the client version of the app takes over for that incredible experience. React 16.0 introduced a new and rewritten server renderer that supports streaming, which allows data to be streamed to the client's browser as it is processed on the server. This naturally boosts performance and is approximately 4x faster than React 15.x's SSR system.
Reduced file size: React 16.0 is smaller than its predecessor, with an approximately 32% smaller codebase. This results in more optimised app bundles and consequently a faster load time on the client.
Support for custom DOM attributes: Any HTML or SVG attributes that React does not recognize are simply passed on to the DOM to render. This prevents unwanted errors, but more importantly, React 16.0 does away with an internal whitelist mechanism that used to prevent unwanted attributes from getting processed appropriately. The removal of the whitelist mechanism also contributed to the smaller codebase discussed earlier.
Subsequent releases on the 16.x branch added the following features:
16.1 (9th November, 2017): Support for portals in React.Children; introduced the react-reconciler package
16.2 (28th November, 2017): Fragment as a named export
16.3 (29th March, 2018): The brand new Context API, React.createRef(), React.forwardRef(), static getDerivedStateFromProps, getSnapshotBeforeUpdate, Strict Mode
16.4 (23rd May, 2018): Profiler (experimental)
16.5 (5th September, 2018): Mouse events
16.6 (23rd October, 2018): React.memo(), React.lazy() and code splitting using the Suspense API, Context for class components, getDerivedStateFromError()
16.7 (19th December, 2018): A small release with a few bug fixes and performance enhancements in the react-dom package
React 16.8
Released: 6th February, 2019
React 16.8 marks a major step forward in React and the way developers can implement function components and use React features.
Hooks: So far, the only way to implement local state was to build class components. If a function component needed to store local state in the future, the only way was to refactor the code and convert the function component into a class component. Hooks enables function components to not only implement state, but also add other React features such as lifecycle methods, optionally, without the need to convert the component to a class component.
Hooks offers a more direct way to interact with React features. If you're starting a new React project, Hooks offers an alternative and somewhat easier way to implement React features and might be considered a good replacement for class components in many cases.
Hooks is demonstrated later in this article.Companies using ReactReact was built by Facebook to solve real and practical challenges that the teams were facing with Facebook. As a result, it was already battle-tested before release. This and the continuous and progressive development of React has made it the library of choice for companies worldwide. Facebook itself maintains a huge code base of components ( ~50K+ ) and is a big reason why new features are gradually added without sudden deprecations or breaking changes.All of these factors contribute to an industry grade library. It is no wonder that interest for React has grown tremendously over the past 3+ years. Here’s a Google Trends graph demonstrating React’s popularity when compared to Angular, over the past 3 years.Some of the big & popular names using React in production include:AirBnbAmerican ExpressAir New ZealandAlgoliaAmazon VideoAtlassianAuth0AutomatticBBCBitlyBoxChrysler.comCloudFlareCodecademyCourseraDailymotionDeezerDiscordDisqusDockerDropboxeBayExpediaFacebook (Obviously)Fiatusa.comFiverrFlipboardFlipkartFree Code CampFreechargeGrammarlyHashnodeHousing.comHubSpotIGNIMDBImgurInstagramIntuitJeep.comKhan AcademyMagic BusMonster IndiaNHLNaukri.comNBC TV NetworkNetflixNew York TimesNFLNordstromOpenGovPaper by FiftyThreePayPalPeriscopePostmanPractoRackspaceRalph LaurenRedditRecast.AIReuters TVSalesforceShure UKSkyscannerSpotify Web PlayerSquarespaceTreeboTuneIn RadioTwitter - FabricUberUdacityWalmarWhatsApp for WebWixWolfram AlphaWordPress.comZapierZendeskThis is of course a small list, compared to the thousands of sites and apps that are built using React and its ecosystem of products.New features of React 16.8React 16.8 added the incredible Hooks API, giving developers a more direct and simpler way to implement React features such as state and lifecycle methods in function components, without the need to build or convert function to class components. In addition to these abilities, the API is extensible and allows you to write your own hooks as well.Hooks is an opt-in feature and is backward compatible. And while it offers a replacement to class components, their inclusion does not mean that class components are going away. The React team has no plans to do away with class components.As the name implies, Hooks allows your function component to hook into React features such as state and lifecycle methods. This opens up your components to a number of possibilities. For instance, you can upgrade your static function component to include local state in about 2 lines of code, without the need to structurally refactor the component in any form.Additionally, developers can write their own hooks to extend the pattern. Custom hooks can use the built-in hooks to create customised behaviour which can be reused.The Hooks API consists of two fundamental and primary hooks which are explained below. In addition to these fundamental hooks, there are a number of auxiliary hooks which can be used for advanced behaviour. Let’s examine these, one by one.useState : The useState hook is the most fundamental hook that simply aims to bring state management to an otherwise stateless function component. 
Many components that are initially written without local state in mind benefit from this easy adoption of state, with no refactoring needed.
The code below is a simple counter which started off as the following basic component:
Before Using Hooks
const App = ({count}) => <div>{count}</div>;
To turn this into a stateful counter, we need two more ingredients:
A local state variable called "count"
Buttons to invoke functions that increment and decrement the value of the "count" variable.
Assuming we have the buttons in place, to bring the state variable to life we can use the useState() hook. So, our simple component changes as follows:
After using the useState() Hook
const App = () => {
  const [count, setCount] = useState(0);
  return (
    <div>
      <p>{count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
      <button onClick={() => setCount(count - 1)}>Decrement</button>
    </div>
  );
}
The statement const [count, setCount] = useState(0) creates a local state variable named "count" with an initial value of 0, as initialized by the useState() method. To modify the value of the "count" variable, we've declared a function called "setCount" which we can use to increment or decrement its value.
This is really simple to understand and works brilliantly. We have two buttons named Increment and Decrement, and they both invoke the "setCount()" method, which gets direct access to the "count" variable to update it directly.
Code Example (on StackBlitz):
import React, { useState } from 'react';
import { render } from 'react-dom';
import "./style.css";
const App = () => {
  const [count, setCount] = useState(0);
  return (
    <div>
      <p>{count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
      <button onClick={() => setCount(count - 1)}>Decrement</button>
    </div>
  );
}
render(<App />, document.getElementById('root'));
useEffect: The useEffect hook enables a function component to implement effects such as fetching data from an API, which are usually achieved using the componentDidMount() and componentDidUpdate() methods in class components. Once again, it is important to reiterate that hooks are opt-in, which is what makes them flexible and useful.
Here's the syntax for the useEffect() hook:
const App = () => {
  const [joke, setJoke] = useState("Please wait...");
  useEffect(() => {
    axios("https://icanhazdadjoke.com", {
      headers: {
        "Accept": "application/json",
        "User-Agent": "Zeolearn"
      }
    }).then(res => setJoke(res.data.joke));
  }, []);
  return <p>{joke}</p>;
}
Code Example:
import React, { useState, useEffect } from 'react';
import { render } from 'react-dom';
import "./style.css";
import axios from "axios";
const App = () => {
  const [joke, setJoke] = useState("Please wait...");
  useEffect(() => {
    axios("https://icanhazdadjoke.com", {
      headers: {
        "Accept": "application/json",
        "User-Agent": "Zeolearn"
      }
    }).then(res => setJoke(res.data.joke));
  }, []);
  return <p>Dad says, "{joke}"</p>;
}
render(<App />, document.getElementById('root'));
The code above fetches a random dad joke from the "icanhazdadjoke.com" API. When the data is fetched, we use the setJoke() method provided by the useState() hook to update the joke into a local state variable named "joke". You'll notice the initial value of "joke" is set to "Please wait…". This will render right away while useEffect() runs and fetches the joke. Once the joke is fetched and the state updated, the component re-renders and you can see the joke on the screen.
But behind all this, there is an important caveat. Note the second argument to the useEffect() function, an empty array. If you remove this array, you'll get a weird problem where the component keeps re-rendering again and again and you see a new joke update repeatedly.
This happens because, unlike componentDidMount(), useEffect() runs both on mount and on update, so whenever the component re-renders, the hook runs again, updates the state, re-renders, and the process repeats.
To stop this behaviour, a second argument, an array, may be passed as shown above. This array should ideally contain a list of the state variables which you need to monitor; these could also be props. Whenever the component re-renders, the state or prop mentioned in the array is compared with its previous value and, if found to be the same, the hook is not re-run. The same applies when there is nothing to compare, as in the case of a blank array, which is what we've used here. Passing a blank array is, however, not the best of practices and may lead to bugs, since React defers execution of the hook until after the DOM has been updated/repainted.
The useEffect() function can also return a cleanup function, which is somewhat equivalent to componentWillUnmount() and can be used for unsubscribing from publisher-subscriber type APIs such as WebSockets (a small sketch of a reusable custom hook that uses this cleanup pattern appears at the end of this article).
Besides the above two hooks, there are other hooks that the API offers:
useReducer: If you've ever used Redux, then the useReducer() hook may feel a bit familiar. Usually, the useState() hook is sufficient for updating the state. But when more elaborate behaviour is sought, useReducer can be used to declare a reducer function that returns the state after updates. The reducer function receives state and action. Actions can be used to trigger custom behaviour that updates state in the reducer. Thereafter, buttons or other UI elements may be used to "dispatch" actions which will trigger the reducer.
This hook can be used as follows:
const [state, dispatch] = useReducer(reducer, {count: 0});
Here, reducer is a function that accepts state and action. The second argument to the useReducer function is the initial state.
Code Example:
import React, { useReducer } from 'react';
import { render } from 'react-dom';
import "./style.css";
const reducer = (state, action) => {
  switch (action.type) {
    case 'ticktock':
      return { count: state.count + 1 };
    case 'reset':
      return { count: 0 };
    default:
      return state;
  }
}
let timer;
const App = () => {
  const [state, dispatch] = useReducer(reducer, { count: 0 });
  return (
    <div>
      <p>{state.count}</p>
      <button onClick={() => { timer = setInterval(() => dispatch({ type: 'ticktock' }), 1000); }}>Start</button>
      <button onClick={() => { clearInterval(timer); dispatch({ type: 'reset' }); }}>Stop & Reset</button>
    </div>
  );
}
render(<App />, document.getElementById('root'));
In the code example above, we have a reducer function that handles two actions, 'ticktock' and 'reset'. The 'ticktock' action simply increments the count by 1, while 'reset' sets it back to 0. The Start button instantiates a setInterval timer that keeps dispatching the 'ticktock' action, which increments the count every second. The Stop & Reset button clears the timer and dispatches the 'reset' action, which resets the count back to 0. useReducer is best used when you have complex state logic and useState is not enough.
Here's a summary of the other hooks available in the v16.8 release:
useCallback: The useCallback hook enables you to create a memoized callback function, which enables an equality check between the function and its inputs to decide whether renders should be performed. This is equivalent in concept to the shouldComponentUpdate optimization that PureComponent allows you to implement.
useMemo: This hook enables you to pass in a function and an array of input values. The function will only be recomputed if the input values change.
This, like the useCallback, enables you to implement equality check based optimizations and prevent unwanted renders.useRef : This hook is useful for accessing refs and initializing them to a given value.useImperativeHandle : This hook enables you to control the object that is exposed to a parent component when using a ref. By using this hook, you can devise custom behaviour that would be available to the parent using the .current property.useLayoutEffect : This hook is similar to the useEffect hook but it is invoked synchronously after the DOM has been mutated and updated. This allows you to read elements from the DOM directly. As a result, this can block updates and hence should ideally be avoided.useDebugValue : This hook is used to display a custom label for hooks in the React DevTools.To summarize, v16.8’s revolutionary, Hooks API opens the door to a whole new way of developing React components.Upgrading to React v16.8.xUpgrading to v16.8 is a relatively simple affair, mainly because it doesn’t introduce breaking changes unless you’re on a very old branch. Team React ensures that incremental upgrades do not introduce sudden API changes or breaking changes that would cause an app to crash or behave erratically.Likewise, if you’re anywhere on the 16.0 branch already, you can conveniently upgrade to 16.8.x by either downloading and installing both react and react-dom packages using npm or yarn, or using the CDN links to unpkg.com, as listed here https://reactjs.org/docs/add-react-to-a-website.html If you’ve used create-react-app to setup your React project, then you can edit the package.json to upgrade versions of react-scripts, react and react-dom to their latest versions before running npm install to download and upgrade the packages.Into the futureA product’s future depends on how well it is embraced by consumers. Many products are built first and attempts are made to entice developers to create a demand. React isn’t one of those frameworks. It was born from the kiln of production at Facebook and it powers more than 50 thousand components and more growing every day. And besides Facebook, React now empowers thousands of companies to write and design scalable UIs that are highly performant with a fantastic developer experience.It is thus, quite natural that the team at Facebook is hard at work, developing the next cutting edge edition of React. Over the years, a vibrant and global community of React developers have sprung up and they’re actively contributing to the ecosystem, in much the same fervour as seen during the days of jQuery.With that said, React is poised to take a leap forward. React 16.x has paved the way for the future by introducing the “Fiber” reconciliation engine and a slew of super cool features such as Context, Hooks and the Suspense API.Going forward, React’s next big feature will land in Q2 of 2019. Concurrent Rendering would allow React to prioritize updates such that CPU usage is optimised for high-priority tasks first, thereby massively improving the user experience.Another feature that is expected to land later in 2019 is Suspense for Data Fetching. This API will allow components to display a fallback UI if asynchronous data fetching is taking more time than a prescribed limit. This ensures that the UI is responsive and displays indicators for the user to understand data fetch latencies in a better way.To summarize, React 16.x is going to get a whole lot better before the leap to v17.In ConclusionThe team at Facebook have done a commendable job with React. 
Their commitment to improving the user experience as well as the developer experience is seen by the incredible array of features that are released from time to time. Whether it is the Context API, or Hooks or the upcoming concurrent rendering, React is the battle-tested weapon of choice for building your next winning site or even a mobile app!
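As a closing illustration of the Hooks pattern discussed above, here is a minimal sketch of a custom hook. The names useWindowWidth and ViewportLabel are hypothetical examples, not part of the React API; the sketch simply combines useState and useEffect, including the cleanup function that useEffect can return.
import React, { useState, useEffect } from 'react';
// A hypothetical custom hook that tracks the browser window width.
function useWindowWidth() {
  const [width, setWidth] = useState(window.innerWidth);
  useEffect(() => {
    const handleResize = () => setWidth(window.innerWidth);
    window.addEventListener('resize', handleResize);
    // The returned cleanup function runs on unmount and removes the listener,
    // playing the role that componentWillUnmount() plays in class components.
    return () => window.removeEventListener('resize', handleResize);
  }, []); // empty array: subscribe once on mount
  return width;
}
// Any function component can now reuse the hook.
const ViewportLabel = () => {
  const width = useWindowWidth();
  return <p>The viewport is {width}px wide</p>;
};
Because custom hooks are just functions that call the built-in hooks, behaviour like this can be shared across components without render props or higher-order components.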

Docker vs. Kubernetes

Containers are a virtualization technology; however, they do not virtualize a physical server. Instead, a container is operating-system-level virtualization. What this means is that containers share the operating system kernel provided by the host among themselves along with the host.Container ArchitectureThe figure shows all the technical layers that enable containers. The bottommost layer provides the core infrastructure in terms of network, storage, load balancers, and network cards. At the top of the infrastructure is the compute layer, consisting of either a physical server or both physical as well as virtual servers on top of a physical server. This layer contains the operating system with the ability to host containers. The operating system provides the execution driver that the layers above use to call kernel code and objects to execute containers.What is DockerDocker is used to manage, build and secure business-critical applications without the fear of infrastructure or technology lock-in.Docker provides management features to Windows containers. It comprises of two executables:Docker daemonDocker clientThe Docker daemon is the workhorse for managing containersImportant features of docker :Application isolationSwarm (Clustering and Scheduling tool)ServicesSecurity ManagementEasy and Faster configurationIncrease ProductivityRouting MeshWhy use docker for Development?Easy Deployment.Use any editor/IDE of your choice.You can use a different version of the same programming language.Don’t need to install a bunch of language environment on your system.The development environment is the same as in production.It provides a consistent development environment for the entire team.What is Kubernetes:Kubernetes is a container orchestration system for running and coordinating containerized application across clusters of Machine. It also automates application deployment, scaling and management.Important features of Kubernetes :Managing multiple containers as one entityContainer replicationContainer Auto-ScalingSecurity ManagementVolume managementResource usage monitoringHealth checksService DiscoveryNetworkingLoad BalancingRolling Updates Kubernetes ArchitectureWhat is Docker swarm:Docker swarm is a tool for clustering and scheduling docker containers. It is used to manage a cluster of docker nodes as a single virtual system. It uses the standard docker application programming interface to interface with other tools. Swarm uses the same command line from Docker.This can be understood with the following diagram :It uses three different strategies to determine which nodes each container should run:SpreadBinPackRandomImportant features of Kubernetes :Tightly integrated into docker ecosystemUses its own APIFilteringLoad BalancingService DiscoveryMulti-host NetworkingScheduling systemHow are Kubernetes and Docker Swarm related :Both provide load balancing features.Both facilitate quicker container deployment and scaling.Both have a developer community for help and support.Docker Swarm vs Kubernetes :One needs to understand that Docker and Kubernetes are not competitors. 
The two systems provide closely related but separate functions. The comparison below summarizes how they differ:
Container Limit: Docker Swarm is limited to 95,000 containers; Kubernetes is limited to 300,000 containers.
Node Support: Docker Swarm supports 2,000+ nodes; Kubernetes supports up to 5,000 nodes.
Scalability: Docker Swarm offers quick container deployment and scaling even in large clusters; Kubernetes provides strong guarantees about the cluster state at the expense of speed.
Developed By: Docker Swarm is developed by Docker Inc.; Kubernetes is developed by Google.
Recommended Use Case: Docker Swarm suits small clusters, simple architectures, no multi-user requirements and small teams; Kubernetes is production-ready, very feature rich and recommended for any type of containerized environment, big or small.
Installation: Docker Swarm has a simple installation, but the resulting cluster is comparatively not as strong; Kubernetes has a complex installation, but a strong resulting cluster once set up.
Load Balancing: Docker Swarm can automatically load balance traffic between containers in the same cluster; in Kubernetes, manual load balancing is often needed to balance traffic between containers in different pods.
GUI: Docker Swarm has no dashboard, which makes management complex; Kubernetes has an inbuilt dashboard.
Rollbacks: In Docker Swarm, the automatic rollback facility is available only in Docker 17.04 and higher; Kubernetes offers automatic rollback with the ability to deploy rolling updates.
Networking: In Docker Swarm, daemons are connected by overlay networks and the overlay network driver is used; Kubernetes uses an overlay network which lets pods communicate across multiple nodes.
Availability: In Docker Swarm, containers are restarted on a new host if a host failure is encountered; Kubernetes offers high availability, and health checks are performed directly on the pods.
Let's understand the differences category-wise for the following points:
Installation/Setup:
Docker Swarm: It only requires two commands to set up a cluster, one at the manager level and another at the worker end. Following are the commands to set up; open a terminal and ssh into the machine:
$ docker-machine ssh manager1
$ docker swarm init --advertise-addr <MANAGER-IP>
Kubernetes: In Kubernetes, there are five steps to set up or host the cluster.
Step 1: First run the commands to bring up the cluster.
Step 2: Then define your environment.
Step 3: Define the Pod network.
Step 4: Then bring up the dashboard.
Step 5: Now, the cluster can be hosted.
GUI:
Docker Swarm: Docker Swarm is a command line tool. No GUI dashboard is available. One needs to be comfortable with the console/CLI to fully operate Docker Swarm.
Kubernetes: Kubernetes has a web-based user interface. It can be used to deploy containerized applications to a Kubernetes cluster.
Networking:
Docker Swarm: The user can encrypt container data traffic while creating an overlay network. A lot of cool things happen under the hood in Docker Swarm container networking, which makes it easy to deploy production applications on multi-host networks. A node joining a swarm cluster generates an overlay network for services that spans every host in the swarm, and a host-only Docker bridge network for containers.
Kubernetes: In Kubernetes, we create network policies which specify how the pods interact with each other. The networking model is a flat network, allowing all pods to interact with one another. The network is typically implemented as an overlay.
The model needs two CIDRs: one for the services and the other from which pods acquire an IP address.ScalabilityDocker Swarm: After the release of Docker 1.12, we now have orchestration built in which can scale to as many instances as your hosts can allow.Following are the steps to follow:Step 1: Initialize SwarmStep 2: Creating a ServiceStep 3: Testing Fault ToleranceStep 4: Adding an additional manager to Enable Fault toleranceStep 5: Scaling Service with fault toleranceStep 6: Move services from a specific nodeStep 7: Enabling and scaling to a new nodeKubernetes: In Kubernetes we have masters and workers. Kubernetes master nodes act as a control plane for the cluster. The deployment has been designed so that these nodes can be scaled independently of worker nodes to allow for more operational flexibility.Auto-ScalingDocker Swarm: There is no easy way to do this with docker swarm for now. It doesn’t support auto-scaling out of the box. A promising cross-platform autoscaler tool called “Orbiter” can be used that supports swarm mode task auto scaling.Kubernetes: Kubernetes make use of Horizontal Pod Autoscaling , It automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization.This can be understood with the following diagram :AvailabilityDocker Swarm: we can scale up the number of services running in our cluster. And even after scaling up, we will be able to retain high availability.Use the following command to scale up the service$ docker service scale Angular-App-Container=5in a Docker Swarm setup if you do not want your manager to participate in the proceedings and keep it occupied for only managing the processes, then we can drain the manager from hosting any application.$ docker node update --availability drain Manager-1Kubernetes: There are two different approaches to set up a highly available cluster using kubeadm along with the required infrastructures such as three machines for masters, three machines for workers, full network connectivity between all machines, sudo privileges on all machines, Kubeadm and Kubelet installed on machines and SSH Access.With stacked control plane nodes. This approach requires less infrastructure. The etcd members and control plane nodes are co-located.With an external etcd cluster. This approach requires more infrastructure. 
The control plane nodes and etcd members are separated.
Rolling Updates and Rollbacks
Docker Swarm: Suppose you have a set of services up and running in a swarm cluster and you want to upgrade the version of your services. If you do it manually, the common approach is to put your website into maintenance mode. To do this in an automated way by means of the orchestration tool, we can make use of the following features available in Swarm:
Release a new version using the docker service update command.
Control update parallelism using the --update-parallelism and --rollback-parallelism flags.
Kubernetes: To roll out or roll back a deployment on a Kubernetes cluster, use the following steps:
Roll out a new version
kubectl patch deployment $DEPLOYMENT \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"site","image":"$HOST/$USER/$IMAGE:$VERSION"}]}}}}'
Check the rollout status
kubectl rollout status deployment/$DEPLOYMENT
Read the deployment history
kubectl rollout history deployment/$DEPLOYMENT
kubectl rollout history deployment/$DEPLOYMENT --revision 42
Roll back to the previously deployed version
kubectl rollout undo deployment/$DEPLOYMENT
Roll back to a specific previously deployed version
kubectl rollout undo deployment/$DEPLOYMENT --to-revision 21
Load Balancing
Docker Swarm: The swarm's internal networking mesh allows every node in the cluster to accept connections on any service port published in the swarm, by routing all incoming requests to available nodes hosting a service with the published port. With ingress routing, the load balancer can be set to use the swarm's private IP addresses without concern for which node is hosting which service. For consistency, the load balancer will be deployed on its own single-node swarm.
Kubernetes: The most basic type of load balancing in Kubernetes is load distribution, which is easy to implement at the dispatch level. The most popular, and in many ways the most flexible, way of load balancing is Ingress, which operates with the help of a specialized controller pod. It includes an Ingress resource, which contains a set of rules for governing traffic, and a daemon which applies those rules. The controller has an in-built feature for load balancing with some sophisticated capabilities. The configurable rules contained in an Ingress resource allow very detailed and highly granular load balancing, which can be customized to suit both the functional requirements of the application and the conditions under which it operates.
Data Volumes:
Docker Swarm: Volumes are directories that are stored outside of the container's filesystem and which hold reusable and shareable data that can persist even when containers are terminated. This data can be reused by the same service on redeployment or shared with other services. Swarm is not as mature as Kubernetes. It only has one type of volume natively, which is a volume shared between the container and its Docker host, but that won't do the job in a distributed application; it is only helpful locally.
Kubernetes: At its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a pod.
How that directory comes to be, the medium that backs it, and the contents of it are determined by the volume type used.There are many volume types :LocalNode-Hosted Volumes (emptyDir, hostpath and duh)Cloud hostedgcePersistentDisk (Google Cloud)awsElasticBlockStore (Amazon Cloud – AWS)AzureDiskVolume ( Microsoft Cloud -Azure)Logging and MonitoringDocker Swarm: Swarm has two primary log destinations daemon log (events generated by docker service) and container logs(generated by containers). It appends its own data to existing logs.Following commands can be used to show logs per container as well as per service basis.Per Container :docker logs Per Service:docker service logs Kubernetes: In Kubernetes, as requests get routed between services running on different nodes, it is often imperative to analyze distributed logs together while debugging issues.Typically, three components make up a logging system in Kubernetes :Log Aggregator: It collects logs from pods running on different nodes and routes them to a central location. It should be efficient, dynamic and extensible.Log Collector/Storage/Search: It stores the logs from log aggregators and provides an interface to search logs as well. It also provides storage management and archival of logs.Alerting and UI: The key feature of log analysis of distributed applications is virtualization. A good UI with query capabilities, Custom Dashboard makes it easier to navigate through application logs, correlate and debug issues.ContainersPackaging software into standardized units for shipment, development and deployment is called a container. It included everything to run an application be it code, runtime, system tools, settings and system libraries. Containers are available for both Linux and windows application.Following are the architecture diagram of a containerized application :Benefits of Containers :Great EfficiencyBetter Application DevelopmentConsistent OperationMinimum overheadIncreased PortabilityContainer Use Cases :Support for microservices architectureEasier deployment of repetitive jobs and tasksDevOps support for CI(Continuous Integration) and CD(Continuous Deployment)Existing application refactoring for containers.Existing application lift and shift on cloud.Containers vs Virtual Machines :One shouldn’t get confused container technology with virtual machine technology. Virtual Machine runs on a hypervisor environment whereas container shares the same host OS and is lighter in size compared to a virtual machine.Container takes seconds to start whereas the virtual machine might take minutes to start.Difference between Container and Virtual Machine Architecture :Build and Deploy Containers with Docker:Docker launched in 2013 and revolutionized the application development industry by democratizing software containers.  In June 2015 docker donated docker specification and runc to OCI (Open container Initiative)Manage containers with KubernetesKubernetes(K8s) is a popular open-source container management system. 
It offers some unique features such as Traffic Load Balancing, Scaling, Rolling updates, Scheduling and Self-healing(automatic restarts)Features of Kubernetes :Automatic binpackingSelf-HealingStorage OrchestrationSecret and Configuration ManagementService discovery and load balancingAutomated rollouts and rollbacksHorizontal ScalingBatch ExecutionCase Studies :IBM offers managed kubernetes container service and image registry to provide a fully secure end to end platform for its enterprise customers.NAIC leverages kubernetes which helps their developer to create rapid prototypes far faster than they used to.Ocado Technology leverages kubernetes which help them speeding the idea to the implementation process. They have experienced feature go the production from development in a week now. Kubernetes give their team the ability to have more fine-grained resource allocation.First, we must create a docker image and then push it to a container registry before referring it in a kubernetes pod.Using Docker with Kubernetes:There is a saying that Docker is like an airplane and Kubernetes is like an airport. You need both.Container platform is provided by a company called Docker.Following are the steps to package and deploy your application :Step 1: Build the container imagedocker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .Verify that the build process was successfuldocker imagesStep 2: Upload the container imagedocker push gcr.io/${PROJECT_ID}/hello-app:v1Step 3: Run your container locally(optional)docker run --rm -p 8080:8080 gcr.io/${PROJECT_ID}/hello-app:v1Step 4: Create a container clusterIn case of google GCPgcloud container clusters create hello-cluster --num-nodes=3 gcloud compute instances listStep 5: Deploy your applicationkubectl run hello-web --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port 8080 kubectl get podsStep 6: Expose your application on the internetkubectl expose deployment hello-web --type=LoadBalancer --port 80 --target-port 8080Step 7: Scale up your applicationkubectl scale deployment hello-web --replicas=3 kubectl get deployment hello-webStep 8: Deploy a new version of your app.docker build -t gcr.io/${PROJECT_ID}/hello-app:v2 .Push image to the registrydocker push gcr.io/${PROJECT_ID}/hello-app:v2Apply a rolling update to the existing deployment with an image updatekubectl set image deployment/hello-web hello-web=gcr.io/${PROJECT_ID}/hello-app:v2Finally cleaning up using:kubectl delete service hello-webKubernetes high level Component Architecture.ConclusionDocker swarm is easy to set up but has less feature set, also one needs to be comfortable with command lines for dashboard view. This is recommended to use when you need less number of nodes to manage.However Kubernetes has a lot of features with a GUI based Dashboard which gives a complete picture to manage your containers, scaling, finding any errors. This is recommended for a big, highly scalable system that needs 1,000s of pods to be functional. 
How to Install React on Mac

React is an open-source, front-end library for developing web applications. It is JavaScript-based and led by the Facebook team along with a community of individuals and corporations. In this document, we will cover the installation procedure of React on the macOS operating system.

Prerequisites
This guide assumes that you are using macOS. Before you begin, you should have a user account with installation privileges and unrestricted access to all the websites mentioned in this document.

Audience
This document can be used by anyone who wants to install the latest Node.js on macOS.

System requirements
macOS >= 10.10
4 GB RAM
10 GB free space

Installation procedure
To install the React tooling we need Node.js and npm. First, let's understand what these are and why we need them.

What is Node.js and why do you need it for React development?
Node.js is an open-source, cross-platform JavaScript run-time environment that executes JavaScript code outside of a browser. Node.js lets developers use JavaScript to develop a wide variety of applications, such as network applications, command-line tools, web APIs, and web applications. You need Node.js for dev tooling (such as a local web server with live reloading) and the dev experience; you do not need Node.js to run React in production.

What is npm and why do you need it for React development?
npm stands for Node Package Manager; it is a dependency management tool for JavaScript applications. This tool helps install the libraries and other tools that support React development.

Let's start with the Node.js installation; once that is complete, we will install the create-react-app command line tool and create a new React project.

1. Download Node.js
Visit the Node.js download page here. Click on the macOS Installer to download the latest version of the Node installable package.

2. Install Node.js
Click on the downloaded node-vxx.xx.xx.pkg (for example node-v10.15.0.pkg) from the previous step to start the installation, which brings up the screen below. Please click Continue.
By clicking Continue in the previous step you will be asked to accept the license; please click Continue.
Please accept the agreement by clicking Agree.
Click Continue.
Click Install, which will prompt you for your credentials.
Provide your username and password and click Install Software.
On successful installation you will see the screen below, which shows a summary of the installation.
To access the node and npm executables from the terminal, make sure /usr/local/bin is in your $PATH. You can verify that by running the echo $PATH command in the terminal.

3. Testing the installation
Open the terminal and run the command below to test node:
node -v
You should see an output like the one below (note: your version may vary depending on when you install, as the Node.js team releases aggressively, but make sure your node version is > v10.0.0).
Open the terminal and run the command below to test npm:
npm -v
You should see an output like the one below (note: your version may vary depending on when you install, but make sure your npm version is > 5).
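As an illustration of what the version checks print (the exact numbers below are only an example and will differ depending on when you install):

node -v
v10.15.0
npm -v
6.4.1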
Install create-react-app
Setting up a productive React development environment requires configuring tools like Babel and webpack, which are complex for a newcomer to the React world. Several tools help to alleviate this problem, of which create-react-app is the easiest and finest, with production-grade configurations pre-built. The goodness of this tool is that it does not lock you in and allows you to alter the configuration by ejecting the create-react-app setup. We will install create-react-app using npm. On the terminal, run the install command shown below:
npm install -g create-react-app
On successful installation you should see output like the above (note: your create-react-app version may be different by the time you run this install command).

Test create-react-app
To test create-react-app, run the command below:
create-react-app --version
Congratulations, you have successfully installed create-react-app.

Running the first Hello World application
Create a React application using the create-react-app hello-react command as shown below. This command creates a new folder named hello-react, creates all the files, sets up the necessary libraries within this folder, and makes the React project ready to be executed without any additional configuration.
Once the project is created, change into the project directory and run the application using the npm start command as shown below. The npm start command starts the webpack development server, which performs the build process, opens a browser window, and loads the application URL, which runs at http://localhost:3000 by default. You will see a window like the one below, which shows the React icon and some text.
As discussed, create-react-app comes with great tooling. One of its productive features is webpack hot reloading, which deploys changes live and saves the developer a lot of time otherwise spent redeploying and reloading work.
Let's open the project and make some changes to experience this great feature. We will go through the project structure to understand the importance of the files created by create-react-app.
Open the project created in the previous step in any JS editor; here you see the project open in the VS Code editor. On the left, in the explorer section, you see the file explorer, which shows several folders and files created by create-react-app. Let's walk through them.
node_modules: This folder is managed by the package manager (npm) and contains all the project's library dependencies. Files in this folder should not be modified, as the changes are not guaranteed to be safe; updating or installing any other library can overwrite them.
public: This folder contains the assets that should be served publicly without any restrictions and contains files like index.html, the favicon, and manifest.json.
src: This is the folder that contains all your development effort. As a React developer you will spend a lot of time in this folder creating components and other source code.
Now let's get into the files inside this folder. To start with, index.js is the entry file of this project, where the execution of your code starts; this file injects the App component, which is exported from App.js. All the output you see in the browser results from this file. Let's make a change to it; we will edit the existing code with the new changes as shown in the screenshot below.
Save the file and switch back to the browser to see the changes deployed and loaded; as discussed, this is one of the features of the create-react-app tooling setup. With the new changes your browser window should look like the one below.

Uninstall create-react-app
Any library or tool set up via npm install can be uninstalled using npm uninstall. Perform the step below to remove the create-react-app installed previously. On the command prompt run:
npm uninstall -g create-react-app
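Putting the whole flow together, here is the command sequence used in this guide in one place (hello-react is just the example project name from above):

npm install -g create-react-app    # install the tool globally
create-react-app hello-react       # scaffold a new project
cd hello-react
npm start                          # start the dev server at http://localhost:3000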
How to Install React on Ubuntu

React is an open-source, front-end library for developing web applications. It is JavaScript-based and led by the Facebook team along with a community of individuals and corporations. In this document, we will cover the installation procedure of React on the Ubuntu 16.04 operating system.

Prerequisites
This guide assumes that you are using Ubuntu 16.04. Before you begin, you should have a user account with installation privileges and unrestricted access to all the websites mentioned in this document.

Audience
This document can be used by anyone who wants to install the latest Node.js on Ubuntu 16.04.

System requirements
Ubuntu 16.04
4 GB RAM
10 GB free space

Installation procedure
To install the React tooling we need Node.js and npm. First, let's understand what these are and why we need them.

What is Node.js and why do you need it for React development?
Node.js is an open-source, cross-platform JavaScript run-time environment that executes JavaScript code outside of a browser. Node.js lets developers use JavaScript to develop a wide variety of applications, such as network applications, command-line tools, web APIs, and web applications. You need Node.js for dev tooling (such as a local web server with live reloading) and the dev experience; you do not need Node.js to run React in production.

What is npm and why do you need it for React development?
npm stands for Node Package Manager; it is a dependency management tool for JavaScript applications. This tool helps install the libraries and other tools that support React development.

Let's start with the Node.js installation; once that is complete, we will install the create-react-app command line tool and create a new React project.

1. Install Node.js - set up the PPA
The way to get a more recent version of Node.js installed on Ubuntu is to add a PPA (personal package archive) maintained by NodeSource. Open the terminal and run the commands below:
cd ~
curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
Run the downloaded script using the command below:
sudo bash nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically.

2. Install Node.js
Run sudo apt-get install nodejs -y to install it.

3. Testing the Node.js installation
On the terminal run the command below to test node:
node -v
You should see an output like the one below (note: your version may vary depending on when you install, as the Node.js team releases aggressively, but make sure your node version is > v10.0.0).
On the terminal run the command below to test npm:
npm -v
You should see an output like the one below (note: your version may vary depending on when you install, but make sure your npm version is > 5).
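For convenience, the full Node.js install sequence from this guide in one place (these are the same commands shown above):

cd ~
curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh   # download the NodeSource setup script
sudo bash nodesource_setup.sh                                           # register the PPA and refresh the package cache
sudo apt-get install nodejs -y                                          # install node and npm
node -v && npm -v                                                       # verify the installation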
Install create-react-app
Setting up a productive React development environment requires configuring tools like Babel and webpack, which are complex for a newcomer to the React world. Several tools help to alleviate this problem, of which create-react-app is the easiest and finest, with production-grade configurations pre-built. The goodness of this tool is that it does not lock you in and allows you to alter the configuration by ejecting the create-react-app setup. We will install create-react-app using npm. On the terminal, run the install command shown below:
npm install -g create-react-app
On successful installation you should see output like the above (note: your create-react-app version may be different by the time you run this install command).

Test create-react-app
To test create-react-app, run the command below:
create-react-app --version
Congratulations, you have successfully installed create-react-app.

Running the first Hello World application
1. Create a React application using the create-react-app command line with the command below:
create-react-app hello-react
This command creates a new folder named hello-react, creates all the files, sets up the necessary libraries within this folder, and makes the React project ready to be executed without any additional configuration.
2. Once the project is created, change into the project directory and run the application using the npm start command as shown below. The npm start command starts the webpack development server, which performs the build process, opens a browser window, and loads the application URL, which runs at http://localhost:3000 by default. You will see a window like the one below, which shows the React icon and some text.
As discussed, create-react-app comes with great tooling. One of its productive features is webpack hot reloading, which deploys changes live and saves the developer a lot of time otherwise spent redeploying and reloading work.
Let's open the project and make some changes to experience this great feature. We will go through the project structure to understand the importance of the files created by create-react-app.
Open the project created in the previous step in any JS editor; here you see the project open in the VS Code editor. On the left, in the explorer section, you see the file explorer, which shows several folders and files created by create-react-app. Let's walk through them.
node_modules: This folder is managed by the package manager (npm) and contains all the project's library dependencies. Files in this folder should not be modified, as the changes are not guaranteed to be safe; updating or installing any other library can overwrite them.
public: This folder contains the assets that should be served publicly without any restrictions and contains files like index.html, the favicon, and manifest.json.
src: This is the folder that contains all your development effort. As a React developer you will spend a lot of time in this folder creating components and other source code.
Now let's get into the files inside this folder. To start with, index.js is the entry file of this project, where the execution of your code starts; this file injects the App component, which is exported from App.js. All the output you see in the browser results from this file. Let's make a change to it; we will edit the existing code with the new changes as shown in the screenshot below.
Save the file and switch back to the browser to see the changes deployed and loaded; as discussed, this is one of the features of the create-react-app tooling setup. With the new changes your browser window should look like the one below.

Uninstall create-react-app
Any library or tool set up via npm install can be uninstalled using npm uninstall. Perform the step below to remove the create-react-app installed previously. On the command prompt run:
npm uninstall -g create-react-app

Uninstall Node.js
Any software set up via apt install can be uninstalled using apt remove. Perform the step below to remove the Node.js installed previously. On the terminal run the command below to uninstall node:
sudo apt remove nodejs
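If you want a completely clean removal, two optional follow-up commands can help. Note that these are not steps from the guide above, and the NodeSource list file path is an assumption that may differ on your system:

sudo apt autoremove                                # remove packages installed only as dependencies
sudo rm /etc/apt/sources.list.d/nodesource.list    # drop the PPA entry added by the setup script (path may vary)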
How to Install React on Windows

React is an open-source, front-end library for developing web applications. It is JavaScript-based and led by the Facebook team along with a community of individuals and corporations. In this document, we will cover the installation procedure of React on the Windows 10 operating system.

Prerequisites
This guide assumes that you are using Windows 10. Before you begin, you should have a user account with installation privileges and unrestricted access to all the websites mentioned in this document.

Audience
This document can be used by anyone who wants to install the latest Node.js on Windows 10.

System requirements
Windows 10 OS
4 GB RAM
10 GB free space

Installation procedure
To install the React tooling we need Node.js and npm. First, let's understand what these are and why we need them.

What is Node.js and why do you need it for React development?
Node.js is an open-source, cross-platform JavaScript run-time environment that executes JavaScript code outside of a browser. Node.js lets developers use JavaScript to develop a wide variety of applications, such as network applications, command-line tools, web APIs, and web applications. You need Node.js for dev tooling (such as a local web server with live reloading) and the dev experience; you do not need Node.js to run React in production.

What is npm and why do you need it for React development?
npm stands for Node Package Manager; it is a dependency management tool for JavaScript applications. This tool helps install the libraries and other tools that support React development.

Let's start with the Node.js installation; once that is complete, we will install the create-react-app command line tool and create a new React project.

1. Download Node.js
Visit the Node.js download page here. Click on the Windows Installer to download the latest version of the Node installer.

2. Install Node.js
Click on the downloaded node-vxx.xx.xx.msi (for example node-v10.15.0.msi) from the previous step to start the installation, which brings up the screen below. Please click Next.
By clicking Next in the previous step, you will be asked to accept the license; please accept it by clicking the checkbox and then click Next.
Click Next.
Click Next.
Click Install; this may need elevated permissions, so provide the rights requested. This step may take several minutes to finish the installation.
Click Finish.

3. Testing the installation
Open the command prompt and run the command below to test node:
node -v
You should see an output like the one below (note: your version may vary depending on when you install, as the Node.js team releases aggressively, but make sure your node version is > v10.0.0).
Open the command prompt and run the command below to test npm:
npm -v
You should see an output like the one below (note: your version may vary depending on when you install, but make sure your npm version is > 5).
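If node or npm is not recognized in the command prompt, the Node.js folder may not be on your PATH yet. A quick way to check where Windows resolves the executables (the install location shown is only the typical MSI default, not a value from this article):

where node
C:\Program Files\nodejs\node.exe
where npm
C:\Program Files\nodejs\npm.cmd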
Install create-react-app
Setting up a productive React development environment requires configuring tools like Babel and webpack, which are complex for a newcomer to the React world. Several tools help to alleviate this problem, of which create-react-app is the easiest and finest, with production-grade configurations pre-built. The goodness of this tool is that it does not lock you in and allows you to alter the configuration by ejecting the create-react-app setup. We will install create-react-app using npm. On the command prompt, run the install command shown below:
npm install -g create-react-app
On successful installation you should see output like the above (note: your create-react-app version may be different by the time you run this install command).

Test create-react-app
To test create-react-app, run the command below:
create-react-app --version
Congratulations, you have successfully installed create-react-app.

Running the first Hello World application
Create a React application using the create-react-app command line with the command below:
create-react-app hello-react
This command creates a new folder named hello-react, creates all the files, sets up the necessary libraries within this folder, and makes the React project ready to be executed without any additional configuration.
Once the project is created, change into the project directory and run the application using the npm start command as shown below. The npm start command starts the webpack development server, which performs the build process, opens a browser window, and loads the application URL, which runs at http://localhost:3000 by default. You will see a window like the one below, which shows the React icon and some text.
As discussed, create-react-app comes with great tooling. One of its productive features is webpack hot reloading, which deploys changes live and saves the developer a lot of time otherwise spent redeploying and reloading work.
Let's open the project and make some changes to experience this great feature. We will go through the project structure to understand the importance of the files created by create-react-app.
Open the project created in the previous step in any JS editor; here you see the project open in the VS Code editor. On the left, in the explorer section, you see the file explorer, which shows several folders and files created by create-react-app. Let's walk through them.
node_modules: This folder is managed by the package manager (npm) and contains all the project's library dependencies. Files in this folder should not be modified, as the changes are not guaranteed to be safe; updating or installing any other library can overwrite them.
public: This folder contains the assets that should be served publicly without any restrictions and contains files like index.html, the favicon, and manifest.json.
src: This is the folder that contains all your development effort. As a React developer you will spend a lot of time in this folder creating components and other source code.
Now let's get into the files inside this folder. To start with, index.js is the entry file of this project, where the execution of your code starts; this file injects the App component, which is exported from App.js. All the output you see in the browser results from this file. Let's make a change to it; we will edit the existing code with the new changes as shown in the screenshot below.
Save the file and switch back to the browser to see the changes deployed and loaded; as discussed, this is one of the features of the create-react-app tooling setup. With the new changes your browser window should look like the one below.

Uninstall create-react-app
Any library or tool set up via npm install can be uninstalled using npm uninstall. Perform the step below to remove the create-react-app installed previously. On the command prompt run:
npm uninstall -g create-react-app
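To confirm the global package is gone, you can list the globally installed npm packages; the output varies by machine, but create-react-app should no longer appear:

npm ls -g --depth=0    # list top-level globally installed packages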
Uninstall Node.js
To uninstall the previously installed Node.js, follow the steps below.
Press Windows + R to open Run, type appwiz.cpl, and press OK. This will open Programs and Features; look for Node.js there. Double-click Node.js, or right-click it and select Uninstall, which will prompt you as shown below; then choose Yes. The Node.js uninstallation process will start and ask you to authorize it via User Account Control. Choose Yes; this will take a while to complete.
How to Install Angular on Mac

Angular is an open-source, front-end web application development framework. It is TypeScript-based and led by the Angular team at Google along with a community of individuals and corporations. In this document, we will cover the installation procedure of Angular on the macOS operating system.

Prerequisites
This guide assumes that you are using macOS. Before you begin, you should have a user account with installation privileges and unrestricted access to all the websites mentioned in this document.

Audience
This document can be used by anyone who wants to install the latest Node.js on macOS.

System requirements
macOS >= 10.10
4 GB RAM
10 GB free space

Installation procedure
To install the Angular CLI we need Node.js and npm. First, let's understand what these are and why we need them.

What is Node.js and why do you need it for Angular development?
Node.js is an open-source, cross-platform JavaScript run-time environment that executes JavaScript code outside of a browser. Node.js lets developers use JavaScript to develop a wide variety of applications, such as network applications, command-line tools, web APIs, and web applications. You need Node.js for dev tooling (such as a local web server with live reloading) and the dev experience; you do not need Node.js to run Angular in production.

What is npm and why do you need it for Angular development?
npm stands for Node Package Manager; it is a dependency management tool for JavaScript applications. This tool helps install the libraries and other tools that support Angular development.

Let's start with the Node.js installation; once that is complete, we will install the Angular CLI and create a new Angular project.

1. Download Node.js
Visit the Node.js download page here. Click on the macOS Installer to download the latest version of the Node installable package.

2. Install Node.js
Click on the downloaded node-vxx.xx.xx.pkg (for example node-v10.15.0.pkg) from the previous step to start the installation, which brings up the screen below. Please click Continue.
By clicking Continue in the previous step you will be asked to accept the license; please click Continue.
Please accept the agreement by clicking Agree.
Click Continue.
Click Install, which will prompt you for your credentials.
Provide your username and password and click Install Software.
On successful installation you will see the screen below, which shows a summary of the installation.
To access the node and npm executables from the terminal, make sure /usr/local/bin is in your $PATH. You can verify that by running the echo $PATH command in the terminal.

3. Testing the installation
Open the terminal and run the command below to test node:
node -v
You should see an output like the one below (note: your version may vary depending on when you install, as the Node.js team releases aggressively, but make sure your node version is > v10.0.0).
Open the terminal and run the command below to test npm:
npm -v
You should see an output like the one below (note: your version may vary depending on when you install, but make sure your npm version is > 5).
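As a quick illustration of the $PATH check mentioned above (the value below is only a typical default, not output from this guide; the important part is that /usr/local/bin appears somewhere in it):

echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin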
Install the Angular CLI
Setting up a productive Angular development environment requires configuring tools like TypeScript, webpack, and other Angular dependencies, which are complex for a newcomer to the Angular world. Several tools help to alleviate this problem, of which the Angular CLI is the easiest and finest, with production-grade configurations pre-built. The Angular CLI comes with a wide range of commands that help manage Angular development, testing, and build processes. We will install the Angular CLI using npm. On the terminal, run the install command shown below:
npm install -g @angular/cli
On successful installation you should see output like the above (note: your @angular/cli version may be different by the time you run this install command).

Test @angular/cli
To test @angular/cli, run the ng version command.
Congratulations, you have successfully installed @angular/cli.

Running the first Hello World application
Create an Angular application using the ng new command as shown below. This command creates a new folder named hello-angular, creates all the files, sets up the necessary libraries within this folder, and makes the Angular project ready to be executed without any additional configuration.
Once the project is created, change into the project directory and run the application using the ng serve command as shown below. The ng serve -o command starts the webpack development server, which performs the build process, opens a browser window, and loads the application URL, which runs at http://localhost:4200 by default. On successful execution you should see the output below in the browser.
As discussed, the Angular CLI comes with great tooling. One of its productive features is webpack hot reloading, which deploys changes live and saves the developer a lot of time otherwise spent redeploying and reloading work.
Let's open the project and make some changes to experience this great feature. We will go through the project structure to understand the importance of the files and folders created.
Open the project created in the previous step in any JS editor; here you see the project open in the VS Code editor. On the left, in the explorer section, you see the file explorer, which shows several folders and files created by the ng new command. Let's walk through them.
e2e: This folder contains the end-to-end testing source and configuration code. If you wish to write end-to-end testing automation code, your efforts go into this folder.
node_modules: This folder is managed by the package manager (npm) and contains all the project's library dependencies. Files in this folder should not be modified, as the changes are not guaranteed to be safe; updating or installing any other library can overwrite them.
src: This is the folder that contains all your development effort. As an Angular developer you will spend a lot of time in this folder creating modules, components, services, directives, and so on.
The other files outside the src folder are configuration files for the Angular CLI, editor, TypeScript, linting, and npm.
Now let's get into the src folder. Within it there are several other folders:
app – contains all your source code; this is where all your development effort goes.
assets – contains static assets such as images.
environments – contains the per-environment configuration files which hold the settings to be used for the dev/test and prod environments.
The other files are configuration files and settings. Of all the files, main.ts is the starting file, and project code execution starts from it.
Let's make a code change to the AppComponent: open app.component.html and make the changes as shown in the code below. Save the file and switch back to the browser to see the changes deployed and loaded; as discussed, this is one of the features of the Angular CLI tooling setup. With the new changes your browser window should look like the one below.
Congrats!! You have set up an Angular project.

Uninstall the Angular CLI
On the command prompt run:
npm uninstall -g @angular/cli
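Putting it together, here is the command sequence from this guide in one place (hello-angular is just the example project name used above):

npm install -g @angular/cli    # install the Angular CLI globally
ng version                     # verify the installation
ng new hello-angular           # scaffold a new project
cd hello-angular
ng serve -o                    # build, serve at http://localhost:4200, and open the browser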