
Why Your Business is Falling Behind without Machine Learning

Many business owners shy away from machine learning (ML) for fear of cost or lack of usefulness. And while large enterprises used to have the edge on ML technologies, small businesses are stepping up their game by integrating affordable AI solutions.

Why Businesses Need Machine Learning

Let's start with the basics: machine learning is an AI technology that finds patterns in data and uses those patterns to predict the future. Big businesses like Google and Amazon use machine learning all the time, most often for search algorithms and product suggestions, but small businesses are integrating machine learning too, for many reasons:

- Machine learning is more affordable now
- Machine learning can be easy to understand
- Machine learning turns data into insight
- Machine learning saves you time on mundane tasks
- Machine learning technologies are being integrated into every industry

Of the businesses currently investing, 84% believe that adopting AI will lead to greater competitive advantages, and 75% believe that AI and ML will provide opportunities for new businesses and new ways to enter their market. Additionally, because machine learning has been in development for a couple of decades now, you don't have to start from scratch with your AI integration. Open source ML algorithms are widely available, as are pre-trained ML models. These affordable, well-proven ML options are a great asset to small businesses.

How Do We Use Machine Learning at ZipBooks?

Even though we're a startup, we quickly opted to integrate machine learning in order to keep up with our competitors. As cloud-based accounting software, we compete with products like QuickBooks 2019 and its alternatives. The repetitive nature of bookkeeping means that our data lends itself well to pattern observation. As ZipBookers enter transactions, our software automatically categorizes them based on historical trends.
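A toy sketch of this kind of pattern-based categorization, with made-up categories and keywords (ZipBooks' actual model is not public; a real system would learn these associations from each user's historical transactions):

```python
# Toy transaction categorizer: scores each category by how many of its
# (hypothetical) keywords appear in a transaction description.
CATEGORY_KEYWORDS = {
    "Travel": {"airline", "hotel", "taxi"},
    "Office Supplies": {"staples", "paper", "ink", "toner"},
    "Meals": {"restaurant", "cafe", "coffee", "lunch"},
}

def categorize(description: str) -> str:
    words = set(description.lower().split())
    # Pick the category whose keyword set overlaps the description most.
    best = max(CATEGORY_KEYWORDS, key=lambda c: len(CATEGORY_KEYWORDS[c] & words))
    return best if CATEGORY_KEYWORDS[best] & words else "Uncategorized"

print(categorize("Coffee at the corner cafe"))   # Meals
print(categorize("Ink and toner from Staples"))  # Office Supplies
print(categorize("Quarterly tax payment"))       # Uncategorized
```

In practice the keyword sets would be replaced by learned weights, which is why accuracy improves as more transactions are categorized.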
The more you categorize, the smarter our ML software becomes, meaning that your bookkeeping grows more efficient and automated. We also use data-driven intelligence to help users improve their invoicing processes and business health: we provide smart scores based on accounting best practices, which let ZipBookers see what's working and what's not, then use our observations to improve their processes.

How Can I Integrate Machine Learning?

This series by Growth Tribe tries to take the buzz out of AI and focus on practical applications instead. Consider your business: machine learning is all about making predictions. What kind of predictions should your small business be making? What mission-critical function would you like to simplify? In what ways could AI and machine learning be affordable and helpful to you?

If you can't think of realistic answers to these questions, it may be worth waiting before implementing machine learning. There are substantial upfront costs, and if you don't have the kind of data that would benefit from pattern observation, that's okay. You will likely find these areas of need later, and when you do, you shouldn't hesitate to implement AI solutions.

If you have identified some areas of need, here are good places to start. These seven commonplace ML applications can keep your business from falling behind and show you how to apply machine learning to business problems.

1. SEO

Google's search engine algorithms are largely informed by machine learning. With billions of searches performed every day, you need to keep up with SEO in order to stay relevant. Analytics tools like SEMrush and Ahrefs use AI technology to improve their algorithms and offer fresh feedback to their customers. This helps you target keywords that are within your reach and compete for your spot at the top of the page.

2. Ad Campaigns

Machine learning can help you automate your marketing services.
By identifying patterns in user behavior, ML lets you determine which advertisements are most likely to be relevant to specific users. It can also help with real-time bidding and successful retargeting. Companies like Facebook create self-learning algorithms that measure the success of previous campaigns to guide the direction of subsequent ones, and tools like the RocketFuel project use AI solutions to optimize video ad length for potential customers.

3. Online Payments

Remember that machine learning is about monitoring patterns in order to adapt to change, so there needs to be some room for error. Certain tasks do not allow for any error (like entering or submitting payments), so ML may not be the right tool for those jobs. However, some aspects of online payments can be simplified with AI. Automatic order tracking and invoicing is a great place to start: we've used machine learning to track invoicing data and make recommendations to clients about how to improve their invoice score. Many companies also use machine learning to track credit card purchases and detect fraud. Sift Science, for example, specializes in applying machine learning to protect your business from user and payment fraud.

4. Website Design

While some elements of design require the human touch, other routine tasks can be automated, such as basic HTML and CSS coding. Companies like the uKit Group train machines to evaluate and update the look of old websites using big data and web archives. In today's world, it doesn't take long for your website to become out-of-date or irrelevant. AI solutions let you identify problem areas and quickly implement fixes.

5. Personalization

In the age of machine learning, your clients expect extreme personalization, whether they're aware of it or not. They want to see their name in your welcome email and get personalized recommendations for products and services. The North Face uses AI technology (IBM's Watson) to personalize its customers' shopping experience by asking a series of questions and then refining product selections based on the answers. Netflix does this automatically by monitoring your viewing history and making new recommendations based on genre or artwork.

6. Email Categorization

Another way to take the mundane out of your workday is email categorization. E-commerce sites with hundreds of customer support emails use AI to instantly label incoming messages and identify what type of ticket they've received (return, complaint, review, etc.). Knowmail is one solution that "fixes" email for you, though Google has been doing it for free for years; Gmail's AI has even gone so far as to answer your email with suggested responses.

7. Monitor Your Market

As a small business, you regularly monitor market conditions and competitive trends, and you can use AI-powered products to simplify the process. Regression models can make your pricing and sales forecasting more dynamic. For example, Crayon is an intelligence tool that alerts you to changes in industry pricing and helps you stay ahead of the curve, and sales platforms like DOMO use AI to detect changes in data and predict ROI.

Final Thoughts

The truth is, if you've already started implementing SaaS products, cloud-based software, and other digital solutions, you are ready for machine learning. Machine learning is already affecting your small business, and there are many non-intimidating ways to start transitioning toward full integration. If you want to avoid being outsmarted by digital businesses, seek out machine learning solutions.
Jaren Nichols 15 Nov 2018

How to Harness AI & Cloud Computing Technology to Make Everything Better

It should come as no surprise that many small to midsize business owners take pride in overseeing every aspect of their business. Sometimes, however, this can hamper productivity and growth, especially when servers, data, and software applications cannot all be controlled in-house. That is why combining artificial intelligence with cloud computing turns out to be a viable option.

Contemplated and theorized since the 1950s, artificial intelligence is about the ability of machines to perform intellectual tasks. Since then, the technology has branched into several subfields such as machine learning and deep learning. Meanwhile, one trend in computing has been loud and clear: a swing from centralized mainframe systems to power-to-the-people, do-it-yourself PCs, and now to cloud computing. Together, both technologies have driven the Internet's inexorable rise.

In simple terms, cloud computing is any computing service provided over the Internet or a similar network. When you type a query into Google, your PC does not find the answers by itself: the words are shuttled over the Net to one of Google's hundreds of thousands of clustered machines, which dig out your results and send them promptly back to you. The following post emphasizes how AI is taking long strides in the cloud computing realm.

AI and Cloud Computing Together Are Reshaping IT Infrastructure

Competition has raised the bar, which means you need to consider an unprecedented number of measures just to keep up. In the present scenario, AI-optimized application infrastructure is in vogue.
More and more vendors are introducing IT platforms featuring pre-built combinations of storage, compute, and interconnect resources that can accelerate and automate AI workloads. Because AI is still an unfamiliar discipline, professionals find AI hardware and software stacks complicated to tune and maintain; as a result, AI workloads such as data ingestion, preparation, modeling, training, and inferencing require optimization. AI-infused scaling, acceleration, automation, and management features have therefore become a core competitive differentiator. Let's delve into the details:

AI-ready computing platforms: As noted above, AI workloads are gaining momentum like never before, and operations are being readied to support them. Vendors are launching compute, storage, hyperconverged, and other platforms to match. At the hardware level, AI-ready storage/compute integration is becoming a core requirement for many enterprise customers.

Infrastructure optimization tools: With the incorporation of AI, there has been an inexorable rise in self-healing, self-managing, self-securing, self-repairing, and self-optimizing infrastructure. AI's growing role in the management of IT, data, applications, services, and other cloud infrastructure stems from its ability to automate and accelerate many tasks more scalably, predictably, rapidly, and efficiently than manual methods alone. Before you incorporate AI, it is essential to ensure that all your computing platforms are ready for AI workloads.
The benefits include:

- Hyperconverged infrastructure, meaning high-end support for flexible scaling of compute, storage, memory, and interconnect in the same environment
- Multi-accelerator AI hardware architectures, often combining Intel CPUs, NVIDIA GPUs, FPGAs, and other optimized chipsets, that act as substantial workloads within broader application platforms
- Memory-based architectures, from ultra-high memory bandwidth and storage-class memory to direct-attached Non-Volatile Memory Express (NVMe), that minimize latency and speed up data transfers
- Embedded AI to drive optimized, automated, and dynamic data storage and workload management across distributed fabrics within 24x7 DevOps environments

Conclusion

At present, there is a massive need for more intelligent IT infrastructure. With growing workloads, an increased pace of innovation, exponential data growth, and more users in the system (IoT, machine agents), conventional IT methods can no longer cope with rising demands. Have you checked out the AI-first cloud model yet? If not, you should, because it offers:

- Support for mainstream AI frameworks
- GPU-optimized infrastructure
- Management tools
- AI-first infrastructure services
- Integration with mainstream PaaS services

There's just one catch: you've got to start somewhere. Ideas and opportunities don't just materialize out of thin air. All in all, artificial intelligence brings a unique flair that can positively transform the next generation of cloud computing platforms.

Pros and Cons Of Developing Hybrid Apps With Ionic Framework

Development of mobile applications has seen a massive boost since the inception of smartphones. Today, nearly all of us use smartphones for many reasons, willingly or not, and to cope with growing demand, mobile apps continually evolve to deliver better customer satisfaction.

You can develop a mobile app in one of two ways. The traditional approach is to develop every app individually for each platform, using technologies and languages native to the system you're designing for: Google's Android Studio and the Java programming language for an Android app, or the Apple Developer Program and the Swift language for iOS. The second approach is to develop hybrid apps: mobile apps that you develop only once but that work the same on every platform. The core principle is to use web technologies that are universal to every platform.

Is the Ionic Framework Good for App Development?

Ionic is one of the most popular frameworks for developing hybrid mobile apps, aimed at reducing development time and increasing productivity. It was created by Max Lynch, Ben Sperry, and Adam Bradley, three core developers of Drifty Co. They were trying to build a drag-and-drop interface-building tool using jQuery and Bootstrap before bringing their ideas together to build Ionic. Max is the current CEO of the framework's parent company, Ionic. You can follow him on Twitter.

Why Use Ionic?

We know you're not going to use yet another framework just because we say so. However, with its top-notch performance and convenient prepackaged components, you'll love the framework as soon as you start developing with it. Ionic is built on top of Google's Angular framework and aims to reduce manual labor for you.
It can save you a significant percentage of your overall development time and help you get your product to market fast. The open source nature of the framework ensures you'll always be able to find third-party solutions for most common app functionality, and you can extend those pre-existing solutions fairly quickly. Here are the main advantages and disadvantages of using the Ionic framework to build robust hybrid apps.

Use Cases: When to Use Ionic

Not every approach is suitable for every app. In this section, we outline the best use cases for the Ionic framework. Consider developing your mobile app with Ionic if any of the cases below apply to you.

Looking to launch your startup?

Hybrid apps primarily gained traction because of their low development time paired with low resource consumption. When you want to launch a new business idea into production, you'll need a great app to ensure your success. Ionic apps are very fast to develop and can go into production within weeks, depending on the urgency, which is why many starting entrepreneurs side with hybrid apps.

You are a developer

Developing Ionic apps can open up new opportunities for seasoned and starting developers alike. As the pioneering hybrid mobile app platform, Ionic accounts for the majority of hybrid apps found on most app stores. In 2015 alone, more than 1.3 million Ionic apps made their way into these stores, so choosing to develop Ionic apps can have a positive impact on your development career.

You want an app, for cheap!

We often receive emails from readers asking how they can build a new app really cheaply.
Although we always suggest opting for industry-grade apps, which are not so cheap, we know many of you will side with cheaper alternatives for specific reasons. Thanks to Ionic, you can now get real-world apps on a much more competitive budget: because they require fewer resources, fewer developers, and less time, Ionic apps are considerably cheaper than their native counterparts.

Use Cases: When to Go Native

Impressive as its approach is, the Ionic framework, like every other framework, has its overheads and drawbacks. Read on to find out when not to use Ionic and to side with native apps instead.

You need performance, for real

Let's face it: however convenient they are, hybrid apps can never quite live up to the performance standards of native apps. Because native apps are built with native technologies, they are perfectly compatible with the host system's architecture and components, keeping performance above the bar. If performance is something you can't sacrifice, develop your apps with native technologies.

Your app requires complex navigation

Despite considerable effort, Ionic apps are not well suited to apps or games that need very complex navigation and routing logic. Because the web technologies used by hybrid apps cannot control these core system components as fluently as native apps can, you should choose native applications in such cases.

Conclusion

The Ionic framework is the de facto platform for building hybrid mobile applications, and as demand for fast, low-cost mobile apps continues to rise, it will surely retain its position. However, you should weigh your app's requirements and use cases before choosing any technology over another.

Why Big Data and IoT Differ Yet Share a Common Ground

Data may very well be the new oil of this century. The world generates an average of 2.5 quintillion bytes of data each day, more than we can manage. Leveraging these massive data troves, however, could reveal fascinating insights with the potential to transform whatever field they touch. This is where big data and IoT come into the spotlight, so understanding the relationship between them is crucial.

Globally, IoT has moved up the ladder, with IDC estimating $1.2 trillion in spending by 2022. Big data is not far behind, with revenues forecast to reach about $260 billion in 2022. Data remains fundamental to both: big data involves collecting large volumes of data from sources like social media, devices, and sensors, while IoT procures data from every device that surrounds us, such as home appliances. However, the underlying characteristics differ, as they are two entirely different concepts. Understanding these differences is the first step to applying the two technologies together across different fields to gain competitive advantage.

Big Data and IoT Are Two Entirely Different Concepts

Big Data

As noted, big data is all about huge volumes of data, structured or unstructured, so large that they cannot be processed with conventional data processing software or techniques. It is characterized by three key determinants known as the 3Vs of big data: volume, velocity, and variety. Volume deals with the large amounts of data procured, measured in larger units like terabytes, petabytes, and exabytes. Velocity determines the speed at which the gathered data must be processed to derive insights quickly and in real time.
Variety concerns the nature of the data, structured or unstructured, including different file types. Together, these three characteristics determine whether a stream of data can be labeled big data. The need for higher computing power to process big data links it to machine learning, natural language processing, business intelligence, and the other key components of its ecosystem.

Big data has acquired renewed importance, particularly in businesses and industries that rely on data. Companies like IBM, SAP, and Microsoft have increased their spending on big data analytics and management. Moreover, with the global proliferation of data, big data is swiftly penetrating diverse applications such as healthcare, industry, enterprise, research and development, education, retail, and e-commerce.

Big Data Analytics: The Revolution Has Just Begun - a talk from Dr. Will Hakes of Link Analytics, giving an extensive, high-level view of the way big data analytics is changing, and will continue to change, the business intelligence landscape. [Source: SAS]

IoT

Imagine a world where your home appliances plug into the internet. Your home could automatically respond to your presence: a hi-fi system that plays your favorite music while you are in the living room, or lighting systems that switch on and off automatically and optimize energy usage to save on your monthly utility bills. This is exactly what constitutes the Internet of Things. IoT extends connectivity from digital devices to the non-internet physical devices we use in everyday life, such as home appliances, automobiles, and other objects.
All of them remain plugged into the internet, enabling seamless interconnectivity to collect and exchange data for real-time performance metrics as well as remote monitoring and control. Sensors are the primary means by which devices gather information from their surroundings. When connected to an Internet of Things platform, the data from all these devices is integrated and subjected to analytics, which can reveal readily useful information such as patterns and potential issues, and even put forth recommendations for improving performance.

A more concrete example of IoT at work is an IoT-enabled car. Data collected from sensors positioned at crucial components inside the car can help integrate the control and communications of the entire unit. For instance, the IoT platform can continuously assess road traffic conditions and suggest that the driver change routes, or adopt driving practices that heighten safety depending on the weather. As a breakthrough technology with the potential to disrupt everything, IoT finds use particularly in agriculture, energy conservation, transportation, healthcare, and manufacturing.

Differences in Data Stream Management and Processing

Big data involves collecting huge volumes of data, but it does not subject these data streams to processing all at once to extract insights or patterns for real-time decisions. Analysis occurs at a later stage, and there is a delay between when the data is procured and when it gets processed. IoT, on the other hand, collects and processes data streams at the same time: the gathered data is processed instantly, in real time, to optimize performance as well as to correct malfunctions or identify problems early on.
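The batch-versus-stream distinction can be made concrete with a toy sketch, using hypothetical temperature readings and an illustrative threshold:

```python
# Toy contrast between the two processing styles described above.
# Big data style: accumulate readings, analyze later as one batch.
# IoT style: react to each reading the moment it arrives.

readings = [21.5, 22.0, 21.8, 95.0, 22.1]  # hypothetical sensor stream

def batch_average(batch):
    # Batch (big data): insight arrives only after the whole batch is collected.
    return sum(batch) / len(batch)

def stream_alerts(stream, threshold=50.0):
    # Stream (IoT): each event is checked in real time against a threshold.
    return [r for r in stream if r > threshold]

print(round(batch_average(readings), 2))  # 36.48 (skewed by the anomaly)
print(stream_alerts(readings))            # [95.0] flagged as it streams in
```

The batch view summarizes after the fact, while the stream view catches the anomalous reading immediately, mirroring the capacity-planning versus real-time-monitoring split described here.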
In IoT, data streams are constantly managed in real time for accurate data-driven decisions and analysis. Since the collected data is analyzed only after an interval, big data solutions find use primarily in areas such as capacity planning and predictive maintenance, whereas IoT is all about time: the simultaneous collection and processing of data suits real-time scenarios like vehicle dynamics or traffic management, optimizing operations and detecting issues as they arise.

Big Data Is Human-Generated, Whereas IoT Is Machine-Generated

Big data concerns data generated chiefly by human activity across different interfaces. For instance, our social media usage, scrolling through news feeds, subscribing to channels, and sending and receiving messages, leaves data trails that platforms collect over time to build a picture of our behavior and detect usage patterns so they can personalize their services. Besides social media, human-generated data used in big data projects comes from sources like emails and browsing activity. Amassed over long periods, this data helps detect patterns in human behavior with certainty, which is useful for planning and for fixing long-term solutions.

IoT relies extensively on machine-generated data from sensors attached to everyday objects like home appliances and personal digital devices. An IoT platform collects and analyzes this data in real time to optimize operations and detect potential problems simultaneously. A typical example is smart traffic lights that switch traffic across lanes in real time depending on the number of vehicles on the road or any congestion.

Summing Up

As we have seen, big data and the Internet of Things have several differences that make them dissimilar concepts.
Big data is about collecting and accumulating huge amounts of data for analysis afterward, whereas IoT is about simultaneously collecting and processing data to make real-time decisions. In spite of their differences, it would be wrong to say the two never complement one another; in fact, both technologies are better off combined. Being perfectly compatible, big data can leverage the power of IoT to make sense of data collected in real time. The reverse is also true: IoT can benefit from the massive data troves gathered in big data projects to optimize, detect, and even predict issues so as to prevent them from occurring. Merging the two is a step toward being proactive with data and shifting processes and operations entirely to a data-driven approach.

The Logic Behind a 16 Digit Credit Card Number

Every credit or debit card carries a unique 16-digit number. But did you know these digits are not just a random string? They reveal far more information than you might think. Ever wondered what the numbers on a credit card actually represent, and how those 16 digits identify the bank and the account from which your money is paid to the billing party? Let's dive into the 16 digits we usually see as a string of random numbers.

Understanding the Numbers on a Credit Card

Major Industry Identifier (MII): The first digit of the card represents the category of industry that issued your credit card, known as the Major Industry Identifier (MII). The MII values are:

0 - ISO and other industries
1 - Airlines
2 - Airlines and other industries
3 - Travel and entertainment (American Express, Diners Club)
4 - Banking and finance (VISA)
5 - Banking and finance (Mastercard)
6 - Banking and merchandising
7 - Petroleum
8 - Telecommunications and other industries
9 - National assignment

Issuer Identification Number (IIN): The first six digits of the card number form the Issuer Identification Number (IIN), which identifies the institution that issued the card. VISA card numbers start with the prefix 4, while Mastercard numbers start with prefixes 51 through 55.

Account Number: Setting aside the first six identifier digits, the next six to nine digits constitute the unique account number.
Now, this is not exactly the customer's bank account number, but the account number assigned to your credit account.

Check Digit: Lastly, the final digit, known as the check digit, is generated so as to satisfy a certain condition. Here is the logic behind that condition:

Step 1: Consider the card number 1802909582961827.
Step 2: Set aside the last digit for now and take the first 15 digits: 180290958296182 | 7.
Step 3: Beginning from the left, take every second digit (the 1st, 3rd, 5th, and so on) and multiply it by 2. Those digits are 1, 0, 9, 9, 8, 9, 1, 2, so:
1 * 2 = 2
0 * 2 = 0
9 * 2 = 18
9 * 2 = 18
8 * 2 = 16
9 * 2 = 18
1 * 2 = 2
2 * 2 = 4
Step 4: Wherever the multiplication produced a two-digit number, sum its digits (e.g., 18 => 1 + 8 = 9) so that every result is a single digit. This gives: 2, 0, 9, 9, 7, 9, 2, 4.
Step 5: Sum these single digits together with all the remaining digits of the card number: 2+0+9+9+7+9+2+4 + 8+2+0+5+2+6+8 = 73. Now, what must be added to this sum so that it is divisible by 10? In this case it is 7, since 73 + 7 = 80, which is divisible by 10. The digit so obtained is called the checksum (check) digit.

That's the logic behind credit and debit card numbers.

How Many Digits Are in a Credit Card Number?

The number of digits in a credit card number varies with the issuing authority:

VISA and VISA Electron: 13 or 16 digits
Mastercard: 16 digits
Discover: 16 digits
American Express: 15 digits
Diners Club: 14 digits (including International and Blanche)
Maestro: 12 to 19 digits (multi-national debit card)
Laser: 16 to 19 digits (Ireland)
Switch: 16, 18, or 19 digits (United Kingdom)
Solo: 16, 18, or 19 digits (United Kingdom)
JCB: 15 or 16 digits (Japan Credit Bureau, Japan)
China UnionPay: 16 digits (People's Republic of China)

What Checks Can Be Performed on a Card Number?

A card number validator performs several checks and explains what each part of the number means. The results typically cover:

- Luhn algorithm check
- Major Industry Identifier
- Issuer Identification Number
- Personal account number and checksum

Card Verification Number: There is another number, usually printed on the back of the card and usually 3 to 4 digits long, known by various names: CVV or CVV2 (card verification value) or card security code (CSC).
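Returning to the checksum: the Luhn check walked through above can be sketched as a small validator (a minimal sketch; it doubles every second digit from the right, which for a 16-digit number is equivalent to the left-to-right walkthrough):

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the number passes the Luhn check described above."""
    digits = [int(d) for d in card_number]
    total = 0
    # Walk from the rightmost digit; double every second digit.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # every second digit from the right
            d *= 2
            if d > 9:         # e.g. 18 -> 1 + 8 = 9
                d -= 9
        total += d
    return total % 10 == 0    # valid when the total is divisible by 10

print(luhn_valid("1802909582961827"))  # True  (the worked example: total 80)
print(luhn_valid("1802909582961826"))  # False (wrong check digit)
```

Note that the Luhn check only catches typos and simple transpositions; it says nothing about whether a card actually exists.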
These codes are calculated using the CVV algorithm, and they have different names depending on the issuing authority. They are required by payment systems such as MasterCard and Visa to authenticate their credit or debit cards.

There are several types of security codes:

The first code, called CVC1 or CVV1, is encoded on track 2 of the magnetic stripe of the card and is used for card-present transactions.
The second code, called CVV2 or CVC2, is often requested by merchants for card-not-present transactions occurring by fax, mail, Internet, or telephone.

The card security code is typically the last three or four digits printed on the card.

The logic behind CVV generation: To generate or calculate the 3-digit CVV, the algorithm requires the Primary Account Number (PAN), a 4-digit expiration date, a pair of DES keys (CVKs), and a 3-digit service code. This algorithm is known only to the issuing bank, not to any other person or organization.

Tips to keep your Card and PIN safe:

Sign your card as soon as you receive it.
Safeguard your card as though it were cash.
Memorize your PIN. Never write it down.
Do not forget to take your card back from the salesclerk or waiter after you use it.
Tear up receipts that contain the full account number if you do not need to keep them.
Review your account statements as soon as you receive them so you can confirm that all the transactions are yours.
Make a list of your credit card numbers and customer service phone numbers and store it in a safe place, such as encrypted software or a physical safe. This way it will be easier to call and cancel the cards if your purse or wallet is lost or stolen.

To verify any credit card or debit card, click here.

What NEXT? In the next part of this series, we will see the implementation of code that generates valid credit/debit card numbers along with their issuing authority and account number. Hope you have found this post useful.
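As a quick recap, the check-digit logic walked through earlier can be condensed into a short JavaScript sketch. The function names are my own, and the left-to-right doubling follows this post's 16-digit example (for card numbers of other lengths, the standard Luhn formulation doubles every second digit counting from the right instead):

```javascript
// Compute the check digit for the first 15 digits, as in Steps 1-5.
function computeCheckDigit(partialNumber) {
  const digits = partialNumber.split('').map(Number);
  let sum = 0;
  digits.forEach((d, i) => {
    // Step 3: starting from the left, double every second digit.
    if (i % 2 === 0) {
      d *= 2;
      if (d > 9) d -= 9; // Step 4: e.g. 18 => 1 + 8 = 9
    }
    sum += d; // Step 5: sum doubled and remaining digits together
  });
  // The check digit is whatever must be added to make the sum divisible by 10.
  return (10 - (sum % 10)) % 10;
}

// Validate a complete card number: its last digit must equal the
// check digit computed from the digits before it.
function luhnCheck(cardNumber) {
  const checkDigit = Number(cardNumber.slice(-1));
  return computeCheckDigit(cardNumber.slice(0, -1)) === checkDigit;
}

console.log(computeCheckDigit('180290958296182')); // 7, as computed above
console.log(luhnCheck('1802909582961827'));        // true
```

Running this against the example card number reproduces the sum of 73 and the check digit 7 from the walkthrough.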
Please don’t forget to like and share with your friends because sharing is caring. :)
The Logic Behind a 16 Digit Credit Card Number


Python Programming: Choosing between Flask and Django

A web developer always has a wide range of frameworks to pick from when using Python as a server-side programming language. They can take advantage of full-stack Python web frameworks to speed up the development of complex and comprehensive web applications, drawing on robust tools and features, or they can use micro and lightweight Python web frameworks to develop simple web applications without investing extra time and effort.

Python programmers use both Flask and Django. Django is a full-stack web framework for Python, whereas Flask is a lightweight and extensible one. Django is built on a batteries-included approach: it enables programmers to accomplish common web development tasks without any third-party tools and libraries. Flask, by contrast, lacks some of the robust built-in features that Django provides. Hence, it is vital for web developers to understand the significant differences between Flask and Django.

Comparing Two Python Web Frameworks in 2018: Flask and Django

Type of Web Framework
As noted earlier, Django is built on a batteries-included approach. The batteries included in Django make it easier for developers to handle basic web development tasks like URL routing, user authentication, and database schema migration. It also accelerates custom web application development by providing an ORM system, a built-in template engine, and a bootstrapping tool. Flask, on the other hand, is a simple, minimalist, lightweight web framework. Although it does not have some of the built-in features that Django provides, it helps developers keep a web application simple and extensible.

Flexibility
The batteries included in Django help developers create a variety of web applications without third-party tools and libraries, but developers have few options for changing the modules Django provides; they generally have to build web applications with the built-in features Django offers. Flask, being small but extensible, lets developers build web applications flexibly using a variety of web development tools and libraries. Many developers find Flask easier to learn than Django due to its simple and customizable architecture.

Admin system
Django ships with an admin system, an ORM (Object Relational Mapper) database layer, and a standard directory structure. Developing with Django is an all-inclusive experience, meaning that multiple projects share the same layout. Flask does not have these features: if you want an admin system or an ORM, you need to install additional modules. Flask leaves this up to the developer, giving the option to use SQLAlchemy, MongoDB, or something simpler like SQLite. This can be preferable, since with an ORM you can sometimes lose development time when you are unable to modify the SQL query directly.

Development speed
Django is known among developers for the fast development of complex web applications. Since it is fully featured, developers have all the tools they need to build reliable, scalable, and maintainable web applications in record time. Flask's simplicity, on the other hand, allows experienced developers to create smaller applications in short timeframes.

Community
One of the main advantages of Django is its active developer community. If you need help, or when it's time to scale your app, you will have an easier time finding other developers to contribute, plus a wealth of useful content already in the public domain. The Flask community is currently not as big, so information may be harder to come by.

Maturity
Django is a very mature framework, having seen its first release in 2005, and has gathered numerous extensions, plugins, and third-party apps covering a wide range of needs. Flask, in comparison, is much younger, having been introduced in 2010, so it doesn't have quite the same variety of options available.

Template Engine
Flask is built on the Jinja2 template engine, which speeds up the development of dynamic web applications with an integrated sandboxed environment and templates written in an expressive language. Django comes with a built-in template engine that lets developers define a web application's user-facing layer without extra time and effort; custom user interfaces are written in the Django template language (DTL).

Built-in Bootstrapping Tool
Django has a built-in bootstrapping tool called django-admin. It enables developers to start web applications without any external input and to divide a single project into multiple applications. Developers can use django-admin to create new applications within a project and use those apps to add functionality based on varied business requirements.

Usage and Use Cases
Several well-known websites use both Flask and Django. Statistics posted on various sites suggest that Django is more popular than Flask, as developers can take full advantage of its robust features to build and deploy complicated web applications rapidly. At the same time, they use Flask to speed up the development of simple websites with fixed content. Developers still have the option to extend and customize Flask according to the project specifications.

In Conclusion
Django is a more complex framework than Flask, so if you're learning web programming, it can be quite challenging to understand which pieces are responsible for what functionality and what is required to get the results you want. However, once you get accustomed to Django, the extra work it does can be beneficial and can save a lot of time in setting up the monotonous, boring components of a web application.

Sometimes it is difficult to choose between these two frameworks, but the nice thing is that once you get into their more advanced functionality, such as templates, the two remain very similar in many respects. Therefore, it's easy to switch from one to the other if you ever need to. If you are still in doubt as to which framework to adopt after reading this guide, I would recommend going ahead with Flask: you'll figure out how the pieces fit together more efficiently.

Unit Testing Using Mocha and Chai

Mocha is a testing framework for Node.js which simplifies asynchronous testing; Mocha tests run serially. Chai is a BDD/TDD assertion library for Node and the browser that can be paired with any JS testing framework. This article aims to provide a basic understanding of running Mocha and Chai on Node.js apps, writing tests using Mocha, and setting up unit testing using Mocha and Chai. We'll be testing a simple user application. I assume the reader has a basic understanding of Node.js.

Requirements
- Node installed (at least v6)
- MongoDB installed
- A text editor

The application source code can be found here.

The Application Structure

package.json
In the root of your project, run npm init to create a package.json file. Update the file with the following contents:

{
  "name": "mocha-test",
  "version": "1.0.0",
  "description": "Mocha testing article",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "mocha"
  },
  "author": "Collins",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.15.1",
    "config": "^1.20.1",
    "express": "^4.13.4",
    "mongoose": "^4.4.15",
    "morgan": "^1.7.0"
  },
  "devDependencies": {
    "chai": "^3.5.0",
    "chai-http": "^2.0.1",
    "mocha": "^2.4.5"
  }
}

Controllers
This folder contains a user.js file which houses the various handler functions of the application.
Update the file with the following contents:

let mongoose = require('mongoose');
let User = require('../models/user');

function createUser(req, res) {
  var newUser = new User(req.body);
  newUser.save((err, user) => {
    if (err) {
      res.send(err);
    } else {
      res.json({ message: "User successfully created", user });
    }
  });
}

function getUsers(req, res) {
  let query = User.find({});
  query.exec((err, users) => {
    if (err) res.send(err);
    res.json(users);
  });
}

function updateUser(req, res) {
  User.findById({ _id: req.params.id }, (err, user) => {
    if (err) res.send(err);
    Object.assign(user, req.body).save((err, user) => {
      if (err) res.send(err);
      res.json({ message: 'User updated', user });
    });
  });
}

function deleteUser(req, res) {
  User.remove({ _id: req.params.id }, (err, result) => {
    res.json({ message: "User deleted", result });
  });
}

module.exports = { getUsers, createUser, deleteUser, updateUser };

index.js
This file houses all the server configuration. Fill the file with the following:

let express = require('express');
let app = express();
let mongoose = require('mongoose');
let morgan = require('morgan');
let bodyParser = require('body-parser');
let port = 5000;
let user = require('./controllers/user');

// The connection string needs the mongodb:// scheme and a database
// name ("mocha-test" here is just an example name)
mongoose.connect("mongodb://127.0.0.1:27017/mocha-test");

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.text());
app.use(bodyParser.json({ type: 'application/json' }));

app.get("/", (req, res) => res.json({ message: "User management app" }));

app.route("/user")
  .get(user.getUsers)
  .post(user.createUser);

app.route("/user/:id")
  .delete(user.deleteUser)
  .put(user.updateUser);

app.listen(port);

module.exports = app;

Models
Next, we are going to create models for our application. Create a folder called models and inside it, create a file named user.js.
Fill the file with the contents below:

let mongoose = require('mongoose');
let Schema = mongoose.Schema;

let UserSchema = new Schema(
  {
    firstName: { type: String, required: true },
    lastName: { type: String, required: true },
    emailAddress: { type: String, required: true },
    username: { type: String, required: true },
    password: { type: String, required: true }
  },
  {
    versionKey: false
  }
);

module.exports = mongoose.model('user', UserSchema);

It is time to run our application. First, install the required packages with npm install, then run the application with npm start. You can browse the app on port 5000 using Postman and exercise the CREATE, READ, and UPDATE endpoints. Trying the endpoints manually may give the impression that the app is 100% functional and without flaws. That is not necessarily the case: we should write tests to prove that the app functions as expected.

Tests
Mocha by default expects a folder called test. Let's create that folder, and inside it, create a test file called user.js. The file starts by importing the required packages and files for our tests to run successfully, then creates the parent describe block for the tests. We want the database to be cleared before each test is executed; Mocha provides beforeEach, which runs a block of code before each test in a describe block. Next comes a test for the creation of a user: it first ensures that a user cannot be created without an email address, then proves that when valid data is posted to the /user endpoint, a user is created successfully. Run npm test to try it. A complete user.js test file, with tests addressing CREATE, READ, UPDATE, and DELETE, is shown below. The code is self-explanatory.
Copy the code and update the test file that you have so far:

let mongoose = require("mongoose");
let User = require('../models/user');
let chai = require('chai');
let chaiHttp = require('chai-http');
let server = require('../index');
let should = chai.should();

chai.use(chaiHttp);

describe('Users', () => {
  beforeEach((done) => {
    User.remove({}, (err) => {
      done();
    });
  });

  describe('/POST user', () => {
    it('it should not create a user without email address', (done) => {
      let user = {
        firstName: "John",
        lastName: "Doe",
        username: "user",
        password: "pass"
      }
      chai.request(server)
        .post('/user')
        .send(user)
        .end((err, res) => {
          res.should.have.status(200);
          res.body.should.be.a('object');
          res.body.should.have.property('errors');
          res.body.errors.emailAddress.should.have.property('kind').eql('required');
          done();
        });
    });

    it('it should create a user', (done) => {
      let user = {
        firstName: "John",
        lastName: "Doe",
        emailAddress: "doe@email.com",
        username: "me",
        password: "pass"
      }
      chai.request(server)
        .post('/user')
        .send(user)
        .end((err, res) => {
          res.should.have.status(200);
          res.body.should.be.a('object');
          res.body.should.have.property('message').eql('User successfully created');
          res.body.user.should.have.property('firstName');
          res.body.user.should.have.property('lastName');
          res.body.user.should.have.property('emailAddress');
          res.body.user.should.have.property('username');
          done();
        });
    });
  });

  describe('/GET user', () => {
    it('it should GET all the users', (done) => {
      chai.request(server)
        .get('/user')
        .end((err, res) => {
          res.should.have.status(200);
          res.body.should.be.a('array');
          res.body.length.should.be.eql(0);
          done();
        });
    });
  });

  describe('/PUT/:id user', () => {
    it('it should update a user given an id', (done) => {
      let user = new User({firstName: "John", lastName: "Doe", emailAddress: "doe@email.com", username: "user", password: "pass"})
      user.save((err, user) => {
        chai.request(server)
          .put('/user/' + user.id)
          .send({firstName: "John", lastName: "Doe", emailAddress: "john@email.com", username: "user", password: "pass"})
          .end((err, res) => {
            res.should.have.status(200);
            res.body.should.be.a('object');
            res.body.should.have.property('message').eql('User updated');
            res.body.user.should.have.property('emailAddress').eql("john@email.com");
            done();
          });
      });
    });
  });

  describe('/DELETE/:id user', () => {
    it('it should delete a user given an id', (done) => {
      let user = new User({firstName: "John", lastName: "Doe", emailAddress: "doe@email.com", username: "user", password: "pass"})
      user.save((err, user) => {
        chai.request(server)
          .delete('/user/' + user.id)
          .end((err, res) => {
            res.should.have.status(200);
            res.body.should.be.a('object');
            res.body.should.have.property('message').eql('User deleted');
            done();
          });
      });
    });
  });
});

Run the tests, and all of them should pass.

Conclusion
If you've followed the steps above, from installing Mocha and Chai on Node.js to testing Node.js code with them, you should have a functional app with passing tests. This tutorial doesn't exhaustively cover Mocha testing; there are other concepts which I haven't illustrated here. For a deeper understanding, you can head on to the docs. Please don't confuse the above with TDD.
After understanding how to test an application, we can build an application using the principles of TDD. Hopefully, that's my next tutorial. Stay tuned!

Important Tips and Features in Node.js to Practice

Node.js is a free, open-source server environment that runs on various platforms (Windows, Linux, Unix, etc.). It was originally written by Ryan Dahl in 2009. Node.js has an event-driven architecture, which leads to optimized throughput and scalability in web applications and real-time web applications. Because of this, it has wide acceptance in the developer community. Corporations like Netflix, Microsoft, Wal-Mart, and Yahoo have started using Node.js because of its scalability and lightning-fast speed. One way of putting Node.js to work in the corporate world is to hire a Node.js developer who can ensure everything works smoothly.

This tutorial demonstrates some significant Node.js development tips and best practices that can improve your Node.js skills.

1) Use of Modularization of Code
Today's web applications face many difficulties: performance problems, error-handling problems, structural problems, and confusion among developers caused by large numbers of code lines. Even a small project can run to approximately 1,500-3,000 lines, and handling large programs has become one of the most prominent problems for developers. To tackle these problems, computer scientists began to organize code into smaller pieces. This is the concept of modularization. It is used in Node.js to make:

Debugging of code easier
Reusability of code possible
Readability of code easier

(Image source: www.dynatrace.com)

2) Use of JavaScript Standard Style
Lack of an agreed development style can lead to severe complications in the code. This is avoided by opting for JavaScript Standard Style. As Node.js runs JavaScript itself, the same language can be used on the backend and frontend, which helps break down the boundaries between front-end and back-end development. Using JavaScript Standard Style also reduces unnecessary complications in managing the code.

3) Use of Asynchronous Programming
Traditionally, input-output operations happened synchronously: the main thread was blocked until the file had been read, and nothing could be done in the meantime. Blocking the main thread for multiple operations hurts CPU utilization and the overall performance of the web application. To solve this performance problem and utilize the CPU better, Node.js introduced an asynchronous programming model, in which resources aren't blocked and tasks can proceed concurrently. Asynchronous processing is a form of input/output processing where tasks continue to make progress in parallel, thereby making proper use of the CPU. The Promise object and the event loop are immensely helpful in asynchronous programming; the event loop acts as the heart of Node.js and does the work of scheduling asynchronous operations.

4) Semantic Versioning
Semantic Versioning (referred to as SemVer) is a versioning system that has been widely used in the past few years. Updating packages without SemVer can lead to major issues, so using Semantic Versioning to notify consumers about updates is important: a SemVer version number signals whether an update contains breaking changes, new features, or only fixes, so the right version is picked up at the right time.

5) Grouping of 'Require' Statements and Placing Them at the Top
Require statements work synchronously, and if they are scattered through the code they can slow or stall execution. Hence, it is always recommended to group the require statements and put them at the start of the file. This way, issues of performance and slow processing can be avoided.

6) Catching Errors at an Early Stage
The presence of any kind of bug in a program causes glitches while processing and executing it; it can make the program useless or lead to far worse scenarios. For large programs, detecting a problem has always been harder than fixing it. Thanks to modularization, it has become easier to detect errors and bugs at an early stage, reducing the chances of ugly scenarios and increasing the efficiency of the code. Error handling has become easier in Node.js.

(Image source: https://hacks.mozilla.org)

7) Reliable Tools for Good Security
Node.js has achieved a lot of popularity because of its ease of use, fast performance, and scalability. With popularity comes the risk of attack. While working with Node.js, the developer handles a huge amount of sensitive user data, so security has to be the first priority. Node.js and its core contributors maintain various means and resources to secure Node.js projects and user data. There are various applications and tools within the Node.js ecosystem that take care of brute-force attacks, data validation, project security, and session management. These tools are effective, so many developers rely on them.

Whether you hire a Node.js developer or do it yourself, the features and techniques mentioned above should always be implemented properly to improve your Node.js performance: smooth-running code, effective error handling, and fast, bug-free processing.

Exploring Change Detection Strategies In Angular

Introduction
One of the core features of JavaScript frameworks, particularly Angular, is the ability to detect data changes within any component and quickly re-render the view automatically to reflect that change. This, among other things, makes representing the current state of the DOM fairly easy. Generally, an understanding of tools like Angular becomes more profound with a full grasp of how things work under the hood. This guide aims to give you a solid foundation on how change detection works in Angular, from which you can further explore the subject and gain more insight.

Prerequisite
A basic knowledge of JavaScript and Angular will help you get the most out of this tutorial. This is not an introduction to Angular but a quick look at how change detection works. If you are ready, let's dive right into it!

What is change detection?
Managing the synchronization of application state and the user interface has long been a major source of complexity in UI development. The state behind an interface of forms, links, and buttons lives in data structures ranging from objects to arrays and so on. When a browser event manipulates that state, the change detector generates the corresponding DOM output and renders it to the user. Change detection monitors the state of data within the browser, especially at runtime after the DOM has already been rendered. Let's see how Angular detects and handles this.

What causes change detection in Angular
Technically speaking, change detection becomes more challenging when data keeps changing over time as a result of users' interaction with the user interface of a web application. Angular, as a web application framework, uses plain old JavaScript objects (POJOs) for its models. These models don't have setter methods that can be called to update them. Some frameworks provide an API for updating the DOM once a change has been detected.
As this isn't available in Angular, it has to run change detection at the specific points where the model state may have changed. Model state in Angular can change as a result of, but not limited to:

DOM events (click, focus, submit, blur, and so on)
AJAX requests
Timers (setTimeout(), setInterval())

So, in order to keep the DOM updated, Angular needs to perform its change detection right after any of these entry points.

How Angular detects a change
Every Angular application is a tree of components; the image below illustrates an example of an Angular application with a few components. At runtime, Angular creates a change detector (CD) class for every component in the application, so just like we have a tree of components, we also have a tree of change detectors. Each change detector is created based on the data structure of the component it targets. In a nutshell, when change detection runs, Angular walks down the tree of change detectors, each of which takes care of an individual component.

Demonstrating a strategy like change detection is best done with an example, so let's quickly set up a simple Angular project. Note: I assume that you already have the Angular CLI tool installed. Check here for instructions if otherwise.

Run the command below to create an Angular application named book-store:

## create a new angular project
ng new book-store

Generate components for the bookstore:

## Generate books component
ng g c books
## Generate book component
ng g c book

In all the components, we are going to import and implement the DoCheck interface. This is a lifecycle hook that is called whenever Angular checks a component for any change to the state of its models. We won't do much except log each component's specific information to the console.
To get started, add the content below to the appComponent:

// ./src/app/app.component.ts
import { Component, DoCheck } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements DoCheck {
  title = 'Change Detection Angular';
  books = [
    { title: 'b1' },
    { title: 'b2' },
    { title: 'b3' }
  ];

  ngDoCheck() {
    console.log("AppComponent is being checked!");
  }

  onClick() {}
}

And update the appComponent template with a minimal version that wires up the title, the button, and the books list:

<!-- ./src/app/app.component.html -->
<h1>{{ title }}</h1>
<button (click)="onClick()">Click the button in AppComponent</button>
<app-books [books]="books"></app-books>

bookComponent
Similarly for the bookComponent, locate ./src/app/book/book.component.ts and paste the code below:

// ./src/app/book/book.component.ts
import { Component, Input, DoCheck } from '@angular/core';

@Component({
  selector: 'app-book',
  templateUrl: './book.component.html',
  styleUrls: ['./book.component.css']
})
export class BookComponent implements DoCheck {
  @Input() book;

  ngDoCheck() {
    console.log("Book Component is being checked!");
  }
}

booksComponent
The booksComponent:

// ./src/app/books/books.component.ts
import { Component, Input, DoCheck } from '@angular/core';

@Component({
  selector: 'app-books',
  templateUrl: './books.component.html',
  styleUrls: ['./books.component.css']
})
export class BooksComponent implements DoCheck {
  @Input() books;

  ngDoCheck() {
    console.log("Books Component is being checked!");
  }
}

The booksComponent template, which renders the list of books:

<!-- ./src/app/books/books.component.html -->
<h3>List of Books</h3>
<app-book *ngFor="let book of books" [book]="book"></app-book>

A careful look at all the components shows that we have implemented the DoCheck interface at three different levels, i.e. appComponent, bookComponent, and booksComponent.

Change detection process
Using the newly created application, let's examine how the change detection process is carried out. Under the hood, Angular internally makes use of a library called zone.js to detect whenever a change occurs in an application.
You can read more about the library here. Next, run the application with:

ng serve

Click the button and watch the results in the console. So what's going on here? Whenever the button is clicked, whether or not there is a change in our application state, Angular traverses the tree of change detectors and, in effect, asks the change detector of each component whether there is a change in that component. This operation runs from top to bottom throughout the application, irrespective of the event's source location, and it applies to all DOM events in an Angular application. Similarly, a setTimeout() call triggers the same process. To try this out, update the appComponent by adding a constructor:

// ./src/app/app.component.ts
import { Component, DoCheck } from '@angular/core';

...
export class AppComponent implements DoCheck {
  ...
  // add a constructor
  constructor() {
    setTimeout(() => {}, 3000);
  }
  ...
}

Change detection strategies
There are two change detection strategies to understand in Angular:

Default strategy
OnPush

The default strategy looks at all the binding expressions in a component's template that may need to be updated and keeps track of their previous and new states. Once there is a change, it notifies Angular, and that is when the component is re-rendered. Let's take a look at an example. In the appComponent, attach a function to the onClick event and change the title of the first book in our store like this:

// ./src/app/app.component.ts
import { Component, DoCheck } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements DoCheck {
  ...
  // change the title of the first book
  onClick() {
    this.books[0].title = "My new title";
  }
}

So, whenever you click the button, you will notice that the title of the first book in the list is updated.
This is because the change detector for the BookComponent looks at the binding expression ({{ book.title }}) in the template and re-renders the element with its new state.

Performance

Change detection is very fast, but as an application grows in complexity, with many components, complex views, and large datasets, the default strategy can become slow, because change detection has to perform more and more work. Imagine a component with ten or more binding expressions that need to be checked; across an application this can add up to a thousand or more comparison checks carried out every time a DOM-manipulating browser event fires. Change detectors are highly optimized and can perform thousands of checks in just a few milliseconds, but at some point this volume of comparisons can have a noticeable impact on your application. This is where the second change detection strategy, OnPush, helps: it optimizes the change detection process by reducing the number of comparison checks.

A quick example: go back to the BookComponent and add the OnPush change detection strategy as shown below:

```typescript
// ./src/app/book/book.component.ts
import { Component, Input, DoCheck, ChangeDetectionStrategy } from '@angular/core';

@Component({
  selector: 'app-book',
  templateUrl: './book.component.html',
  styleUrls: ['./book.component.css'],
  changeDetection: ChangeDetectionStrategy.OnPush // add this
})
export class BookComponent implements DoCheck {
  @Input() book;

  ngDoCheck() {
    console.log("Book Component is being checked!");
  }
}
```

This also shows that each component can have its own change detection strategy. Now go back to the browser and click the button again. Notice that this time the title of the first book does not change. That is because we are updating the title property of the same object: the book object is the same reference before and after the click handler executes.
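The reference check that OnPush relies on can be shown in a few lines of plain TypeScript (an illustrative sketch, not Angular's internal code):

```typescript
// Why OnPush misses mutations: it compares object references, not contents.
interface Book { title: string; }

const book: Book = { title: 'b1' };
const previousInput: Book = book;

// Mutating the object keeps the same reference,
// so a reference comparison reports "no change".
book.title = 'My new title';
console.log(previousInput === book); // true — looks unchanged to OnPush

// Replacing the object creates a new reference,
// so the comparison reports a change.
const replacement: Book = { ...book, title: 'Another title' };
console.log(previousInput === replacement); // false — change detected
```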
To get this right, whenever we use the OnPush strategy, rather than modifying an object we should create a new copy of it with the updated property. In other words, we should treat objects as immutable, just like value types in JavaScript. So let's replace the first book in the array with a new book object. Open the AppComponent and update it as shown:

```typescript
// ./src/app/app.component.ts
import { Component, DoCheck } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements DoCheck {
  ...
  // create a new object
  onClick() {
    this.books[0] = { title: "My new title" };
  }
}
```

Here, we created a new object with the new title and assigned it to the first slot in the array. For objects with more than one property, copy the existing properties onto the new object (for example with the spread operator) and override only the ones that change. Now go back to your browser and try it out again.

Conclusion

What you have learned here will not only help you understand how change detection works in Angular but also help you improve the performance of your Angular applications. The complete source code can be found on GitHub. I hope you found this helpful. Feel free to leave a comment if you have suggestions or any other questions.
IBM Introduces AI OpenScale & Multicloud Manager To Manage AI & Cloud Deployment

IBM, the multinational computer technology and IT consulting corporation nicknamed "Big Blue", announced AI OpenScale on October 15, 2018: a new technology platform that lets companies manage artificial intelligence transparently throughout the full AI lifecycle, irrespective of the applications built or the environment the enterprise works in.

AI OpenScale supports machine learning and deep learning models developed and trained with any open source framework, including scikit-learn, SparkML, TensorFlow, and Keras, and also supports applications and models trained and hosted in environments such as Seldon, Microsoft Azure ML, and Watson.

"Our strategy is to use an open, interoperable approach to fuel the AI economy," David Kenny, senior vice president of IBM Cognitive Solutions, said in the release. "We believe AI OpenScale represents a new technology category and the start of a new era in the mass adoption of AI for business because it is open - making any AI much easier to operate and fully transparent."

AI OpenScale supports auditability, automates explainability, and mitigates bias throughout the AI lifecycle in a vendor-agnostic way. It also explains, in business terms, how AI recommendations are made, helping companies understand how their applications reach decisions.

IBM also announced the industry-first IBM Multicloud Manager, which automates how companies move, manage, and integrate apps across multiple cloud environments. Multicloud Manager helps businesses manage and combine workloads on clouds from providers including IBM as well as Microsoft, Amazon, and Red Hat.
Multicloud Manager offers more visibility across clouds, improved governance and security, and automation to track risk. IBM Multicloud Manager will be available in late October 2018, while IBM AI OpenScale will be available in the coming year through IBM Cloud Private and IBM Cloud.

Source: IBM official blog

Setting caching headers for a SPA in NGINX cache

When your frontend app is a SPA, all the assets get loaded into the browser and routing happens within the browser, unlike an SSR app or a conventional/legacy web app where every page is served by the server. If caching is misconfigured, or not configured at all, you will have a horrifying time during deployments. Your developers' muscle memory will make them hit hard refresh when they hear the words "code deployed", but your customers will rant and rave when a web page gets mangled in the middle of something important because of your deployment.

Having read on the internet that "browsers and web servers are configured by default to handle basic caching" made me procrastinate learning about caching, until the day it started annoying QA and killing developer productivity. That day I told myself: "You are not gonna sleep tonight!" Here is a guide to caching headers for a SPA in NGINX.

Primary requirement

The configuration I'm going to explain will work only if your SPA uses webpack or any other bundler that can be configured to append random characters to file names in the final distribution folder on every build (revving). This is standard practice in modern web development; it is probably already happening in your build without your knowledge. Check out the "Revved resources" section at https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching

Status quo

Two web apps segregated by NGINX location paths.
AWS ELB sits in front of the NGINX web server.
AWS CloudFront sits in front of the AWS ELB (the actual caching is done here).
NGINX is sending out last-modified and etag headers.
I have some faint idea of how caching works.

Configure caching in NGINX

The headers we are going to need are cache-control, etag, and vary. vary is added by NGINX by default, so we don't have to worry about it.
We need to add the other two headers in our configs at the right places to get caching working. We have to configure two things:

1. Disable caching for index.html (the browser will ask for a fresh copy of index.html every time).
2. Enable caching for static assets (CSS, JS, fonts, images) and set the expiry as long as you need (e.g. 1 year).

1. Let's disable caching for index.html

My current config:

```nginx
location /app1 {
    alias /home/ubuntu/app1/;
    try_files $uri $uri/ /index.html;
}
```

After disabling caching:

```nginx
location /app1 {
    alias /home/ubuntu/app1/;
    try_files $uri $uri/ /index.html;
    add_header Cache-Control "no-store, no-cache, must-revalidate";
}
```

How does this work? When I hit /app1 from my browser, NGINX serves index.html from the /home/ubuntu/app1 directory; at the same time it executes the add_header directive, which adds Cache-Control "no-store, no-cache, must-revalidate" to the response headers. The header conveys the following instructions to the browser:

no-store: don't cache/store any part of the response in the browser.
no-cache: ask the server on every request: "Can I show the cached content I have to the user?"
must-revalidate: once the cache expires, don't serve the stale resource; ask the server and revalidate.

The combination of these three values disables caching for the response received from the server.

2. Let's enable caching for static assets

My current config:

```nginx
# for app1 static files
location /app1/static {
    alias /home/ubuntu/app1/static/;
}
```

After enabling caching:

```nginx
# for app1 static files
location /app1/static {
    alias /home/ubuntu/app1/static/;
    expires 1y;
    add_header Cache-Control "public";
    access_log off;
}
```

We enable aggressive caching for static files by setting Cache-Control to "public" and the expires directive to 1y.
We do this because our frontend build system generates new file names (revving) for the static assets on every build, and the new file names invalidate the cache when browsers request them. These static files are referenced from index.html, for which we have disabled caching completely. I also disable access logs for static assets, as they add noise to my logs.

That's it! This sets up the NGINX caching headers for a SPA.

NGINX add_header gotcha

We usually put the headers that should be common to all location blocks in the server block of the config. But beware: those headers will not be applied to any location block that adds a header of its own. From http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header:

"There could be several add_header directives. These directives are inherited from the previous level if and only if there are no add_header directives defined on the current level."

```nginx
server {
    # X-Frame-Options prevents clickjacking attacks
    add_header X-Frame-Options SAMEORIGIN;

    # disable content-type sniffing on some browsers
    add_header X-Content-Type-Options nosniff;

    # enable the cross-site scripting (XSS) filter
    add_header X-XSS-Protection "1; mode=block";

    # enforce HTTPS browsing and avoid SSL stripping attacks
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";

    add_header Referrer-Policy "no-referrer-when-downgrade";

    location /app1 {
        alias /home/ubuntu/app1/;
        try_files $uri $uri/ /index.html;
        add_header Cache-Control "no-store, no-cache, must-revalidate";
    }
}
```

In the example above, the security headers at the top will not be applied to the /app1 block, because that block defines its own add_header directive.
Make sure you either duplicate them or write them in a separate .conf file and include it in every location block.

Bonus

Enabling CORS for fonts when serving through a different CDN domain:

```nginx
location /app1/static/fonts {
    alias /home/ubuntu/app1/static/fonts/;
    add_header Access-Control-Allow-Origin *;
    expires 1y;
    add_header Cache-Control "public";
}
```

Adding the Access-Control-Allow-Origin header instructs browsers to allow loading the fonts from a different sub-domain. Note that I enabled aggressive caching for the fonts too.

Adding Vary: Accept-Encoding

```nginx
# Enables response header of "Vary: Accept-Encoding"
gzip_vary on;
```

This adds the Vary: Accept-Encoding header to publicly cacheable, compressible resources and makes sure the browser gets the correctly encoded cached response.

NGINX HTTP to HTTPS redirection

```nginx
# Get the actual IP of the client through the load balancer in the logs
real_ip_header     X-Forwarded-For;
set_real_ip_from   0.0.0.0/0;

if ($http_x_forwarded_proto = 'http') {
    return 301 https://$host$request_uri;
}
```

Add the above to your server block and open port 80 along with 443 on your AWS ELB. This redirects HTTP to HTTPS and also logs the actual client IP.

Putting all of the above together, this is how the final config looks:

```nginx
server {
    server_name www.my-site.com;
    listen 80;

    # Get the actual IP of the client through the load balancer in the logs
    real_ip_header     X-Forwarded-For;
    set_real_ip_from   0.0.0.0/0;

    # redirect if someone tries to open over http
    if ($http_x_forwarded_proto = 'http') {
        return 301 https://$host$request_uri;
    }

    # X-Frame-Options prevents clickjacking attacks
    add_header X-Frame-Options SAMEORIGIN;

    # disable content-type sniffing on some browsers
    add_header X-Content-Type-Options nosniff;

    # enable the cross-site scripting (XSS) filter
    add_header X-XSS-Protection "1; mode=block";

    # enforce HTTPS browsing and avoid SSL stripping attacks
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";

    add_header Referrer-Policy "no-referrer-when-downgrade";

    # Enables response header of "Vary: Accept-Encoding"
    gzip_vary on;

    location /app1 {
        alias /home/ubuntu/app1/;
        try_files $uri $uri/ /index.html;
        add_header Cache-Control "no-store, no-cache, must-revalidate";
    }

    # for app1 static files
    location /app1/static {
        alias /home/ubuntu/app1/static/;
        expires 1y;
        add_header Cache-Control "public";
        access_log off;
    }

    # for app1 fonts
    location /app1/static/fonts {
        alias /home/ubuntu/app1/static/fonts/;
        add_header Access-Control-Allow-Origin *;
        expires 1y;
        add_header Cache-Control "public";
    }
}
```
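To avoid the add_header gotcha described above without copy-pasting the security headers into every location block, one option is to factor them into a shared snippet file and include it wherever a location defines its own headers. This is a sketch under that assumption; the snippet path is hypothetical:

```nginx
# /etc/nginx/snippets/security-headers.conf (hypothetical path)
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
add_header Referrer-Policy "no-referrer-when-downgrade";

# In the site config: any location that adds its own header
# re-includes the shared set so nothing is lost.
location /app1 {
    alias /home/ubuntu/app1/;
    try_files $uri $uri/ /index.html;
    include /etc/nginx/snippets/security-headers.conf;
    add_header Cache-Control "no-store, no-cache, must-revalidate";
}
```

The include directive simply inlines the file, so the shared add_header directives end up defined at the location level alongside the Cache-Control header.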