
7 Big Data Analytics Trends that Big Data Engineers and CIOs Should Know

Big data deals with large data sets, both structured and unstructured, that traditional software cannot handle efficiently. Businesses, organizations, and even governments use this data to make better strategic moves. What matters in big data is not the amount of data you are working with but what you do with it. Big data is changing the way organizations manage and use business information. Major advances in big data analytics were made in 2016, and 2017 is expected to be bigger and better. According to a 2016 Gartner survey, nearly 48% of organizations invested in big data, and nearly three-quarters of those have invested or plan to invest in 2017. As big data technologies and techniques grow rapidly in the market, big data project managers and CIOs should be aware of the emerging big data analytics trends in 2017. Here is a list of the top 7 big data trends for 2017. Let us have a look.

Big Data helps fulfill customer needs

Improving customer satisfaction is very important today. As customer bases grow rapidly and competition gets tougher, it is difficult to keep the upper hand. Big data helps you achieve it by analyzing data such as what customers wish to purchase and what they purchased previously. This helps businesses gain an accurate and deep understanding of what their customers are looking for, which in turn helps them stay ahead of competitors. Many companies are using big data analytics to study and predict consumer behavior, and American Express is one of them. By analyzing past transactions, the firm uses modern predictive models instead of traditional BI-based hindsight reporting, which enables more accurate forecasts of customer loyalty. With the help of big data, American Express predicted that 24% of accounts in its Australian market would be closed within the next four months.

Combining IoT, Big Data, and Cloud

IoT, big data, and cloud are dependent on each other. Almost every IoT device includes a sensor, which collects a large amount of data (big data) and then delivers that data to a server for further analysis (cloud). As people communicate with devices directly, IoT generates a large volume of data, so big data techniques are required to handle such a large variety of data. Gartner has noted that IoT has now overtaken big data as the most popular and hyped technology. According to one report, the Indian government's smart city project was expected to use about 1.6 billion IoT devices in 2016, and smart commercial buildings are expected to be the biggest users of IoT through 2017; these two segments will use more than 1 billion connected devices by 2018. It is also predicted that by the end of the decade, tens of billions of devices will join the global network, presenting a number of opportunities and concerns for policymakers, regulators, and planners. IoT technology is still in its early stages, but the data from connected devices is increasingly destined for the cloud. So we will see the leading data and cloud firms bringing IoT services to the real world, where data can flow smoothly into their cloud analytics engines.

Big Data platforms built only for Hadoop will fail

Big data and Hadoop are often treated as synonymous. Hadoop, together with big data technologies, analyzes data and delivers it to the right place at the right time.
Organizations with diverse and complex environments are focusing on gaining deep analytical insight from both Hadoop and non-Hadoop sources, ranging from systems of record (SOR) to cloud warehouses, structured and unstructured alike. In 2017, big data platforms that are data- and source-agnostic will succeed, and the ones developed only for Hadoop will fail to deploy across use cases. This is because Hadoop was designed for very large amounts of data, and it is pointless to use Hadoop clusters for small data sets. Many businesses with small amounts of data adopted Hadoop because they felt it was mandatory for success; after a long period of research and working with data scientists, they realized that their data could be handled better by other technologies.

Big Data offers high salaries

The growth of big data analytics will result in high salaries and high demand for IT professionals with strong big data skills. Robert Half Technology predicted that the average salary for data scientists in 2017 will increase by 6.5%, to a range of $116,000 to $163,500. Similarly, big data engineers are expected to see a salary hike of 5.8%, to a range of $135,000 to $196,000, by next year. “By 2015, 4.4 million IT jobs globally will be created to support big data, generating 1.9 million IT jobs in the United States,” said Peter Sondergaard, Senior Vice President at Gartner and global head of Research. “In addition, every big data-related role in the U.S. will create employment for three people outside of IT, so over the next four years a total of 6 million jobs in the U.S. will be generated by the information economy.”

Self-service analytics analyzes data effectively

Gartner describes self-service analytics as “a form of business intelligence (BI) in which line-of-business professionals are enabled and encouraged to perform queries and generate reports on their own, with nominal IT support.” As big data experts command high salaries, many companies are looking for tools that help ordinary business professionals meet their own big data analytics requirements by reducing time and complexity, especially when dealing with different data types and formats. Self-service analytics helps businesses analyze data effectively without the need for big data experts. Companies such as Alteryx, Trifacta, Paxata, and Lavastorm are already in this field. These tools reduce the complications of working with Hadoop and will continue to gain popularity in 2017 and beyond. Qubole, a startup, offers a self-service platform for big data analytics that self-optimizes, self-manages, and improves performance automatically, resulting in outstanding flexibility, agility, and TCO. It helps businesses concentrate on their data rather than on the data platform.

Apache Spark strengthens Big Data

Apache Spark has become the big data platform of choice for many companies. According to research conducted by Syncsort, 70% of IT managers and BI analysts preferred Spark over Hadoop MapReduce because of its real-time stream processing. Spark strengthens big data because it is more mathematical, convenient, and natural to work with. Its large-scale computing capabilities have improved platforms for artificial intelligence, machine learning, and graph algorithms. This doesn't mean that Apache Spark replaces Hadoop; it improves Hadoop's big data computing capabilities. Companies that use Spark and Hadoop together are gaining greater value from big data. Using 2,100 EC2 machines, Hadoop MapReduce took 72 minutes to sort 100 TB of data on disk, while Spark took 23 minutes using 206 machines. This shows that Spark could sort the same data about 3 times faster using roughly 10 times fewer machines. Spark's designers also compared a logistic regression implementation on Hadoop and Spark, using a 29 GB dataset on 20 m1.xlarge EC2 nodes with four cores each. Each iteration took 127 seconds with Hadoop; with Spark the first iteration took 174 seconds, but from the second iteration onward each took only about 6 seconds because the cached data can be reused, making Spark up to 10x faster for iterative workloads.
Fig: Logistic Regression Implementation in Hadoop and Spark
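To put those iteration timings into perspective, here is a small back-of-the-envelope calculation in plain JavaScript, using only the numbers quoted above, comparing total logistic regression runtime on Hadoop and Spark for a given number of iterations:

// Iteration times quoted above: Hadoop ~127s per iteration,
// Spark ~174s for the first iteration and ~6s for each cached iteration after that
const HADOOP_ITERATION_S = 127
const SPARK_FIRST_ITERATION_S = 174
const SPARK_CACHED_ITERATION_S = 6

const hadoopTotal = iterations => iterations * HADOOP_ITERATION_S
const sparkTotal = iterations =>
  SPARK_FIRST_ITERATION_S + Math.max(0, iterations - 1) * SPARK_CACHED_ITERATION_S

for (const n of [1, 5, 10, 20]) {
  const speedup = (hadoopTotal(n) / sparkTotal(n)).toFixed(1)
  console.log(`${n} iterations: Hadoop ${hadoopTotal(n)}s, Spark ${sparkTotal(n)}s (~${speedup}x)`)
}
// 1 iteration:   Hadoop 127s,  Spark 174s (~0.7x) – Spark is slower on a single pass
// 10 iterations: Hadoop 1270s, Spark 228s (~5.6x) – caching pays off quickly

The takeaway is the same one the article draws: Spark's advantage comes from reusing cached data across iterations, so the gap widens as the job becomes more iterative.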
Growth of Cloud-based Big Data Analytics

Cloud computing and big data are two of the hottest technologies in IT today, and current trends show that they will become even more intertwined in the coming years. Services such as Google BigQuery, Microsoft Azure SQL Data Warehouse, and Amazon Redshift are built on cloud computing, enabling customers to scale the storage and processing capacity of their data warehouse up or down as needed. According to IDC, spending on cloud-based big data analytics (BDA) technologies will grow 4.5x faster through 2020 than spending on on-premises big data analytics solutions. Cloud computing reduces the cost involved in big data analytics. According to a recent survey, 18% of small enterprises and 57% of medium enterprises currently use analytics solutions, and those numbers are predicted to climb in the coming years because of the importance of the cloud. Businesses using big data and the cloud can accelerate their product development cycles, react quickly to changing market conditions, and uncover new markets they were not previously aware of. It is clear that big data and cloud computing can be a turning point for smaller enterprises.

Conclusion

Big data is a technology that is growing quickly in importance. It serves as a backbone for organizations trying to make sense of the fast-moving world we live in. Gartner predicted that by 2020, IoT and big data will be used together to update and digitize 80% of business processes. To use the full power of big data, first figure out how you can use your company's strategic data and master data to build analytics and reporting that represent your core strengths and operations. Also Read: DevOps for Big Data

Google releases developer preview of TensorFlow Lite

Back in June, Google made an announcement at Google I/O about a new version of TensorFlow. Finally, on Tuesday, 14 November, Google released the developer preview of TensorFlow Lite. The search giant has released the software library with the aim of creating more lightweight machine learning solutions for smartphones and embedded devices. Today, more and more mobile devices incorporate purpose-built custom hardware to process machine learning workloads efficiently. Google's TensorFlow Lite supports the Android Neural Networks API, which helps with quick initialization and improved model load times on a variety of mobile devices. The primary purpose of TensorFlow Lite is to bring low-latency inference from machine learning models to devices that are relatively less powerful. To put it simply, rather than training new models from existing data, TensorFlow Lite aims to apply the power of already trained models to the new data provided on the device. As per the Google Official Developer's Blog, “With this developer preview, we have intentionally started with a constrained platform to ensure performance on some of the most important common models. We plan to prioritize future functional expansion based on the needs of our users.” TensorFlow Lite already supports a number of models that have been trained and optimized for mobile, including MobileNet and Inception v3 for image recognition and Smart Reply for on-device conversational replies.

DevOps for Big Data - Integration benefits and challenges

DevOps? “DevOps is not a goal, but a never-ending process of continual improvement” - Jez Humble DevOps is the advanced standard of software development and delivery that improves the communication and collaboration between development and operation teams. Collaboration and communication are crucial for DevOps and QA (Quality Assurance) is essential for an effective communication of Dev team and Ops team. DevOps methodologies gaining widespread acceptance Lack of communication between developers and operations team has slowed down the process of development. DevOps was developed to overcome this drawback by providing better collaboration which results in faster delivery. It offers uninterrupted software delivery by minimizing and resolving the complex problems faster. Most of the organizations have adopted DevOps methodologies to enhance user satisfaction, deliver the high-quality product within short time and improve efficiency and productivity. DevOps structures and strengthens the software delivery pipeline. It has become more popular in 2016 as more and more organizations moved to the DevOps usage. Clients who adopted technologies like Cloud, Big Data etc. are demanding companies to deliver the software-driven capabilities that they have ever done before. A recent survey proved that 86% of organizations believe that continuous software delivery is crucial to their business. Need of DevOps for Big Data The process of gaining an accurate and deep understanding of Big Data projects is really challenging. And with lack of communication between Big Data developers and IT operations, it becomes even more tough, which is common for more companies. Because of this, IT developers are facing many difficulties to deliver quality results. This has stimulated analytics scientists to update their algorithms which require infrastructure and resources excessively than originally expected. And on the other hand, with lack of communication, the operations team is kept out of the process until the last minute. This declines the potential competitive advantage of big data analytics which is why DevOps is needed for Big Data to stop this from happening. DevOps tools for Big Data result in the higher efficiency and productivity of Big Data processing. DevOps for Big Data makes use of almost the same tools like the traditional DevOps environments such as bug tracking, source code management, deployment tools and continuous integration. Challenges involved in the integration of Big Data and DevOps If you have finally chosen DevOps to integrate with your Big Data project, then it is crucial to understand the different types of challenges that you might experience in transit. The operations team of an organization must be aware of the techniques that are used to implement analytics models and acquire in-depth knowledge of big data platforms. And the analytics experts must learn some advanced things as well, as they work close to social engineers. Additional human resources and cloud computing will be required if you want to operate Big Data DevOps at maximum efficiency, as these services help IT departments to concentrate more on enhancing business values instead of focusing on fixing provisioning hardware, operating systems, and some other works. Benefits of integrating Big Data and DevOps are leading to more integration challenges. Though DevOps build strong communication between developers and operation professionals, it is not in the data scientist's language. 
And the testing of the function of analytic models should be meticulous and faster in the production-grade environments because of the high-performance requisites on advanced analytics. Benefits of integrating Big Data and DevOps Employing data specialists can be an added advantage for organizations who are working to adopt DevOps that helps to make the Big Data operations more powerful and efficient, as DevOps is not associated with data analytics. Integration of Big Data and DevOps results in the following benefits for organizations. Updates software in an effective way In general, the software combines with data in any manner. So, if you want to update your software, it is necessary to have knowledge of the types of data sources your application is collaborating. This can be known by interacting with your data experts which is nothing but the integration of DevOps and Big Data. Error rates can be minimized Mostly, data handling problems result in a high chance of errors when the software is being coded and tested. Finding and avoiding those errors in the first place in the software delivery pipeline saves time and effort. Data-related errors can be fixed in an application with strong collaboration between the DevOps and data experts team. Builds strong relationship with production and development environments A software that runs with Big Data can be difficult for non-data experts to understand, as the types and range of data in the physical world are varying tremendously. Data experts help the other teams to gain knowledge about the types of data challenges that their software will experience in production. DevOps team working in collaboration with Big Data team results in applications whose performance in the real world is the same as that of in development and testing environments. Conclusion Though DevOps has grown up and is matured enough to deliver the software and services faster, it is still not considered as a key approach by most of the enterprises worldwide. Large-scale enterprises are still following the old approaches because of the main reason that they believe the transition to DevOps might fail. Most of the industry leaders are responsible for this, as they explained that transit to DevOps is useful and helpful, that will deliver better results in the long run. But actually, the move to DevOps can help the businesses deliver high-quality products within a short time.  

Google unveils a new spatial audio SDK for AR and VR developers

Image Source: Google Official Blog Today, Google launched Resonance Audio, a new spatial audio SDK that helps to make the development of virtual reality and augmented reality easier across desktop and mobile platforms. The new Resonance Audio SDK uses Ambisonic techniques to maintain better audio quality on smartphones and it also helps developers design how audios undergo a change when you walk around and even when you turn your head. This software development kit runs on different platforms such as Android, Windows, iOS, Linux, and MacOS and provides integrations for Unreal and Unity Engine, DAW, Wwise, and FMOD. It can also be used in web projects and implemented into your DAW (digital audio workstation) of choice. Another feature of SDK is that it automatically offers near-field effects immediately when the sound sources come close to the listener’s head. It also allows sound source spread, by identifying the width of the source. “We’ve also released an Ambisonic recording tool to spatially capture your sound design directly within Unity, save it to a file, and use it anywhere Ambisonic soundfield playback is supported, from game engines to YouTube videos”, said Eric Mauskopf, product manager at Google. Generally, game developers face difficulties when dealing with hundreds of sounds taking place simultaneously. All these complexities can cause many issues that could result in product shipping with basic audio. Resonance Audio helps resolve such problems by using some tricks like analyzing how certain sounds re-echo in different environments.  
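The post notes that Resonance Audio can also be used in web projects. As a rough sketch of what that looks like, based on the Web SDK examples published with the announcement (the ResonanceAudio class and method names below follow those examples, but treat the details as approximate, and the asset path is hypothetical):

const audioContext = new AudioContext()
const scene = new ResonanceAudio(audioContext)   // from the 'resonance-audio' Web SDK
scene.output.connect(audioContext.destination)   // route the spatialized mix to the speakers

// Feed an ordinary <audio> element into the scene
const audioElement = document.createElement('audio')
audioElement.src = 'example-sound.wav'           // hypothetical asset path
const elementSource = audioContext.createMediaElementSource(audioElement)

const source = scene.createSource()
elementSource.connect(source.input)

// Moving this position (or the listener) is what produces the spatial effect described above
source.setPosition(-1, 0, 0)
audioElement.play()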

A Sneak Peek into the world of Digital Twin Technology

With the rapid growth of IoT (Internet of Things) devices, the significance of the concept of a digital form of a physical object has attracted people’s attention in recent times. Gartner evaluated that by the end of 2017, there will be nearly 8.4 Billion connected devices because of which it is difficult for the traditional processes and tools to understand the velocity and volume of digital data from IoT systems. Hence, Digital Twin, which is combined with machine learning and advanced analytical tools is the best one that overcomes the drawbacks of traditional tools. In simple words, Digital Twin is a virtual model of physical objects, systems, and processes that can be employed for different purposes. Why Digital Twin is Important? Digital twin is generally built during the design and manufacturing process but remains functional over the entire lifecycle of the product. Once the product is placed in the field, the digital twin provides the maintenance and service functions of the product exclusively. It also schedules the predictive and preventive maintenance activities including tooling and calibration management by analyzing the products performance and current status. It is clear that digital twin technology is very effective and valuable, as it enhances the maintenance operations for all types of equipment and machinery products. As Industrial IoT data is increasing rapidly, digital twin is becoming more useful and important in promoting the development of product design and support. Moreover, professionals can coordinate with each other under one platform, work efficiently and improve performance by reducing the errors with the help of this new technology. Generally, IoT sensors analyze the gathered data to make the business verdicts better. But including digital twin in the operation helps businesses make a digital replica of the product and understand the real-world experiences of the product in its context. So, now let us have a look at how digital twin works and its applications. How Digital Twin works: Digital Twin technology bridges the gap between the physical world and digital world. It gathers data from sensors that track the working of a physical object and then employs algorithms that provide a deeper understanding and awareness about the future, based on the dynamic model respond technique. Sensors here alert the users if the machine is about to crash or if it fails, which saves a lot of cost and time particularly when using big machines. The machine learning process here doesn’t require any knowledge on machines operation. It just requires a learning phase to anticipate the system performance. To conclude, digital twin is an effective software model that generates digital information for digital products. Industrial Applications of Digital Twin Technology: In the beginning, it was implemented only in the manufacturing industry, but now we can find the digital twin applications in various fields and a few are listed below: 1. Automobile- Creates digital replica of a connected vehicle Digital Twin in automobile sector helps to analyze the overall performance of the vehicle and the connected features as well. It is the best technology for building the digital replica of a connected vehicle. Example Hero Moto Corp was the first automobile company in India that started a project on Digital Twin in 2016 to make the changes and improvements digitally before spending money on physical facilities. 
NASA uses the digital twin technology to build the next generation space crafts which are completely impossible to track in the physical world i.e in real time. 2. Healthcare- Delivers high-quality services to patients Digital Twins in the healthcare sector helps to deliver high-quality services to the patients. For instance, a surgeon can get the digital visualization of the heart with the help of digital twins before operating on the patient’s heart. Example Recently, Dassault introduced “The Living Heart” which is a heart of digital twin created over a period of 2 years. Surgeons at Bioengineering institute have built a digital lung that operates like a blood and flesh one with 300 million alveoli. 3. Retail- Offers best consumer experience Customer experience plays an important role in any retail business. A digital twin enhances the customer services by designing modeling fashions and virtual twins for them. In addition, it also offers better security implementation, energy management, and in-store planning in an optimized way. Example Grundfos uses digital twin to serve their customers effectively through enhanced performance and product quality, optimised maintenance, improved development, productivity, and reduced overall risks and costs. 4. Smart Cities- Improves the economic development of a city A city with digital twins is said to be smart in the digital race. It improves the economic management and development of resources and enhances the citizens’ life quality. Digital twin gathers the data to help city planners in achieving the desired results in the future as well. Example 3DEXPERIENCity i.e ‘Virtual Singapore’ is a project that is in progress now and is anticipated to be done by 2018. As Singapore’s population is growing at a rapid pace, this will help improve the status of its living environment. 5. Industrial Firms- Monitors and manages the industrial systems digitally Industrial firms that are included with digital twins can track and manage the industrial systems virtually. Not only the operational data, a digital twin can also gather the environmental data such as configuration, location, financial models and more that helps in anticipating the future anomalies and operations. Example GE’s (General Electric) digital wind farm concept is the best example that shows how digital twins will enhance industrial performance. Digital wind farm helps to develop the structure of wind turbines before manufacturing. GE has already carried out more than 5,00,000 digital twins throughout the production line codes. On the other hand, the Singapore government, in collaboration with 3D design software giant Dassault Systemes, is creating a digital replica of the country with an objective to improve the urban planning process. Conclusion: In recent years, there has been an unexpected progress in the technologies and capabilities of both the physical product and virtual product, the Digital Twin. Digital Twin is generally driven by machine learning, artificial intelligence, sensors, data and analytics and depends on the IoT technologies. Digital Twin IoT is therefore expected to improve industrial IoT deployments. Professionals also predict that more than 85% of digital twins will be adopted by all IoT platforms, within the next five years.
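As a purely illustrative sketch of the loop described in the "How Digital Twin works" section above (gather sensor readings, keep a virtual model in sync, and raise an alert before a failure), here is a minimal JavaScript example; the class name, fields, and threshold are all hypothetical:

// Hypothetical digital twin of a pump: mirrors sensor readings and warns
// when the modeled state suggests an imminent failure
class PumpTwin {
  constructor(maxBearingTempC = 80) {          // hypothetical alert threshold
    this.maxBearingTempC = maxBearingTempC
    this.state = { bearingTempC: 0, rpm: 0, updatedAt: null }
  }

  // Called whenever the physical pump's sensors report new data
  ingest(reading) {
    this.state = { ...reading, updatedAt: new Date() }
    if (this.predictFailure()) {
      console.warn('Maintenance alert: bearing temperature trending too high')
    }
  }

  // Stand-in for the analytics/machine-learning model a real twin would use
  predictFailure() {
    return this.state.bearingTempC >= this.maxBearingTempC
  }
}

const twin = new PumpTwin()
twin.ingest({ bearingTempC: 65, rpm: 1450 })   // normal operation
twin.ingest({ bearingTempC: 83, rpm: 1430 })   // triggers the maintenance alert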

List of 6 NodeJS Modules for Developing Networking and Server-Side Apps

Node.js is a cross-platform server-side runtime environment built on Google Chrome's V8 engine and released under the MIT license. It uses JavaScript to build networking and server-side applications such as real-time chat apps, web apps, command-line apps, and REST API servers that run on macOS, Windows, and Linux. According to Stack Overflow, JavaScript is currently the most popular and preferred language in the developer community worldwide. Node.js offers a library of JavaScript modules that makes building web apps much easier. As each module has its own scope, it does not affect the global scope of other modules. Here is a list of Node.js modules that will help you develop applications effectively and simply.

Async

Async is a utility module that offers powerful and straightforward functions for working with asynchronous JavaScript. Although it is designed to be installed via npm install async for Node.js, it can also be installed via Bower (bower install async), component (component install caolan/async), jam (jam install async), and spm (spm install async). It provides around 70 functions, including the usual 'functional' suspects such as map, reduce, filter, and each, along with common patterns such as parallel, series, and waterfall for asynchronous control flow.

Express

Express is an open, fast, and simple web framework that supports template engines and file uploads and helps you build single-page and real-time apps quickly and easily on the Node.js platform. Use the npm install -g express command if you want to install it globally, because it is not one of Node.js's built-in modules. It has handy features such as the ability to designate a static file folder that functions as a simple file server, and it offers a compact, powerful design for HTTP servers that saves us from writing a lot of boilerplate HTTP server code.

Forever

Forever is an effective tool that restarts your Node process if it crashes. If you really don't want your app to stop even when it crashes, Forever is the best choice. Forever also lets you define a minimum uptime for your Node process, which keeps scripts that fail immediately from being restarted in an endless loop. It can be used in two different ways: either by incorporating it into your code or from the command line. Forever can be installed globally as a command-line module via npm install -g forever. If you choose to use Forever programmatically, then forever-monitor needs to be installed instead.

Gulp

Developers who are well versed in JavaScript libraries may have heard of the task runner Grunt, a tool for automating tasks such as compilation, minification, unit testing, and much more. Gulp is a newer JavaScript task runner that performs the same kind of automation. It is a streaming build system that allows developers to automate tedious web development tasks in their workflow. If you already have Node.js and the Node Package Manager (NPM) installed on your system, Gulp can be installed simply by typing npm install gulp -g in the command-line interface.
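As a quick illustration of the streaming, task-based style described above, here is a minimal gulpfile using the gulp 3 API that was current at the time of writing (the src and dist paths are just placeholders):

// gulpfile.js
const gulp = require('gulp')

// Copy every JavaScript source file into a build folder (a real task would
// pipe through plugins for minification, transpilation, and so on)
gulp.task('scripts', () =>
  gulp.src('src/**/*.js').pipe(gulp.dest('dist'))
)

// Re-run the scripts task whenever a source file changes
gulp.task('watch', () => gulp.watch('src/**/*.js', ['scripts']))

// Running plain `gulp` executes the default task
gulp.task('default', ['scripts', 'watch'])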
Moment

Moment offers date and time functionality, which means you can add to, format, and alter dates and times as per your requirements. For example, if you are making modifications in the database and want to keep track of when a record was created or last changed, Moment makes it easy to work with fields such as "DateCreatedOn" and "LastModifiedOn". It is designed so that it can be installed via npm install moment on the Node.js platform or used directly in the browser; all of the code and all of the unit tests run and work in both environments.

Mongoose

Because of the shared use of JSON, Node.js and MongoDB are frequently used together. Mongoose is a MongoDB object data modeling (ODM) library that lets you see at a glance what your data structure is and helps solve common real-world application issues while keeping the flexibility of MongoDB. Mongoose is designed specifically to run in an asynchronous environment and can be installed via npm install mongoose. A small example combining Mongoose and Moment is sketched below.
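The following sketch ties two of the modules above together: a Mongoose schema with a creation date, formatted for display with Moment. The database name and schema fields are placeholders, and it assumes a MongoDB instance is running locally:

const mongoose = require('mongoose')
const moment = require('moment')

mongoose.connect('mongodb://localhost/demo')     // hypothetical local database

// A minimal schema that records when each document was created
const postSchema = new mongoose.Schema({
  title: String,
  createdOn: { type: Date, default: Date.now }
})
const Post = mongoose.model('Post', postSchema)

Post.create({ title: 'Hello Mongoose' })
  .then(post => {
    // Moment turns the stored Date into a human-readable string
    console.log(`"${post.title}" was created ${moment(post.createdOn).fromNow()}`)
    return mongoose.disconnect()
  })
  .catch(err => console.error(err))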

Functional Programming in JavaScript

A Gentle Introduction to Functional Programming Functional Programming (often abbreviated as FP) is a programming paradigm that heavily utilizes functions to change how application state is managed. It is declarative rather than imperative, meaning programming in a functional manner tells your code what to do, rather than telling your code how to do it. Functional programming has a very steep learning curve, mainly because it’s very different than how most applications are written. However, once the basic patterns are learned, programming with the FP paradigm tends to result in more stable applications, because of its predictability. This article will break FP into three concrete steps. Of course, there’s a lot more than just three steps, but this will give you a great start in learning how FP works. Pure functions Functional Composition Currying Pure Functions Pure functions are simply functions that take an input and generate an output. They do not perform any side effects. Given the same input to a pure function, you can expect the same output. Run the same input through the same pure function and you should always get the same answer, each and every time. Here’s an example of a pure function: const add = x => x + 2 add(2) // always get 4 add(2) // still 4 add(2) // still 4 When you call,add(2) you get 4, no matter how many times you call it. Example CodePen: https://codepen.io/abustamam/pen/vWBpJL?editors=1010 However, here’s an example of an impure function: let y = 2 const add = x => x + y add(2) // get 2 at first… y = 5 add(2) // now, you get 7! Example CodePen: https://codepen.io/abustamam/pen/XzrVqG?editors=1010 Since the add function depends on y, which is outside of the scope of the function, it is considered impure. Its output can change because the function does not only depend on its own arguments. Another factor of pure functions is that it does not cause any side effects. That is, anything outside of the scope of the function stays the same. Here’s an impure function that breaks that rule: let y = 2 const add = () => y += 2 add() // 4 add() // 6 Example CodePen: https://codepen.io/abustamam/pen/mqbpKO?editors=1010 When writing pure functions for the purposes of functional programming, you want to be sure that any output is derived deterministically from the functions arguments and/or immutable constants. For example, you could have a constant called PI, to store the transcendental number, and your function can depend on it because that number is not going to change. Usage of pure functions is essential to understanding functional programming. If the state is changed outside of a function, a lot of bugs can be introduced due to impurity. Now, if you’re a web developer, it’s almost impossible to write everything in pure functions. For example, a simple console log is considered a side effect. Any function with a side effect is by definition not pure. A console log is a pretty harmless side effect, but what about DOM manipulations? What if I want the results of my mapping function to be written to the DOM? Do I need to pass the entire DOM in as a function, and then return the new DOM out? Well, that’s one of the purposes behind libraries such as React and Vue. They wrap up all of the DOM manipulating side effects into the library so that DOM can be written to deterministically (as a function or props or state), and you can still use pure functions to produce React or Vue components. 
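To tie that last point back to pure functions: a UI component can itself be written as a pure function of its props, with the library handling the DOM side effects. A tiny illustrative sketch (the component here just returns a markup string, no real framework involved):

// A pure "component": same props in, same markup out, no side effects
const Greeting = ({ name }) => `<h1>Hello, ${name}!</h1>`

Greeting({ name: 'Ada' }) // '<h1>Hello, Ada!</h1>'
Greeting({ name: 'Ada' }) // still '<h1>Hello, Ada!</h1>', every single time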
Functional Composition One of the main allures of functional programming is the fact that it allows you to compose functions. This is a fancy way of saying, put the result of a function in another function. It could look like this: const add10 = x => x + 10 const double = x => x * 2 add10(double(5)) // 20 double(add10(5)) // 30 Example CodePen: https://codepen.io/abustamam/pen/GOKyBr?editors=1010 We can use a function called compose to express functional composition. Here’s a simple compose function: const compose = (...fns) => arg => fns.reduce((acc, fn) => fn(acc), arg) In case you aren’t familiar with the spread operator, we are allowing compose to take a variable number of arguments, and we store that in an array we are naming fns, the compose function that takes in a bunch of functions and then returns a new function that takes in the argument. Then the reduce function sequentially calls each function from left to right, passing in arg first, then passing that result into the next function. By using this compose function, we can now create new functions with it: const add10ThenDouble = compose(add10, double) const doubleThenAdd10 = compose(double, add10) add10ThenDouble(5) // 30 doubleThenAdd10(5) // 20 Example CodePen: https://codepen.io/abustamam/pen/aVoEje?editors=1010 Now, be very careful. The order of things can get very confusing. Note the equivalence of the functions: add10(double(5)) === doubleThenAdd10(5) // true double(add10(5)) === add10ThenDouble(5) // true It may seem backward, but your code actually goes from innermost to outermost parentheses (remember PEMDAS or BODMAS from elementary school). So for add10(double(5)), it doubles 5 first, then adds 10. When doing functional composition, we compose them in the order of computation, not in the order of writing when we write our code. However, it should be simple enough to write a function that applies the compose functions in reverse, so that it appears in the same order as writing the code. Currying Sometimes we want to compose functions that take more than one argument. How could that be possible? For example, if I had a few basic mathematical functions: const add = (x, y) => x + y const subtract = (x, y) => x - y const multiply = (x, y) => x * y const divide = (x, y) => x / y I’d like to be able to compose a bunch of these arithmetic functions together, but how? This is where a concept called currying comes in handy. Currying converts a function that takes in a certain number of arguments (which is called its arity), and if supplied fewer arguments than it takes, return a new function that takes in the rest of the arguments. This may sound bizarre, so here is how we would curry the arithmetic functions above: const add = x => y => x + y const subtract = x => y => y - x const multiply = x => y => x * y const divide = x => y => y / x Now, we can use functional composition to perform any amount of arithmetic chaining: compose(add(5), subtract(1), multiply(4), divide(2))(1) // 10 Example CodePen: https://codepen.io/abustamam/pen/OOLzoj?editors=1010 However, what if we had more complex functions that took in more than just two arguments? const sum = (x, y, z) => x + y + z const curriedSum0? = x => y => z => x + y + z // or is it const curriedSum1? = (x, y) => z => x + y + z // or perhaps const curriedSum2? = x => (y, z) => x + y + z The answer is it could be all of the above. 
You should be able to call a curried function with arguments in the same list, or applied separately, like so:

curriedAdd(3)(4) // 7
curriedAdd(3, 4) // 7
curriedSum(1, 2, 3) // 6
curriedSum(1)(2)(3) // 6
curriedSum(1, 2)(3) // 6
curriedSum(1)(2, 3) // 6

Thankfully, just like there's the compose function we made above, there's also a curry function. Here's a simple implementation:

const curry = (uncurriedFn, ...args) => {
  // you can get the arity of a function by calling length on the fn
  const arity = uncurriedFn.length
  const accumulator = (...accArgs) => {
    let argsCopy = [...args]
    if (accArgs.length > 0) {
      argsCopy = [...argsCopy, ...accArgs]
    }
    // once we have enough arguments, call the original function
    if (argsCopy.length >= arity) return uncurriedFn.apply(null, argsCopy)
    return curry.apply(null, [uncurriedFn, ...argsCopy]) // recurse
  }
  // if any args passed in, pass them to the accumulator
  return args.length >= arity ? accumulator() : accumulator
}

Of course, this only works if there are zero optional arguments. Every required argument of the function you wish to curry must be named for fn.length to work. If there are any optional arguments, then currying will not work. This is why some functional libraries like lodash or Ramda split the variadic functions into multiples; for example, lodash's regular get function, which accepts two arguments (three with a default), is split into get and getOr in the functional variety of the library. Now, we can do the functional composition to use our new sum function:

const [
  curriedAdd,
  curriedSubtract,
  curriedMultiply,
  curriedDivide,
  curriedSum
] = [
  add,
  subtract,
  multiply,
  divide,
  sum
].map(i => curry(i)) // this simply creates curried versions of our functions

compose(add(5), subtract(1), multiply(4), divide(2), curriedSum(3, 4))(1) // 17

CodePen Example: https://codepen.io/abustamam/pen/rYBpob?editors=1010 Functional programming libraries like lodash FP and Ramda curry all functions by default, so you don't have to worry about doing that wonky curry thing I did above. However, if you wish to curry your own functions, you can use your functional library's curry function. Also keep in mind that because the data is not passed in until the end, it is a common pattern for functional programming library functions to take in the data as the last argument, as opposed to the first (which is common in non-functional programming libraries, like vanilla lodash). This works 90% of the time, so be sure to keep your argument order in check.

Common Functional Patterns in JavaScript

There are a lot of functional patterns in JavaScript. In fact, JavaScript has its own methods that can be used in a functional manner. The three most common ones are map, filter, and reduce.

The Map Function

The map function operates on an array and spits out an array of the same length that has the mapper function applied to each item of the array. Before showing you how it's used, here's how it is often done in an impure manner:

const nums = [1, 2, 3, 4, 5]
let doubledNums = []
for (let i = 0; i < nums.length; i++) {
  doubledNums.push(nums[i] * 2)
}
// some students may also use forEach:
nums.forEach(num => doubledNums.push(num * 2))

This approach will work, but it breaks our rule of pure functions. The FP way of doing this would be to use the map function:

const doubledNums = nums.map(num => num * 2)

Example CodePen: https://codepen.io/abustamam/pen/pdzpYx?editors=1010 Not only is this more concise, but there are no extra variables, and the original variable nums is left untouched.
The Filter Function

The filter function is another native pattern of JavaScript. It operates on an array and returns all values that adhere to the filter function. Here's a naive approach using beginner JavaScript methods:

const nums = [1, 2, 3, 4, 5]
let evenNums = []
for (let i = 0; i < nums.length; i++) {
  if (nums[i] % 2 === 0) {
    evenNums.push(nums[i])
  }
}
// some students may also use forEach:
nums.forEach(num => num % 2 === 0 && evenNums.push(num))

And here's how you could use filter for the same thing:

const evenNums = nums.filter(num => num % 2 === 0)

Example CodePen: https://codepen.io/abustamam/pen/POYEgJ?editors=1010 So it runs each num of the nums array through the filter function, and if the filter function returns truthy, the num gets to stay in the filtered evenNums array.

The Reduce Function

The reduce function may be familiar to anyone who has used Redux before (after all, it runs on reducers). The reduce function operates on an array, and it carries an accumulator. This could be a sum, it could be an array, it could be the intermediary results of functional composition. The point of reduce is that it's extremely versatile. In fact, I will leave it as an exercise for you to recreate map and filter using the reduce function. The simplest way to show how the reduce function works is with a sum. First, the beginner way:

const nums = [1, 2, 3, 4, 5]
let sum = 0
for (let i = 0; i < nums.length; i++) {
  sum += nums[i]
}
// some students may also use forEach:
nums.forEach(num => sum += num)

As mentioned before, this leads to an impure function. We define a sum and it changes over time. We don't want that. With reduce, we can specify our starting value, and then we can specify what we want to do at each step of the iteration:

nums.reduce((accumulator, num) => accumulator + num, 0)
// alternatively, without an explicit starting value, reduce uses the first
// element of the array as the initial accumulator:
nums.reduce((accumulator, num) => accumulator + num)

This starts the accumulator at zero, and at each step, we pass the return value of the accumulation function in as the accumulator for the next iteration.
Here's how it works piece by piece:

const sum = nums.reduce((accumulator, num, i) => {
  console.log('Iteration', i, ', accumulator', accumulator, ', number', num) // for clarity
  return accumulator + num
}, 0)
console.log('Final sum:', sum)

We will see the following logs:

Iteration 0, accumulator 0, number 1
Iteration 1, accumulator 1, number 2
Iteration 2, accumulator 3, number 3
Iteration 3, accumulator 6, number 4
Iteration 4, accumulator 10, number 5
Final sum: 15

Example CodePen: https://codepen.io/abustamam/pen/EbYozY?editors=1010 Let's see how we can use reducers to make a simple state management system:

const initialState = { counter: 0 }

const counterReducer = (state = initialState, action = {}) => {
  switch (action.type) {
    case 'INCREMENT': {
      const { counter } = state
      return { ...state, counter: counter + 1 }
    }
    case 'DECREMENT': {
      const { counter } = state
      return { ...state, counter: counter - 1 }
    }
    default:
      return state
  }
}

const state0 = counterReducer() // { counter: 0 }
const state1 = counterReducer(state0) // { counter: 0 }
const state2 = counterReducer(state1, { type: 'INCREMENT' }) // { counter: 1 }
const fakeState = counterReducer(state1, { type: 'INCREMENT' }) // still { counter: 1 }
const state3 = counterReducer(state2, { type: 'INCREMENT' }) // { counter: 2 }
const state4 = counterReducer(state3, { type: 'DECREMENT' }) // { counter: 1 }
const anotherFakeState = counterReducer(state3, { type: 'DECREMENT' }) // still { counter: 1 }
const state1Branch = counterReducer(state1, { type: 'DECREMENT' }) // { counter: -1 }
initialState // untouched: { counter: 0 }

As you can see, by keeping the reducer a function of a state and an action, we can maintain a history of all of our states, and we can even "time-travel" and see what would happen if we used a different action at a specific point in time. This makes Redux really simple to test, because once you know the inputs, the outputs should be predictable. By the way, the Redux example above is a perfect example of where I would use currying and functional composition. Let's say I wanted to keep track of the counter and how many times the user clicked a button, any button. I'm going to use a function called set, which is a curried function that takes in the key, the intended value, and the object to update. It does so in an immutable manner, without mutating the original object. The set function can come from the lodash FP library, but I've implemented it myself to reduce the amount of "magic" going on.

const initialState = { counter: 0, timesClicked: 0 }

const set = curry((key, value, obj) => ({ ...obj, [key]: value }))

const counterReducer = (state = initialState, action = {}) => {
  switch (action.type) {
    case 'INCREMENT': {
      const { counter, timesClicked } = state
      return compose(
        set('counter', counter + 1),
        set('timesClicked', timesClicked + 1),
      )(state)
    }
    case 'DECREMENT': {
      const { counter, timesClicked } = state
      return compose(
        set('counter', counter - 1),
        set('timesClicked', timesClicked + 1),
      )(state)
    }
    default:
      return state
  }
}

CodePen Example: https://codepen.io/abustamam/pen/MOgrdL?editors=0010#0 As you can see, whenever the action.type matches, we pipe the state through the functions passed into the compose function. It's like calling set('counter', counter + 1, state) and then set('timesClicked', timesClicked + 1) on that new state.
Some Real-World Examples I use functional composition all the time in my Node.JS and ReactJS apps. For example, to parse a URL like /users/me?apiKey=foo, you could use a function like this: const { flow, split, head, tail, filter } = require(‘lodash/fp’) const getUserPath = flow(  split(‘?’),    // splits string by ?, creates an array  head,          // gets the first value  split('/'),    // splits string by /, creates an array  tail,          // gets all but first element of array  filter(v => v) // removes all empty strings and falsey values ) getUserPath('/users/me?apiKey=foo') // [‘users’, ‘me’] Example CodePen: https://codepen.io/abustamam/pen/GOKQKv?editors=0010#0 This will return an array that reads something like [‘users’, ‘me’], which I can use to query resources or do whatever I want. Flow works like compose; I prefer it to compose because lodash’s native compose function goes from right to left, which is weird for me. Note that we could write a function called isTruthy and make it something like  v => v, then we could use filter(isTruthy) to make our code read a bit better. However, though not entirely necessary, that’s one of the options that are available to you in FP. I’ve also used this to work with large datasets, using the Highcharts library. const { map, sum, filter } = require(‘lodash/fp’) const getAvgInView = series => {  const data = flow(    filter(point => point.isInside),    map(point => point.y)  )(series.points)  return (sum(data) / data.length).toFixed(1) } const s = {  points: [    { x: 0, y: 10, isInside: false },    { x: 1, y: 100, isInside: false },    { x: 2, y: 50, isInside: true },    { x: 3, y: 10, isInside: true },    { x: 4, y: 30, isInside: true },    { x: 5, y: 10, isInside: false },    { x: 6, y: 1, isInside: false },  ] }; getAvgInView(s) // 30.0 Example CodePen: https:// https://codepen.io/abustamam/pen/dZbdMN?editors=0010#0 Pass in a series of data, and it will average all the y values of all the points that are inside the viewport. Highcharts does that for me, but I have to actually make the average calculation. Both of these examples take advantage of pure functions (none of these functions, composed or otherwise, mutate any data outside of the scope of the function), functional composition, and currying. By keeping these three concepts in mind, you should be able to convert a lot of your code into cleaner, functional code. Wrapping it all up I hope this introductory post was helpful for you to understand how functional programming works. To test your newfound knowledge, try to implement the map function and the filter function by using reduce. For simplicity, have your function take in an array rather than adding to the Array’s prototype, but you can do that as well. // all of these are valid const myMap = (mapperFn, arr) => { return arr.reduce(...) } const myMap2 = (arr, mapperFn) => { return arr.reduce(...) } Array.prototype.myMap = function(mapperFn) { return this.reduce(...) } Remember the rule of pure functions. You cannot create intermediary variables and mutate them. There’s no need for functional composition or currying. Good luck and happy coding!!  
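If you want to check your work on the exercise above, here is one possible solution sketch that rebuilds map and filter on top of reduce, taking the array as an argument and avoiding any mutation:

// map via reduce: start from an empty array and append the mapped value at each step
const myMap = (mapperFn, arr) =>
  arr.reduce((acc, item) => [...acc, mapperFn(item)], [])

// filter via reduce: only keep the items the predicate approves of
const myFilter = (predicateFn, arr) =>
  arr.reduce((acc, item) => (predicateFn(item) ? [...acc, item] : acc), [])

myMap(n => n * 2, [1, 2, 3]) // [2, 4, 6]
myFilter(n => n % 2 === 0, [1, 2, 3, 4]) // [2, 4]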

Google and Cisco Collaborate on a new Hybrid Cloud Environment

On Wednesday, Google announced its partnership with Cisco to offer a hybrid cloud solution that helps their clients amplify security and agility with an open platform for designing and directing applications both on Google Cloud and on-premises. "This joint solution from Google and Cisco facilitates an easy and incremental approach to tapping the benefits of the Cloud. This is what we hear customers asking for," said Diane Greene, CEO, Google Cloud. The complete solution will develop, manage, protect and track workloads, allowing clients to improve their existing investments, design their cloud movement at their own pace and avoid lock-in. It will also help developers to build new apps in the on-premises and cloud using the same production environment, runtime, and tools. "Our partnership with Google gives our customers the very best cloud has to offer— agility and scale, coupled with enterprise-class security and support," said Chuck Robbins, chief executive officer, Cisco. "We share a common vision of a hybrid cloud world that delivers the speed of innovation in an open and secure environment to bring the right solutions to our customers." The hybrid cloud solution offers a reliable Kubernetes environment for both Google’s managed kubernetes service, Google Container Engine, and on-premises Cisco Private Cloud Infrastructure. This helps developers to deploy the same code in any place and avoid lock-in, with their choice of operating system, software, hypervisor, and management. According to SUSE, the solution is especially profitable to organizations, as the trend of hybrid cloud continues to boom. Nearly 66% of enterprises anticipate this hybrid cloud growth to continue, whereas 36% are for public cloud and 55% for private cloud. Source: Google Official Blog

Xamarin or Ionic - Selecting Better App Development Framework

Previously, apps are developed separately for multiple platforms using different programming languages for each platform such as Java for Android apps and Objective-C and Swift for iOS apps. Building applications that are compatible with multiple devices and platforms with common code instead different code for the same features are always preferred by developers. Xamarin and Ionic are the two most popular and effective app development frameworks that are in demand in the market to develop apps that run successfully on multiple platforms. Are you facing difficulty in choosing the best framework out of Xamarin and Ionic for building your mobile application? In this article, we will compare Xamarin vs Ionic and highlight the major differences that help you decide which framework suits best to develop your app. Ionic vs Xamarin Performance Ionic offers average user experience, as it uses web technologies to develop hybrid apps. Whereas, Xamarin offers high-quality native applications with best user experience. Hence, Xamarin is the best option if you are going to encounter with the apps in the market. Statistics below prove why Xamarin is better in providing high-quality services when compared to Ionic. Loads 25% faster than the hybrid apps Uses 50% less memory Uses 76% less CPU power Functions 22 times faster than hybrid apps Handling Complex Projects Xamarin is an obvious choice if you are dealing with complex projects, as it would help you get better results, and also its usage and maintenance are effective. Furthermore, complex applications with too much of code may not function effectively with Ionic. Ionic can be opted for low-budget projects and is perfect for applications that do not need advanced functionalities. Faster development can be achieved with Ionic. So, Ionic could be the best choice for faster time to market. Development Cost Microsoft has acquired Xamarin and included in Visual Studio (Windows) and Xamarin Studio (Mac) for free. Visual Studio license is required to develop high-priority apps, though the community version of Visual Studio is available. The license cost starts at $1000/year/developer. Whereas Ionic licenses are available at no extra cost, i.e, $0. Any organization or developer who has experience in using web technologies and proficient in Angularjs can start with it for free. You could also benefit free push notifications from Ionic Cloud Services. Development Environment Xamarin is the best choice for developers who are experts in C# and Visual Studio, as it uses C# and mostly depends on IDEs like Visual Studio and Microsoft technologies. It allows developers to code in C# and ports the applications to Android, iOS, and Windows with the native user interface for every platform. Moreover, Xamarin Test Cloud allows you to test your application on thousands of devices. Ionic is based on Angularjs and uses web technologies like HTML, CSS, JavaScript. It provides native look and feels by using Cordova, which helps you in using native device features such as GPS and Bluetooth in your applications. Ionic also uses the web view to provide UI components for several devices. Easy Maintenance Xamarin offers frequent updates to APIs whenever the new devices and features are launched to tap into it. But it is different and difficult with Ionic where you have to wait for the Ionic community to update the API to meet the requirements of new device features. 
Easy Maintenance
Xamarin offers frequent API updates whenever new devices and features are launched, so you can tap into them quickly. It is different, and more difficult, with Ionic, where you have to wait for the Ionic community to update the API to support new device features. If you want the API updated faster, you have to build it yourself, which consumes a lot of money and time. So, Xamarin is the better framework from a maintenance point of view.

Conclusion
Both frameworks, Xamarin and Ionic, have pros and cons, and the choice of the best framework depends entirely on the project requirements. Ionic is the better choice if you want to develop an app with less time and cost, and Xamarin if you want to deliver an app with higher quality and a better user experience.

Understanding Different Types of Artificial Intelligence Technology

Artificial intelligence has gained incredible momentum in the past couple of years. Current intelligent systems are capable of managing large amounts of data and performing complicated calculations very quickly, but they are not sentient machines; AI developers are still working toward that goal. In the coming years, AI systems are expected to reach and surpass human performance on many tasks. Different types of AI have emerged, each assisting other artificial intelligence systems to work smarter. In this article, we look at the different categories of artificial intelligence.

Reactive Machines AI
The most fundamental artificial intelligence systems are purely reactive: they cannot form memories or use previous experience to inform current decisions. IBM's chess-playing computer Deep Blue, which defeated international grandmaster Garry Kasparov in the late 1990s, is one example of this type of machine. Similarly, Google's AlphaGo defeated top human Go players, but it cannot assess all future moves; its evaluation method is more sophisticated than Deep Blue's because it uses a neural network to assess game developments.

Limited Memory AI
Limited memory AI is mostly used in self-driving cars. These systems constantly detect the movement of the vehicles around them, while static data such as lane markings, traffic lights, and curves in the road is added to the AI model. This helps autonomous cars avoid getting hit by nearby vehicles. It takes nearly 100 seconds of such observations for the system to make considered driving decisions. A minimal sketch contrasting a reactive agent with a limited-memory agent follows below.
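To make the distinction concrete, here is a purely illustrative TypeScript sketch; the interfaces, thresholds, and 100-second window are invented for this example and are not drawn from any real driving system. A reactive agent decides from the current observation alone, while a limited-memory agent keeps a short sliding window of recent observations and can therefore estimate how fast the gap to the car ahead is closing.

// Illustrative sketch only - not a real driving stack.
interface Observation {
  distanceToCarAheadMeters: number; // current sensor reading
  timestampMs: number;
}

// Reactive agent: no memory, the decision depends only on the current input.
function reactiveDecision(obs: Observation): 'BRAKE' | 'CRUISE' {
  return obs.distanceToCarAheadMeters < 10 ? 'BRAKE' : 'CRUISE';
}

// Limited-memory agent: keeps a sliding window of recent observations
// (discarded after roughly 100 seconds) and uses it to estimate closing speed.
class LimitedMemoryAgent {
  private history: Observation[] = [];
  private readonly windowMs = 100_000; // keep about 100 seconds of data

  decide(obs: Observation): 'BRAKE' | 'CRUISE' {
    this.history.push(obs);
    // Drop observations that have aged out of the memory window.
    this.history = this.history.filter(
      (o) => obs.timestampMs - o.timestampMs <= this.windowMs
    );
    if (this.history.length < 2) {
      return reactiveDecision(obs); // not enough memory yet, act reactively
    }
    const oldest = this.history[0];
    const elapsedSec = Math.max((obs.timestampMs - oldest.timestampMs) / 1000, 0.001);
    // Metres per second by which the gap to the car ahead is shrinking.
    const closingSpeed =
      (oldest.distanceToCarAheadMeters - obs.distanceToCarAheadMeters) / elapsedSec;
    // Brake earlier when the gap is closing quickly, not only when it is small.
    return obs.distanceToCarAheadMeters < 10 || closingSpeed > 5 ? 'BRAKE' : 'CRUISE';
  }
}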
Theory of Mind AI
Theory of mind artificial intelligence is a much more advanced technology. In psychology, the theory of mind refers to the understanding that people and things in the world can have emotions and thoughts that alter their own behavior. This type of AI has not yet been fully developed, but research suggests the way to make progress is to start by building robots that can identify eye and face movements and act according to what they see.

Self-aware AI
Self-aware AI is an extension of theory-of-mind AI. This type of AI has not been developed yet, but when it is, such systems will be able to form representations of themselves. That means machines tuned in to human cues such as attention spans and emotions, and able to display self-driven reactions.

Artificial Narrow Intelligence (ANI)
ANI is the most common form of AI and can be found in many aspects of our daily lives, for example in smartphone assistants such as Cortana and Siri that respond to users' requests. This type of artificial intelligence is referred to as 'weak AI' because it is not as capable as we would like it to be.

Artificial General Intelligence (AGI)
This type of artificial intelligence system works like a human and is called 'strong AI'. Most robots today are ANI, but a few are described as AGI or above. The Pillo robot is given as an example of AGI: it answers questions about the family's health, can dispense pills, and gives guidance, acting much like a full-time live-in doctor.

Artificial Superhuman Intelligence (ASI)
This type of AI would be able to achieve everything a human can do and more. Alpha 2, the first humanoid robot developed for the family, can manage a smart home and operate the devices in it. It notifies you of the weather conditions and tells interesting stories too, a high-powered robot that can feel like a member of your family.

Facebook re-licensed React - As it happened

Facebook has now successfully re-licensed React under the MIT license. Before that, Facebook used its own ‘BSD+Patents’ license. Things began to change when the Apache Software Foundation officially announced that none of its software projects could include Facebook’s BSD+Patents-licensed code. Chris Mattmann, Apache’s legal affairs director, said that frameworks, tools, and libraries maintained under Facebook's BSD+Patents license should not be used in any new project. Mattmann said that anyone who had already built projects on Facebook's React.js license had until August 31, 2017 to remove it and find a license compatible with the foundation's policies. "No new project, subproject or codebase, which has not used Facebook BSD+Patents licensed jars (or similar), are allowed to use them," Mattmann wrote. "In other words, if you haven't been using them, you aren't allowed to start. It is Cat‑X." Licenses that are not acceptable for use in any Apache project fall under Category X (Cat-X), which currently includes BSD-4-Clause, BCL, GNU LGPL, GNU GPL, the Microsoft Limited Public License, and more. Apache publishes a clear list of licenses that may not be incorporated into Apache products on its official site. In September, Automattic declared that it would drop React from its Gutenberg editor project unless Facebook changed the license. There is no longer any need to worry about this problem, as Facebook has now changed the license for all of its open source projects, including React, Flow, Jest, Immutable.js, and others. Explaining the decision, Facebook engineering director Adam Wolff wrote that “React is the foundation of a broad ecosystem of open source software for the web, and we don’t want to hold back forward progress for nontechnical reasons.” Here’s all that transpired.

April 20: Question Raised to ASF Legal about RocksDB Integrations
Apache Cassandra asked Apache Software Foundation Legal whether it could use RocksDB, which was then under Facebook's BSD+Patents license, as a direct dependency to expand its storage options.

July 15: ASF Banned the BSD+Patents License
After a long debate, Chris A. Mattmann, Vice President of Legal Affairs at the ASF, stated that Facebook's BSD+Patents license is not compatible with Apache Software Foundation policies for use as a dependency.

July 15: RocksDB Changed its License
RocksDB switched from its BSD+Patents license to dual Apache 2.0/GPLv2, which means RocksDB is now consistent with all Apache Software Foundation policies.

Aug 18: Facebook Stated that it Would Not Change its BSD+Patents License
Facebook announced that it would keep the BSD+Patents license because it strongly believed the license offered important protections for its users and that changing it could affect them. This prompted a few companies and communities, such as Hacker News, freeCodeCamp, and Reddit, to start considering React substitutes.

Sep 22: Facebook Announced its Shift to the MIT License
Facebook declared that it would move from the BSD+Patents license to the MIT license, mainly because React sits at the heart of a broad open source ecosystem and Facebook did not want to hold back its progress for non-technical reasons.
WordPress, the blogging platform maintained by Automattic, said it was happy with Facebook's license change and would be ready to use Facebook's open source libraries in its future projects.

Sep 25: Facebook Officially Shifted to the MIT License
Facebook officially announced that it had shifted to the MIT license, which is accepted by the ASF.

Sep 26: Facebook Released React 16
Facebook released React 16, the updated version of React, which includes the license change as well. React therefore once again remains one of the most widely approved tools for web development. The Facebook team worked on the release for more than a year before reaching this goal; beyond the license, the core has been rewritten and brings new features that help developers create UIs superior to the existing ones. A small, hedged sketch for checking which license an installed package ships under follows below.
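If you want to confirm the change in your own project, here is a minimal TypeScript (Node.js) sketch that reads the "license" field from an installed package's package.json. The file layout and package name are just the usual npm conventions used for illustration; this is not an official React or npm tool.

// checkLicense.ts - illustrative sketch: report the license an installed
// npm package declares in its package.json.
import { readFileSync } from 'fs';
import { join } from 'path';

function licenseOf(packageName: string): string {
  const manifestPath = join('node_modules', packageName, 'package.json');
  const manifest = JSON.parse(readFileSync(manifestPath, 'utf8'));
  return manifest.license ?? 'UNKNOWN';
}

// For React 16 and later this should print "MIT".
console.log(`react is licensed under: ${licenseOf('react')}`);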