
Why Your Next Project Should Be A PWA?

Technology is ever-changing, and with the rapid growth of smartphone adoption, businesses are changing the way they market their products and services. As mobile evolves, the technology that comes with it evolves too. Enter the PWA: the next big thing for mobile. Most of you have probably heard about PWAs (Progressive Web Apps) at some point. Before looking at the benefits of choosing a PWA for your next project, let's first understand what exactly PWAs are. This Progressive Web Apps tutorial will help you learn why your next project should be a PWA.

A short introduction to PWA

Progressive Web Apps have enhanced mobile web performance and are transforming the way web applications are built. In general, a PWA is a web page or website that looks and feels just like a native mobile application to the user. It runs on all browsers, whether Opera, Chrome, or Samsung Internet, and users are not required to download it from an app store: just click a link and the PWA is installed on your smartphone.

A PWA works much as it would with full internet access even in the absence of a network; a service worker makes this possible (see the sketch below). It combines the benefits of the mobile experience with the features of modern browsers to offer the functionality of both mobile apps and websites, along with offline capabilities, enhanced performance, and speed. PWAs are also known as hybrid web apps or installable web apps.

Big players such as Flipkart, Paytm, Twitter, Wikipedia, and MakeMyTrip have launched PWAs to offer a mobile experience to users without requiring an app install.
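As a quick illustration of that offline capability, here is a minimal sketch of a page registering a service worker that caches a few assets and serves them when the network is unavailable. The file names and cache list are illustrative, not from the original article:

// main.js — register the service worker
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(function (reg) { console.log('Service worker registered for', reg.scope); })
    .catch(function (err) { console.error('Registration failed:', err); });
}

// sw.js — cache core assets at install time, then serve them cache-first
var CACHE = 'pwa-demo-v1';
var ASSETS = ['/', '/index.html', '/styles.css', '/app.js'];

self.addEventListener('install', function (event) {
  event.waitUntil(caches.open(CACHE).then(function (cache) {
    return cache.addAll(ASSETS);
  }));
});

self.addEventListener('fetch', function (event) {
  // Answer from the cache if we can, fall back to the network otherwise
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});

With this in place, the page keeps working from the cache even when the device has no network, which is exactly the offline behaviour described above.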
Benefits of PWA

Now that we have seen what Progressive Web Apps are, let us look at the benefits of building them.

Low data requirements: PWAs use a small amount of data compared to native applications. For example, where a native app might consume 10 MB of data to process a request, a PWA may consume only 500 KB for the same request.

Low cost: Building a PWA is much cheaper than building a native app. Building both an iOS and an Android application can cost $20k–$80k, while a PWA can be built for around $6k–$10k.

Responsive and progressive: PWAs are highly responsive and progressive; they can be used across all devices, regardless of browser or screen size.

No updates required: Because a PWA is essentially a website, you see the latest version whenever you open it; no manual update is required.

No download required: Unlike native apps, there is no need to install a PWA from an app store. A PWA is installed on the user's device only after the user gives permission.

Shareable and linkable: PWAs are easily linked and shared through URLs, which can drive significant traffic and boost the searchability and accessibility of your app.

PWAs are more secure: Cyber threats are constantly evolving and becoming more dangerous. Because HTTP is not secure enough to protect data, websites are now served over HTTPS, the secure version of HTTP. PWAs are highly secure compared to traditional apps because they are always served over encrypted HTTPS, which means data cannot be tampered with in transit. This makes users feel safe entering personal data such as contact details or credit card information.

PWA will reach a larger audience

"PWAs are well suited for India, where the mobile web allows publishers to reach a large audience across a highly diverse set of devices and bandwidth," said John Pallett, product manager at Google. With a wide range of features such as quick updates, faster navigation, an offline mode, and push notifications, the PWA is gaining ground. The web reach of a PWA is much higher: 11.4 million unique monthly users, compared with about 4 million for a typical native application.

Let's compare PWAs and native apps feature by feature:

Feature                 | PWA | Native App
Works offline           | Yes | Yes
Home screen access      | Yes | Yes
Push notifications      | Yes | Yes
Requires no updates     | Yes | No
Low data requirements   | Yes | No
No download required    | Yes | No
Shareable and linkable  | Yes | No

Table: Comparison between PWA and Native App

PWA offers better User Experience

Progressive Web Apps are reliable, fast, and engaging. They have changed how we think about web apps and opened new doors for delivering consistent, immersive user experiences regardless of platform. PWAs give users a better browsing experience while helping businesses attract more customers, which ultimately results in more sales and profit.

Example: Flipkart saw a 50% increase in new customers after the launch of its PWA. It also reported that nearly 60% of users currently on its PWA had previously uninstalled its native app to save space on their smartphones.

SEO benefits of the PWA

Besides offering a better user experience, PWAs can also help from an SEO point of view.

Speed up your indexing process: A PWA speeds up the process of your app being indexed. As with ordinary web pages, every PWA page has a unique URL, so it is easy for Googlebot to crawl and index. And since a PWA is already optimized for mobile, that further accelerates indexing.

Increase traffic significantly: Because a Progressive Web App sits on the user's home screen, it prompts them to visit the site regularly. It also lets you send push notifications to promote your campaigns and remind users to return. Both drive more traffic to your app.

Improve performance: PWAs work much faster than traditional websites. As there is no app to download, all updates happen without any effort on the user's part. And from the sales perspective, increased speed leads to increased conversion rates and performance.

Conclusion

PWAs are a hot topic in today's evolving world of technology. Popular products like Tinder, Instagram, and Starbucks have built Progressive Web Apps. Browsers such as Samsung Internet, Firefox, Chrome, and Opera already support PWAs, and Service Workers are now enabled by default in Microsoft Edge (on Windows Insider builds) and the latest Safari Technology Preview.
So we can say that Progressive Web Apps are undoubtedly the future of web development, and they will become more predominant as the technology advances. According to a report from Gartner, PWAs will replace 50% of mobile applications, both B2C and B2E, by 2020. Getting started with Progressive Web Apps is a strong choice for app development companies, as PWAs support all the key features. If you are a developer worried about user engagement and about offering a better user experience, this is the best choice for you.

Self-Paced vs Classroom Training: Select The Best Fit For You

Learning, whether self-paced or classroom-based, is a deciding factor for your career. Both methodologies empower learners with skills and knowledge, and both have their advantages and limitations. Some prefer face-to-face interaction with a tutor, while others like to learn at their own pace and time. What matters, in the end, is imbibing knowledge in a way that is low on effort and high on impact. In this article, we compare and contrast self-paced and classroom learning to help you decide on the mode of learning that is the ideal fit for you.

Self-paced

The self-paced method allows individuals to study at their own pace, according to their own learning styles and interests, using a variety of media to acquire the knowledge required to pass exams. Here, the tutor's role is to provide feedback on proficiency, guide learners in the right direction, and design the learning environment around their requirements.

Method followed: Self-paced training does not follow a set timetable. Course materials are fully accessible when the course starts. Exams and assignments have no start or due dates other than the official end date of the course, so you can finish them at your own pace before the course closes.

How it helps: Self-paced learning can take place at home, at work, or in a classroom environment, so aspirants have the flexibility to study at a convenient place and time. Self-paced classes help individuals:
- Continue with the lessons without having to wait
- Rewind trainer instructions and focus completely on the subject matter being taught
- Get 24/7 content access
- Interact with trainers individually or in small groups

Benefits:
- Cost-effective
- Flexible environment and schedule
- Requires less time
- Students feel more comfortable, as they can take a break and get back to studying at their convenience
- Learners have the flexibility of choosing trainers as they wish
- Students can also take courses for free through MOOCs (Massive Open Online Courses), online courses that allow you to learn at your own pace and schedule

Challenges you might face: Challenges do exist with e-learning, including:
- Lack of learner engagement and motivation
- SMEs are not available on the spot to clear your doubts
- Animations or graphics meant to grab students' attention can instead distract their focus
- Technical issues can arise from an unreliable internet connection
- Transferring knowledge to suit each learner is difficult: each individual is unique, with different understanding and grasping power, and this cannot be addressed when there is no direct interaction between trainer and learner. In short, there is no customized knowledge delivery
- The end goal of the learner is not defined
- As no instructor is available on the spot, individuals have to execute programs (mainly relevant to developers and programmers) on their own, which can take a long time

In a study of 51,000 community-college students in Washington State conducted by the Community College Research Center, students who opted for online or e-learning courses were more likely to drop out or fail than students who took the same course face to face, i.e. in classroom training. Classroom training offers solutions to many of these challenges.
However, if you don't have the time or interest to attend classroom training, you can opt for online training, which also overcomes most of the challenges of self-paced learning.

Is self-paced learning right for you?

If your answer is yes to all the questions below, you will be able to study on your own. If it is no, you would benefit more from classroom training:
- Do you study alone, and do it well, without much external aid or advice?
- Are you good at grasping and reading new things quickly?
- Are you a visual learner or an auditory learner?
- Do you work smarter and faster under the pressure of deadlines?

Individuals who are self-motivated, grasp things quickly, and tend to do fine with self-study are suited to learning independently. If these are not your competencies, you will likely benefit more from classroom training programs.

Some facts and stats about self-paced learning
- According to Technavio's market research analysis, the self-paced e-learning market is expected to grow at a CAGR of around 2% by the year 2020.
- According to a study conducted by Brandon Hall, e-learning requires 40% to 60% less of an individual's time compared to traditional classroom training.
- As per MarketsandMarkets research, the market for the LMS (learning management system), a platform for e-learning programs, is set to grow from $5.22 billion in 2016 to $15.72 billion in 2021, a CAGR of 24.7%.
- According to Learning House, 45% of online learners reported a salary increase and 44% reported improvements in their employment standing.

Classroom Training

Classroom training is often viewed as the best form of learning. Attending a class requires an investment of effort and time, but the benefits are outstanding. The classroom environment removes you from the distractions of daily work so you can focus on enhancing your skills. Here you have the chance to connect face to face with a highly qualified and certified instructor and to discuss ideas and issues with your associates and partners.

Method followed: In general, classroom training follows a predefined timebox. There is a start date and a due date for exams and assignments, and the course must be completed within a defined period of time. Course materials become accessible at set times as the course progresses.

How it helps: The top reasons listed below show how classroom learning helps:
- SMEs are available on the spot to help you solve your queries
- Instant support and help in fixing errors
- Practical, real, hands-on use of tools and technologies
- Social interaction and shared experiences
- Professional instructors who make the learning experience interesting, engaging, and enjoyable
- Invaluable feedback from the mentor and other peers in the group

These reasons demonstrate that classroom training is among the most effective forms of learning. It improves learning through the human interaction that is natural in classroom-based instruction. Technology has changed the way we learn by offering different delivery options: HD video, mobile devices, social media tools, and more have expanded the learning ecosystem significantly.
But classroom training remains at the center of it all.

Benefits:
- Self-motivation, discipline, and a more effective form of learning
- Individuals have the chance to ask their questions
- Opportunities to discuss with other members of the group and learn from them
- Most effective for highly collaborative or complex subjects
- Well-experienced instructors deliver the course
- Customized knowledge delivery: in the classroom, a trainer with extensive knowledge can gauge a learner's self-efficacy and intellectual ability by looking at and interacting with them directly, and transfer knowledge accordingly
- The instructor can identify the end goal of each learner (a higher salary, better career opportunities, upgraded professional skills, or something else) and help them achieve it
- Instructors help individuals gain a strong command of practical knowledge; for instance, if a student gets stuck on a program, the instructor is on the spot to help execute it

Challenges you might face: Challenges do exist with classroom learning, including:
- High investment of cost, effort, and time
- Delays to an individual's daily tasks
- Students may find it difficult to sustain attention for long periods, because breaks are limited and students cannot take them whenever they want
- Individuals cannot choose their instructors, so teaching styles are unpredictable

Self-paced learning offers solutions to many of these challenges.

Is classroom training right for you?

Not every individual is comfortable learning in a classroom environment, because not everyone learns at the same pace. Fast learners may feel frustrated at being held back, while slow learners struggle to keep up. This environment is best for those:
- Who want to interact with trainers and other learners personally in a live environment
- Who want hands-on practice experience
- Who have ample time to attend the classes

Some facts and stats about classroom learning
- Despite the rapid growth of e-learning, many students say they still lean towards traditional classroom training.
- According to a recent survey conducted in the US, nearly 78 percent of over 1,000 students agreed that they find classroom learning easier than other development programs.
- Some reports state that there is an added sense of value and worth in classroom training.

Final Thoughts

Hopefully, our comparison of self-study vs classroom study has helped you choose the best option for you. Both are proven methods to increase one's knowledge and develop learning, and each has its own pros and cons, so we can't say one is better than the other. The better question to ask yourself is "which one works better for me?" instead of "which one is better?"
Finally, the choice is yours! The table below summarizes the differences between classroom and self-paced training:

Parameter                        | Self-paced | Classroom
Learn at your own pace           | Yes        | No
Cost-effective                   | Yes        | No
Flexible                         | Yes        | No
Learn from instructors worldwide | Yes        | No
Adaptive mode of delivery        | No         | Yes
Clearly defined end goal         | No         | Yes

Still undecided about whether self-paced or classroom training is right for you? We suggest one more option that may be a great fit: blended learning, a combination of online and traditional classroom methods. It seems to be an ideal preference for the whole new cohort of do-it-yourself learners who want to make the best use of both forms of learning.

How To Break Into The Career Of IoT Development

Being involved in IoT development means honing and maintaining a versatile skill set that requires familiarity with both the hardware and software sides of things. You should not only know the intricate details of the hardware used (connectivity and compatibility are two major components) but also be able to craft and deliver the software experiences on top of it.

Take a connected or smart TV, for example. On the hardware side, to be truly "smart," a TV must have an embedded Wi-Fi card that allows it to sync up with local networks. However, to be alluring to customers, it also needs an operating system and app ecosystem that users can interact with. You need to make it easy to browse, install, access, and use the various apps that create the user experience.

What Is the Internet of Things Exactly?

Getting back to basics, the Internet of Things (IoT) is a swath of internet-connected devices and platforms that can store information and exchange data. Many IoT devices, especially in the industrial market, have embedded sensors that work with software algorithms to react and carry out various actions. Aside from the internet access, what makes them "connected" is the fact that they can communicate or sync up with other smart devices.

Say, for instance, that you have a smart thermostat, a smart light bulb, and a smart power outlet in your home. These devices could all interact, syncing up to turn on or off on a user-set schedule, or at times that an algorithm determines will be the most cost-effective.

IoT is a huge deal, not just in the consumer world (smart homes) but in many other sectors too. In fact, by the close of 2017 there were 8.4 billion connected "things" in use.

How to Become an IoT Developer

So, the IoT and "smart things" market is massive. But how do you get involved, and what kind of skills and experience do you need?

First and foremost, you should be a developer, able to create platforms, systems, software, and even hardware. Your job is essentially to ensure that all the necessary components are in working order and that your customers have the power to control and interact with the related devices.

Remember the compatibility bit we mentioned in the introduction? It is also your responsibility to ensure that any devices, hardware, or platforms you develop are compatible with a variety of other systems. In many cases, this calls for recurring, continuous development through released software and firmware updates.

Online courses, such as this one, often recommend proficiency in at least one coding language, such as Java, C, Python, or Swift. But you'll also need to understand and know how to assemble the hardware, which requires lots of practice. Luckily, tinkerers and beginners can build their skills with devices like the Raspberry Pi and Arduino electronics kits.

What About Existing Developers?

Many people wish to change or redirect their path at some point in their career. Intermediate to advanced developers, in particular, might want to get into IoT to stay relevant, and because it offers a more competitive and enjoyable experience.

Whether you're a beginner or an expert, the initial training is the same. Here's what you'll need to know:

IoT's Potential: 82% of companies and organizations will be using IoT hardware and software for their business within the next two years. Furthermore, two-thirds of all consumer-centric devices will be "connected" or IoT-ready [1].
That means, within the next few years, IoT technology will be in just about every home and business around the country, if not the world. As a developer, it's important to understand the vast implications of the systems and technology you're helping to flesh out. In the case of IoT, these devices have the potential to improve lives, work conditions, and efficiency for most businesses.

History: The term "Internet of Things" was introduced by Kevin Ashton in 1999 and has become the premier term for the related technologies. Before delving in, you should know why the field came about, when and how it was created, and, more importantly, where it's going. You'll also want to stay on top of industry trends and patterns, which requires a familiarity with the industry's history.

Know the Players: The list of major brands and organizations currently involved in the IoT sector is huge. You'll want to keep up to date with the big players such as Amazon, Cisco, Google, AT&T, Oracle, IBM, iRobot, Qualcomm, and Samsung. If you're going to enter the market and work as a developer, you'll want to know the companies and teams you'll be working alongside. Remember, IoT devices are "connected" and require interconnectivity with other hardware and software.

Understand Limitations: Technology, as we all know, is not perfect. IoT devices have limitations and restrictions, which you'll want to be familiar with as a developer. The "connected" functionality of a system or device, for instance, is useless when the internet is down. You'll want to design your technology so that it remains as operational as possible during connectivity problems. Another often-forgotten limitation relates to the average consumer, who is not tech-savvy or knowledgeable about complex technology. You'll need to consider this demographic when developing your systems and devices. How can you make your products user-friendly without losing functionality?

Expand Your Horizons

The good news is that once you understand the concepts and have the skills discussed here, you can dive right in and begin working in the IoT field. Because of how widespread and prevalent IoT is in today's world, there are many ways to get involved, and not all of them are costly or time-consuming. Existing development and coding experience is an added advantage.

Overall, it's not a bad idea to expand your horizons and get involved with IoT and "connected" devices now. They are already taking the world by storm, and it won't be long before they're as common as the beloved smartphone.

Machine Learning VS Data Science: What Is The Difference?

You may have heard the terms data science and machine learning and wondered what the difference between the two is; they have very different meanings but are often used interchangeably. In this article, we will discuss machine learning vs data science, and also explain what deep learning is and how all of these connect to Big Data.

Generally, when we find a solution to a problem using the traditional approach, we define a step-by-step procedure for finding the solution. For example, if we are building a self-driving car, we define rules like: if the road is empty, go fast; when a vehicle is in front, check whether there is a vehicle to the left or right and act accordingly. Basically, a whole lot of if-else statements. With machine learning, this approach is flipped: we don't give the steps to solve the problem. Instead, we give our inputs and expected outputs, and the algorithm learns from them and develops the solution to the problem itself. That's where the magic happens: for a very complex problem, we may not even understand the approach our model has built to solve it; we just know that it works. We will discuss this point further when we talk about deep learning and big data.

Machine Learning is divided into three different categories:

1. Supervised Learning

We already made a small program using a supervised algorithm in my last article; if you have not read it, check it out here. In supervised learning, our data is arranged so that every input has a well-defined outcome. In my previous article, we took four different properties of a flower and determined its species. Another example: we give pictures of some animals (say cat, dog, and lion) to our model, labelling each picture with the corresponding animal; afterwards, when we give a new picture of a dog to the model, we get the output "dog". In short, we give our model known, labelled information and expect the model to be trained on it. (A small sketch of this idea follows below.)
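To make that concrete, here is a minimal sketch using scikit-learn's bundled iris dataset, in the spirit of the flower example above; the choice of classifier is arbitrary and not from the original article:

# Supervised learning in a few lines: learn flower species from four measured properties
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # inputs (4 features per flower) and labelled outputs (species)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier()
model.fit(X_train, y_train)        # the model "learns" from labelled examples
print("Accuracy:", model.score(X_test, y_test))

Note the shape of the workflow: we never wrote rules for telling species apart; the algorithm derived them from labelled inputs and outputs.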
2. Unsupervised Learning

In unsupervised learning, as opposed to supervised learning, we feed unlabeled data to our model. For example, suppose you are a professor and want to classify all your students into groups based on various characteristics: grades, hobbies, subjects studied, average study hours per week, night owls or early risers. Say you have 60 students and want to classify them into suitable study groups; you can provide this data to an unsupervised model, which does the job for you. You end up with groups of students based on similar interests, similar study hours, and so on. Unsupervised models are used by sellers to classify their different types of customers and better cater to their needs.

3. Reinforcement Learning

In this type of learning, feedback is not given at every stage; instead, the model gets feedback only when it achieves a particular goal. For example, a reinforcement-based bot that can beat a player at chess is given feedback only when it wins the game, whereas with a supervised algorithm we would be giving feedback after every move.

So, basically, machine learning is a collection of different algorithms, divided into three categories, which use input data to evolve and find a way to solve a given problem.

Now, what is Data Science?

Although the buzz around data science has been building for the last 4-5 years, data science has been around in the market since the 90s; only now has it received mass attention like this. Data Science is made of two words, data and science. So what is data? Data is any fact, an unorganized set of things, that needs to be processed to have meaning: for example, an image is data, and this article is data. Information, on the other hand, is what we generally mean when referring to data: information is data that has been processed, organized, structured, or presented in a given context so as to make it useful. So data science should really be called information science, but it's too late to make that change now. The other word in data science is science; to call it "science", we need to consider the definitions of science and the scientific method. According to this, data science is not only about practical or empirical methods: it needs scientific foundations.

Data science draws on a wide variety of skills:
- Mathematics: probability, statistics, calculus, and other disciplines of maths
- Computer Science: knowledge of the command line, understanding vectorized operations, and thinking algorithmically
- Machine Learning: as discussed above

Data science means having knowledge of different fields. Topics and tools a person needs to understand, or at least know something about, when working in data science:
- Nonlinear systems
- Linear algebra
- Analytical geometry
- Optimization
- Calculus
- Statistics and probability
- A programming language (R, Python, SAS, JavaScript)
- Software: Excel, SPSS by IBM
- General MLaaS platforms: Watson Analytics by IBM, Azure Machine Learning, Google Cloud Machine Learning
- Data visualization: Power BI, Tableau, R/Python using plotly/ggplot
- Machine learning (supervised, unsupervised, and reinforcement learning)
- Big Data (MapR, Redshift, Snowflake, Cassandra, Hadoop, and Spark)
- Hardware (CPU, GPU, TPU, FPGA, ASIC)

Now that we have talked about data science and machine learning, let us see what deep learning is. (A great video explaining the difference between machine learning and data science is linked here; I highly recommend watching it.)

Deep learning is a subfield of machine learning that uses Artificial Neural Networks (ANNs) to solve problems through trial and error, much like a human brain. Just as humans learn and adapt to their environment, an ANN starts with some initial weights, which are changed according to the output produced. I will discuss deep learning in my future articles, so don't worry if you do not yet have a clear understanding of it; we will implement a neural network in Python. Terms like data analytics and data mining, and how they contrast with machine learning and data science, are explained here.

The last question is: what is Big Data, and how is it related to data science and machine learning? Data collected from various sources, which may include IoT devices, machines, and human activity online, is complex and messy. Since this data is too complex and large to be maintained with traditional tools, we have moved to Big Data, which basically brings advanced new tools to solve the problems posed by complex and large datasets. And since machine learning algorithms depend on data to evolve and find solutions to problems, a data scientist should have some knowledge of Hadoop and similar tools.

I hope you liked this article. If you want to discuss something or learn about a particular topic, drop a comment. Happy Learning.

Create Cross Browser Extensions

Browsers are built on a very simple logic: one size fits all. Browser extensions are what extend browser functionality to suit everyone's requirements. There are a number of extensions available on Firefox AMO as well as the Chrome Web Store that modify browsers and add utilities such as creating notes, blocking ads, and managing passwords.

With the WebExtensions API, building a cross-browser extension is a piece of cake; all it needs is basic JavaScript knowledge. And to our joy, the same piece of code will run across various browsers.

WebExtensions API

WebExtensions APIs are simple APIs that can be used with JavaScript to modify browser functionality without touching the native browser code. Below are some of these APIs and how they can be used.

1. Tabs API: This API deals with browser tabs. With its help, you can create a new tab with a specific URL, detach a tab, or close a tab; you can also hook into events such as tab close, tab create, and tab activate.

2. Notifications API: This API creates browser notifications. It accepts a simple object of options like 'title', 'message', and 'type' (template type).

3. BrowserAction API: A browser action is a button in the browser's toolbar. This API is used to track toolbar button clicks. A browser action popup can also be defined in the manifest file.

4. ContentScripts: This API registers (JS) scripts at runtime, which can also be achieved with the 'content_scripts' key in the manifest.

5. ContextMenus: This API manipulates the right-click menu. With it we can add a right-click menu entry, receive the right-click event, and more.

More APIs, and details about them, can be found on MDN.

Excited? Let's get started. In this article, we will develop a cross-browser extension that runs seamlessly on Firefox as well as Chrome.

What are we making?

In this demo, we will help you create your own browser extension that adds a toolbar button with an icon and a tooltip. When the button is clicked, a toolbar popup opens with a random fun fact fetched from a third-party API.

Structure

- fun-facts
  - manifest.json
  - icons
    - icon_48.png
    - icon_64.png
  - popup
    - index.html
    - style.css
    - script.js
    - cn.gif

1. Manifest.json — This file is the entry point for extensions and the only file every extension must contain. The manifest has metadata about the extension, such as its name, description, and version. It also describes the functional components, such as content scripts, background scripts, and browser actions. Moreover, it houses every permission required by the extension. 'manifest_version', 'name', and 'version' are the only mandatory keys. 'browser_action' tells the browser to parse the HTML at the address given in 'default_popup' and load it into the toolbar popup on click.
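The article's original screenshot of the manifest is not reproduced here; a minimal manifest.json for this extension might look like the sketch below. The description string and icon paths are illustrative:

{
  "manifest_version": 2,
  "name": "Fun Facts",
  "version": "1.0",
  "description": "Shows a random fun fact in a toolbar popup.",
  "icons": {
    "48": "icons/icon_48.png",
    "64": "icons/icon_64.png"
  },
  "browser_action": {
    "default_icon": "icons/icon_48.png",
    "default_title": "Fun Facts",
    "default_popup": "popup/index.html"
  }
}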
2. Popup — This directory is a self-contained HTML setup with HTML, CSS, JS, and image files (popup/index.html, popup/style.css, popup/script.js). It makes up the UI of the popup as well as the logic being executed. For this example, I am consuming an open API from icndb.com, which returns a free random fact about Chuck Norris on each request. As soon as the toolbar button is clicked, the JavaScript executes and an HTTP request is made to the icndb server; the response is displayed as the text in the popup. It's exactly the same thing we have been doing in web pages all along! (A sketch of the popup script follows.)
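The original popup files were shown as screenshots; here is a sketch of what popup/script.js could look like. The element id and the response shape ({ value: { joke: ... } }, as the icndb API returned) are assumptions:

// popup/script.js — fetch a random fact and display it in the popup (sketch)
document.addEventListener('DOMContentLoaded', function () {
  var factElement = document.getElementById('fact'); // an element in popup/index.html
  fetch('https://api.icndb.com/jokes/random')
    .then(function (response) { return response.json(); })
    .then(function (data) { factElement.textContent = data.value.joke; })
    .catch(function () { factElement.textContent = 'Could not fetch a fact.'; });
});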
3. Icons — Lastly, we put the icons (in their required resolutions) in this folder; they are used as the toolbar button image, the extension logo, the extension listing image, and so on.

We should be all set by now.

Installing and testing the extension

In your Firefox browser, do the following:
- Type 'about:debugging' in the address bar
- Click the 'Load Temporary Add-on' button
- Select the 'manifest.json' from the file select dialog

[Figure: Adding an unpacked Add-on to Firefox]

That's it: you will see a new toolbar button. Click it and see your Add-on in action.

Wait, wasn't this supposed to be cross-browser? Okay then, in your Chrome browser do the following:
- Type 'chrome://extensions' in the address bar
- Check 'Developer mode' to enable loading unpacked extensions
- Click the 'Load unpacked extension' button
- Select the complete folder from the file select dialog

[Figure: Adding an unpacked extension to Chrome]

You will see a new toolbar button. Click it and see your Chrome extension in action.

Note — This is an unpacked extension and cannot be distributed yet. If you want to make your extension public and usable by others, it must be submitted to the Chrome Web Store or Firefox AMO.

Security

The WebExtensions APIs require permissions (mentioned in manifest.json and shown to the user at the time of installing the cross-browser extension). In addition, published extensions are reviewed by designated reviewers (in the case of Firefox) to weed out malware.

Resources

If you want to learn more about how to make a cross-browser extension, the best resource to get started with extension development is the Mozilla Developer Network. Find the source code of this example here.

Proxy Your Requests To The Backend Server With Grunt

If you are working on large projects, it is undoubtedly a good idea to have a build script or task scripts to help automate the repetitive parts of the development process. For JavaScript projects, Grunt serves this purpose: it is a JavaScript task/build runner written on top of NodeJS. Grunt can automatically minify your JavaScript or CSS files, reload your browser on every file change, show you a comprehensive list of JavaScript errors, compile your SASS/LESS files into CSS, and many other things.

However, the most significant advantage of Grunt that I am going to discuss today is its ability to proxy your requests to the backend server. When you are developing your backend in anything other than JavaScript, you will face difficulty accessing the backend data from your frontend without having to compile and deploy the code every time you make a change. This is not possible with a typical web server setup because browsers do not allow cross-domain XHR requests, due to cross-origin resource sharing (CORS) restrictions.

So, the problem is as follows: you are developing the UI of your application in some frontend JavaScript framework (say Angular) with Grunt as the build runner, and the backend of your application is written in some framework other than JavaScript/NodeJS (say Laravel). You might face problems accessing the backend while running the Grunt server. This happens because the backend Laravel service runs on port 8000 and the frontend development server runs on port 8080. Requests from the frontend server to the backend server will fail with same-origin policy errors due to the port difference. To fix this, we can set up a proxy on the Grunt server. This proxy stands between your frontend server and the backend server, fetches the required data from the backend, and passes it to your frontend, while letting your browser think everything is served from the same domain.

Grunt has a module, grunt-connect-proxy, that exists to solve exactly this issue. It delegates requests that match a given URL to the backend of your choice. For example, if you want to access your backend through the URL http://localhost:8080/api, you can write a proxy rule so that whenever the browser requests this URL, the proxy fetches the data from your backend and serves it at that URL.

The procedure for setting up the Grunt proxy for backend requests is simple. First, add the proxy configuration to your Gruntfile.js. In this example, I am assuming that the backend server runs on port 8000 and the Grunt server on port 8080. The configuration delegates all requests to http://localhost:8080/api to http://localhost:8000/backend.
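The original post showed this configuration as an image; here is a sketch of what it could look like, assuming grunt-connect-proxy is installed (npm install grunt-connect-proxy --save-dev) and using the ports and contexts from the text:

// Gruntfile.js — proxy configuration (sketch)
module.exports = function (grunt) {
  grunt.initConfig({
    connect: {
      // Every request matching /api on the Grunt server (port 8080)
      // is forwarded to /backend on the backend server (port 8000).
      proxies: [{
        context: '/api',                  // incoming requests are matched against this
        host: 'localhost',                // backend host
        port: 8000,                       // backend port
        https: false,                     // set to true for an https backend
        rewrite: { '^/api': '/backend' }  // /api/x/y -> /backend/x/y
      }]
      // the 'server' target with the proxy middleware is shown further below
    }
  });
};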
In the connect section of the Gruntfile, we add a new section called proxies. The options defined in the proxies section are explained below:

context: The context against which incoming requests are matched. Matching requests are proxied to the backend server.
host: The host address where the backend server is running. Incoming requests are proxied to this host.
port: The port where the backend server is running.
https: If your backend server is an https endpoint, set this value to true.
rewrite: This option allows rewriting of the URL when proxying. It means that when proxying http://localhost:8080/api to the backend server, the URL is rewritten to http://localhost:8000/backend. The object's key serves as the regex used in the replacement operation, and the value is the context of your backend server's service.

More options can be found in the documentation of grunt-connect-proxy.

You will also need to set up the proxy's middleware in the options section of connect, and then register your Grunt server task so the proxy runs on Grunt execution. It is necessary to place the configureProxies task before the connect task, and make sure to specify the connect target when calling configureProxies; in our case, the connect target is server. Both pieces are sketched below.
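A sketch of both, under the same assumptions as before; the middleware helper is the one exported by grunt-connect-proxy:

// Inside the connect section: the 'server' target wires in the proxy middleware
server: {
  options: {
    port: 8080,
    hostname: 'localhost',
    middleware: function (connect, options, middlewares) {
      // Put the proxy in front so it sees matching requests first
      var proxyRequest = require('grunt-connect-proxy/lib/utils').proxyRequest;
      middlewares.unshift(proxyRequest);
      return middlewares;
    }
  }
}

// Task registration: configureProxies must run before connect,
// and its target ('server') must match the connect target.
grunt.loadNpmTasks('grunt-contrib-connect');
grunt.loadNpmTasks('grunt-connect-proxy');
grunt.registerTask('server', ['configureProxies:server', 'connect:server']);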
Now you can start your Grunt server, proxy included, by typing grunt server on the command line. On startup, grunt-connect-proxy logs the proxies it has created in the console, which confirms that the proxy is working. Some example URLs:

Grunt Server                  -> Backend Server
http://127.0.0.1:8080/api     -> http://127.0.0.1:8000/backend
http://127.0.0.1:8080/api/x/y -> http://127.0.0.1:8000/backend/x/y

That's all. Now you will not face any problems getting data from any backend of your choice.

Sending Push Notification To Android Application From Your Own Django App Server

INTRODUCTION

Hello everyone! In this blog post, I'm going to talk about how you can send a push notification to your Android application from your own Django app server. First, I'll walk you through writing the Android application from scratch, and show how to register an existing Android application with Firebase. The main goal of this post is to help you use the PyFCM package to set up push notifications for Android with Django.

It is a two-step process. Part 1 revolves around creating an Android application with the following features:
- Generate a Firebase instance ID. This instance ID is used to identify a device uniquely.
- Send the ID generated in the previous step to our app server using Volley. Volley is quite useful for sending small amounts of data over the network.
- Finally, receive notifications from our app server.

So, let's get to it.

CREATING THE CLIENT APPLICATION

First, we will create our own Android application. This part is important because you need your own Android application to implement push notifications with Django. Creating a full-fledged application is beyond the scope of this blog, but a simple hello-world application will do.

Fire up Android Studio and choose a new project. Fill in the basic information and click Next until your application is rendered in the studio. You can choose any activity you want, but I recommend an empty activity, just to keep things simple. I assume you've signed into your Google account in Android Studio; if not, do it right away, as it makes our task a lot easier.

Once your project is loaded, go to Tools -> Firebase. In the sidebar that opens, click Cloud Messaging and select Set up Firebase Cloud Messaging. This opens a pane with three options.

Now, when a device starts for the first time, the Firebase SDK generates a unique identification token, used to recognize each device uniquely, much like an SSN number (in the USA) or Aadhaar number (in India). What we need to do later is send this token to our app server; that's where Volley will kick in. Before we get to that, make sure you have created a new class that extends FirebaseInstanceIdService and overrides the onTokenRefresh method. Look at the following example for reference:

package com.example.android.djangopushnotification;

import android.content.Context;
import android.content.SharedPreferences;
import com.google.firebase.iid.FirebaseInstanceId;
import com.google.firebase.iid.FirebaseInstanceIdService;

// Created by batman on 3/4/18.
// Stores the freshly generated FCM token in SharedPreferences.
public class FcmInstanceIdService extends FirebaseInstanceIdService {
    @Override
    public void onTokenRefresh() {
        String recent_token = FirebaseInstanceId.getInstance().getToken();
        SharedPreferences sharedPreferences = getApplicationContext()
                .getSharedPreferences(getString(R.string.FCM_PREF), Context.MODE_PRIVATE);
        SharedPreferences.Editor editor = sharedPreferences.edit();
        editor.putString(getString(R.string.FCM_TOKEN), recent_token);
        editor.commit();
    }
}

We will also need to extend the FirebaseMessagingService class. This class is useful for handling messages beyond simply receiving notifications while the app is in the background. We have to override the onMessageReceived method to handle incoming messages.
Look at the illustration below for reference:

package com.example.android.djangopushnotification;

import android.app.NotificationManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.support.v4.app.NotificationCompat;
import com.google.firebase.messaging.FirebaseMessagingService;
import com.google.firebase.messaging.RemoteMessage;

// Created by batman on 3/4/18.
// Builds and shows a notification from an incoming FCM message.
public class FcmMessagingService extends FirebaseMessagingService {
    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {
        String title = remoteMessage.getNotification().getTitle();
        String message = remoteMessage.getNotification().getBody();

        Intent intent = new Intent(this, MainActivity.class);
        intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
        PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent, PendingIntent.FLAG_ONE_SHOT);

        NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this);
        notificationBuilder.setContentTitle(title);
        notificationBuilder.setContentText(message);
        notificationBuilder.setSmallIcon(R.mipmap.ic_launcher);
        notificationBuilder.setAutoCancel(true);
        notificationBuilder.setContentIntent(pendingIntent);

        NotificationManager notificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
        notificationManager.notify(0, notificationBuilder.build());
    }
}

So far, we have created our basic app with the ability to generate a unique Firebase token, added a class to store and view that token, and written a class to handle message notifications.

The final part is to integrate Volley into our application. Volley allows easy exchange of data over an HTTP network for an Android application. To start using Volley, add the following line inside the dependencies block of your app-level Gradle file:

compile 'com.mcxiaoke.volley:library:1.0.19'

Now we need a class that manages the request queue. Look at the example below for help; I'm calling it MySingleton.java. The class has to be a singleton because we need only one instance of it at any given time, which is achieved by making the constructor private.

package com.example.android.djangopushnotification;

import android.content.Context;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.Volley;

// Created by batman on 3/4/18.
// Singleton wrapper around Volley's RequestQueue.
public class MySingleton {
    private static MySingleton mInstance;
    private static Context mCtx;
    private RequestQueue requestQueue;

    private MySingleton(Context context) {
        mCtx = context;
        requestQueue = getRequestQueue();
    }

    private RequestQueue getRequestQueue() {
        if (requestQueue == null) {
            requestQueue = Volley.newRequestQueue(mCtx.getApplicationContext());
        }
        return requestQueue;
    }

    public static synchronized MySingleton getmInstance(Context context) {
        if (mInstance == null) {
            mInstance = new MySingleton(context);
        }
        return mInstance;
    }

    public void addToRequestQueue(Request request) {
        getRequestQueue().add(request);
    }
}

Now it's time to put all of these things together.
What we are going to do now is create a simple button that, when clicked, sends the FCM token to our application server. The code below creates that button; when clicked, it sends the token to the app server and displays the response to the user in a dialog box. Keep in mind that you'll need to edit the server address to match your own application server.

package com.example.android.djangopushnotification;

import android.content.Context;
import android.content.DialogInterface;
import android.content.SharedPreferences;
import android.support.v7.app.AlertDialog;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;
import android.widget.Toast;
import com.android.volley.AuthFailureError;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.StringRequest;
import java.util.HashMap;
import java.util.Map;

public class MainActivity extends AppCompatActivity {
    Button button;
    TextView textView;
    String app_server_url = "http://192.168.43.204:8000/insert/?fcm_token="; // change it to your server address
    AlertDialog.Builder builder;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        button = (Button) findViewById(R.id.button2);
        textView = (TextView) findViewById(R.id.textView);

        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                // Read the stored FCM token and show it on screen
                SharedPreferences sharedPreferences = getApplicationContext()
                        .getSharedPreferences(getString(R.string.FCM_PREF), Context.MODE_PRIVATE);
                final String token = sharedPreferences.getString(getString(R.string.FCM_TOKEN), "");
                builder = new AlertDialog.Builder(MainActivity.this);
                textView.setText(token);

                // Send the token to the app server and show the response in a dialog
                StringRequest stringRequest = new StringRequest(Request.Method.GET, app_server_url + token,
                        new Response.Listener<String>() {
                            @Override
                            public void onResponse(String response) {
                                builder.setTitle("Server Response");
                                builder.setMessage("Response: " + response);
                                builder.setPositiveButton("Ok", new DialogInterface.OnClickListener() {
                                    @Override
                                    public void onClick(DialogInterface dialogInterface, int i) {
                                    }
                                });
                                AlertDialog alertDialog = builder.create();
                                alertDialog.show();
                            }
                        }, new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) {
                        Toast.makeText(MainActivity.this, "Error.." + error.getMessage(), Toast.LENGTH_SHORT).show();
                        error.printStackTrace();
                    }
                }) {
                    @Override
                    protected Map<String, String> getParams() throws AuthFailureError {
                        Map<String, String> params = new HashMap<>();
                        params.put("fcm_token", token);
                        return params;
                    }
                };
                MySingleton.getmInstance(MainActivity.this).addToRequestQueue(stringRequest);
            }
        });
    }
}
That's it; our app is done. Now it's time to code our Django server.

CODING THE DJANGO SERVER

We will be using the PyFCM module. To install it, run:

$ pip3 install pyfcm

Note: you may have to replace pip3 with plain pip on some distributions of Linux or on Windows. Once the module is installed, we are ready to begin. Refer to my previous blog post at https://www.zeolearn.com/magazine/django-vs-flask-a-comparative-study for help in setting up a simple hello-world program.

After you've managed to run Hello World in Django, the next step is to create a database to store the FCM IDs. We will use the default sqlite3 database to create a table with, for now, only one column, called fcm_token. Open up your models.py and make the required changes; use the example below for help.
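The GitHub gist for models.py referenced in the original post is not reproduced in this copy; a minimal sketch consistent with the views.py code shown below (the exact field type is an assumption):

# models.py — a single table with one column for the FCM token
from django.db import models

class fcm_info(models.Model):
    fcm_token = models.TextField()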
In conclusion, we were able to create a push notification server for an Android app using the Django framework, written in Python.

Neural Style Transfer

We are all acquainted with the VGG network for image classification: it uses a multi-layered convolutional network to learn the features required for classification. This article focuses on another property of these multi-layered convolutional networks: migrating the semantic content of one image into different styles. This is the algorithm behind applications like Prisma, Lucid, Ostagram, NeuralStyler and The Deep Forger. The algorithm takes a content image and a style image, and provides you with a style-transferred image.

This is called style transfer in the world of deep learning. Art generation with neural style transfer started with Gatys et al., 2015, who found that:

- The image content and style are separable from the image representations derived from CNNs.
- Higher layers in the network capture the high-level content in terms of objects and their arrangement in the input image (content representation).
- By including the feature correlations of multiple layers, a stationary, multi-scale representation of the input image is obtained, which captures its texture information but not the global arrangement (style representation).

Based on these findings, they devised an algorithm for style transfer: start from random noise as the initial result, then change the pixel values iteratively through backpropagation until the stylized image simultaneously matches the content representation of the content image and the style representation of the style image.

Loss Function

Their loss function consists of two types of losses:

1. Content loss:

\mathcal{L}_{content}(p, x, l) = \frac{1}{2} \sum_{i,j} \left( F^{l}_{ij} - P^{l}_{ij} \right)^{2}

where l stands for a layer in the conv network, F^l_{ij} is the representation of the i-th filter at position j for the stylized image, and P^l_{ij} is the representation of the i-th filter at position j for the content image. This is just a squared-error loss over the corresponding representations of both images. (Note: we are talking about representations from the hidden layers of a CNN, here VGG.)

2. Style loss:

G^{l}_{ij} = \sum_{k} F^{l}_{ik} F^{l}_{jk}

Here G is called the Gram matrix, and it contains the correlations between the filter responses: G^l_{ij} at layer l is the inner product between the vectorized feature maps of the i-th and the j-th filter. This captures the texture information; for a more detailed analysis of image style transfer using neural networks, look into the paper by Gatys.

Now, let G be the Gram matrix of the stylized (generated) image and A the Gram matrix of the image whose style representation we want to capture. Let N_l be the number of distinct filters in layer l and M_l the size of each feature map (height times width); then the contribution of layer l to the loss is:

E_{l} = \frac{1}{4 N_{l}^{2} M_{l}^{2}} \sum_{i,j} \left( G^{l}_{ij} - A^{l}_{ij} \right)^{2}

And now, taking a weighted sum over all the layers, we have the total style loss:

\mathcal{L}_{style}(a, x) = \sum_{l} w_{l} E_{l}

where w_l is the weight assigned to the style loss at layer l.

3. Total loss

The total loss is a weighted sum of the content loss and the style loss:

\mathcal{L}_{total}(p, a, x) = \alpha \, \mathcal{L}_{content}(p, x) + \beta \, \mathcal{L}_{style}(a, x)

where p is the photograph from which we want to capture the content, a is the artwork from which we want to capture the style, and x is the generated image. α and β are the weighting factors for content and style reconstruction respectively.

Values used in the paper

In the paper, the content representation is matched on layer conv4_2 of the VGG net, a layer deep in the network, and the style representation on conv1_1, conv2_1, conv3_1, conv4_1 and conv5_1, i.e. the first convolutional layer of each block.
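To make the losses concrete, here is a minimal numpy sketch of my own (not from the original post), assuming the feature maps of a layer have already been extracted from the network and flattened into a matrix of shape (number of filters, number of positions):

import numpy as np

def gram_matrix(F):
    # F: feature maps at one layer, shape (N_l filters, M_l positions)
    return F @ F.T

def content_loss(F, P):
    # Squared-error distance between generated (F) and content (P) features
    return 0.5 * np.sum((F - P) ** 2)

def style_layer_loss(F, A):
    # F: generated-image features at layer l; A: Gram matrix of the style image at l
    N, M = F.shape
    G = gram_matrix(F)
    return np.sum((G - A) ** 2) / (4.0 * N ** 2 * M ** 2)

def total_loss(l_content, layer_style_losses, weights, alpha=1.0, beta=1000.0):
    # Weighted sum of the content loss and the per-layer style losses
    l_style = sum(w * e for w, e in zip(weights, layer_style_losses))
    return alpha * l_content + beta * l_style

F = np.random.rand(64, 256)                # toy generated-image features: 64 filters, 256 positions
P = np.random.rand(64, 256)                # toy content features
A = gram_matrix(np.random.rand(64, 256))   # toy style Gram matrix
print(content_loss(F, P), style_layer_loss(F, A))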
So, if you are planning to build your own neural artistic style transfer algorithm, take the representation for the content loss from the middle-to-last layers, and do not ignore the starting layers for the style loss.

The content-style trade-off

As we already know, the ratio α/β determines the amount of content and the effect of style in the generated image. The optimization starts from an image containing noise and reaches a decent stylized image; if we give more importance to style, i.e. decrease α/β, the output becomes progressively more stylized and the content fades further with each decrease. So you see, it is mandatory to manually tune the α/β ratio in the default Gatys et al. algorithm for an aesthetic output.

Improvements

- Though the original algorithm uses the VGG net for the representations, the same idea can be applied to other networks trained on an object recognition task (e.g. ResNet).
- Gatys et al.'s algorithm requires manually tuning the parameters; Risser et al. (2017) tune them automatically using gradient information, so as to prevent extreme gradient values.
- Risser et al. also introduce a new histogram loss for stability.
- Patch-based style loss: the style loss of Gatys et al. captures only per-pixel feature correlations and does not constrain the spatial layout, but in images the local correlation between pixels is important for visual aesthetics, so Li and Wand introduce a patch-based loss.
- Fast Neural Style: train an equivalent feedforward generator network for each specific style; then at runtime, only one single pass is required (Johnson et al.).
- Dumoulin et al. train a conditional feedforward generator network, where a single network is able to generate multiple styles.

What's New In Angular 5?

Angular, Google's most popular JavaScript MVW framework, frequently used by developers worldwide for developing desktop, mobile, and web applications, finally got a version upgrade: Angular 5.0.0, released on November 1, 2017, after 9 beta releases. This release was mainly focused on making the Angular framework smaller, faster, and easier to use. In this article, we will see what's new in Angular 5.

1. Progressive Web Applications
Progressive Web Apps are generating much hype these days. In old versions of Angular, building PWAs was a bit complicated, but it is now easier with the launch of Angular 5. With this latest version, mobile web apps can feel like native apps, with add-ons like an offline experience and push notifications. This is possible because Angular can create the code and configuration on its own with the Angular CLI, and it offers service workers through @angular/service-worker. Use the following command to activate PWA support in your application:

$ ng set apps.0.serviceWorker=true

2. Build Optimizer tool
In Angular 5, production builds created with the Angular CLI now apply the build optimizer by default. The build optimizer makes the application faster and lighter by eliminating unnecessary additional parts and runtime code. This decreases the size of the JavaScript bundles and enhances the speed of the application.

3. Angular Universal State Transfer API and DOM support
Domino has been added to platform-server by the Angular Universal team. This means more DOM manipulations are supported within server-side contexts, enhancing the support for third-party component libraries and JS libraries that are not server-side aware. Moreover, the team has added BrowserTransferStateModule and ServerTransferStateModule. These modules enable you to transfer information between the server-side and client-side versions of the application, so that regeneration of the same information is avoided. This is helpful when an application fetches data over HTTP: there is no need to make another HTTP request once the application reaches the client side.

4. HttpClient
HttpClient was first introduced in version 4.3, and the Angular team has made some improvements to it in Angular 5.0. From this new version, HttpClient is recommended for all applications, in place of the previous @angular/http library. To use the updated HttpClient, replace HttpModule with HttpClientModule from @angular/common/http, inject the HttpClient service, and remove the map(res => res.json()) calls, which are no longer required.

5. Compiler and TypeScript improvements
This new version brings a lot of improvements to the Angular compiler to make rebuilds of applications faster, mainly for AOT and production builds. TypeScript is also upgraded to the latest version, TypeScript 2.4, which allows hooking into the standard TypeScript compilation pipeline. You can use this by running:

$ ng serve --aot

6. Multiple export aliases
In Angular 5, you can give multiple names to your directives and components while exporting. Exporting a component or directive under multiple names helps users migrate smoothly without breaking changes.

7. Enhanced decorator support
Angular 5 ships with expression-lowering support in decorators for lambdas, and the values of data, useFactory, and useValue are supported in object literals.
Moreover, a lambda can be used as an alternative to a named function. For example:

Before Angular 5:

Component({
    provider: [{provide: 'token', useValue: calculated()}]
})
export class MyClass {}

In Angular 5:

Component({
    provider: [{provide: 'token', useFactory: () => null}]
})
export class MyClass {}

8. RxJS 5.5.2
Angular 5 ships with RxJS 5.5.2, which lets you avoid the side effects of the previous import mechanism with a new way of using RxJS called "lettable operators". These new operators remove the tree-shaking/code-splitting and side-effect problems that previously existed with the 'patch' method of importing operators. Furthermore, RxJS now delivers a version that uses ECMAScript modules. The new Angular CLI includes this version by default, considerably reducing the bundle size. You should look at this new distribution even if you are not using the Angular CLI.

9. CLI v1.5
With this version of the Angular CLI, all projects are generated on Angular 5 by default. Moreover, the build optimizer is activated by default in this new version, which gives developers smaller bundles.

10. Animation
Angular 5 also comes with some updates to animations: you can now animate using :increment and :decrement based on numerical value changes. In this version, you can also enable and disable animations using values bound through data binding, by constraining the .disabled attribute of the animation trigger.

Conclusion:
Angular 5 arrived with a bundle of new features and improvements. All these updates, together with very useful tools, will definitely help you build your applications faster and make them more advanced. The Angular team has taken the framework a long way with the launch of Angular 5. Without a doubt, Angular is a superheroic framework that is valuable for both users and developers. Hope this Angular 5 tutorial helped you understand the major updates and breaking changes introduced in version 5 of Angular.

Angular 6.0.0, the next version after Angular 5, was officially launched on May 4, 2018. It is a major release focused more on the tool chain and less on the underlying framework, aiming to make it simpler to move rapidly with Angular in the future. We will discuss Angular 6 and its new features in our next blog.

Five Important Techniques That You Should Know About Deep Learning

Deep Learning is a data-mining process that uses deep neural network architectures, which are specific types of artificial intelligence and machine learning algorithms that have become extremely important in the past few years. It is anticipated that many deep learning applications will influence your life soon. Deep learning allows us to teach machines how to complete complex tasks without explicitly programming them to do so.

We are entering the era of artificial intelligence and machine learning: a future where machines will be doing many of the jobs that humans are doing today. In the past, we had to explicitly program a computer, step by step, how to solve a problem. This involved a lot of if-then statements, for loops, and logical operations. In the future, machines will teach themselves how to solve problems; we just need to provide the data. Let us learn about the techniques that allow deep learning to solve a variety of problems.

1. Fully Connected Neural Networks
Fully connected feedforward neural networks are the standard network architecture used in most basic neural network applications. Fully connected means that each neuron in the preceding layer is connected to every neuron in the subsequent layer, and feedforward means that neurons in any preceding layer are only ever connected to neurons in a subsequent layer. Each neuron in a neural network contains an activation function that changes the output of a neuron given its input. These activation functions are:

- Linear function: a straight line that essentially multiplies the input by a constant value.
- Sigmoid function (non-linear): an S-shaped curve ranging from 0 to 1.
- Hyperbolic tangent (tanh) function: an S-shaped curve ranging from -1 to +1.
- Rectified linear unit (ReLU) function: a piecewise function that outputs 0 if the input is less than a certain value, or a linear multiple of the input if it is greater than that value.

Each type of activation function has pros and cons, so we use them in various layers of a deep neural network based on the problem each is designed to solve; a small sketch of these functions follows this section. Non-linearity is what allows deep neural networks to model complex functions.

We can create networks with various inputs, various outputs, various hidden layers, various neurons per hidden layer, and a variety of activation functions. These numerous combinations allow us to create a variety of powerful deep neural networks that can solve a wide array of problems. The more neurons we add to each hidden layer, the wider the network becomes, and the more hidden layers we add, the deeper it becomes. However, each neuron we add increases the complexity, and thus the processing power necessary to train the network.
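As a quick illustration (mine, not the article's), here is a minimal numpy sketch of the four activation functions described above:

import numpy as np

def linear(x, a=1.0):
    # Straight line: multiply the input by a constant
    return a * x

def sigmoid(x):
    # S-shaped curve ranging from 0 to 1
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # S-shaped curve ranging from -1 to +1
    return np.tanh(x)

def relu(x):
    # 0 below the threshold (here 0), linear above it
    return np.maximum(0.0, x)

x = np.linspace(-3, 3, 7)
print(linear(x), sigmoid(x), tanh(x), relu(x), sep="\n")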
2. Convolutional Neural Networks
Convolutional neural networks (CNNs) are a type of deep neural network architecture designed for specific tasks like image classification. CNNs were inspired by the organization of neurons in the visual cortex of the animal brain. As a result, they provide some very interesting features that are useful for processing certain types of data like images, audio, and video.

A CNN is composed of an input layer; for basic image processing, this input is typically a two-dimensional array of neurons corresponding to the pixels of an image. It also contains an output layer, which is typically a one-dimensional set of output neurons. CNNs use a combination of sparsely connected convolution layers, which perform image processing on their inputs. In addition, they contain downsampling layers, called pooling layers, to further reduce the number of neurons necessary in subsequent layers of the network. And finally, CNNs typically contain one or more fully connected layers to connect the pooling layer to the output layer.

Convolution is a technique that allows us to extract visual features from an image in small chunks. Each neuron in a convolution layer is responsible for a small cluster of neurons in the preceding layer; a filter, or kernel, determines that cluster. Filters mathematically modify the input of a convolution to help it detect certain types of features in the image: they can return the unmodified image, blur the image, sharpen it, detect edges, and so on. This is done by multiplying the original image values by a convolution matrix.

Pooling, also known as subsampling or downsampling, reduces the number of neurons in the previous convolution layer while still retaining the most important information. Different types of pooling can be performed, for example taking the average of the input neurons in each cluster, their sum, or their maximum value. A small sketch of convolution and max pooling follows this section.

We can also reverse this architecture to create what is known as a deconvolution neural network. These networks perform the inverse of a convolutional network: rather than taking an image and converting it into a prediction value, they take an input value and attempt to produce an image instead.

CNNs work well for a variety of tasks including image recognition, image processing, image segmentation, video analysis, and natural language processing.
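To make convolution and pooling concrete, here is a minimal numpy sketch (my illustration, with a toy image) of a valid 2D convolution with a 3x3 edge-detection kernel, followed by 2x2 max pooling:

import numpy as np

def conv2d(image, kernel):
    # Valid convolution: slide the kernel over the image and take dot products
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    # Keep the maximum value in each non-overlapping size x size window
    oh, ow = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = fmap[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

image = np.random.rand(8, 8)                     # a toy 8x8 "image"
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], float)    # a classic edge-detection filter
features = max_pool(conv2d(image, edge_kernel))  # 6x6 feature map pooled to 3x3
print(features)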
3. Recurrent Neural Networks
The recurrent neural network (RNN), unlike feedforward neural networks, can operate effectively on sequences of data with variable input length. An RNN uses knowledge of its previous state as an input for its current prediction, and we can repeat this process for an arbitrary number of steps, allowing the network to propagate information through time via its hidden state. This is essentially like giving a neural network a short-term memory. This feature makes RNNs very effective for working with sequences of data that occur over time, for example time-series data like changes in stock prices, or a sequence of characters like a stream of characters being typed into a mobile phone.

Let's imagine we are creating a recurrent neural network to predict the next letter a person is likely to type, based on the letters they have already typed. The letter the user just typed, as well as all the previous letters, matters for predicting the next one. First, the user types the letter h, so our network might predict that the next letter is i, based on previous training, to spell hi. Then the user types the letter e, so our network uses both the new letter e and the state of the first hidden neuron to predict y as the next letter, because of the high frequency of the word hey in our training dataset. Adding the letter l might predict the word help, and adding another l would predict the letter o, which matches the word our user intended to type: hello. (A small sketch of a single recurrent step follows this section.)

Two variants on the basic RNN architecture help solve a common problem with training RNNs: gated RNNs, and long short-term memory RNNs (LSTMs). Both of these variants use a form of memory to help make predictions in sequences over time. The main difference is that a gated RNN has two gates to control its memory, an update gate and a reset gate, while an LSTM has three gates: an input gate, an output gate, and a forget gate.

RNNs work well for applications that involve sequences of data that change over time. These applications include natural language processing, speech recognition, language translation, image captioning, conversation modeling, and visual Q&A.
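Here is a minimal numpy sketch (my illustration, with made-up dimensions) of the recurrence at the heart of a vanilla RNN: the new hidden state is computed from the current input and the previous hidden state, which is how the network carries its short-term memory forward:

import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 26, 32        # e.g. one-hot letters in, 32 hidden units
W_xh = rng.normal(0, 0.1, (hidden_size, input_size))   # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden-to-hidden weights
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    # The hidden state mixes the new input with the memory of previous steps
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Feed a sequence one element at a time, reusing the hidden state
h = np.zeros(hidden_size)
for letter_index in [7, 4, 11, 11, 14]:    # toy encoding of "hello"
    x = np.zeros(input_size)
    x[letter_index] = 1.0                  # one-hot input
    h = rnn_step(x, h)
print(h[:5])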
4. Generative Adversarial Networks
The generative adversarial network (GAN) is a combination of two deep learning neural networks: a generator network and a discriminator network. The generator network produces synthetic data, and the discriminator network tries to detect whether the data it is seeing is real or synthetic. These two networks are adversaries in the sense that they are both competing to beat one another: the generator is trying to produce synthetic data indistinguishable from real data, and the discriminator is trying to become progressively better at detecting fake data.

For example, imagine we want to create a neural network that generates synthetic images. First, we would acquire a library of real-world images to provide real examples for the image detector network. Next, we would create an image generator network to produce synthetic images; this would typically be a deconvolution neural network. Then we would create an image detector network to detect real images versus fake images; this would typically be a convolutional neural network. At first, the generator would essentially create random noise as it learns how to create images that can fool the detector, and the detector would have only roughly 50/50 accuracy when predicting real versus fake images. However, with each training iteration, the generator gets progressively better at generating realistic images, and the detector gets progressively better at detecting fakes. If you let these networks compete with one another for long enough, the generator begins producing fake images that approximate real images. GANs have gained quite a bit of popularity in recent years; some of their applications include image generation, image enhancement, text generation, speech synthesis, new drug discovery, and more.

5. Deep Reinforcement Learning
Reinforcement learning involves an agent interacting with an environment. The agent is trying to achieve a goal of some kind within the environment. The environment has a state, which the agent can observe; the agent has actions it can take, which modify the state of the environment; and the agent receives reward signals when it achieves a goal of some kind. The objective of the agent is to learn how to interact with its environment in such a way that allows it to achieve its goals.

Deep reinforcement learning is the application of reinforcement learning to train deep neural networks. As with our previous deep neural networks, we have an input layer, an output layer, and multiple hidden layers; however, the input is now the state of the environment. For example, for a car trying to get its passengers to their destination, the inputs are the position, speed, and direction, and the output is a series of possible actions like speed up, slow down, turn left, or turn right. In addition, we feed the reward signal into the network so that it can learn to associate which actions produce positive results given a specific state of the environment. The deep neural network attempts to predict the expected future reward for each action, given the current state of the environment; it then chooses whichever action is predicted to have the highest potential future reward, and performs that action.

Some examples of deep reinforcement learning applications are games, including board games like chess and card games like poker; autonomous vehicles, like self-driving cars and autonomous drones; robotics, like teaching robots how to walk and how to perform manual tasks; management tasks, including inventory management, resource allocation, and logistics; and financial tasks, including investment decisions, portfolio design, and asset pricing.

Hope this deep learning tutorial helped you learn some valuable information and gave you an idea of the various deep learning techniques. Each technique is useful in its own way and is put to practical use in various applications daily. I hope you will be able to put this information to great use.

Cracking The Dataset With Python

Unravel the subtle

It is quite perplexing to know that around 90% of the world's data today was created in the last two years. With the advent of smartphones and wearables, we have allowed numerous agencies to collect data (behavioral or otherwise) on a mammoth scale. This data, on one hand, opens up a vast sphere of possibilities, but on the other, poses a unique challenge of handling all this information. Today, we need sophisticated techniques that can help us arrange, analyze and interpret data so as to draw meaningful insights. A seemingly useless dataset can sometimes expose unexpected results if analyzed carefully. Therefore, dealing with data is easily one of the most sought-after skills in the present market. Learn how to crack a dataset with the Python programming language in this article.

Python: A blessing in disguise

The true potential of this language is realized when it comes to rapid application development (RAD). Python has a rich developer community who work continuously to churn out quality code. As a result, the Python ecosystem has powerful libraries that can be effortlessly integrated into your code to ease a programmer's life. Therefore, when it comes to dealing with data and drawing insights without losing too much focus, Python becomes a natural choice for most developers. In this post too, we will be using Python to infer insights from the given dataset, and by the end of this post you will be impressed by the rich toolkit Python offers for the task.

The dry dataset problem

Let's have a look at the dataset that we will be using today. The problem dataset description is as follows:

"Contributors viewed 10,000 Google Maps images and marked whether they were residential areas. If they were, they noted which homes were most prevalent in the area (apartments or houses) and whether the area had proper sidewalks that are wheelchair friendly."

To be honest, this is not the best description that could have accompanied this dataset, as you will realize once you start looking at the data. But here it is important to understand that, in the field of data analytics, this is very common: you will be dealing with ill-structured data, missing values, redundant records and so on, and yet you will have to draw insights from it. So, let's try to crack this dataset step by step with Python.

The preparation

Let's first set up our machine with the required environment. We will use Anaconda, one of the most popular Python data science platforms, and write our code in the Jupyter Notebook provided inside the Anaconda package. Further, a lot of open source Python libraries will make our life simpler; you might have some of them already, but you will need to install the ones you don't. We will be using Python 2.7 over the course of this article.

The tactic

When it comes to data interpretation, you need to understand the data inside your head first. For instance, you can start by asking questions like:

- What kind of data is this?
- What is this all about?
- Are there any missing values?
- What attributes are there?
- Is the data properly structured?

Questions like these will help you connect with the data even before you write a single line of code. You just have to use your logical instincts, and then you will see the path ahead.

As we open the .csv file, we can see that there are missing values for certain attributes. Depending on your goal, you can decide how to handle such cases: for instance, you can pad them with some default value, or you can ignore them. For the sake of simplicity, we will ignore the missing values here; a minimal loading sketch follows.
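The original gists are not reproduced on this page, so here is a small pandas sketch of the loading step; the file name and the strategy of simply dropping incomplete rows are my assumptions:

import pandas as pd

# Load the crowdsourced dataset (file name is hypothetical)
df = pd.read_csv("sidewalks.csv")

# For simplicity, ignore rows with missing values
df = df.dropna()
print(df.shape)   # how many rows and attributes survive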
In this fashion, we will move from some very basic conclusions to some deeper, implicit inferences, using some of the popular Python packages along the way. Let's get started with the process of cracking the dataset with Python!

Insights

Have a look at the steps below that help you crack the dataset with Python.

1. Curse of dimensionality
The given dataset has a lot of attributes (21 attributes, to be precise). So, one way to proceed is to think about the kinds of relationships these attributes might have with each other. Can we neglect some of the attributes to avoid the curse of dimensionality? For example, if some attribute has a constant value (or too many missing values) throughout the dataset, we can safely ignore it. Also, if some attributes have a very high correlation, we can replace them with a single representative attribute. Let's explore this correlation between the attributes (correlation.py in the original post).

If we plot various attributes against each other, we see that house_types:confidence and sidewalk_ok:confidence yield a very high correlation coefficient (0.936602). This indicates that we can safely ignore one of these attributes to reduce the dimensionality of the dataset. In other words, when a person identifies what kind of house it is, they more or less also identify whether the house has a sidewalk.

2. Comparing house types
It will be an interesting exercise to see how many houses of each type (private, apartments and others) have sidewalks. Let's see what the data has to say in this regard (house_type_sidewalks.py). We observe that most of the sidewalks in the dataset are associated with private houses, while a very small fraction of the sidewalks is contributed by apartments and other types of houses.

3. Fraction of residential and non-residential houses having sidewalks, per class
Let's now plot how many houses of each class (private, apartment and others) have sidewalks, based on whether they are residential or non-residential (sidewalks_class.py). Here we observe that residential and non-residential private houses have almost the same fraction of sidewalks, which is in turn similar to residential apartments. But when we talk about non-residential apartments, the fraction goes up. This can be explained by the larger number of visitors where non-residential buildings like hospitals, factories, and schools are concerned: better facilities, like having a sidewalk (say, for wheelchairs), become a must.

4. US state-wise fractions of private houses
As most of the houses in the dataset are from the United States, we can also focus our attention on US-specific statistics. It is often said that a populous city will have more apartments to accommodate its large population; let's see what our dataset has to say about this (statewise_private.py).

Data speaks louder than assumptions. We observe that the result does not comply with our initial hypothesis, as a populous state like New York shows a high ratio of private houses. It is okay to arrive at contradictions like this, as it opens up space for improving the obtained dataset or the proposed hypothesis as a whole. A sketch of the correlation and group-by computations behind these steps follows.
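Since the gists themselves are not embedded here, the following is a small pandas sketch of the kind of computation behind steps 1-4; apart from house_types:confidence and sidewalk_ok:confidence, the column names and category values are guesses at the dataset's schema:

import pandas as pd

df = pd.read_csv("sidewalks.csv").dropna()   # as loaded earlier (file name hypothetical)

# Step 1: correlation between the two confidence attributes
print(df["house_types:confidence"].corr(df["sidewalk_ok:confidence"]))

# Steps 2-4: fractions via group-by (column names and values are hypothetical)
# e.g. the fraction of each house type that has a sidewalk...
has_sidewalk = df["sidewalk_ok"] == "yes"
print(has_sidewalk.groupby(df["house_types"]).mean())

# ...and the fraction of private houses per US state
is_private = df["house_types"] == "house"
print(is_private.groupby(df["state"]).mean())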
5. US state-wise fractions of houses having sidewalks
Let's see how the distribution of sidewalks looks across different US states (statewise_sidewalks.py). We can say that the states having large fractions of sidewalks have a higher level of civic life and are infrastructurally more advanced than those having smaller fractions. This fact is further reinforced by the GDP data here: the states having higher GDP mostly have a higher ratio of sidewalks as well.

6. US state-wise fractions of residential houses
Having a lot of attributes in a dataset means we need to experiment a lot to gain any tangible insights. Therefore, let's analyze the US state-wise fractions of residential houses (statewise_residential.py). Here we can see the fraction of residential houses in each state. A state having a large share of residential houses suggests fewer industries as well as a smaller population. Also, the states having more residences will consequently have better opportunities and a better standard of living. For instance, North Dakota is seen to have fewer residences and a smaller fraction of sidewalks as well.

7. Plotting houses on a map
The dataset includes latitudes and longitudes of the houses, so it is a natural thought to explore any geographical insight that this data might offer (House_plot.py). Residential houses are plotted in green, non-residential houses in red. You can get the house_plot.html from here.

We can observe that most of the non-residential infrastructure is closely associated with the residential one. This makes sense, as the people working for the non-residential institutes will come from the nearby residential homes. Further, the residential houses are found in colonies (closely stacked), which makes sense as most of the residences are built by the government under township schemes. Also, most of the buildings are situated near the coastline, owing to the favorable conditions of living near a water body.

Looking back

By this time, I hope you are pretty convinced why data is one of the most important assets of this digital age. We started with a seemingly dry and ill-structured dataset, and we were able to draw a lot of inferences from it. We also saw how Python, along with its powerful libraries, eased our analysis. So, where do we go from here? Knowing how to interpret data is one of the quintessential skills of today's world, and the possibilities are endless. With a bit of practice and perseverance, we can move a lot forward. In fact, the sky's the limit.