DJANGO vs FLASK (a comparative study)

Choosing the right tool before you start building your own web application is one of the most important steps in the whole development process. Web frameworks have been around to make the programmer's job easier for quite a long time, but there are so many of them that, while they give users plenty of options for developing their projects, they often put beginners in a dilemma when it comes to narrowing the final choice down to just one. In this blog post, I'm going to help you choose a framework better suited to your needs. We will mainly be talking about Django and Flask here, but the selection criteria can be applied to other frameworks on the market, like Diesel, Tornado, and Bottle, too. We will start by writing a hello world program in both Django and Flask, then look at how templating works in each, and after that we will delve into the technical differences between the ways the two frameworks function.

Bootstrapping in Flask

A simple hello world program in Flask looks somewhat like the sketch below. And that's it, we are done. Run this program from your terminal, browse to localhost:8000, and you will be greeted by the hello world message. That's the beauty of Flask: it's quick, simple, and also gives the user the freedom to choose how objects will be treated. (Okay! I will revisit the last point again.)

Bootstrapping in Django

Now we write a Hello World program in Django. Run the following commands in your terminal:

$ django-admin startproject HelloWorld
$ cd HelloWorld

You will obtain the following directory structure:

.
├── HelloWorld
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py

1 directory, 5 files

The "__init__.py" file tells the Python interpreter that the directory should be treated as a package. The "settings.py" file holds the configuration of the web app, such as the database engine being used and the apps present on our web server. The "urls.py" file is like an index table: it holds the information about which page will be displayed when the user visits a particular sub-URL. The next step is to start an application inside the project. To do this, run the following command:

$ django-admin startapp HelloWorldApp

You will obtain the following directory structure:

.
├── HelloWorld
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── HelloWorldApp
│   ├── admin.py
│   ├── apps.py
│   ├── __init__.py
│   ├── migrations
│   │   └── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
└── manage.py

3 directories, 12 files

There are two files of interest here. One is "views.py", which will contain the request and response logic. The second is "models.py", which will contain the database information. The core code-writing part is a three-step process.

STEP 1: Changing the views.py file. It should look somewhat like the sketch below.

STEP 2: Making an entry in the urls.py file. We need to tell Django that on visiting a particular URL, the corresponding view must be displayed.

STEP 3: Adding our app to the settings.py file. We have to add HelloWorldApp to the list of installed apps in settings.py: find the INSTALLED_APPS setting and edit it so that it includes our app, as sketched below.
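A minimal sketch of the Flask hello world described above; the port is set explicitly so the app answers at localhost:8000 as in the text:

```python
# app.py -- a minimal Flask hello world (sketch)
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello, World!'

if __name__ == '__main__':
    # explicit port so the app answers at localhost:8000 as in the text
    app.run(port=8000)
```

And a sketch of the three Django edits from the steps above; the view name and exact file contents are assumptions, not the author's original code:

```python
# HelloWorldApp/views.py (STEP 1)
from django.http import HttpResponse

def index(request):
    return HttpResponse('Hello, World!')
```

```python
# HelloWorld/urls.py (STEP 2) -- Django 1.11-style url() routing assumed
from django.conf.urls import url
from HelloWorldApp import views

urlpatterns = [
    url(r'^HelloWorldApp/$', views.index),
]
```

```python
# HelloWorld/settings.py (STEP 3) -- append the app to INSTALLED_APPS
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'HelloWorldApp',
]
```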
That's almost it. Now switch to your terminal and run the following command:

$ python manage.py runserver

Visit the URL "http://localhost:8000/HelloWorldApp/" and you will be greeted by a hello world message. At this stage, you might feel that, compared to Flask, writing a simple hello world program in Django is a little overkill. Moving on!

So far we have seen how to render simple text using both Django and Flask. But we will frequently come across situations where we need to display a webpage whose contents are generated dynamically. This is where templating comes in. Templating allows you to bring your HTML and CSS together so that you can serve beautiful pages with the required data generated on the fly.

Templating in Django

Django handles templating by providing the Django Template Language (DTL), which mainly relies on the render function. The render function takes six parameters: two compulsory and four optional.

Compulsory parameters:
1. request: The request object used to generate the response.
2. template_name: The location of the HTML template file to be loaded. You can give multiple files here, in which case Django will look for the first match and load it.

Optional parameters:
3. context: A dictionary of key-value pairs to be inserted into the template context. Defaults to None.
4. content_type: The MIME type to use for the resulting document.
5. status: The status code for the response. Defaults to 200.
6. using: The NAME of the template engine used to render the template.

Here is a simple example of templating in Django: we will display the date and time.

STEP 1: Create an HTML page (template.html).

STEP 2: Change views.py to add a function that renders it.

That's it. This is how you use templating in Django. The additional options can be used according to your needs; I took a simple example just to give you the gist of it.

Templating in Flask

Flask uses the Jinja2 engine to render templates, and uses the render_template function instead of Django's render. Here, all we have to do is pass the template name along with the variables we are going to use in the template. A call to render_template looks somewhat like this:

render_template("template_name.html", key1=variable1, key2=variable2, ...)

Unlike Django's context dictionary, the template variables are passed to render_template as keyword arguments. Now let's take a look at sample code for templating in Flask; we will display the values of three variables in a table. It is just a two-step process.

STEP 1: Create a file flask_template.py.

STEP 2: Create an HTML template (values.html) to render the values.

Both examples are sketched below.
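Hedged sketches of the two examples just described; the view names, template locations, and variable values are assumptions. Django first:

```html
<!-- HelloWorldApp/templates/template.html (STEP 1) -->
<html>
  <body>
    <p>The current date and time is {{ now }}.</p>
  </body>
</html>
```

```python
# HelloWorldApp/views.py (STEP 2)
from datetime import datetime
from django.shortcuts import render

def current_datetime(request):
    # the context dictionary makes 'now' available inside template.html
    return render(request, 'template.html', {'now': datetime.now()})
```

And the Flask version:

```python
# flask_template.py (STEP 1)
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/template')
def show_values():
    # variables are passed to the template as keyword arguments
    return render_template('values.html', a='first', b='second', c='third')

if __name__ == '__main__':
    app.run()
```

```html
<!-- templates/values.html (STEP 2) -->
<table>
  <tr><td>a</td><td>{{ a }}</td></tr>
  <tr><td>b</td><td>{{ b }}</td></tr>
  <tr><td>c</td><td>{{ c }}</td></tr>
</table>
```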
And voilà, you're done! Visit "http://127.0.0.1:5000/template" and you will see the required variables displayed.

Comparison

If you are still with me: so far we have seen how to write hello world programs in both Django and Flask, followed by how to perform basic templating operations in both frameworks. You will have noticed that the number of files created in Django was larger than the number created in Flask. That's where I'm going to start on the three major points where Django and Flask contrast in the way they work.

1. Django provides you with a ton of additional machinery that you may not have asked for in a small project, such as an admin panel. Flask, on the other hand, "does as it is told": nothing more, nothing less. Along with an admin panel, Django also includes a database interface and an object-relational mapper (ORM), whether you need them or not. These features make Django an ideal choice when you need to develop for a production environment. This brings us to the second point.

2. Flask gives you more control over how exactly you want objects to be treated and how things should be implemented. Flask is simple, flexible and, as Gareth Dwyer says in his blog, provides fine-grained control. For learners this is often a boon, because at any point in the development cycle you have only those components which you require and understand.

3. Community size. Whenever we start learning anything new, we are bound to get stuck on some doubt or other. The same happens after the learning curve is over and we are actually using the tool to build something new: we rush around asking for suggestions and solutions on sites like StackOverflow. That's when the existing community size of the framework plays a critical role. Django has been around for over 11 years now, and if what we have seen so far is anything to go by, it will continue to be one of the top choices for the foreseeable future. Flask is much younger and doesn't have as much community support as Django.

Finally, in my personal opinion, if you are new to the web development world both are equally good, but I might inch a bit in favour of Flask because of its simplicity: at no point will there be files lying around whose purpose you don't know. If you're going for a fully fledged production environment, Django will lift much of the load off your shoulders.
Devershi Chandra, 19 Feb 2018

Google announces that Cloud TPU is now available in beta on its GCP

On Monday, Google officially announced that Cloud TPUs (Tensor Processing Units) are now available in beta on GCP (Google Cloud Platform). These TPUs are specifically designed to scale up and speed up machine learning workloads programmed with TensorFlow, helping teams train machine learning models quickly rather than over days or weeks. Google first announced its TPU work a few years back, but has only recently made the chips available to its cloud customers. "We designed Cloud TPUs to deliver differentiated performance per dollar for targeted TensorFlow workloads and to enable ML engineers and researchers to iterate more quickly," Google wrote in a Cloud Platform blog. The TPUs are built with application-specific integrated circuits (ASICs), and each Cloud TPU packs up to 64GB of high-bandwidth memory and 180 teraflops of floating-point performance onto a single board. These boards can be used alone or connected together over a dedicated high-speed network to form multi-petaflop machine learning supercomputers called "TPU pods". A Google Compute Engine VM gives data scientists access to a network-attached Cloud TPU instead of a shared cluster, so they can customize and control it to suit the needs of their workload. "Cloud TPUs are available in limited quantities today and usage is billed by the second at the rate of $6.50 USD / Cloud TPU / hour," Google said in a blog post. In addition, Google announced that GPUs are also available in beta in the latest release of Kubernetes Engine, which speeds up compute-intensive applications like financial modeling, image processing, and machine learning.

Source: Google Cloud Platform Blog

Amazon is building its own AI chips for Alexa

According to a report by The Information, Amazon is developing its own Artificial Intelligence (AI) chips for the Echo, which would allow Alexa-powered devices to respond to commands faster by performing speech recognition directly on the device. At present, whenever a user makes a request on an Alexa-powered gadget, there is a time lag while the request travels to the cloud to be interpreted. While Echo devices would keep depending on the cloud for complex requests, on-device speech recognition would let Alexa execute simple tasks much faster by avoiding the round trip to the cloud. It is also believed that Amazon may employ the AI chips in its AWS (Amazon Web Services) platform to enhance its machine learning efforts. The company currently has around 450 employees with strong chip expertise and is building out its in-house chip efforts. In 2015, it acquired chip designer Annapurna Labs, whose ARM-based Alpine chips are built for connected home devices and routers. Using its own chips means Amazon would be able to build new AI-driven abilities for Alexa without depending on chips designed by third-party organizations. Its competitors, including Google and Apple, have built their own AI chips in recent years, and Google is already using its chips to support services such as Photos, Translate, Search, and Street View. Amazon is simply the latest company to show interest in building AI chips.

10 Latest Artificial Intelligence Technologies for enterprises to consider

Developments in the field of artificial intelligence are growing vigorously, and many internet giants and startups are moving swiftly and investing heavily to adopt them. According to a Narrative Science survey, nearly 38% of businesses are already using AI, a number expected to reach 62% by 2018. Forrester Research predicted that investment in AI would grow more than 300% in 2017 compared with 2016, and IDC reported that the AI market is expected to reach USD 47 billion by 2020, up from USD 8 billion in 2016. Artificial intelligence today comprises numerous tools and technologies, some time-tested and others considerably newer. Forrester published a TechRadar report on the important artificial intelligence technologies that companies should adopt to support human decision-making. Here is a list of 10 of the latest technologies in artificial intelligence, based on Forrester's analysis.

Natural Language Generation - Converts data into text

NLG (Natural Language Generation), a subfield of AI, became more popular in 2017. It is software that automatically turns data into human-friendly text, and it can be applied anywhere there is a need to generate text from data. NLG is currently used in areas such as summarizing business intelligence insights, report generation, and customer service. Companies offering these services include Automated Insights, Digital Reasoning, Yseop, SAS, Attivio, Narrative Science, Cambridge Semantics, and Lucidworks.

NLP (Natural Language Processing) and Text Analytics - Allow communication between human languages and computers

NLP focuses on the interaction between human languages and computers. It uses text analytics to understand sentence meaning and structure via statistical and machine learning methods. NLP and text analytics are used for system security and fraud detection; they are also used extensively in applications that mine unstructured data and in automated assistants. Companies offering this technology include Expert System, Sinequa, Synapsify, Stratifyd, Coveo, Linguamatics, Basis Technology, Knime, Mindbreeze, Indico, and Lexalytics.

Speech Recognition - Converts human speech into a useful format for computer apps

Today, a wide range of systems supports the transformation and transcription of human speech into a form that computer applications can use. Speech recognition is currently used in mobile apps and interactive voice response systems. Companies offering these services include Verint Systems, NICE, OpenText, and Nuance Communications.

Machine Learning Platforms - Develop techniques that allow computers to learn

Machine learning is a part of artificial intelligence and a subfield of computer science whose objective is to develop techniques that enable computers to learn. Machine learning platforms are gaining ground every day by providing APIs, algorithms, big data, and development and training tools, along with ways to design and deploy models into applications and other machines. These platforms are currently used in various business activities, mostly for classification and prediction. Companies offering machine learning platforms include Microsoft, Google, Amazon, SAS, H2O.ai, Adext, Leverton, Skytree and Fractal Analytics.
AI-Optimized Hardware - Speeds up the next generation of applications

Companies are investing heavily in AI-optimized hardware to speed up the next generation of applications. Appliances and graphics processing units (GPUs) are specially designed to run artificial-intelligence-oriented computational jobs efficiently. Companies offering these services include Intel, Google, IBM, Alluviate, Cray and Nvidia.

Deep Learning Platforms - Comprise artificial neural networks with multiple levels of abstraction

Deep learning is a special type of machine learning built on artificial neural networks with multiple levels of abstraction. It is currently used in classification applications and in pattern recognition backed by large-scale data sets. Companies offering deep learning platform services include Peltarion, Ersatz Labs, Fluid AI, MathWorks, Saffron Technology, and Deep Instinct.

Virtual Agents - Interact with humans efficiently

Forrester calls virtual agents "the current darling of the media"; they can interact with humans in an efficient way, and chatbots are the best example of this type of technology. They are currently used as smart home managers and in customer service and support. Companies offering this technology include IBM, Amazon, Google, Apple, Microsoft, Creative Virtual, Satisfi, IPsoft, Artificial Solutions, and Assist AI.

Decision Management - Performs automated decision making

Decision management engines insert rules and logic into artificial intelligence systems; they are used for initial setup and training, and they conduct ongoing maintenance and tuning. Decision management is currently used for automated decision making in various enterprise applications. Companies offering these services include Pegasystems, Advanced Systems Concepts, UiPath, Maana, and Informatica.

Robotic Process Automation - Automates human tasks

RPA is software that uses scripts and other methods to automate human tasks or processes in support of standard business processes. It is used to execute specific tasks or processes where humans are too costly or inefficient. Companies offering RPA include WorkFusion, Blue Prism, Automation Anywhere, UiPath, and Pegasystems.

Biometrics - Enables communication between humans and machines

Biometrics deals with the measurement, analysis, and identification of people's biological information. It allows a wide range of interactions between machines and humans, including but not restricted to body language, speech, touch, and image recognition. Currently, this technology is widely used in market research. Companies offering it include FaceFirst, 3VR, Sensory, Tahzoo, Synqera, Agnitio, and Affectiva.

Conclusion

Artificial intelligence technologies are already having a huge impact on business activities across the globe, and they are available not only to large enterprises but also to small businesses that want a remarkable presence on the internet. But according to a Forrester survey, companies report some obstacles to bringing AI into their businesses. Forrester concludes that companies can gain from artificial intelligence technologies once they overcome those obstacles.

ReactJS 16 Brings Major Advancements to the Server-Side Rendering

ReactJS is a JavaScript library that has gained increasing adoption by organizations and developers across the globe because of its reactive model and efficiency. React v16.0 was released on September 26, 2017, with some major internals rebuilt from scratch without changing the public API. Server-side rendering (SSR) is one of the main features of ReactJS 16. Let us see how SSR works in React 15 and how it changed and improved in React v16.0.

Server-side rendering in React 15

To do SSR in React 15, we typically run a Node-based web server like Express, Koa, or Hapi and call renderToString to render the root component to a string. Then, in the client bootstrap code, we instruct the client-side renderer, using render(), to "rehydrate" the server-generated HTML; the same method would be used in a purely client-side rendered application too. Done correctly, the client-side renderer can simply use the existing server-generated HTML without updating the DOM.

Improvements to server-side rendering in ReactJS 16

SSR was completely rewritten in ReactJS 16 and is really fast: about 3x faster than React 15. ReactJS 16 also supports streaming, which gets data to the client sooner. For rendering on the client side, ReactJS 16 offers two methods: hydrate() and render(). While render() renders content exclusively on the client side, hydrate() renders over server-side rendered markup. render() can still be used over server-side rendered markup, but in React 16 it is suggested to use hydrate() for that sort of rendering. Moreover, ReactJS 16 is more forgiving when hydrating server-rendered HTML: it doesn't require the initial render to precisely match the outcome from the server, and will instead try to reuse as much of the existing DOM as possible.

Why is React 16 server-side rendering so much faster than React 15?

In ReactJS 15, the client and server rendering paths were pretty much the same code. This means that most of the data structures required to maintain a vDOM were being set up during server rendering, despite the fact that the vDOM was discarded as soon as the call to renderToString returned; a lot of work was wasted on the server render path. In ReactJS 16, the server renderer was rewritten from scratch by the core team and does no virtual DOM work at all. Hence, it is much faster than ReactJS 15.

Some of the important features of React 16 SSR are:

It is completely backward compatible.
Components can return fragments, strings, and arrays of elements.
It generates a more compact HTML document.
It supports non-standard DOM attributes, meaning both the server and client renderers can pass non-standard attributes through to HTML elements.
React 16 SSR supports streaming: to render to a stream, call one of two methods from react-dom/server, renderToNodeStream or renderToStaticNodeStream.
It doesn't support portals or error boundaries.
It doesn't need to be compiled for good performance.
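The renderToString/hydrate pairing described above, as a minimal sketch; the Express setup, file names, and bundle path are assumptions (React.createElement is used so the snippet runs without a JSX build step):

```js
// server.js -- render the root component to a string on the server
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const App = require('./App'); // your root component (assumed)

const server = express();
server.get('/', (req, res) => {
  const html = renderToString(React.createElement(App));
  res.send(`<div id="root">${html}</div><script src="/bundle.js"></script>`);
});
server.listen(3000);
```

```js
// client.js -- in React 16, hydrate() reuses the server-generated markup
const React = require('react');
const { hydrate } = require('react-dom');
const App = require('./App');

hydrate(React.createElement(App), document.getElementById('root'));
```

With React 16's streaming API, the server-side call could instead be renderToNodeStream(React.createElement(App)).pipe(res), so the browser starts receiving markup before rendering finishes.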
Benefits of ReactJS server-side rendering

Nowadays, server-side rendering has become an essential feature, especially for substantial client-side applications, and the majority of client-side frameworks now support it. It helps when you need SEO on Yahoo, Bing, Google, Baidu, or DuckDuckGo, and when you already have a properly functioning React application, need the best possible performance, and are ready to pay for the additional server resources. SEO is not the only potential benefit of server-side rendering: it also displays pages faster. With SSR, the server's response to the web browser is the HTML of the page, ready to be rendered, so the browser can start rendering without waiting for the JavaScript to be downloaded and executed. There is no "white page" while the browser downloads and executes the JavaScript and the other resources needed to render the page, which is what can happen on a fully client-rendered React site.

Wrapping Up

ReactJS is a great front-end library to use in building your UIs. It is quicker now, since it runs on React Fiber, and it lets you build smoother, more performant UIs for your native and web applications. It has been used by more than 30% of developers and adopted by 40 percent of JavaScript projects. ReactJS 16 came loaded with heaps of new features and improvements. Using React on the server can be difficult, particularly while fetching data from Application Programming Interfaces. Fortunately, the React team is flourishing and building plenty of useful tools.

Apple and Cisco team up with Allianz and Aon to offer cyber risk management solution

On Monday, Apple, Cisco, Allianz, and Aon announced a new cybersecurity insurance solution for businesses, consisting of improved cyber insurance coverage options from Allianz, secure technology from Apple and Cisco, and cyber resilience evaluation services from Aon. The new solution is designed to help a wide variety of businesses better protect themselves against, and manage, the most common cyber threats, such as ransomware and other malware-related risks. "The choice of technology providers plays a critical role in any company's defense against cyber attacks. That's why, from the beginning, Apple has built products from the ground up with security in mind, and one of the many reasons why businesses around the world are choosing our products to power their enterprise," said Apple CEO Tim Cook, in a statement. Cybersecurity is no small problem for today's businesses. Companies such as Target and Equifax have suffered large-scale hacks, and recovering from such breaches is costly: Reuters reported that, according to the National Association of Insurance Commissioners, U.S. cybersecurity premiums were approximately 1.35 billion dollars in 2016. For Cisco and Apple, the new solution could help attract more business clients to their products, possibly helping them beat competitors to these large customers. The new cyber risk management solution will include Aon's cybersecurity experts analyzing potential clients' current cybersecurity posture and recommending ways to improve it, and participating businesses will have access to Aon and Cisco incident response teams in the event of a malware attack. "At Cisco, security is foundational to everything we do. As the leading enterprise security company, we know that in a digital world security must come first, and our integrated security architecture reduces customers' overall risk of exposure to ransomware and malware attacks," said Chuck Robbins, Chairman and CEO, Cisco. "Cisco Security technology is central to the new holistic risk management solution and we are excited to bring another important benefit to our customers with greater options for cyber insurance."

Source: Apple official newsroom

0 to npm module, how to publish your open source code

In this article, we are going to explore how to publish our open source (OS) code to the npm registry as a module. npm is currently the largest software repository in the world, with over 600 thousand modules published (as of January 2018). It's used to publish everything from very small modules (one-line functions) to complete frameworks (like React) and even full P2P clients (like WebTorrent).

Introduction

Before we continue, this is the list of things we should have beforehand:

Node.js and npm installed (LTS recommended)
A GitHub account
An IDE (I'll be using WebStorm); a few options: Atom, Visual Studio Code, WebStorm (free for OS & education)
A terminal (I'll be using iTerm with zsh on macOS)

1. Creating an npmjs.com account

Our first step is to create an npm account. There are two ways of doing that: through the website, or through npm's CLI (which you should have installed already) with the command adduser. If you have already created your user, or you just did so through the website, you must now use npm login in the terminal to store your credentials; the adduser command should have stored them automatically (login is just an alias). To test what we just did, type npm whoami to ensure that our credentials were stored.

2. Creating our project

Our first step is to create a new GitHub repository. For this tutorial, I will be publishing a simple module which wraps async functions with debug (a logger) logging for express (a web framework). I'll call it express-debug-async-wrap for now. But is it available in the public scope? Just search for it, or go directly to the package page. If you search for a common word, there's a chance a module that has it in its name or keywords will show up; even then, your exact package name might still be free to take. The safe way is to go directly to this URL pattern: https://www.npmjs.com/package/<package name here>

Great, we can proceed with the name express-debug-async-wrap in mind. Note that we can use any name under our personal scope (@username/my-package), even if it's already taken in the public scope. Naming our modules is an important thing: recently, npm updated its naming rules to prevent name hijacking, and you can read more about it in this blog post. We can use lowercase letters, digits, and punctuation like dots, dashes, and underscores.

Creating the GitHub repository

With our name in mind, we continue to create the GitHub repository. On GitHub, go to the top right corner and click the "+" with an arrow by its side. We add our project's name and description, select "Initialize this project with a README", and finally add Node's .gitignore and the MIT License. You are free to select the license that best suits you; learn more here. Once we click on "Create repository", our repo is ready.

3. Adding our code

Cloning the GitHub repository

To add our code, we must first clone the GitHub repository to our machine.
To do so, click the green "Clone or download" button in the repo, copy the URL, and clone the repository on your machine using git.

Initializing the npm package

Once we have the repository, we "get into" it and initialize the directory with a package.json. To do so, we use npm init and fill in the blanks:

The version should start at 1.0.0 unless we are doing an alpha/beta release.
The default entry point index.js works for this first case, but keep in mind the entry point may be different, maybe in lib/ or dist/.
The test command is completely up to you; I use and recommend standard code style for linting, ava for tests, and nyc for code coverage (these must be installed as dev dependencies).
Leave git as default; it should be the GitHub repo we just created.
Add as many relevant tags as needed.
Match the license chosen in the repo.

The package.json contents will be shown and the CLI tool will ask for confirmation. In the IDE, package.json shows in red (under WebStorm) because it is untracked by git.

Creating index.js

We now continue with the important part, the code, in a new file called index.js (a plausible sketch appears at the end of this section). We must use exports or the real deal: module.exports. We can export anything: a function, an object, an array, a class, a string, etc. Note: to learn the difference between exports and module.exports, we can read more here.

Committing our files to git

Last, but not least, we must add our files to our git repository. A quick git status shows us the recently created package.json and index.js. In my case, because I'm using WebStorm, I also see a .idea folder, which is appended to the .gitignore file to be ignored. Once we've ignored the unnecessary files, we add everything and commit (the commands are sketched below, together with the publish step).

4. Publishing our module

Although our npm modules don't have to live in a public repository, I personally think they should, and they should match whatever is stored in the npm registry. So we tag this first version (new releases should be done with npm version, which automatically tags them), push to GitHub, push the tags, and finally publish to npm:
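First, a plausible sketch of index.js, inferred from the README's one-line description of the module (an async wrapper over a custom debug instance, returning 400 as the default error code); the author's actual code may differ:

```js
// index.js -- plausible sketch, not the author's exact code
module.exports = debug => fn => (req, res, next) => {
  fn(req, res, next).catch(err => {
    debug(err) // log through the injected debug instance
    res.status(err.status || 400).send(err.message || 'Bad Request')
  })
}
```

And the commit, tag, and publish sequence described above, as shell commands (exact flags assumed):

```sh
git add .
git commit -m "Initial module code"
git tag v1.0.0                  # tag the first version
git push origin master --tags   # push code and tags to GitHub
npm publish                     # publish to the npm registry
```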
That's it! Our module lives in the npm registry.

5. Completing our README

Our module can be used by anyone, but sadly, its page is empty: no explanation of how to use it, nothing. We should definitely add a few things: basic badges, how to install, how to use, related packages, and a footer stating our copyright and the license. For our module it looks like this:

# express-debug-async-wrap

[![npm](https://img.shields.io/npm/v/express-debug-async-wrap.svg?style=flat-square)]()
[![npm](https://img.shields.io/npm/dm/express-debug-async-wrap.svg?style=flat-square)]()
[![JavaScript Style Guide](https://img.shields.io/badge/code_style-standard-brightgreen.svg?style=flat-square)](https://standardjs.com)
[![npm](https://img.shields.io/npm/l/express-route-autoloader.svg?style=flat-square)](LICENSE)

express async wrapper that passes custom debug instance and returns 400 as default error code

## Install

```
npm i -S express-debug-async-wrap
or
npm install --save express-debug-async-wrap
```

## Use

Require and initialize with `debug` instance:

```js
const debug = require('debug')('backend:routes:doc')
const wrapper = require('express-debug-async-wrap')(debug)
const express = require('express')

const router = express.Router()

router.get('/', wrapper(async (req, res) => {
  await ...
  await ...
  res.send('OK')
}))

module.exports = router
```

## Related

- [express-route-autoloader](https://github.com/DiegoRBaquero/express-route-autoloader)
- [sequelize-express-findbyid](https://github.com/DiegoRBaquero/sequelize-express-findbyid)

## License

MIT Copyright © [Diego Rodríguez Baquero](https://diegorbaquero.com)

Once it's done, we commit our changes, create a patch version, push to GitHub, and publish to npm. That's it!

6. Final touches and homework

Important: add tests and set up CI! (Travis recommended.)
Add more badges: Travis build, security-related, etc.
Share your module on social media, forums, and IRC/Gitter channels.
Tell me how you did, and share your module with me.
Learn more about npm.

Building a CRUD application using Python and Django

Introduction

I've been meaning to write a series on Django, a web application framework written in Python. To follow this tutorial you don't need to be a pro in Python or know it inside-out; just the basics will get you through. Before we start writing applications, we should know a little about what Django is. Django is a web application framework that follows an MTV pattern rather than MVC. Think of it this way:

Model remains Model
View has been replaced by Template
Controller gets replaced by View

A simple hello-world application is just a few lines of code in Django! But moving from that simple thing to a full-fledged application can be a daunting task in itself. There are other concepts that help you proceed further, such as the ORM and migrations, which help in building a bigger application. But for this tutorial we'll be building a simple CRUD (Create, Retrieve, Update and Delete) application. To get started, you need Python and virtualenv installed on your machine. Python is already installed on most Linux systems, but you'll need to install virtualenv.

Application Structure

Before we actually start writing code, we need to get hold of the application structure. We'll first execute several commands that are essential in Django project development. After installing virtualenv, we set up and activate a virtual environment named venv, install Django v1.11.8, create a directory named app in the parent directory, and start a project named crudapp inside it. To test whether you are going in the right direction, run the development server; if it comes up, you're doing it right. Let's see what exactly the different files we created mean:

__init__.py: acts as an entry point for your Python project.
settings.py: describes the configuration of your Django installation and lets Django know which settings are available.
urls.py: used to route and map URLs to their views.
wsgi.py: contains the configuration for the Web Server Gateway Interface. WSGI is the Python platform standard for the deployment of web servers and applications.

Writing the Application

Now this is where we start coding our app. For this operation we'll consider blog posts as our entity and apply CRUD operations to them. The app in our project will be called blog_posts; creating it generates the files we require. First and foremost, create the Model of our application. Once the model is ready, migrate it to the database. Then create the Views, where each CRUD operation is defined, and map them to URLs in our /crudapp/blog_posts/urls.py file; note that these are our app-specific mappings. Finally, create the project-specific mappings in /crudapp/crudapp/urls.py. The whole sequence is sketched below.
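The commands and file contents referenced above are missing from this copy of the post; here is a hedged sketch of the whole sequence, assuming Django 1.11's url() routing and generic class-based views (field, view, and URL names are my own, not the author's):

```sh
pip install virtualenv
virtualenv venv                  # create the virtual environment
source venv/bin/activate         # activate it
pip install Django==1.11.8       # install Django
mkdir app && cd app              # directory layout assumed from the text
django-admin startproject crudapp
cd crudapp
python manage.py runserver       # sanity check: the dev server should start
python manage.py startapp blog_posts
```

```python
# blog_posts/models.py -- a minimal Post model (field names assumed)
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=200)
    content = models.TextField()

    def __str__(self):
        return self.title
```

```sh
python manage.py makemigrations blog_posts
python manage.py migrate         # migrate the model to the database
```

```python
# blog_posts/views.py -- CRUD via Django's generic class-based views
from django.urls import reverse_lazy
from django.views.generic import ListView, CreateView, UpdateView, DeleteView
from .models import Post

class PostList(ListView):
    model = Post
    template_name = 'blog_posts/post_list.html'

class PostCreate(CreateView):
    model = Post
    fields = ['title', 'content']
    template_name = 'blog_posts/post_form.html'
    success_url = reverse_lazy('post_list')

class PostUpdate(UpdateView):
    model = Post
    fields = ['title', 'content']
    template_name = 'blog_posts/post_form.html'
    success_url = reverse_lazy('post_list')

class PostDelete(DeleteView):
    model = Post
    template_name = 'blog_posts/post_delete.html'
    success_url = reverse_lazy('post_list')
```

```python
# blog_posts/urls.py -- app-specific mappings
from django.conf.urls import url
from . import views

urlpatterns = [
    url(r'^$', views.PostList.as_view(), name='post_list'),
    url(r'^new/$', views.PostCreate.as_view(), name='post_new'),
    url(r'^edit/(?P<pk>\d+)/$', views.PostUpdate.as_view(), name='post_edit'),
    url(r'^delete/(?P<pk>\d+)/$', views.PostDelete.as_view(), name='post_delete'),
]
```

```python
# crudapp/urls.py -- project-specific mappings
from django.conf.urls import url, include

urlpatterns = [
    url(r'^posts/', include('blog_posts.urls')),
]
```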
Now almost everything is done, and all that's left is to create our templates to test the operations. Go ahead and create a templates/blog_posts directory in crudapp/blog_posts/ and add three templates:

templates/blog_posts/post_list.html
templates/blog_posts/post_form.html
templates/blog_posts/post_delete.html

Now we have all the necessary files and code we require; the final project tree is the one shown above, with the blog_posts app and its templates directory added. Execute python manage.py runserver and voilà! You have your Django app ready.

Google collaborated with MobileIron to buy and sell apps in the cloud

Google has partnered with MobileIron, a once venture-backed company that went public in 2014, to open a digital store that will provide cloud-based software to organizations. Google says the partnership is intended to unite MobileIron's application distribution, security, and analytics abilities with Google Cloud's Orbitera commerce platform. The white-label offering will come with a handful of options, says Google: custom-designed bundles that let operators package products and services by customer segment; voice and third-party cloud products and services; customized branding for both customers and operators of the marketplace; and secure cloud access to make sure that only trusted users of authorized apps get access to the cloud products and services. Google acquired the startup Orbitera, which developed a platform for selling cloud-based software, in 2016 to enhance the way it competes with Microsoft, Salesforce, and Amazon's AWS, all of which offer their products and services in the cloud. Google said the partnership's goal is to help businesses more quickly and easily build an integrated marketplace where their trusted partners and customers can access apps, application authentication and entitlement, and streamlined billing, plus use Google Cloud Platform for storage and backend compute services. Through this partnership, enterprises, resellers, ISVs, and OEMs will get a flexible, scalable approach to mobile app procurement, delivery, and secure management through an integrated platform that supports many operating systems and devices.

Source: Google Official Blog

Writing maintainable code using state machines in Python

Writing backend systems

A backend contains data models, which are how your data looks: your Django models or database tables. More often than not, we run into a problem where the model behavior changes faster than the actual data model. Changing requirements or additional features make us change our data models to account for new features we introduce into the backend. Over time, we also remove some of the old features and fix some bugs, but the data models don't change very often alongside these changes. We then run into problems where features built upon old features break existing functionality in certain edge cases because a certain flag was not reset. Sounds familiar?

Writing stable backend systems

For example, take the following scenario: a payment is "initiated" by the user, we then "capture" the payment from the payment gateway, and then "complete" the payment process. Any payment now goes through three phases: started, captured, completed.

Solution 1 -- database flags

This can very easily be captured via a few flags: is_started, is_captured, is_completed. This solution is really straightforward, but it has a few obvious drawbacks. As the number of states in the system increases, it becomes increasingly difficult to handle edge cases and conditions; in a method like can_capture, we have to check that the proper flags are set. And the complexity of maintaining the model increases exponentially as the number of flags grows, since we have to check all our methods and see how they behave when each flag is set or reset.

Solution 2 -- using a separate column for state

Since in this example all the flags are used to indicate a single state of the system, we can just use one state variable, state, which can have a value of either started, captured or completed, and remove the separate flags. This is a much cleaner way of implementing the functionality, and we have gotten out of the situation where we need to check multiple flags. Since the object can only be in one state at a time, this approach greatly simplifies the system for us. This is awesome!

But now we are losing money because a few of our transactions are failing while contacting the payment gateway. Luckily, our payment gateway allows us to retry an incomplete transaction, so we can mark certain transactions as incomplete and retry them later by making a small change to our model (sketched below). Not very complicated. This solution allows much more flexibility than the previous one, because you don't need to alter any database tables if your states change in the future. But at the same time, it has a similar drawback: for every new state added, we have to update some of our existing methods to take the new incomplete state into account and make sure we are considering all the edge cases. Can we minimize the refactoring of existing logic and still implement the functionality? Every time we update an existing method, we have to update its tests and make sure there are no regressions and the system works as expected. Also note that this solution only works when the data cannot be in more than one state at the same time; otherwise, it is better to have separate columns for separate groups of logical states, as in the previous solution.
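A hedged sketch of what the Solution 2 model might look like, including the incomplete state added for retries (field and method names are assumptions, not the author's code):

```python
# a Django-model sketch of Solution 2: one state column instead of flags
from django.db import models

class Payment(models.Model):
    STATES = (
        ('started', 'Started'),
        ('captured', 'Captured'),
        ('completed', 'Completed'),
        ('incomplete', 'Incomplete'),  # added later for the retry feature
    )
    state = models.CharField(max_length=16, choices=STATES, default='started')

    def can_capture(self):
        # a single state column means one membership check per transition,
        # but this set must be revisited whenever a new state is added
        return self.state in ('started', 'incomplete')

    def capture(self):
        if not self.can_capture():
            raise ValueError('cannot capture payment in state %s' % self.state)
        self.state = 'captured'
        self.save()
```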
Data has different states

By now it should be clear that data and its behavior change frequently as business requirements or system status change over time. It is very easy to show data state changes in a diagram: adding the retry feature simply adds an incomplete state with a retry transition back into the flow. Is there a way to model our database model (no pun intended) the same way and still perform all of our business logic as before?

State machines crash course

For those not familiar with them, state machines (commonly known as "finite state automata" in computer science textbooks) have two major components:

States. Every state machine consists of a finite number of states, which are the states your data can be in. There are some special kinds of states. The start/initial state is the state your data begins in, and the state from which the machine starts. An end/final state is a state considered to be the end goal of the machine, after which no further transitions are possible; there can be multiple final states in a state machine.

Transitions. Transitions indicate how a state machine moves from one state to another. Every transition has a source and an end state. There may be transitions which don't change the state of the machine; these have the same source and end state.

When to use state machines

The complexity of a program increases as the number of flags in your model grows; if you have more than four flags being used for similar states, modeling them as a state machine is worth considering. If you are using an attribute to store the current state of the model, then the model can be treated as a state machine. If your business logic branches heavily on some of your model attributes, you can look for ways to store that data in your model and re-model it as a state machine to simplify the logic. And when there are complicated transactions in your codebase, we can run into issues because of distributed processing; if you are storing an intermediate state of a single or distributed transaction, you should definitely try to use state machines.

There are a lot of libraries in Python that implement a finite state machine. I will present an example using pytransitions, which is fairly easy to set up and extend.

Example using pytransitions

$ pip install transitions

Implementing the machine is sketched below. Notice that there is no additional code and there are no edge-case checks: you can use the capture() and complete() methods in the same way as in the examples above. The real beauty of state machines becomes evident when we have to make changes to them, for example implementing the retry mechanism. Our model logic then matches the state diagram exactly, we have not made any major changes to our existing functionality, and the new states and transitions are mostly independent of the existing ones.
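A minimal sketch of the machine using the pytransitions API; the state and trigger names come from the article's payment example, while the failure transition is my own guess at how a capture error would be modeled:

```python
# pip install transitions
from transitions import Machine

class Payment(object):
    """Plain model object; pytransitions attaches state and trigger methods."""
    pass

states = ['started', 'captured', 'completed', 'incomplete']

transitions = [
    {'trigger': 'capture', 'source': 'started', 'dest': 'captured'},
    {'trigger': 'complete', 'source': 'captured', 'dest': 'completed'},
    # retry mechanism: a failed capture marks the payment incomplete,
    # and retry() moves it back to started so capture() can run again
    {'trigger': 'fail', 'source': 'started', 'dest': 'incomplete'},
    {'trigger': 'retry', 'source': 'incomplete', 'dest': 'started'},
]

payment = Payment()
machine = Machine(payment, states=states, transitions=transitions,
                  initial='started')

payment.capture()     # started -> captured
payment.complete()    # captured -> completed
print(payment.state)  # 'completed'
# Calling an invalid trigger (e.g. capture() again here) raises a
# MachineError, so no manual flag checks are needed in the business logic.
```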
Conclusion

We have used the power of state machines to implement logic that is modular, robust, testable, self-documenting, and easy to maintain. It will require less refactoring for future updates, as long as the data has mutually exclusive states. We should bear in mind that even though state machines are powerful tools for certain kinds of problems, they are not a panacea for all your database modelling problems, and not all problems can be modeled as state machines; nonetheless, they are a good tool to have in your toolbox.

Best practices

Never use set_state or transition to a state directly in your logic; always use state transitions.
Make changes to your existing state machine and data models based on new requirements and features.

Extra reading

Pytransitions
Finite state machine

Create an API Using Flask in Python

In this article, we are going to learn how to create an API using Flask in Python. Before getting into coding the API, I would like to give a brief introduction to the Flask framework for web services and to what an API is.

Flask

Flask is a microframework for Python based on Werkzeug and Jinja2. Basically, it's a lightweight framework for creating web services in Python; where Django is heavy by comparison, Flask is super easy to set up and get going.

API (Application Programming Interface)

An API is code that allows two software programs to communicate with each other. It defines the correct way for a developer to write software that requests services from other applications. APIs are generally implemented as function calls, and an API needs to be documented after coding so that developers can read and understand what kind of operation each call performs. We are going to implement a RESTful API using Flask as an example. The REST architecture was designed to fit the HTTP protocol: resources are represented by URIs, and each resource may be changed or queried at the client's request.

Source: miguelgrinberg.com

Implementation

So we have a basic idea of what Flask and an API are. We will follow a project-based approach and create an API that returns all the image links from the Zeolearn magazine section. You guessed right: we will need to write a bit of scraping code, but for this exercise it's minimal. Instead of returning links, you can return any other resource as a response to a request.

The first snippet, sketched below, implements a simple API endpoint at the index URI; that is, if this web service is hosted at http://example.com and we call the endpoint '/', which is appended to the hosting URL to give http://example.com/, then we get the welcome message as output. The index() function is routed to '/' using app.route(<URI>, methods=<HTTP methods>); the URI is relative to the base URL. Routing is done with the app.route decorator, which simply maps a URL to a resource, and jsonify() turns the JSON output into a Response object with the application/json mimetype. We can also inspect the HTTP verb of the incoming request, and we can create Response objects for various MIME types.

Now we will implement a more complex resource URI: an endpoint that takes an integer from the request and returns that many image URLs in response. As mentioned above, we write a little scraping code to extract image URLs from the Zeolearn magazine section. First, route a function to the desired URI; for example, I am routing to '/magazine/photos/<number_of_images>'. Observe how number_of_images sits inside angle brackets: that's because it is a variable, and every variable part of a URL needs to be put inside angle brackets, as it changes with each client's request. The scraping code used in this example is really simple: I just request the magazine section of Zeolearn and extract all the image links from the img tags in the HTML page, using the BeautifulSoup module. Let's not dive too deep into scraping; the only thing to understand is that you obtain the required data that you want to send as the response, then jsonify it and send it.
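Pulling the pieces above together into one hedged sketch; the magazine URL and the scraping details are assumptions, and the article's actual code may differ:

```python
# app.py -- index endpoint plus the dynamic image-links endpoint
from flask import Flask, jsonify
import requests
from bs4 import BeautifulSoup

app = Flask(__name__)

@app.route('/')
def index():
    # simple endpoint at the index URI, returning JSON
    return jsonify({'message': 'Welcome to the Zeolearn image API'})

@app.route('/magazine/photos/<int:number_of_images>')
def photos(number_of_images):
    # scrape the magazine page and collect the src of every <img> tag
    page = requests.get('https://www.zeolearn.com/magazine')  # URL assumed
    soup = BeautifulSoup(page.text, 'html.parser')
    links = [img.get('src') for img in soup.find_all('img')]
    return jsonify({'images': links[:number_of_images]})

if __name__ == '__main__':
    app.run()
```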
We have seen GET requests in the examples above; now let's see how we handle POST requests. But before that, let's see how Wikipedia defines these methods. GET requests a representation of a specified resource. POST submits data to be processed (e.g., from an HTML form) to the identified resource; the data is included in the body of the request, and it may result in the creation of a new resource, the update of an existing resource, or both. Generally, the database is updated, or a new entry is created in it, with the information received in a POST request. (All the arguments used with the curl tool can be referred to here.)

In general, an API implementation gives the web service access to a database, which it queries based on the client's request. This is, in fact, one of the reasons the REST architecture is preferred. Suppose you want to build an application that stores user data somewhere, and the application has different clients: an Android app, a web app, an iOS app, and so on. Instead of creating a separate backend for each client, we prefer to create one API that serves the data to all of them. The API does this by exposing endpoints like the ones we created above: one at '/' and the other at '/magazine/photos/<number of images>'. Clients can make a request to any endpoint to get a response. Similarly, we can create other endpoints of the API and document them for developers to use in client-side applications. Now you can see why API documentation is necessary: other developers can look at it and understand what type of data each API endpoint requires and what it returns in response to a client's request.

To finish, the example below implements a POST method that checks whether the data received is JSON or plain text and returns the data after appending its type.
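A minimal sketch of such a handler; the /echo route name is assumed:

```python
# a hedged sketch of the POST handler described above (route name assumed)
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/echo', methods=['POST'])
def echo():
    if request.is_json:
        # JSON body: echo it back, labelled with its type
        return jsonify({'type': 'json', 'data': request.get_json()})
    # anything else is treated as plain text
    return 'text: ' + request.get_data(as_text=True)

if __name__ == '__main__':
    app.run()
```

It can be exercised with curl, for example: curl -H "Content-Type: application/json" -d '{"name": "flask"}' http://127.0.0.1:5000/echo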