Docker vs. Kubernetes


Docker vs. Kubernetes

Zeolearn Author
Blog
13th Mar, 2019


Containers are a virtualization technology; however, they do not virtualize a physical server. Instead, a container provides operating-system-level virtualization: all containers running on a host share that host's operating system kernel.

Container Architecture

The figure shows all the technical layers that enable containers. The bottommost layer provides the core infrastructure: network, storage, load balancers, and network cards. On top of the infrastructure is the compute layer, consisting of either a physical server or virtual servers running on top of a physical server. This layer contains the operating system with the ability to host containers. The operating system provides the execution driver that the layers above use to call kernel code and objects to execute containers.

What is Docker

Docker is used to build, manage, and secure business-critical applications without the fear of infrastructure or technology lock-in.

Docker provides management features for Windows containers. It comprises two executables:

  • Docker daemon
  • Docker client

The Docker daemon is the workhorse for managing containers.

Important features of Docker:

  • Application isolation
  • Swarm (Clustering and Scheduling tool)
  • Services
  • Security Management
  • Easy and fast configuration
  • Increased productivity
  • Routing Mesh

Why use docker for Development?

  • Easy Deployment.
  • Use any editor/IDE of your choice.
  • You can use a different version of the same programming language.
  • No need to install a bunch of language environments on your system.
  • The development environment is the same as in production.
  • It provides a consistent development environment for the entire team.

What is Kubernetes:

Kubernetes is a container orchestration system for running and coordinating containerized applications across clusters of machines. It also automates application deployment, scaling, and management.

Important features of Kubernetes :

  • Managing multiple containers as one entity
  • Container replication
  • Container Auto-Scaling
  • Security Management
  • Volume management
  • Resource usage monitoring
  • Health checks
  • Service Discovery
  • Networking
  • Load Balancing
  • Rolling Updates

 Kubernetes Architecture


What is Docker swarm:

Docker Swarm is a tool for clustering and scheduling Docker containers. It is used to manage a cluster of Docker nodes as a single virtual system. It uses the standard Docker application programming interface to interface with other tools, and it uses the same command line as Docker.

This can be understood with the following diagram :


It uses three different strategies to determine which node each container should run on:

  1. Spread
  2. BinPack
  3. Random

Important features of Docker Swarm:

  • Tightly integrated into docker ecosystem
  • Uses its own API
  • Filtering
  • Load Balancing
  • Service Discovery
  • Multi-host Networking
  • Scheduling system

How are Kubernetes and Docker Swarm related :

  • Both provide load balancing features.
  • Both facilitate quicker container deployment and scaling.
  • Both have a developer community for help and support.

Docker Swarm vs Kubernetes :

One needs to understand that Docker and Kubernetes are not competitors; the two systems provide closely related but separate functions.


| Category | Docker Swarm | Kubernetes |
| --- | --- | --- |
| Container limit | Limited to 95,000 containers | Limited to 300,000 containers |
| Node support | Supports 2,000+ nodes | Supports up to 5,000 nodes |
| Scalability | Quick container deployment and scaling, even in large clusters | Provides strong guarantees about cluster state at the expense of speed |
| Developed by | Docker Inc. | Google |
| Recommended use case | Small clusters, simple architectures, no multi-user, small teams | Production-ready; recommended for any type of containerized environment, big or small; very feature-rich |
| Installation | Simple installation, but the resulting cluster is comparatively not as strong | Complex installation, but a strong resulting cluster once set up |
| Load balancing | Capable of automatic load balancing of traffic between containers in the same cluster | Manual load balancing is often needed to balance traffic between containers in different pods |
| GUI | No dashboard, which makes management complex | Has an inbuilt dashboard |
| Rollbacks | Automatic rollback facility available only in Docker 17.04 and higher | Automatic rollback with the ability to deploy rolling updates |
| Networking | Daemons are connected by overlay networks; the overlay network driver is used | An overlay network is used, which lets pods communicate across multiple nodes |
| Availability | Containers are restarted on a new host if host failure is encountered | High availability; health checks are performed directly on the pods |


Let’s understand the differences category by category across the following points:

Installation/Setup:

Docker Swarm: It requires only two commands to set up a cluster: one at the manager level and another at the worker end.

Following are the commands; open a terminal and SSH into the machine:

$ docker-machine ssh manager1
$ docker swarm init --advertise-addr <MANAGER-IP>

On each worker, run the join command that swarm init prints:

$ docker swarm join --token <TOKEN> <MANAGER-IP>:2377

Kubernetes: In Kubernetes, there are five steps to set up or host the cluster.

Step 1: First, run the commands to bring up the cluster.

Step 2: Then define your environment.

Step 3: Define the Pod network.

Step 4: Then bring up the dashboard.

Step 5: Now, the cluster can be hosted.

GUI:

Docker Swarm: Docker Swarm is a command line tool. No GUI dashboard is available, so one needs to be comfortable with the console CLI to fully operate Docker Swarm.

Kubernetes: Kubernetes has a web-based user interface (the Kubernetes Dashboard). It can be used to deploy containerized applications to a Kubernetes cluster.

Networking

Docker Swarm: Users can encrypt container data traffic when creating an overlay network.

A lot happens under the hood in Docker Swarm container networking, which makes it easy to deploy production applications on multi-host networks. A node joining a swarm cluster generates an overlay network for services that spans every host in the swarm, plus a host-only Docker bridge network for containers.

Kubernetes: In Kubernetes, we create network policies which specify how the pods interact with each other. The networking model is a flat network, allowing all pods to interact with one another. The network is typically implemented as an overlay. The model requires two CIDRs: one for services and one from which pods acquire their IP addresses.
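As an illustrative sketch of such a policy (the labels, names, and port are hypothetical, not from the article), a NetworkPolicy that lets only frontend pods reach backend pods on TCP 8080 could look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend        # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: backend            # pods this policy applies to
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 8080
```

Note that enforcing such a policy requires a network plugin that supports NetworkPolicy, such as Calico.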

Scalability

Docker Swarm: Since the release of Docker 1.12, orchestration is built in, and it can scale to as many instances as your hosts allow.

Following are the steps to follow:

Step 1: Initialize the swarm

Step 2: Create a service

Step 3: Test fault tolerance

Step 4: Add an additional manager to enable fault tolerance

Step 5: Scale the service with fault tolerance

Step 6: Move services from a specific node

Step 7: Enable and scale to a new node


Kubernetes: In Kubernetes we have masters and workers. Kubernetes master nodes act as a control plane for the cluster. The deployment has been designed so that these nodes can be scaled independently of worker nodes to allow for more operational flexibility.

Auto-Scaling

Docker Swarm: There is no easy way to do this with Docker Swarm for now; it doesn’t support auto-scaling out of the box. A promising cross-platform autoscaler tool called “Orbiter” can be used, which supports swarm-mode task auto-scaling.

Kubernetes: Kubernetes makes use of the Horizontal Pod Autoscaler, which automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization.
This can be understood with the following diagram :
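As a hedged sketch of this behaviour (the deployment name and thresholds are hypothetical), a HorizontalPodAutoscaler resource targeting 70% average CPU could be declared as:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-web-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-web            # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # add pods when average CPU exceeds 70%
```

The autoscaler then adjusts the replica count between the min and max bounds as load changes.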

Availability

Docker Swarm: We can scale up the number of services running in our cluster, and even after scaling up we will be able to retain high availability.

Use the following command to scale up the service:

$ docker service scale Angular-App-Container=5

In a Docker Swarm setup, if you do not want your manager to participate in hosting workloads and want to keep it occupied only with managing processes, you can drain the manager so it hosts no applications:

$ docker node update --availability drain Manager-1

Kubernetes: There are two different approaches to setting up a highly available cluster using kubeadm, given the required infrastructure: three machines for masters, three machines for workers, full network connectivity between all machines, sudo privileges on all machines, kubeadm and kubelet installed on all machines, and SSH access.

  1. With stacked control plane nodes. This approach requires less infrastructure. The etcd members and control plane nodes are co-located.
  2. With an external etcd cluster. This approach requires more infrastructure. The control plane nodes and etcd members are separated.
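For the external-etcd topology, kubeadm accepts a cluster configuration file. A minimal sketch (the endpoint addresses and certificate paths are hypothetical):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "lb.example.com:6443"    # hypothetical load-balancer address
etcd:
  external:
    endpoints:
      - https://etcd1.example.com:2379         # hypothetical etcd members
      - https://etcd2.example.com:2379
      - https://etcd3.example.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```

Passed to kubeadm init --config, this keeps the control plane nodes separate from the machines running etcd.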

Rolling Updates and Roll Backs

Docker Swarm: Suppose you have a set of services up and running in a swarm cluster and you want to upgrade your services to a new version.

The common approach, if you do it manually, is to put your website into maintenance mode.
To do this in an automated way by means of the orchestration tool, we should make use of the following features available in Swarm:

  • Release a new version using the docker service update command.
  • Control update parallelism using the --update-parallelism and --rollback-parallelism flags.
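The same parallelism settings can also be declared in a version 3 stack file's deploy section. A minimal sketch (the service name and image tag are hypothetical):

```yaml
version: "3.8"
services:
  web:
    image: myapp:2.0           # hypothetical image tag being rolled out
    deploy:
      replicas: 4
      update_config:
        parallelism: 2         # update two tasks at a time
        delay: 10s             # wait between batches
      rollback_config:
        parallelism: 2         # roll back two tasks at a time
```

Deployed with docker stack deploy, Swarm then applies updates in batches of two tasks rather than all at once.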

Kubernetes: To roll out or roll back a deployment on a Kubernetes cluster, use the following steps:

  • Roll out a new version
    kubectl patch deployment "$DEPLOYMENT" \
      -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"site\",\"image\":\"$HOST/$USER/$IMAGE:$VERSION\"}]}}}}"
  • Check the rollout status
    kubectl rollout status deployment/$DEPLOYMENT
  • Read the Deployment history
    kubectl rollout history deployment/$DEPLOYMENT
    kubectl rollout history deployment/$DEPLOYMENT --revision 42
  • Rollback to the previously deployed version
    kubectl rollout undo deployment/$DEPLOYMENT
  • Rollback to a specific previously deployed version
    kubectl rollout undo deployment/$DEPLOYMENT --to-revision 21
    
    

Load Balancing

Docker Swarm: The swarm’s internal networking mesh allows every node in the cluster to accept connections on any service port published in the swarm, routing all incoming requests to available nodes hosting a service with the published port.

With ingress routing, the load balancer can be set to use the swarm’s private IP addresses without concern for which node is hosting which service.

For consistency, the load balancer will be deployed on its own single node swarm. 

Kubernetes: The most basic type of load balancing in Kubernetes is load distribution, which is easy to implement at the dispatch level.
The most popular, and in many ways the most flexible, method of load balancing is Ingress, which operates by means of a specialized controller running in a Kubernetes pod. It includes an Ingress resource, which contains a set of rules governing traffic, and a daemon which applies those rules.

The controller has an in-built feature for load balancing with some sophisticated capabilities.

The configurable rules contained in an Ingress resource allow very detailed and highly granular load balancing, which can be customized to suit both the functional requirements of the application and the conditions under which it operates.
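A minimal Ingress sketch showing such rules (the host and service names are hypothetical; recent clusters use the networking.k8s.io/v1 API, while clusters contemporary with this article used extensions/v1beta1):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress            # hypothetical name
spec:
  rules:
    - host: example.com        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-web   # hypothetical backing service
                port:
                  number: 80
```

An Ingress controller (such as the NGINX Ingress Controller) must be running in the cluster for these rules to take effect.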

Data Volumes:

Docker Swarm: Volumes are directories that are stored outside of the container’s filesystem and which hold reusable and shareable data that can persist even when containers are terminated. This data can be reused by the same service on redeployment or shared with other services.

Swarm is not as mature as Kubernetes in this respect. It natively has only one type of volume, shared between a container and its Docker host, but that won’t do the job in a distributed application; it is only helpful locally.

Kubernetes: At its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the volume type used.

There are many volume types :

  • Local
  • Node-hosted volumes (emptyDir, hostPath, etc.)
  • Cloud-hosted
    • gcePersistentDisk (Google Cloud)
    • awsElasticBlockStore (Amazon Cloud – AWS)
    • azureDisk (Microsoft Cloud – Azure)
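As a sketch of the node-hosted type (the pod and image names are hypothetical), an emptyDir volume shared by a pod's containers is declared like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo            # hypothetical pod name
spec:
  containers:
    - name: app
      image: nginx             # hypothetical image
      volumeMounts:
        - name: cache
          mountPath: /cache    # where the volume appears in the container
  volumes:
    - name: cache
      emptyDir: {}             # scratch space created with, and deleted with, the pod
```

Cloud-hosted types such as gcePersistentDisk swap out the emptyDir entry for a reference to the backing disk.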

Logging and Monitoring

Docker Swarm: Swarm has two primary log destinations: the daemon log (events generated by the Docker service) and container logs (generated by containers). Swarm appends its own data to existing logs.

The following commands can be used to show logs on a per-container as well as a per-service basis.

  1. Per Container :
    docker logs <container name>
  2. Per Service:
    docker service logs <service name>

Kubernetes: In Kubernetes, as requests get routed between services running on different nodes, it is often imperative to analyze distributed logs together while debugging issues.

Typically, three components make up a logging system in Kubernetes :

  1. Log Aggregator: It collects logs from pods running on different nodes and routes them to a central location. It should be efficient, dynamic and extensible.
  2. Log Collector/Storage/Search: It stores the logs from log aggregators and provides an interface to search logs as well. It also provides storage management and archival of logs.
  3. Alerting and UI: The key feature of log analysis for distributed applications is visualization. A good UI with query capabilities and custom dashboards makes it easier to navigate through application logs, correlate events, and debug issues.

Containers

A container packages software into a standardized unit for shipment, development, and deployment. It includes everything needed to run an application: code, runtime, system tools, settings, and system libraries. Containers are available for both Linux and Windows applications.

Following is the architecture diagram of a containerized application:

Benefits of Containers :

  • Greater Efficiency
  • Better Application Development
  • Consistent Operation
  • Minimal Overhead
  • Increased Portability

Container Use Cases :

  • Support for microservices architecture
  • Easier deployment of repetitive jobs and tasks
  • DevOps support for CI(Continuous Integration) and CD(Continuous Deployment)
  • Refactoring existing applications for containers
  • “Lift and shift” of existing applications to the cloud

Containers vs Virtual Machines :

One shouldn’t confuse container technology with virtual machine technology. A virtual machine runs on a hypervisor, whereas a container shares the host OS kernel and is far lighter in size than a virtual machine.

A container takes seconds to start, whereas a virtual machine might take minutes.

Difference between Container and Virtual Machine Architecture :

Build and Deploy Containers with Docker:
Docker launched in 2013 and revolutionized the application development industry by democratizing software containers. In June 2015, Docker donated its container specification and runc to the OCI (Open Container Initiative).

Manage containers with Kubernetes
Kubernetes (K8s) is a popular open-source container management system. It offers some unique features such as traffic load balancing, scaling, rolling updates, scheduling, and self-healing (automatic restarts).

Features of Kubernetes :

  • Automatic binpacking
  • Self-Healing
  • Storage Orchestration
  • Secret and Configuration Management
  • Service discovery and load balancing
  • Automated rollouts and rollbacks
  • Horizontal Scaling
  • Batch Execution

Case Studies :

  • IBM offers a managed Kubernetes container service and image registry to provide a fully secure, end-to-end platform for its enterprise customers.
  • NAIC leverages Kubernetes, which helps its developers create rapid prototypes far faster than they used to.
  • Ocado Technology leverages Kubernetes, which helps speed up the idea-to-implementation process; features now go from development to production in a week. Kubernetes also gives their team the ability to do more fine-grained resource allocation.

First, we must create a Docker image and push it to a container registry before referencing it in a Kubernetes pod.
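The image itself is defined by a Dockerfile. A hypothetical minimal one for a Go web server listening on port 8080 (the base image and build steps are illustrative, not from the source) might look like:

```shell
# Write a minimal, illustrative Dockerfile for the hello-app example.
cat > Dockerfile <<'EOF'
FROM golang:1.21-alpine
WORKDIR /app
COPY . .
RUN go build -o hello-app .
EXPOSE 8080
CMD ["./hello-app"]
EOF
```

With this file in the current directory, the docker build command in Step 1 below produces the image to push.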

Using Docker with Kubernetes:

There is a saying that Docker is like an airplane and Kubernetes is like an airport. You need both.

The container platform is provided by a company called Docker.

The following steps package and deploy your application:

Step 1: Build the container image

docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .

Verify that the build process was successful

docker images

Step 2: Upload the container image

docker push gcr.io/${PROJECT_ID}/hello-app:v1

Step 3: Run your container locally (optional)

docker run --rm -p 8080:8080 gcr.io/${PROJECT_ID}/hello-app:v1

Step 4: Create a container cluster (example shown for Google Cloud Platform – GCP)

gcloud container clusters create hello-cluster --num-nodes=3
gcloud compute instances list

Step 5: Deploy your application (note: on newer kubectl versions, kubectl run creates a bare pod rather than a deployment; use kubectl create deployment instead)

kubectl run hello-web --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port 8080
kubectl get pods

Step 6: Expose your application on the internet

kubectl expose deployment hello-web --type=LoadBalancer --port 80 --target-port 8080

Step 7: Scale up your application

kubectl scale deployment hello-web --replicas=3
kubectl get deployment hello-web

Step 8: Deploy a new version of your app

docker build -t gcr.io/${PROJECT_ID}/hello-app:v2 .

Push the image to the registry:

docker push gcr.io/${PROJECT_ID}/hello-app:v2

Apply a rolling update to the existing deployment with the image update:

kubectl set image deployment/hello-web hello-web=gcr.io/${PROJECT_ID}/hello-app:v2
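After triggering the update, the rollout can be watched and, if needed, reverted. A quick sketch using the standard kubectl rollout subcommands:

```shell
# Watch the rolling update until all replicas run the new image.
kubectl rollout status deployment/hello-web
# Inspect past revisions of the deployment.
kubectl rollout history deployment/hello-web
# If v2 misbehaves, revert to the previous revision.
kubectl rollout undo deployment/hello-web
```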

Finally, clean up using:

kubectl delete service hello-web
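Deleting the service removes the exposed load balancer, but the cluster created in Step 4 keeps billing for its nodes. Assuming the GKE cluster from above, a fuller cleanup would be:

```shell
# Remove the Service (and its cloud load balancer) and the deployment.
kubectl delete service hello-web
kubectl delete deployment hello-web
# Delete the GKE cluster itself so its nodes stop incurring charges.
gcloud container clusters delete hello-cluster
```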

Kubernetes high-level component architecture.

Conclusion

Docker Swarm is easy to set up but has a smaller feature set, and one needs to be comfortable with the command line in place of a dashboard view. It is recommended when you have a smaller number of nodes to manage.

Kubernetes, however, has a rich feature set and a GUI-based dashboard that gives a complete picture for managing your containers, scaling, and finding errors. It is recommended for large, highly scalable systems that need thousands of pods to be functional.

Zeolearn Author

Senior Project Manager
