Containers are a virtualization technology, but they do not virtualize a physical server. Instead, a container provides operating-system-level virtualization: containers share the kernel of the host operating system with one another and with the host itself.
Container Architecture
The figure shows the technical layers that enable containers. The bottommost layer provides the core infrastructure: network, storage, load balancers, and network cards. Above the infrastructure is the compute layer, consisting of either a physical server alone or virtual servers running on top of a physical server. This layer runs the operating system that hosts containers, and the operating system provides the execution driver that the layers above use to call kernel code and objects to run containers.
Docker is used to build, manage, and secure business-critical applications without fear of infrastructure or technology lock-in.
Docker provides management features for Windows containers. It comprises two executables: the Docker client (docker), a CLI that sends commands to the daemon, and the Docker daemon (dockerd), the workhorse that manages containers.
Kubernetes is a container orchestration system for running and coordinating containerized applications across clusters of machines. It also automates application deployment, scaling, and management.
Kubernetes Architecture
Docker Swarm is a tool for clustering and scheduling Docker containers. It manages a cluster of Docker nodes as a single virtual system, uses the standard Docker API to interface with other tools, and uses the same command line as Docker.
This can be understood with the following diagram:
It uses three different strategies (spread, binpack, and random) to determine which node each container should run on:
One needs to understand that Docker and Kubernetes are not competitors. The two systems provide closely related but separate functions.
| | Docker Swarm | Kubernetes |
|---|---|---|
| Container Limit | Limited to 95,000 containers | Limited to 300,000 containers |
| Node Support | Supports 2,000+ nodes | Supports up to 5,000 nodes |
| Scalability | Quick container deployment and scaling even in large clusters | Provides strong guarantees about cluster state at the expense of speed |
| Developed By | Docker Inc. | Google |
| Recommended Use Case | Small clusters, simple architectures, no multi-user support, small teams | Production-ready; recommended for any containerized environment, big or small; very feature-rich |
| Installation | Simple installation, but the resulting cluster is comparatively weaker | Complex installation, but a stronger cluster once set up |
| Load Balancing | Can automatically load-balance traffic between containers in the same cluster | Manual load balancing is often needed to balance traffic between containers in different pods |
| GUI | No dashboard, which makes management more complex | Has a built-in dashboard |
| Rollbacks | Automatic rollback available only in Docker 17.04 and higher | Automatic rollback with the ability to deploy rolling updates |
| Networking | Daemons are connected by overlay networks using the overlay network driver | An overlay network lets pods communicate across multiple nodes |
| Availability | Containers are restarted on a new host if a host failure is encountered | High availability; health checks are performed directly on the pods |
Let’s understand the differences category-wise for the following points:
Docker Swarm: It only requires two commands to set up a cluster: one on the manager and one on each worker.
To set up, open a terminal and SSH into the machine:
$ docker-machine ssh manager1
$ docker swarm init --advertise-addr <MANAGER-IP>
Kubernetes: In Kubernetes, there are five steps to set up or host the cluster.
Step 1: Run the commands to bring up the cluster.
Step 2: Define your environment.
Step 3: Define the pod network.
Step 4: Bring up the dashboard.
Step 5: The cluster can now be hosted.
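A rough sketch of these five steps using kubeadm (the pod CIDR, CNI manifest, and dashboard manifest are assumptions for a Flannel-based setup; adjust them for your environment):

```shell
# Step 1: bring up the control plane (CIDR shown is the Flannel default)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Step 2: point kubectl at the new cluster
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Step 3: install a pod network add-on (Flannel used here as an example)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Step 4: deploy the web dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Step 5: join worker nodes using the token printed by `kubeadm init`
# sudo kubeadm join <MASTER-IP>:6443 --token <token> --discovery-token-ca-cert-hash <hash>
```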
Docker Swarm: Docker Swarm is a command-line tool; no GUI dashboard is available. One needs to be comfortable with the console CLI to fully operate Docker Swarm.
Kubernetes: Kubernetes has a web-based user interface, the Kubernetes Dashboard, which can be used to deploy containerized applications to a Kubernetes cluster.
Docker Swarm: Users can encrypt container data traffic when creating an overlay network.
A lot happens under the hood in Docker Swarm container networking, which makes it easy to deploy production applications on multi-host networks. A node joining a swarm cluster gets an overlay network for services that spans every host in the swarm, plus a host-only Docker bridge network for containers.
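For example, an encrypted overlay network can be created and attached to a service like this (the network and service names are illustrative):

```shell
# Create an encrypted overlay network for swarm services
docker network create \
  --driver overlay \
  --opt encrypted \
  secure-net

# Attach a service to the encrypted network
docker service create --name web --network secure-net nginx
```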
Kubernetes: In Kubernetes, we create network policies that specify how pods interact with each other. The networking model is a flat network that allows all pods to interact with one another; it is typically implemented as an overlay. The model needs two CIDRs: one for services and one from which pods acquire IP addresses.
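For instance, a NetworkPolicy that only allows ingress traffic to certain pods from labeled peers might look like this (all names and labels are illustrative):

```yaml
# Allow ingress to pods labeled app=web only from pods labeled role=frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
```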
Docker Swarm: Since the release of Docker 1.12, orchestration is built in and can scale to as many instances as your hosts allow.
Following are the steps to follow:
Step 1: Initialize Swarm
Step 2: Creating a Service
Step 3: Testing Fault Tolerance
Step 4: Adding an additional manager to Enable Fault tolerance
Step 5: Scaling Service with fault tolerance
Step 6: Move services from a specific node
Step 7: Enabling and scaling to a new node
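A rough sketch of several of these steps (the image, service, and node names are illustrative; tokens come from `docker swarm join-token`):

```shell
# Step 1: initialize the swarm on the manager
docker swarm init --advertise-addr <MANAGER-IP>

# Step 2: create a replicated service (nginx used as an example image)
docker service create --name web --replicas 3 -p 80:80 nginx

# Step 5: scale the service; swarm reschedules tasks if a node fails
docker service scale web=6

# Step 6: drain a node so its tasks move elsewhere
docker node update --availability drain <NODE-NAME>

# Step 7: join a new worker using the worker join token
docker swarm join --token <TOKEN> <MANAGER-IP>:2377
```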
Kubernetes: In Kubernetes we have masters and workers. Kubernetes master nodes act as a control plane for the cluster. The deployment has been designed so that these nodes can be scaled independently of worker nodes to allow for more operational flexibility.
Docker Swarm: There is no easy way to do this with Docker Swarm for now; it does not support auto-scaling out of the box. A promising cross-platform autoscaler called “Orbiter” supports swarm-mode task auto-scaling.
Kubernetes: Kubernetes makes use of Horizontal Pod Autoscaling, which automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization.
This can be understood with the following diagram:
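For example, assuming a deployment named `web` already exists, an autoscaler can be created from the command line:

```shell
# Keep average CPU at 50%, scaling between 2 and 10 replicas
kubectl autoscale deployment web --cpu-percent=50 --min=2 --max=10

# Inspect the autoscaler's current state
kubectl get hpa
```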
Docker Swarm: We can scale up the number of service replicas running in the cluster, and even after scaling up we retain high availability.
Use the following command to scale up the service
$ docker service scale Angular-App-Container=5
In a Docker Swarm setup, if you do not want the manager to host application workloads and want it dedicated to managing processes, you can drain the manager.
$ docker node update --availability drain Manager-1
Kubernetes: There are two different approaches to setting up a highly available cluster with kubeadm (stacked etcd or an external etcd cluster). Both require infrastructure such as three machines for masters, three machines for workers, full network connectivity between all machines, sudo privileges on all machines, kubeadm and kubelet installed on the machines, and SSH access.
Docker Swarm: Suppose you have a set of services up and running in a swarm cluster and you want to upgrade their version.
The common manual approach is to put your website into maintenance mode.
To do this in an automated way through the orchestration tool, we can make use of Swarm’s rolling-update features:
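A sketch of a rolling update using these features (the service name, image tags, and update-policy values are illustrative):

```shell
# Create a service with an update policy baked in
docker service create --name web --replicas 4 \
  --update-parallelism 2 --update-delay 10s \
  nginx:1.24

# Roll out a new image version, two tasks at a time
docker service update --image nginx:1.25 web

# Roll back to the previous version if something goes wrong
docker service rollback web
```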
Kubernetes: To roll out or roll back a deployment on a Kubernetes cluster, use the following commands:
kubectl patch deployment "$DEPLOYMENT" \
  -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"site\",\"image\":\"$HOST/$USER/$IMAGE:$VERSION\"}]}}}}"
kubectl rollout status deployment/$DEPLOYMENT
kubectl rollout history deployment/$DEPLOYMENT
kubectl rollout history deployment/$DEPLOYMENT --revision 42
kubectl rollout undo deployment/$DEPLOYMENT
kubectl rollout undo deployment/$DEPLOYMENT --to-revision 21
Docker Swarm: Swarm’s internal routing mesh allows every node in the cluster to accept connections on any service port published in the swarm, routing all incoming requests to available nodes hosting the service with the published port.
With ingress routing, the load balancer can be set to use the swarm’s private IP addresses without concern for which node is hosting which service.
For consistency, the load balancer can be deployed on its own single-node swarm.
Kubernetes: The most basic type of load balancing in Kubernetes is load distribution, easy to implement at dispatch level.
The most popular and in many ways the most flexible method of load balancing is Ingress, which operates through a specialized controller running in a Kubernetes pod. It includes an Ingress resource, which contains a set of rules governing traffic, and a daemon that applies those rules.
The controller has an in-built feature for load balancing with some sophisticated capabilities.
The configurable rules contained in an Ingress resource allow very detailed and highly granular load balancing, which can be customized to suit both the functional requirements of the application and the conditions under which it operates.
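A minimal Ingress resource illustrating such rules (the host, service name, and port are illustrative) might look like:

```yaml
# Route all HTTP traffic for example.com to the "web" service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```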
Docker Swarm: Volumes are directories that are stored outside of the container’s filesystem and which hold reusable and shareable data that can persist even when containers are terminated. This data can be reused by the same service on redeployment or shared with other services.
Swarm is not as mature as Kubernetes here. Natively, it only has one type of volume, a volume shared between a container and its Docker host, which won’t suffice for a distributed application; it is only helpful locally.
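A sketch of the native approach (the volume and service names are illustrative):

```shell
# Create a named volume on the host
docker volume create app-data

# Mount it into a swarm service at /data
docker service create --name web \
  --mount type=volume,source=app-data,target=/data \
  nginx
```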
Kubernetes: At its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the volume type used.
There are many volume types, such as emptyDir, hostPath, configMap, secret, nfs, and persistentVolumeClaim.
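For example, a pod using an emptyDir volume that is accessible to its container (the names are illustrative):

```yaml
# Pod with an emptyDir volume mounted at /cache
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir: {}
```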
Docker Swarm: Swarm has two primary log destinations: the daemon log (events generated by the Docker engine) and container logs (generated by containers). It appends its own data to existing logs.
The following commands show logs on a per-container and per-service basis:
docker logs <container name>
docker service logs <service name>
Kubernetes: In Kubernetes, as requests get routed between services running on different nodes, it is often imperative to analyze distributed logs together while debugging issues.
Typically, three components make up a logging system in Kubernetes: a log collector agent running on each node (for example, Fluentd), a storage backend (for example, Elasticsearch), and a visualization layer (for example, Kibana).
A container packages software into standardized units for development, shipment, and deployment. It includes everything needed to run an application: code, runtime, system tools, settings, and system libraries. Containers are available for both Linux and Windows applications.
The following is the architecture diagram of a containerized application:
Benefits of Containers :
Container Use Cases :
One shouldn’t confuse container technology with virtual machine technology. A virtual machine runs on a hypervisor, whereas a container shares the host OS kernel and is much lighter than a virtual machine.
A container takes seconds to start, whereas a virtual machine might take minutes.
Build and Deploy Containers with Docker:
Docker launched in 2013 and revolutionized the application development industry by democratizing software containers. In June 2015, Docker donated the container specification and runc to the OCI (Open Container Initiative).
Manage containers with Kubernetes
Kubernetes (K8s) is a popular open-source container management system. It offers features such as traffic load balancing, scaling, rolling updates, scheduling, and self-healing (automatic restarts).
Features of Kubernetes :
Case Studies :
First, we must create a Docker image and push it to a container registry before referencing it in a Kubernetes pod.
There is a saying that Docker is like an airplane and Kubernetes is like an airport. You need both.
The Docker container platform is provided by the company Docker, Inc.
Following are the steps to package and deploy your application :
Step 1: Build the container image
docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
Verify that the build process was successful
docker images
Step 2: Upload the container image
docker push gcr.io/${PROJECT_ID}/hello-app:v1
Step 3: Run your container locally (optional)
docker run --rm -p 8080:8080 gcr.io/${PROJECT_ID}/hello-app:v1
Step 4: Create a container cluster
In case of google GCP
gcloud container clusters create hello-cluster --num-nodes=3
gcloud compute instances list
Step 5: Deploy your application
kubectl run hello-web --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port 8080
kubectl get pods
Step 6: Expose your application on the internet
kubectl expose deployment hello-web --type=LoadBalancer --port 80 --target-port 8080
Step 7: Scale up your application
kubectl scale deployment hello-web --replicas=3
kubectl get deployment hello-web
Step 8: Deploy a new version of your app.
docker build -t gcr.io/${PROJECT_ID}/hello-app:v2 .
Push image to the registry
docker push gcr.io/${PROJECT_ID}/hello-app:v2
Apply a rolling update to the existing deployment with an image update
kubectl set image deployment/hello-web hello-web=gcr.io/${PROJECT_ID}/hello-app:v2
Finally cleaning up using:
kubectl delete service hello-web
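The command above removes only the load balancer. Assuming the resource names used in the earlier steps, the remaining resources can be removed as well:

```shell
# Delete the deployment created in step 5
kubectl delete deployment hello-web

# Delete the GKE cluster itself
gcloud container clusters delete hello-cluster
```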
Kubernetes high-level component architecture.
Docker Swarm is easy to set up but has a smaller feature set, and one needs to be comfortable with the command line since there is no dashboard view. It is recommended when you have a small number of nodes to manage.
Kubernetes, however, has many features along with a GUI dashboard that gives a complete picture for managing containers, scaling, and finding errors. It is recommended for big, highly scalable systems that need thousands of pods to be functional.