Docker enhances the native Linux containerization technologies, which enables the following:
Dockerfile - every container image has one; it states the instructions for building the image and consists of command-line instructions;
Docker images - contain the executable source code plus the libraries and dependencies the code needs; they are read-only files. A Docker image can be built from scratch, but most developers reuse common ones from repositories;
Docker containers - the live, running instances of a Docker image; they are ephemeral, and users can interact with them, run commands, and adjust settings using docker commands;
Docker registry - an open source system for storing and distributing Docker images, with git-like version control.
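The components above fit together in a simple workflow: a Dockerfile is built into an image, the image is run as a container, and the image can be pushed to a registry. A minimal sketch, assuming a small Python web app (the names `myapp`, `app.py`, and the registry URL are illustrative, not from the source):

```dockerfile
# Start from a common base image pulled from a repository
FROM python:3.12-slim

# Copy the application code and install its dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Command-line instruction that runs when the container starts
CMD ["python", "app.py"]
```

Typical lifecycle commands would then be: `docker build -t myapp .` (Dockerfile → image), `docker run --rm myapp` (image → running container), and `docker tag myapp registry.example.com/myapp:1.0` followed by `docker push registry.example.com/myapp:1.0` (image → registry).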
Think of a container like a virtual machine, but:
Where containers differ from VMs is that they take advantage of the Linux kernel, which comes with process isolation and virtualization capabilities.
These capabilities are control groups (allocating resources) and namespaces (restricting a process's access to other areas of the system).
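These kernel features can be observed directly, without Docker. A hedged sketch using the standard `unshare` utility from util-linux and the cgroup v2 filesystem (assumes a Linux host with root privileges; the `demo` cgroup name is an illustrative assumption):

```shell
# Namespaces: start a shell in new PID and mount namespaces.
# Inside it, `ps` sees only its own process tree (process isolation).
sudo unshare --pid --fork --mount-proc /bin/sh -c 'ps aux'

# Control groups: cap memory for a group of processes (cgroup v2 layout).
sudo mkdir /sys/fs/cgroup/demo
echo $((64 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/demo/memory.max
# Any process whose PID is written to /sys/fs/cgroup/demo/cgroup.procs
# is now limited to 64 MB of memory.
```

Docker combines these two mechanisms (plus others, such as layered filesystems) to give each container its own isolated view of the system with bounded resources.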
To extend the shipping analogy: each container has its own designated place on the ship and its own lock, which keeps everything inside it secure.
Containers don’t come with the weight of a full OS image and a hypervisor.
Containers include only OS processes and dependencies necessary to execute the code inside them.
Containers occupy less space (MBs, not GBs), which means you can run multiple copies of the same application on the same hardware.
Containers are faster and easier to deploy, provision and restart. This makes them ideal for CI/CD pipelines and a better fit for Agile development teams.
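As an illustration of running several copies of the same application on one machine, a sketch with the docker CLI (the image name `myapp` and the port numbers are assumptions; this requires a running Docker daemon):

```shell
# Three instances of the same image, each with its own isolated
# filesystem and network namespace, mapped to different host ports.
docker run -d --name web1 -p 8081:80 myapp
docker run -d --name web2 -p 8082:80 myapp
docker run -d --name web3 -p 8083:80 myapp

# List the running copies; each adds only MBs of overhead, not GBs.
docker ps
```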
On the same host: multiple containers can be defined and run together (e.g., with Docker Compose).
On different hosts: you need container orchestration tools.
Docker is an open source containerization toolkit that enables developers to package applications into containers.
As the name suggests, Docker is like a container ship: it provides the means to sort, link, manage and ship containers from a development setup into production.
A Docker Swarm is a cluster comprised of a group of physical and virtual machines running the Docker Engine; these machines are called nodes.
One of the nodes is elected Leader using the Raft consensus algorithm. It makes all the orchestration decisions for the swarm. In case of failure, a new leader is automatically elected.
Manager nodes assign tasks to Worker nodes and handle other managerial duties. You want an odd number of managers for reliability purposes; having too many managers can degrade performance, so it is recommended to have at most 7.
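A minimal sketch of setting up such a swarm with the docker CLI (the angle-bracket values are placeholders; `nginx` is used here only as a convenient example image):

```shell
# On the first manager node: initialize the swarm.
# This prints the join token that workers use below.
docker swarm init --advertise-addr <MANAGER-IP>

# On each worker node: join the swarm using that token.
docker swarm join --token <WORKER-TOKEN> <MANAGER-IP>:2377

# From a manager: create a replicated service; the leader
# schedules its tasks across the worker nodes.
docker service create --name web --replicas 3 -p 80:80 nginx
docker service scale web=5

# Inspect the cluster: lists nodes and their manager status.
docker node ls
```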