
Docker Swarm: Creating a Master-Slave Cluster for Container Deployment Using Docker Stack

To launch a container, the minimum requirements are hardware resources and an operating system. If we run a website in a container, it is completely dependent on the host system's hardware, and if that operating system goes down, the entire website goes down. Here we can say our operating system is a single point of failure.
👉Adding more resources (CPU/RAM/HDD) to one operating system is called vertical scaling. One of the limitations of vertical scaling is downtime.
👉Adding additional machines to our infrastructure is called horizontal scaling.
👉In a master-slave architecture, one device or system (the master) controls the other devices (the slaves).
Master-slave cluster setup

🚀Step 1:- Launching 3 Amazon Linux instances on the AWS cloud

🚀Step 2:- Install Docker on all 3 instances.
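
On Amazon Linux, a typical installation looks like this (the package name can vary by AMI version, so treat this as a sketch):

sudo yum install -y docker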

🚀Step 3:- Start the Docker service. Don't forget to start the service on the other nodes as well.
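
A common way to start the service, and enable it on boot, on each node:

sudo systemctl start docker
sudo systemctl enable docker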

🚀Step 4:- ON MASTER: edit the inbound rules, or create a new rule, in the security group section of your instance. Docker Swarm needs TCP port 2377 (cluster management), TCP and UDP port 7946 (node-to-node communication), and UDP port 4789 (overlay network traffic) open between the nodes.

🚀Step 5:- Ping from the slave/worker node to the master node (connectivity check).
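
For example, from a worker node (replace the placeholder with your master's private IP):

ping <master_private_ip>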

🚀Step 6:- ON MASTER node: initialize the cluster.

👨‍💻Command

docker swarm init --advertise-addr <master_ip>

🚀Step 7:- Listing Nodes in the cluster

👨‍💻Command

docker node ls

🚀Step 8:- To add a worker node, Docker prints a pre-generated join command at the time the cluster is initialized.

🚀Step 9:- Copy the join command (including the token) and paste it into your worker nodes.

Run the command on both remaining instances to make them worker/slave nodes.
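
The join command has this general shape (the actual token is generated by your own docker swarm init output):

docker swarm join --token <worker_token> <master_ip>:2377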

Now, listing the nodes in the cluster:

👨‍💻ON Master

docker node ls

If we run the docker info command on the master and worker nodes, we can see that Swarm is active and the manager is managing the worker nodes.

docker info

We can launch a container with:
👉The docker run command
👉The docker compose command
👉A Swarm cluster
The Docker engine provides the container runtime through a program called runc.
By default, Docker networking is set up so that all containers running on the same Docker host can connect to each other. This kind of networking is called the bridge network.
One issue with this is that containers running on different hosts have no connectivity to each other.
In a multi-tier architecture, our different services run in containers spread across different hosts.
The overlay network and the overlay driver have the capability to work as a single network across distributed hosts.
As soon as we create a Swarm cluster, it automatically creates an overlay network for us, and as we join more nodes, the distributed network expands automatically.
The master node also works as a worker node.
If we list the networks on a node, we can see that Swarm has created the overlay network.
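
👨‍💻Command

docker network ls

Look for a network named ingress using the overlay driver with swarm scope; Swarm creates it automatically when the cluster is initialized.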

When working with Docker Compose, Docker Swarm, and Docker Stack together, it's important to note that Docker Compose has a limitation: it can only run on a single node or system. To run across multiple nodes, Docker Swarm is used, and Docker Stack is the tool that takes a Docker Compose file and launches it within the Docker Swarm.

Docker Stack is a command-line tool provided by Docker that allows users to deploy a Docker Compose file to a Docker Swarm cluster.

Creating the app.py file.

You can use any editor: nano, vim, or whatever you are comfortable with.

👨‍💻Commands

mkdir /code
cd /code
vim app.py
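
The file contents appeared as a screenshot in the original post; below is a minimal sketch of a typical Flask + Redis hit counter consistent with the web and redis services defined later. The redis hostname and port 8000 are assumptions based on the compose file described below.

from flask import Flask
import redis

app = Flask(__name__)
# "redis" resolves to the redis service on the Docker network (assumption)
cache = redis.Redis(host='redis', port=6379)

@app.route('/')
def hello():
    # Increment a hit counter in Redis on every request
    count = cache.incr('hits')
    return f'Hello from Docker Swarm! I have been seen {count} times.\n'

if __name__ == '__main__':
    # Listen on all interfaces so the published port is reachable
    app.run(host='0.0.0.0', port=8000)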

Creating requirement.txt

vim requirement.txt
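
A minimal requirement.txt matching the sketch above would list just the two libraries (unpinned versions are an assumption; pin as needed):

flask
redis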

Create a file called Dockerfile
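
The Dockerfile contents were also shown as an image; here is a minimal sketch that fits the files above (the base image and commands are assumptions):

FROM python:3.9-alpine
WORKDIR /code
# Install the Python dependencies first to cache this layer
COPY requirement.txt .
RUN pip install -r requirement.txt
# Copy the application code
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]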

Now create the image from the Dockerfile.

In my case, I am giving the image the name mypy:v1.

docker build -t mypy:v1 .

Install docker-compose first
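
One common way to install it on Linux (version 1.29.2 is shown only as an example; check the Docker Compose releases page for the latest):

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose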

Now create a file called docker-compose.yml

vim docker-compose.yml
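
The compose file itself was shown as a screenshot; here is a minimal version 3 sketch consistent with the description below (the image name mypy is an assumption):

version: "3"
services:
  web:
    # Tagged for a local registry so every swarm node can pull it
    image: 127.0.0.1:5000/mypy
    build: .
    ports:
      - "8000:8000"
  redis:
    image: redis:alpine

Note that docker stack deploy ignores the build: key; the image must be built and pushed to a registry that all nodes can pull from, which is why it is tagged for 127.0.0.1:5000.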

In this docker-compose file, two services are defined: web and redis.

web: This service uses an image tagged for a private registry located at 127.0.0.1:5000. The build context is set to the current directory (.), so the service can be built from the local Dockerfile. The service is exposed on port 8000 on both the host and the container, enabling access to the web application.

redis: This service uses the official redis:alpine image, pulled from Docker Hub. It sets up a Redis server that other services within the Docker network can access.

Now, if you run the docker-compose file, it will work correctly; however, the containers it launches run only on the local node. To deploy our containers across the Docker Swarm while still using the docker-compose file, we use the docker stack command.

docker stack deploy --compose-file docker-compose.yml mypyapp

The docker stack ls command is used to list the deployed stacks.

docker stack ls

Docker Stack also has the capability to deploy to Kubernetes (K8s) 🚢.

Accessing the website from the browser. To access the website, you can use the public IP of either the master node or a worker node on port 🌐8000; thanks to Swarm's ingress routing mesh, the published port is reachable on every node in the cluster.

http://<public ip>:8000

The docker service ls command is used to list the services running within a Docker Swarm.

docker service ls

✨Summary:

This blog discusses the process of creating a master-slave cluster using Docker Swarm to deploy containers. It highlights the importance of overcoming single points of failure and scaling limitations in containerized applications. The blog covers the step-by-step procedure, from launching Amazon Linux instances on the AWS cloud to utilizing Docker Compose and Docker Stack for professional deployment. It also provides insights into networking challenges and the benefits of overlay networks in a distributed host environment.

All the code used in this blog: 🔗sachin12msd/Docker_Swarm (github.com)

📩Email:- sachin12msd@gmail.com
🔗LinkedIn:-
https://www.linkedin.com/in/sachin-digarse/

Thank you for reading this blog. I hope you liked it. If you have any queries or suggestions, do let me know in the comment section. 😊
