“Docker” the Container-Based Application

Amalanathan Thushanthan
Published in DevOps.dev · May 31, 2022

What is Docker?

Docker is a software platform for developing and running applications in containers: small, lightweight execution environments that share the host operating system's kernel but run in isolation from one another.

Let's break that definition down and learn more about containers.

To begin, we need to understand why container-based development has become so popular.

Traditional software deployment is straightforward: we have a machine with hardware and a specific operating system installed. On top of the operating system, we install our program and its libraries. Boom, it works perfectly. But the machine has lots of unused resources (CPU, RAM) beyond what that one application needs. And if we want to make the application highly available, we must buy another machine and repeat the whole setup. It costs a lot and consumes a lot of time.

The age of virtualization began in order to solve these issues. We have a computer or server with specific hardware and an operating system. A virtualization layer (a hypervisor, e.g. Oracle VirtualBox) sits on top of the operating system, virtualizes the host's resources (CPU, RAM, and network), and divides them into virtual machines. We then install a full-blown operating system on each virtual machine, install the libraries on top of that virtual OS, and run the application just as in a traditional deployment.

[Figure: Virtual machine architecture]

Installing a full-blown operating system in each virtual machine takes up a lot of space, and whenever we want to use the application, we first need to boot the virtual machine's OS and then launch the program. Both cost extra space and time.

This is where container-based applications come into play and have become famous. The basic structure is the same: we have a computer or server with specific hardware and an operating system. But instead of a virtualization layer, we have a container runtime engine. On top of it, we create containers, each with its own operating system environment, and run the application inside that environment.

[Figure: Docker architecture]

What? Are we going to install an operating system again inside the container? And what, then, is the difference between a virtual machine and a container?

Yes, inside the container we are working with an operating system, but not a full-blown one. The container shares the host operating system's underlying kernel and adds only the essential operating-system features on top. Let's keep it simple: suppose a Docker container runs Ubuntu. Ubuntu's kernel is Linux. When Docker creates the container, it reuses the host's Linux kernel and layers the Ubuntu user-space features over it. The container needs only around 100 MB for its OS layer, whereas a full virtual machine needs at least 2 GB just for its operating system.
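
A quick way to see this kernel sharing in practice, assuming a Linux host with the ubuntu image available:

> uname -r                          # kernel release on the host
> docker run --rm ubuntu uname -r   # the same kernel release, reported from inside a container

Both commands print the same kernel version, because the container reuses the host's kernel instead of booting its own.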

What is a Docker image and why do we need it?

A Docker image is a file containing a collection of instructions for constructing a Docker container. The image specifies, among other things, the operating system the container will use and the list of commands that install the basic functionality needed to run the service inside that container. When we run a Docker image, Docker creates a container from it in which we can run our application.
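
We can view this layered list of instructions for any image already on the machine, for example the ubuntu image (assuming it has been pulled):

> docker history ubuntu

Each row in the output corresponds to one instruction that was used to build the image.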

What is Docker Hub?

Docker Hub is a large-scale image registry service run by the Docker team, with thousands of pre-built public and private images that make our jobs simpler. It also helps with managing and sharing container images across our team.

Let's get to know some basic commands.

Installing Docker

System requirements:

  • 64-bit kernel and CPU support for virtualization
  • 4GB system RAM

Find more: Docker system requirements

Installation:

Docker Desktop provides a graphical interface for most operating systems to make our work easier.
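
On Ubuntu, a minimal command-line installation is sketched below, using the docker.io package from Ubuntu's own repositories (installing from Docker's official apt repository is an alternative described in the Docker docs):

> sudo apt-get update
> sudo apt-get install -y docker.io
> sudo systemctl enable --now docker   # start the Docker daemon and enable it at boot
> sudo usermod -aG docker $USER        # optional: run docker without sudo (log out and back in first)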

Check the installation of Docker

Check the version

> docker -v

Check that Docker is working with Docker Hub

> docker run hello-world

Useful commands

List Docker images

> docker images 

List running Docker containers

> docker container ls
List all containers (including stopped ones)
> docker container ls -a

Start and attach to a Docker container

> docker start <container name>
> docker attach <container name or container id>
OR
> docker start -ai <container name>

Run a Docker image

When we exit the shell started by run, the entire container exits.
> docker run -it <image name> /bin/bash
exec starts an extra shell inside an already-running container; when we exit that shell, the container itself keeps running.
> docker exec -it <container name> /bin/bash
Add a new container with a chosen name
> docker run -it --name <Container Name> <Image Name> /bin/bash
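
To make the run/exec difference concrete (demo is a hypothetical container name):

> docker run -it --name demo ubuntu /bin/bash   # typing exit in this shell stops the container
> docker start demo                             # restart it in the background
> docker exec -it demo /bin/bash                # typing exit here leaves demo running
> docker container ls                           # demo still shows as running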

Remove a Docker image

> docker image rm <image id>

Remove Docker containers

> docker container rm <container id>
Remove all containers
> docker container rm $(docker container ls -a -q)
Remove all stopped containers
> docker container prune
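
Note that rm only works on stopped containers; stop a running container first, or force-remove it with -f:

> docker stop <container name>
> docker container rm <container name>
OR
> docker container rm -f <container name>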

Exit from a Docker container

From the shell prompt inside the container (# is the container's root prompt, not part of the command):
# exit

Get Docker container statistics (CPU usage, memory, etc.)

> docker stats <Container name>
Instant (one-shot) statistics
> docker stats --no-stream <Container name>
Instant statistics for all containers
> docker stats --no-stream --all

Get all information about a container

> docker inspect <Container name>
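
inspect prints a large JSON document. If only one field is needed, the --format flag takes a Go template that extracts it, for example the container's IP address on the default bridge network:

> docker inspect --format '{{.NetworkSettings.IPAddress}}' <Container name>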

Search pre-built public images on Docker Hub

> docker search ubuntu

Pull pre-built public images from Docker Hub

> docker pull ubuntu

Upload locally created images to Docker Hub

> docker build -t <image name> .   # create the image (. = build context, the directory containing the Dockerfile)
> docker push <image name>         # push the image to Docker Hub
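
For the push to succeed, the image name has to include your Docker Hub username, and you must be logged in first. A quick sketch, where <username>/myapp:1.0 is a hypothetical tag:

> docker login
> docker build -t <username>/myapp:1.0 .
> docker push <username>/myapp:1.0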

Creating a Docker image from an existing container

  • Manual way (Commit)

Once we create the image, we can reuse it to build new containers with everything that was installed in the original container already in place, so we don't have to install it all again. That said, commit is not the recommended approach, because the new image does not capture the commands (e.g., services) that were run inside the old container at runtime.

> docker commit <available container name> <new image name>
  • Automatic way (Build)

Here we create a file (a Dockerfile) that contains all the instructions, then build that file to create an image. Using build we can overcome the problem mentioned for the manual way.

A sample Dockerfile:

FROM ubuntu
RUN apt-get update -y
RUN apt-get install nginx -y
ENTRYPOINT service nginx start && bash

When we build the image using the command below, the first three lines of the Dockerfile execute while creating the image, and the last line is saved in the image's metadata. The last instruction (ENTRYPOINT) executes only when we create a container from the image.

> docker build -t <new image Name> <Sample Dockerfile Location>
Without cache
> docker build --no-cache -t <new image Name> <Sample Dockerfile Location>
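
Putting it together with the Dockerfile above (my-nginx and web are hypothetical names, and . tells build to use the current directory, which contains the Dockerfile, as context):

> docker build -t my-nginx .
> docker run -it --name web my-nginx

This starts nginx inside the container and then drops us into a bash shell.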

Create a Docker volume

A Docker volume provides persistent storage for a container, and the same volume can be attached (mounted) to multiple containers at once. Even if we delete every container that uses the volume, the data inside it is unaffected.

  • Create new volume
> docker volume create <volume name>
  • List volume
> docker volume ls
  • View all details of the volume
> docker inspect <volume name>
  • Mount the volume into multiple containers
> docker run -it --name <Container Name> -v <volume name>:/<volume name> <Image Name>
> docker run -it --name <Container 2 Name> -v <volume name>:/<volume name> <Image Name>
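
A concrete sketch with hypothetical names, sharing one volume between two containers at /data:

> docker volume create shared-data
> docker run -it --name app1 -v shared-data:/data ubuntu
> docker run -it --name app2 -v shared-data:/data ubuntu

A file written under /data in app1 is visible under /data in app2, and it survives even if both containers are removed.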

Accessing containers from external clients by linking a server (host) port to a container port

Step 1: While creating the image, declare the port the application runs on in the Dockerfile using EXPOSE (this documents the port; the actual linkage is created with -p when running the container)

EXPOSE 8080

Step 2: Link a server (host) port to the container port while creating the container

> docker run -itd -p 23422:8080 --name <container name> <image name>

Here the application inside the container runs on port 8080. When we access port 23422 on the host, Docker forwards the request to whatever is running on port 8080 inside the container.
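
For example, with the official nginx image, which listens on port 80 by default (web is a hypothetical container name):

> docker run -itd -p 8080:80 --name web nginx
> curl http://localhost:8080   # the response comes from nginx inside the container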
