Implement Monitoring, Alerting, and Logging on Kubernetes.

Harshal Jethwa
DevOps.dev
Published in
6 min read · Jul 15, 2023


#10WeeksOfCloudOps

Introduction:
Monitoring, alerting, and logging are critical components of managing and maintaining a Kubernetes cluster effectively. They help you track the health and performance of your applications and infrastructure, identify issues, and troubleshoot problems in real time. In this context, ELK (Elasticsearch, Logstash, and Kibana) is a popular stack for log management and analysis.

Implementing Monitoring, Alerting, and Logging on Kubernetes typically involves the following steps:

Monitoring:

  • Choose a monitoring solution: There are various monitoring solutions available for Kubernetes, such as Prometheus, Datadog, and New Relic. Select a solution that fits your requirements.
  • Deploy monitoring agents: Install the monitoring agents or exporters on your Kubernetes cluster. These agents collect metrics from various components, including nodes, pods, and containers.
  • Configure service discovery: Set up service discovery mechanisms to automatically discover and monitor new services and pods as they are added or removed from the cluster.
  • Create monitoring dashboards: Define custom dashboards to visualize the collected metrics and gain insights into the performance and behavior of your cluster and applications.
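With the Prometheus Operator (installed in the steps below), service discovery is usually driven by ServiceMonitor resources rather than hand-edited scrape configs. As a rough sketch, a hypothetical ServiceMonitor for a service labeled `app=my-app` exposing a `metrics` port might look like this (all names and labels here are illustrative, not from a real deployment):

```shell
# Hypothetical example: a ServiceMonitor that tells the Prometheus Operator
# to scrape every Service labeled app=my-app on its "metrics" port.
cat > my-app-servicemonitor.yaml <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    release: prometheus   # must match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics       # named port on the target Service
      interval: 30s
EOF
# Apply it once the kube-prometheus-stack (installed below) is running:
# kubectl apply -f my-app-servicemonitor.yaml
```

The `release: prometheus` label matters: by default the operator only picks up ServiceMonitors matching its selector.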

Alerting:

  • Define alerting rules: Establish rules based on the collected metrics to define conditions that trigger alerts. For example, you might want to receive an alert when CPU utilization exceeds a certain threshold.
  • Configure alerting channels: Set up alerting channels such as email, Slack, or PagerDuty to receive notifications when alerts are triggered.
  • Test and refine: Continuously test and refine your alerting rules to ensure they are effective and provide timely notifications for critical events.
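For the CPU-threshold example above, a rule can be declared as a PrometheusRule resource that the operator loads automatically. This is only a sketch: the 80% threshold, rule names, and labels are illustrative and should be tuned to your cluster:

```shell
# Hypothetical example: alert when a node's average CPU usage stays above 80%
# for 10 minutes. Threshold and names are placeholders.
cat > cpu-alert-rule.yaml <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cpu-usage-alerts
  labels:
    release: prometheus   # must match the Prometheus ruleSelector
spec:
  groups:
    - name: node.rules
      rules:
        - alert: HighNodeCPU
          # 100 minus idle time = busy CPU percentage, averaged per node
          expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Node {{ $labels.instance }} CPU above 80% for 10 minutes"
EOF
# kubectl apply -f cpu-alert-rule.yaml
```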

Logging:

  • Deploy a log management stack: ELK (Elasticsearch, Logstash, and Kibana) is a popular choice for log management. Deploy these components on your cluster or use managed services like Elasticsearch Service on Elastic Cloud.
  • Configure log collection: Configure log collectors or agents (like Filebeat or Fluentd) on your cluster nodes or as sidecar containers to gather logs from application containers.
  • Stream logs to ELK: Set up log shipping to stream logs from your Kubernetes cluster to Elasticsearch using Logstash or a log shipper like Filebeat. Ensure that logs are properly parsed, indexed, and stored in Elasticsearch.
  • Visualize and analyze logs: Utilize Kibana to create visualizations, dashboards, and queries to search, analyze, and monitor your logs effectively. Kibana offers powerful search capabilities and visualizations to help you identify patterns, troubleshoot issues, and gain insights.
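To make the collection step concrete, here is a minimal sketch of a Filebeat configuration that tails container logs on a node and ships them to Elasticsearch. The hostname `elasticsearch` and the metadata processor settings are assumptions; a real deployment would run this as a DaemonSet with RBAC and adjust the output for its own cluster:

```shell
# Minimal sketch of a Filebeat config for Kubernetes container logs.
# The Elasticsearch host below is a placeholder.
cat > filebeat-kubernetes.yml <<'EOF'
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      # Enrich each log line with pod/namespace/container metadata
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
EOF
```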

Steps to Implement Monitoring, Alerting, and Logging on Kubernetes:

Step 1: Set up a Kubernetes Cluster
Install and configure a Kubernetes cluster using your preferred method.
Ensure you have access to the cluster and the necessary permissions to deploy and manage resources.
Then run the following command to confirm the cluster is reachable:

kubectl get pods

Step 2: Deploy Prometheus
Install the Prometheus operator via the kube-prometheus-stack chart (note: Helm must already be installed on your system).

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://charts.helm.sh/stable
helm repo update

The commands above add the Prometheus community and stable repositories to Helm and refresh the local chart index.
Install the chart pinned to a fixed version with the command below:

helm install prometheus prometheus-community/kube-prometheus-stack --version "9.4.1"

Step 3: Deploy Grafana
Deploy Grafana using Helm with the command below.

helm install grafana stable/grafana --namespace grafana --create-namespace

Now run the commands below to check that everything is deployed:

kubectl get pod
kubectl get deployment
kubectl get service

Obtain the admin password for Grafana from the secret the chart creates (the stable/grafana chart stores it in a secret named grafana; adjust the namespace if you installed elsewhere):

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Then expose the Grafana UI locally with port-forwarding. Note that the kube-prometheus-stack release from Step 2 bundles its own Grafana deployment, prometheus-grafana, in the release namespace:

kubectl port-forward deployment/prometheus-grafana 3000

For that bundled Grafana, the default credentials are username admin and password prom-operator; the stable/grafana chart instead generates a random admin password (retrieved above).

Step 4: Configure Prometheus as a data source in Grafana
Log in to Grafana using the admin username (default is “admin”) and the password obtained in the previous step.

  • Add Prometheus as a data source in Grafana:
  • Go to “Configuration” -> “Data Sources” -> “Add data source”, select Prometheus, enter the URL of your Prometheus service (with kube-prometheus-stack this is typically http://prometheus-operated:9090 from inside the cluster), and click “Save & Test”.
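Instead of clicking through the UI, the data source can also be provisioned from a file. A minimal sketch follows; the URL assumes the kube-prometheus-stack Prometheus service in the default namespace, so adjust it for your setup:

```shell
# Hypothetical Grafana data source provisioning file; Grafana loads files
# like this from its provisioning/datasources directory at startup.
cat > prometheus-datasource.yaml <<'EOF'
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-operated.default.svc:9090   # placeholder URL
    isDefault: true
EOF
```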

Step 5: Create a custom monitoring dashboard in Grafana

  • Customize the dashboard to include CPU, memory, disk, and error code metrics based on your specific requirements.
  • Add panels or graphs to visualize the metrics and arrange them as desired.
  • Save the dashboard.

Set up metrics alerts in Prometheus/Grafana:

  • Define the metrics you want to monitor and set alerting rules in Prometheus using Prometheus Query Language (PromQL).
  • Configure alerting in Grafana:
  • Go to "Alerting" -> "Notification channels" to add a notification channel for alert notifications.
  • Go to your custom dashboard, click on a panel, and select "Edit".
  • In the panel settings, go to the "Alert" tab and create alert rules based on your metrics.
  • Configure notification channels for each alert rule.
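For the CPU, memory, and disk panels mentioned above, a few common PromQL starting points are sketched below. These rely on the standard cAdvisor and node-exporter metrics that kube-prometheus-stack scrapes by default; label filters (such as the `/` mountpoint) may need adjusting for your nodes:

```promql
# Per-pod CPU usage in cores, summed over containers:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (pod)

# Per-pod memory working set in bytes:
sum(container_memory_working_set_bytes{container!=""}) by (pod)

# Percentage of root filesystem space still available per node:
100 * node_filesystem_avail_bytes{mountpoint="/"}
    / node_filesystem_size_bytes{mountpoint="/"}
```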

Step 6: Set up logging with ELK (Elasticsearch, Logstash, Kibana):

Install and Configure Elasticsearch:

  • Download and install Elasticsearch from the official website: https://www.elastic.co/downloads/elasticsearch
  • Configure Elasticsearch by editing the elasticsearch.yml file. Set the cluster name, network host, and other settings as per your requirements.
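A minimal single-node elasticsearch.yml might look like the sketch below. The cluster name is a placeholder, and `discovery.type: single-node` is only suitable for a demo, not production:

```shell
# Minimal sketch of an Elasticsearch config for a single-node demo setup.
cat > elasticsearch.yml <<'EOF'
cluster.name: k8s-logging        # placeholder name
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node      # demo only; use real discovery in production
EOF
```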

Install and Configure Logstash:

  • Download and install Logstash from the official website: https://www.elastic.co/downloads/logstash
  • Create a Logstash configuration file (e.g., logstash.conf) to define input, filter, and output settings. Ensure that you specify the Elasticsearch output configuration in the file.
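As a sketch of such a pipeline, the logstash.conf below accepts events from Filebeat on port 5044, tries to parse JSON-formatted log lines, and writes to Elasticsearch. The hosts entry and index pattern are assumptions to adapt:

```shell
# Minimal sketch of a Logstash pipeline: Beats in, Elasticsearch out.
cat > logstash.conf <<'EOF'
input {
  beats {
    port => 5044                     # Filebeat ships events here
  }
}
filter {
  # Parse JSON-formatted container logs when present
  json {
    source => "message"
    skip_on_invalid_json => true
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]       # placeholder host
    index => "k8s-logs-%{+YYYY.MM.dd}"       # daily indices
  }
}
EOF
```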

Install and Configure Kibana:

  • Download and install Kibana from the official website: https://www.elastic.co/downloads/kibana
  • Configure Kibana by editing the kibana.yml file. Set the Elasticsearch URL to point to your Elasticsearch instance.
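The relevant kibana.yml settings can be as small as the sketch below; the Elasticsearch URL is a placeholder for your own instance:

```shell
# Minimal sketch of a Kibana config pointing at a local Elasticsearch.
cat > kibana.yml <<'EOF'
server.host: "0.0.0.0"
server.port: 5601
elasticsearch.hosts: ["http://localhost:9200"]   # placeholder URL
EOF
```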

Verify ELK Stack:

  • Start Elasticsearch, Logstash, and Kibana services.
  • Access Kibana using the web interface to ensure it’s running and connected to Elasticsearch.

(Screenshot: a Grafana dashboard showing an overview of cluster statistics.)

Follow me :

Linkedin: https://www.linkedin.com/in/harshaljethwa/

GitHub: https://github.com/HARSHALJETHWA19/

Twitter: https://twitter.com/harshaljethwaa

Thank You!!!


DevOps | Docker | Linux | Jenkins | AWS | Git | Terraform | Technical Blogger