Monitoring Containers using ELK Stack

Ajit Gadge | Senior Database Consultant, Ashnik
Singapore, 15 May 2019
Is container monitoring on your priority list? If so, I might have an answer for you.

Companies are increasingly adopting a microservices strategy to deploy their applications, and when they run these services in containers, Docker is usually the first choice of container technology. Since containers are ephemeral and stateless, it is very important to ship their logs off the local disk to a centralized logging system.

To understand the importance of a centralized logging system, consider using Kubernetes as the orchestration layer for your workloads: there is no easy way to find the correct log files on one of the many worker nodes in your cluster. Kubernetes reschedules your pods between different physical servers or cloud instances, and when a pod crashes, its logs may be deleted from the disk and lost. Centralized logging is an important component of any production-grade infrastructure, and even more critical in a containerized architecture. Without a centralized logging solution, it is practically impossible to find a log file located somewhere on one of hundreds or thousands of worker nodes.

For this reason, any production-grade cluster should have log collector agents configured on all nodes and use centralized storage such as Elasticsearch for all log data. Analyzing log data helps in debugging issues with your deployed applications and services, such as determining the reason for a container termination or an application crash. If you centralize the STDOUT and STDERR streams of your Docker containers with the ELK Stack, it is easy to move these log lines into Elasticsearch and analyze them in real time.
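To make the STDOUT/STDERR shipping concrete, here is a minimal Python sketch (purely illustrative, not part of the Beats tooling) of what parsing one line from Docker's default json-file log driver looks like; the `log`, `stream`, and `time` fields are what Docker actually writes to disk:

```python
import json

# One line as written by Docker's default json-file logging driver.
raw_line = '{"log":"request handled in 12ms\\n","stream":"stdout","time":"2019-05-15T08:30:00.000000000Z"}'

def parse_docker_log_line(line):
    """Split one json-file log line into structured fields,
    roughly the way a log shipper such as Filebeat would."""
    record = json.loads(line)
    return {
        "message": record["log"].rstrip("\n"),
        "stream": record["stream"],      # "stdout" or "stderr"
        "timestamp": record["time"],     # RFC 3339 timestamp from Docker
    }

event = parse_docker_log_line(raw_line)
print(event["message"])  # request handled in 12ms
print(event["stream"])   # stdout
```

A shipper does this for every line and then forwards the structured event to Elasticsearch, which is what makes real-time filtering on fields like `stream` possible.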

How does the ELK Stack help in monitoring Docker & Kubernetes?

The ELK Stack (Elasticsearch, Logstash, and Kibana) ships with Beats for Docker and Kubernetes monitoring, and their autodiscover feature lets you capture Docker and Kubernetes fields and ingest them into Elasticsearch. The ELK Stack also provides default Kibana templates to monitor Docker and Kubernetes infrastructure. These Beats capture not only logs from Docker and Kubernetes but also metrics and metadata, which lets you monitor performance at both the system and application level.

Here is an example of how you can set up this system to monitor Docker and Kubernetes pods.

Monitoring containerized infrastructure and applications has its own challenges: you need to cover Docker and Kubernetes themselves, as well as every layer from the host up to the applications and services running on it. This raises several questions:

  • Where do the agents/Beats get installed?
  • Is one needed for every container, pod, host, or service?
  • Do we need to install different agents/Beats for different data, such as metrics, logs, or individual services?

Let’s walk through some scenarios to understand this better –

  • The Beats have modules that can monitor not only the system (host/node) where your pods/containers are running, but also the orchestration layer, Kubernetes metadata, and the Docker containers themselves. You can install a single Beat on one host to monitor all these layers, so there is no need to install a separate Beat for each pod or container on Docker. You do, of course, need to install Metricbeat for metric data and Filebeat for log analysis on each host. Here is a classic overview of a Kubernetes deployment along with the ELK Stack. Beats Docker images are available here to deploy & monitor these.
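The "one Beat per host" pattern above maps naturally to a Kubernetes DaemonSet, which schedules exactly one Filebeat pod on every node. As a rough sketch (the names, namespace, and omitted config volume are illustrative; Elastic publishes complete manifests in the beats repository under deploy/kubernetes):

```yaml
# Illustrative sketch: run Filebeat as a DaemonSet so one agent per node
# ships the logs of every container scheduled on that node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.5.4
        args: ["-c", "/etc/filebeat.yml", "-e"]
        volumeMounts:
        # Read-only access to the Docker log files on the host.
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
```

Because the DaemonSet mounts the host's Docker log directory, a single Filebeat instance covers every pod on that node without any per-pod configuration.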

If you are running a container on Docker, it sends its standard output logs to the host system, by default under "/var/lib/docker/containers/*/*.log". Filebeat reads the Docker logs from this standard path and parses them into meaningful fields. You can specify a Docker container ID if you want to monitor a specific container, or "*" for all.
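As a sketch, the corresponding Filebeat input section (Filebeat 6.x syntax) that reads from that path looks like this; `containers.ids` accepts specific container IDs or `'*'` for everything:

```yaml
filebeat.inputs:
- type: docker
  containers.ids:
    - '*'   # or a list of specific container IDs to monitor
```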

You can also pull the Filebeat Docker image, run it with Docker, and map its log volumes as stated below. Details on how to configure Docker with Filebeat are here.

docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:6.5.4 filebeat -e -strict.perms=false \
  -E output.elasticsearch.hosts=["elasticsearch:9200"]

Do remember that when shipping these Docker logs from container infrastructure to ELK, it is also important to ship Kubernetes metadata and correlate it with the logs. This is even more important when you are managing many containers with Kubernetes: correlating container logs with Kubernetes metadata, for both metrics and logs, helps you drill down to the root of an issue in your container-based infrastructure.
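One way to do that enrichment (assuming Filebeat runs inside the cluster) is the `add_kubernetes_metadata` processor, which attaches the pod name, namespace, and labels to each log event before it is shipped:

```yaml
# Enrich every event with metadata looked up from the Kubernetes API.
processors:
- add_kubernetes_metadata:
    in_cluster: true
```

With this in place, a raw container log line arrives in Elasticsearch already tagged with fields like `kubernetes.pod.name`, ready for correlation.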

  • Beats also have the capability to collect Kubernetes metadata and use it to enrich container logs. They provide the pod name, container ID, container name, image name/ID, Kubernetes labels, etc., so that we understand what is happening and can filter this information down to whatever we want to drill into. You can find out how to add Kubernetes metadata along with Beats here.
  • Beats also have a feature called "autodiscover" which goes one layer up: it detects which services are running and starts monitoring them. Standard services like MySQL, PostgreSQL, NGINX, the system module, etc. can be monitored dynamically through autodiscover. Since containers are dynamic in nature, it is very important that the Beat agent can auto-discover these services and start monitoring them.

Templates are available for Docker as well as Kubernetes, in which we can define conditions. You can define configuration templates for different containers; the autodiscover subsystem uses them to monitor services as they start running.

metricbeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            equals:
              docker.container.image: mysql
          config:
            - module: mysql
              metricsets: ["status"]
              hosts: "${data.host}:${data.port}"

Above is an example autodiscover configuration for Docker that launches the Metricbeat MySQL module. When a container based on the MySQL image starts in your container infrastructure, this configuration detects it automatically and starts collecting input events through Beats.

The same is available for Kubernetes as well. Below is a Metricbeat autodiscover configuration which launches the Prometheus module for all containers of pods annotated with "prometheus.io.scrape". More details on autodiscover configuration can be found here.

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      include_annotations: ["prometheus.io.scrape"]
      templates:
        - condition:
            contains:
              kubernetes.annotations.prometheus.io.scrape: "true"
          config:
            - module: prometheus
              metricsets: ["collector"]
              hosts: "${data.host}:${data.port}"

Once you have configured these Beats for Docker as well as Kubernetes, for metrics as well as logs (optionally you can also configure network packets with Packetbeat), you will start seeing event data in Kibana Discover. You can then drill down into and filter these log lines in the Kibana Logs section. We can build different kinds of visualizations based on this data and unify them on a dashboard for a single view.
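For example, once the events are indexed, a filtered search such as the following (sent to `GET filebeat-*/_search`; the index pattern and field values here are illustrative) narrows the view to error output from a single namespace, using the fields the Beats added:

```json
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "kubernetes.namespace": "production" } },
        { "term": { "stream": "stderr" } }
      ]
    }
  }
}
```

The same filters can be applied in the Kibana search bar and saved into visualizations for the dashboard.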


You can then see the container metrics in drilled-down form in Kibana, under the Infrastructure section.

Conclusion

There are tons of tools available in the market today to monitor your container-based infrastructure (whether it runs on cloud or on VMs/physical machines). But, as demonstrated, the Elastic Stack is a reliable monitoring tool that covers the different layers of a container infrastructure. The best part of an ELK-based solution is that you can customize it to use machine learning, graph correlation, and different types of alerts based on your needs. In fact, if you are already using the ELK Stack for other use cases like search or business analytics, you should definitely leverage it for this as well.

For any other queries or a detailed discussion or a demo, you can write to us on success@ashnik.com or punch in your queries on Ask Ashnik.

  • Ajit brings over 16 years of solid experience in solution architecting and implementation. He has successfully delivered solutions on various database technologies including PostgreSQL, MySQL, and SQL Server. He derives his strength from his passion for database technologies and his love of pre-sales engagement with customers. Ajit has successfully engaged with customers from across the globe, including South East Asia, the USA, and India. His commitment and ability to find solutions have earned him great respect from customers. Prior to Ashnik, he worked with EnterpriseDB for 7 years and helped it grow in South East Asia and India.
