Logging with Loki

Mayur Kumar
Aug 26, 2021

As part of my job, I get requests for logs from several Dev and QA teams, and one of my day-to-day activities is providing those logs to the people who need them. Since our applications are deployed on a Kubernetes cluster, I have to fetch them with 'kubectl logs'. In this blog I will talk about how to easily fetch and retain the logs of applications deployed on a Kubernetes cluster.

Challenges faced with the 'kubectl logs' method:

  • kubectl logs has many limitations; it does not guarantee that you get the complete pod logs.
  • The logs it fetches are limited to a certain number of lines.
  • What you get depends on the pod's lifetime, log density, and so on.
  • If a pod is deleted, there is no way to fetch its logs.
  • You cannot pull out historical logs.
  • It does not give you a flexible way to choose the time range of logs you need (see the example after this list).
  • kubectl logs can only be used if you have the appropriate permissions on that resource in the Kubernetes cluster.
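For context, fetching logs today looks roughly like this (the pod name is just a placeholder):

kubectl logs payment-service-7d4b9cbb-x2k9q --since=1h --tail=500

kubectl logs payment-service-7d4b9cbb-x2k9q --previous

Even with --since and --tail you only get whatever the node still holds, and --previous only helps while the crashed container instance still exists on that node; once the pod is gone, so are its logs.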

It is rightly said that "every problem has a solution". To address the problems above, Grafana Labs introduced Loki.

Loki: like Prometheus, but for logs.

Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.

Loki is an amazing solution when you want to discover and consume logs alongside Prometheus metrics in Kubernetes microservice environments, and it works well as a general aggregation system for file and application logs.

Compared to other log aggregation systems, Loki:

  • Does not do full-text indexing on logs. By storing compressed, unstructured logs and only indexing metadata, Loki is simpler to operate and cheaper to run.
  • Indexes and groups log streams using the same labels you are already using with Prometheus, so you can seamlessly switch between metrics and logs.
  • Is an especially good fit for storing Kubernetes Pod logs. Metadata such as Pod labels is automatically scraped and indexed.
  • Has native support in Grafana (needs Grafana v6.0 or higher).
  • Makes it easy to filter logs and gives you better visualizations of them (see the LogQL example after this list).
  • Lets you build Grafana dashboards, e.g. to stream the logs of particular apps, which is a boon for development and operations teams.
  • Uses fewer resources than other log aggregation systems.
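As an example, once logs are in Loki you can filter a stream by its labels with a LogQL query in Grafana's Explore view (the label names and values below are placeholders for whatever labels your setup attaches):

{namespace="monitoring", app="payment-service"} |= "error"

The selector in braces picks a log stream by its indexed labels, and the |= operator then filters the unindexed log lines for the given text.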

Now I will walk you through the simplest Loki architecture.

Our stack contains the components below:

  1. Loki is the main server, responsible for storing logs and processing queries.
  2. Promtail is the agent, responsible for gathering logs and sending them to Loki (a configuration sketch follows this list).
  3. Grafana for querying and displaying the logs, and for building logging dashboards.
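To illustrate how these pieces fit together, here is a minimal sketch of a Promtail configuration that discovers pods and pushes their logs to Loki's push endpoint. The loki-stack Helm chart used below generates an equivalent configuration for you, so treat this as an illustration rather than something you need to write:

clients:
  - url: http://loki:3100/loki/api/v1/push   # Loki's push API
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                            # discover pods via the Kubernetes API
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        target_label: app                    # turn the pod's app label into a Loki label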

Loki Installation with Helm on Kubernetes

Prerequisites

A running Kubernetes cluster, kubectl configured against it, and Helm installed.

Step 1: Add the Grafana chart repository:

helm repo add grafana https://grafana.github.io/helm-charts

Step 2: To update the chart repository, run:

helm repo update
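You can confirm that the chart is now visible (the version shown will vary):

helm search repo grafana/loki-stack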

Step 3: Deploy Loki to your cluster:

helm upgrade --install loki grafana/loki-stack
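Once the release is deployed, check the pods. With this chart's default values you should typically see the Loki server running as a StatefulSet (a pod named loki-0) and Promtail as a DaemonSet (one loki-promtail pod per node), though the exact names can differ between chart versions:

kubectl get pods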

Step 4: Or deploy Loki in a custom namespace:

helm upgrade --install loki --namespace=monitoring grafana/loki-stack
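If the monitoring namespace does not exist yet, recent Helm versions (3.2+) can create it for you with an extra flag, and you can then verify the pods in that namespace:

helm upgrade --install loki --namespace=monitoring --create-namespace grafana/loki-stack

kubectl get pods -n monitoring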

Here we go! Loki is now installed on our Kubernetes cluster.
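To actually query the logs, point Grafana at Loki: add a Loki data source whose URL is the Loki service address, typically http://loki:3100 when Grafana runs in the same namespace (or http://loki.monitoring:3100 if Loki lives in the monitoring namespace). Alternatively, the loki-stack chart can deploy a bundled Grafana for you; to the best of my knowledge this is switched on with a chart value along these lines:

helm upgrade --install loki grafana/loki-stack --set grafana.enabled=true

Then port-forward the Grafana service, log in with the admin password the chart stores in a secret (named after the release, loki-grafana here), and start exploring your logs.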
