
[Kubernetes] Install EFK for Kubernetes logging with a helm chart

Purpose

Install a Kubernetes logging system with EFK (Elasticsearch + Fluentd + Kibana).


1. Install Elasticsearch helm chart

How to use helm chart

2022.05.24 - [Container/Kubernetes] - [Kubernetes]Understand how to use helm chart in Kubernetes

helm repo add elastic https://helm.elastic.co
helm fetch elastic/elasticsearch

Create a PV for Elasticsearch first

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efk-elasticsearch
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /k8s-nas
    server: 10.50.20.40
  # Alternatively, use hostPath instead of nfs:
  #hostPath:
  #  path: /k8s-nas/efk
---

If you run Elasticsearch with replicas: 2, map each PV using a StorageClass or labels.
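As a sketch of the label approach (the label name and NFS path below are illustrative, not from the chart), each PV gets a label and the claim template selects on it:

```yaml
# PV side: label each PV (one per replica)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efk-elasticsearch-0
  labels:
    usage: elasticsearch        # illustrative label
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /k8s-nas/es-0         # illustrative per-replica path
    server: 10.50.20.40
---
# values.yaml side: select PVs by label instead of a fixed volumeName
# volumeClaimTemplate:
#   accessModes: [ "ReadWriteOnce" ]
#   selector:
#     matchLabels:
#       usage: elasticsearch
#   resources:
#     requests:
#       storage: 20Gi
```

With a label selector, each replica's PVC binds to any matching unbound PV, so you do not have to pin one volumeName per pod.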

Edit values.yaml

Map volumeName to the PV name:

volumeClaimTemplate:
  #storageClassName: join-nfs-storageclass
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 20Gi
  volumeName: efk-elasticsearch
helm install elasticsearch . -n efk
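The install targets the efk namespace; if it does not exist yet, create it first (or, on Helm 3, add --create-namespace to the install command):

```shell
kubectl create namespace efk
```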

Readiness probe failed: Waiting for elasticsearch cluster to become ready

If the pod fails to be created with the error above, see:
2022.05.28 - [Container/TroubleShooting] - [TroubleShooting]Readiness probe failed: Waiting for elasticsearch cluster to become ready

Check elasticsearch

[~/workspace/yaml/efk/elasticsearch]curl 10.111.12.152:9200
{
  "name" : "elasticsearch-master-0",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "WnHSjspTRz61gJ7dVwWsLw",
  "version" : {
    "number" : "7.12.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "78722783c38caa25a70982b5b042074cde5d3b3a",
    "build_date" : "2021-03-18T06:17:15.410153305Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
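If the ClusterIP (10.111.12.152 above) is not reachable from your machine, the same check can be done through a port-forward (assuming the service name and namespace used in this post):

```shell
kubectl port-forward svc/elasticsearch-master 9200:9200 -n efk &
curl localhost:9200
```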

2. Install Fluentd

helm fetch stable/fluentd-elasticsearch
helm install fluentd -n efk . --dry-run

Edit values.yaml

elasticsearch:
  host: 'elasticsearch-master' ## must match the Elasticsearch Service name
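To confirm the Service name that host must match (namespace as used above):

```shell
kubectl get svc -n efk
# host must equal the Elasticsearch Service name, e.g. elasticsearch-master
```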

Set mount paths - hostPath (optional)

Change these to also collect logs mounted somewhere other than the default /var/log path.

Edit templates/daemonset.yaml

volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: dockerlog
          mountPath: /data/docker/containers
          readOnly: true
        - name: libsystemddir
          mountPath: /host/lib
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d

volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockerlog
        hostPath:
          path: /data/docker/containers

Even if you do not modify the source section of values.yaml,

collection still works because the files under /var/log/containers are symlinked to /data/docker/containers.
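The symlink behaviour can be demonstrated in isolation; in_tail follows the link, which is why the real log directory must also be mounted into the fluentd pod (paths below are throwaway examples, not cluster paths):

```shell
# simulate: a "real" log file in the docker data-root, plus a symlink
# standing in for /var/log/containers
mkdir -p /tmp/sym-demo/data /tmp/sym-demo/varlog
echo "log line" > /tmp/sym-demo/data/app.log
ln -sf /tmp/sym-demo/data/app.log /tmp/sym-demo/varlog/app.log

# reading through the symlink only works while the target path is accessible
cat /tmp/sym-demo/varlog/app.log
```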

Exclude unnecessary logs

<source>
  @id fluentd-containers.log
  @type tail
  path /var/log/containers/*.log
  exclude_path ["/var/log/containers/fluentd-fluentd-*.log",
                "/var/log/containers/weave-net*.log",
                "/var/log/containers/calico-*.log",
                "/var/log/containers/kube-*.log",
                "/var/log/containers/etcd-*.log",
                "/var/log/containers/kibana-kibana*.log",
                "/var/log/containers/prometheus*.log",
                "/var/log/containers/harbor-*.log",
                "/var/log/containers/ingress-nginx-controller*.log",
                "/var/log/containers/metrics-server-*.log",
                "/var/log/containers/dashboard-metrics*.log"
               ]
  pos_file /var/log/fluentd-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag raw.kubernetes.*
  format json
  read_from_head true
</source>

https://docs.fluentd.org/input/tail#exclude_path

Copy the log field into the message field

This is configured to make logs easier to view in Kibana's Logs menu.

<filter raw.kubernetes.**>
  @type record_transformer
  <record>
    message ${record["log"]}
  </record>
</filter>
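Illustratively, a container log record before and after this filter (field values are made up):

```
# before the filter
{"log":"GET /healthz 200\n","stream":"stdout"}
# after record_transformer
{"log":"GET /healthz 200\n","stream":"stdout","message":"GET /healthz 200\n"}
```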

+) in_tail usage: https://docs.fluentd.org/input/tail

3. Install Kibana

helm fetch elastic/kibana

Edit values.yaml

ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: kibana.ta.com
      paths:
        - path: /
helm install kibana . -n efk

4. Using EFK

Define index pattern

Stack Management → Index Patterns

logstash-*

(Fluentd's elasticsearch output writes daily logstash-YYYY.MM.DD indices by default, so the logstash-* pattern matches them.)

Use Discover

Analytics → Discover

Use Logs

  • The [indices] setting must be applied once before logs can be viewed in the Logs menu.