
Kubernetes Logging


Standard logs

In Kubernetes (k8s for short), every application runs in a Pod, the smallest unit k8s uses to manage containers. The standard approach is to print logs to stdout and stderr, so that container runtime logs can be retrieved with kubectl logs. Where those logs are stored on disk depends on the container runtime; with Docker, all the real log files live under /var/lib/docker/containers.

Under /var/lib/docker/containers there are many directories named with long strings.

These long strings are the containers' UUIDs, i.e. their Container IDs. Inside each directory, the runtime keeps a JSON-format log file named after the Container ID.
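A quick way to see this on a Docker node (the directory name, pod name, and output shown here are illustrative):

# each directory under this path is a container's full 64-character ID
ls /var/lib/docker/containers

# the runtime writes one JSON object per log line
tail -n 1 /var/lib/docker/containers/<container-id>/<container-id>-json.log
# {"log":"0: Sat Dec 14 08:00:00 UTC 2019\n","stream":"stdout","time":"2019-12-14T08:00:00.000000000Z"}

# kubectl surfaces the same content
kubectl logs <pod-name>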

With standard log handling covered, let's look at node-level log handling next.

Node-level logs

If we do not want application logs to go to stdout, but rather to a custom path under /var/log/, we can mount a Volume when the Pod starts. The Volume maps to a real path on the node hosting the Pod, so the container can write its logs straight to that path, and the logs persist even after the Pod is destroyed.

[root@node1 blog]# cat log.yaml
apiVersion: apps/v1   # apps/v1beta1 has been removed; apps/v1 requires an explicit selector
kind: Deployment
metadata:
  name: counter
spec:
  replicas: 1
  selector:
    matchLabels:
      run: helloworldanilhostpath
  template:
    metadata:
      labels:
        run: helloworldanilhostpath
    spec:
      containers:
      - name: count
        image: busybox
        args: [/bin/sh, -c,
              'i=0; while true; do echo "$i: $(date)" >> /var/log/yiran/test.log; i=$((i+1)); sleep 1; done']
        volumeMounts:
        - name: yiran-test-log
          mountPath: /var/log/yiran
      volumes:
      - name: yiran-test-log
        hostPath:
          path: /var/log/yiran
          type: Directory   # the directory must already exist on the node

After creating it, let's check the Pod's logs:

[root@node1 blog]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
counter-76b584fd8f-7fq99   1/1     Running   0          2m30s
testlog-rs-lwxts           1/1     Running   0          2d1h
[root@node1 blog]# kubectl logs counter-76b584fd8f-7fq99

kubectl logs prints nothing; there is no standard log content under /var/lib/docker/containers (the JSON log file is empty) and docker logs shows nothing either. This is expected, since we redirected all log output to a custom directory.
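To confirm the log actually landed on the node, inspect the hostPath directly on the node running the Pod (output is illustrative):

[root@node1 blog]# tail -n 3 /var/log/yiran/test.log
148: Sat Dec 14 08:02:28 UTC 2019
149: Sat Dec 14 08:02:29 UTC 2019
150: Sat Dec 14 08:02:30 UTC 2019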

k8s provides resources such as ReplicaSets and Deployments that handle scaling and HA automatically. Managing logs only at the node level is tolerable in a small cluster, but once the cluster grows it becomes a nightmare for anyone who has to consume the logs.

Drawbacks: kubectl logs no longer works, and the approach does not scale to many nodes.

Cluster-level logs

Node-level logging agent

As described above, every k8s node collects container logs under /var/log/containers/, so we can install a logging agent on each node that ships the logs from that directory to a log storage platform in real time.

Since one agent has to run per node, it should be deployed as a DaemonSet. The officially recommended agent is fluentd, though others such as filebeat or logstash work too; a complete filebeat DaemonSet appears in the practice section below.

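For completeness, here is a minimal sketch of the fluentd variant, assuming the community fluentd-kubernetes-daemonset image and a reachable Elasticsearch endpoint (the image tag, ES address, and env-var-driven configuration are assumptions based on that image):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch-1  # assumed tag
        env:
        - name: FLUENT_ELASTICSEARCH_HOST   # read by the image's bundled config (assumption)
          value: "10.10.5.60"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers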

This is the scheme recommended in the standard k8s setup, and it is the best option in terms of both resource usage and configuration management.

Node-level logging agent with a companion container

As noted, standard container logs should go to stdout and stderr. So what if one Pod emits several log streams? This situation should be avoided whenever possible, since a Pod should do exactly one thing, but code structure or other constraints sometimes force it on us, and then a companion sidecar container is needed.

This approach is strongly discouraged: with EFK configured, each log line is effectively written three times, and if the ES backend sits on replicated distributed storage, each line is written 3 × 2 (or 3, depending on the replica count) times. That wastes a great deal of storage and noticeably shortens SSD lifetime.

Pod-level logging agent

That is, the application Pod contains a dedicated logging sidecar container.

There are two ways to use a sidecar container:

  • a sidecar container that redirects the log streams
  • a sidecar container that acts as the logging agent

Sidecar container redirecting the log streams

This method builds on the node-level agent scheme: the sidecar runs in the same Pod as the application container, reads the application's log files, and redirects their contents to its own stdout and stderr, where the node-level agent collects them. It is not recommended, because the logs are stored twice and disk usage grows accordingly: the application writes one copy to its log file, and the sidecar's redirection stores another copy (under /var/lib/docker/containers/). Its use case is an application that cannot write to stdout and stderr itself, so the sidecar has to do the redirection. A minimal sketch is shown below.
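A minimal sketch of this pattern, assuming a busybox app that writes to a file and a tailing sidecar (all names here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-streaming-sidecar
spec:
  containers:
  - name: app
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)" >> /var/log/app/app.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-streamer        # the sidecar: re-emits the file on its own stdout
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/app/app.log']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}              # shared by both containers; removed with the Pod

kubectl logs app-with-streaming-sidecar -c log-streamer then shows the file's contents, and the node-level agent can pick them up as usual.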


Sidecar container as the logging agent

This method needs no node-level agent: the sidecar itself runs as a logging agent inside the Pod, reads the application container's logs, and ships them straight to the log storage platform. The obvious drawback is that every application Pod needs its own agent sidecar, and an agent consumes non-trivial CPU and memory, which adds up on nodes running many Pods. Another problem is that the application logs never reach stdout and stderr, so kubectl logs cannot be used to view them. A sketch of this layout follows below.
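A sketch of the agent-sidecar layout, assuming a filebeat sidecar shipping a shared log file (the ConfigMap name and its filebeat.yml contents are assumed, not shown):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-agent-sidecar
spec:
  containers:
  - name: app
    image: busybox
    args: [/bin/sh, -c, 'while true; do echo "$(date) hello" >> /var/log/app/app.log; sleep 1; done']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: filebeat-sidecar     # ships the file straight to storage, bypassing the node agent
    image: docker.elastic.co/beats/filebeat:6.6.0
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
    - name: filebeat-config
      mountPath: /usr/share/filebeat/filebeat.yml
      subPath: filebeat.yml
      readOnly: true
  volumes:
  - name: app-logs
    emptyDir: {}
  - name: filebeat-config
    configMap:
      name: sidecar-filebeat-config   # hypothetical ConfigMap with an input for /var/log/app/*.log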


Deployed this way, the application configuration must cover not only the application itself but also the log-handling strategy; and since today's nodes are powerful enough to each run a large number of Pods, the per-Pod agents waste a lot of compute.

Cluster-level logging agent in practice

The key to this scheme is how filebeat collects the logs and how logstash filters them. Reference filebeat and logstash configurations follow.

filebeat.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    app: filebeat
    version: 6.6.0
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: log 
      enabled: true
      paths:
       - '/var/lib/docker/containers/*/*-json.log' 
      fields_under_root: true
      json.keys_under_root: true
      json.overwrite_keys: true
      json.message_key: message
      close_older: 30m
      force_close_files: true
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}' 
      #multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:' 
      multiline.negate: true
      multiline.match: after
      processors:
      - add_kubernetes_metadata:
      - drop_fields:
          fields: ["offset","kubernetes.labels.service.istio.io/canonical-revision"]
    output.kafka:
      hosts: ["10.10.0.30:9092"]
      topic: test
      required_acks: 1


---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    app: filebeat
    version: 6.6.0
spec:
  selector:
    matchLabels:
      app: filebeat
      version: 6.6.0
  template:
    metadata:
      name: filebeat
      labels:
        app: filebeat
        version: 6.6.0
    spec:
      serviceAccountName: filebeat
      volumes:
      - name: filebeat-config
        configMap:
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /data/docker/containers # note: on my hosts the Docker container/log root is not the default /var/lib/docker/containers
      - name: syslog
        hostPath:
          path: /var/log
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.6.0
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 1500m
            memory: 1000Mi
          requests:
            cpu: 500m
            memory: 1000Mi
        volumeMounts:
        - name: filebeat-config
          mountPath: /usr/share/filebeat/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: syslog
          mountPath: /var/log/host-log
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    app: filebeat

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

Merging multi-line errors with filebeat on Kubernetes
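Compared with the manifest above, this variant reads logs through filebeat's docker input, drops DEBUG/WARN lines, and uses the multiline settings to fold lines that do not start with a date (e.g. stack-trace lines) into the preceding event; the Kafka output is also tuned with larger batching and timeout settings.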

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    app: filebeat
    version: 6.6.0
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: docker
      exclude_lines: ['DEBUG','WARN']
      containers.ids:
      - '*'
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
      multiline.negate: true
      multiline.match: after
      #ignore_older: 168h
      processors:
      - add_kubernetes_metadata:
          in_cluster: true
      - drop_fields:
          fields: ["offset","kubernetes.labels.service.istio.io/canonical-revision"]
    output.kafka:
      hosts: ["10.13.3.6:9092"]
      topic: k8s-pre-logs
      compression: none
      required_acks: 1
      broker_timeout: 10s
      channel_buffer_size: 1024
      keep_alive: 120
      max_message_bytes: 10485760
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    app: filebeat
    version: 6.6.0
spec:
  selector:
    matchLabels:
      app: filebeat
      version: 6.6.0
  template:
    metadata:
      name: filebeat
      labels:
        app: filebeat
        version: 6.6.0
    spec:
      serviceAccountName: filebeat
      volumes:
      - name: filebeat-config
        configMap:
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /opt/docker/containers
      - name: syslog
        hostPath:
          path: /var/log
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.6.0
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 1500m
            memory: 1000Mi
          requests:
            cpu: 500m
            memory: 1000Mi
        volumeMounts:
        - name: filebeat-config
          mountPath: /usr/share/filebeat/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: syslog
          mountPath: /var/log/host-log
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

Nginx log format

log_format aka_logs
    '{"@timestamp":"$time_iso8601",'
    '"host":"$hostname",'
    '"server_ip":"$server_addr",'
    '"client_ip":"$remote_addr",'
    '"xff":"$http_x_forwarded_for",'
    '"domain":"$host",'
    '"url":"$uri",'
    '"referer":"$http_referer",'
    '"args":"$args",'
    '"upstreamtime":"$upstream_response_time",'
    '"responsetime":"$request_time",'
    '"request_method":"$request_method",'
    '"status":"$status",'
    '"size":"$body_bytes_sent",'
    '"request_body":"$request_body",'
    '"request_length":"$request_length",'
    '"protocol":"$server_protocol",'
    '"upstreamhost":"$upstream_addr",'
    '"file_dir":"$request_filename",'
    '"http_user_agent":"$http_user_agent"'
  '}';

Parsing the above log format with Logstash

grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{HOSTNAME:hostname} %{IP:server_ip} %{IP:client_ip} %{DATA:xff} %{HOSTNAME:domain} %{NOTSPACE:url} %{NOTSPACE:http_referer} %{NOTSPACE:args} (-|\"%{NOTSPACE:upstream_response_time}\") (-|\"%{NOTSPACE:request_time}\") %{NOTSPACE:request_method} %{NOTSPACE:status} %{NOTSPACE:size} %{NOTSPACE:request_body} %{NOTSPACE:request_length} %{NOTSPACE:protocol} %{NOTSPACE:upstreamhost} %{NOTSPACE:file_dir} %{NOTSPACE:http_user_agent}" }
        overwrite => [ "message" ]
    }

In the pattern above, the wrapper (-|"%{NOTSPACE:request_time}") keeps the literal double quotes outside the captured value, stripping the redundant "" from the number, and falls back to a bare - when no value is recorded; the same applies to upstream_response_time.

Changing field types in Logstash

  mutate {
     convert => ["upstream_response_time","float"]
     convert => ["request_time","float"]
    }

Java log format

%d{yyyy-MM-dd HH:mm:ss.SSS Z} [%tid] [%thread] %-5level %logger{36} - %msg
timestamp %d{yyyy-MM-dd HH:mm:ss.SSS Z}
trace_id [%tid]
thread_name [%thread]
level %-5level
logger_name %logger{36}
message %msg
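For reference, a line produced by this layout might look like the following (the TID, thread, and logger values here are made up); this is what the grok pattern below expects once the gsub fix is applied:

2019-12-14 10:23:45.123 +0800 [TID:a1b2c3d4] [http-nio-8080-exec-1] INFO  com.example.OrderService - order created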

Parsing the Java log fields with Logstash

mutate {
        gsub => ["message","\s?(Z|[+-]\d+:?\d+)", "\1"]
    }

grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[TID:%{NOTSPACE:trace_id}\] \[%{DATA:thread_name}\] %{LOGLEVEL:log_level}\s+%{DATA:logger} - %{GREEDYDATA:message}" }
        overwrite => [ "message" ]
    }

The gsub above removes the space before the time-zone offset (e.g. "2019-12-14 10:23:45.123 +0800" becomes "2019-12-14 10:23:45.123+0800") so that TIMESTAMP_ISO8601 can match the timestamp.

logstash-sample.conf

input{
      kafka{
        bootstrap_servers => ["10.10.0.17:9092"]
        client_id => "test"
        group_id => "k8s-log"
        auto_offset_reset => "latest"
        consumer_threads => 5
        decorate_events => true
        topics => ["test"]
        auto_commit_interval_ms => "5000"
        connections_max_idle_ms => "60000"
        enable_auto_commit => "true"
        fetch_max_wait_ms => "2000"
        max_partition_fetch_bytes => "10000000"
        max_poll_records => "10"
        poll_timeout_ms => "10000"
        session_timeout_ms => "60000"
        request_timeout_ms => "70000" 
      }
}

filter {
  json {
    source => "message"
  }

  if [kubernetes][namespace] == "default" or [kubernetes][namespace] == "kube-system" {
    mutate {
      remove_field => "[kubernetes][labels][pod-template-generation]"
      remove_field => "[kubernetes][labels][k8s-app]"
      #remove_field => "[kubernetes][labels][app]"
      remove_field => "[kubernetes][labels][version]"
      remove_field => "[kubernetes][labels][pod-template-hash]"
      remove_field => "[kubernetes][labels][field][cattle][io/podName]"
      remove_field => "[kubernetes][pod][name]"
      remove_field => "[kubernetes][replicaset][name]"
      remove_field => "[host][name]"
      remove_field => "[beat][name]"
      remove_field => "[beat][hostname]"
      remove_field => "[beat][version]"
      remove_field => "[prospector][type]"
      remove_field => "[input][type]"
      remove_field => "stream"
      remove_field => "message"
      remove_field => "[kubernetes][labels][release]"
      remove_field => "[log][flags]"
      remove_field => "source"
    }
  }
}
output {
  if [kubernetes][namespace] == "default" {
    # stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["10.10.0.17:9200","10.10.0.18:9200","10.10.0.19:9200"]
      index => "%{[kubernetes][container][name]}-test-%{+YYYY.MM.dd}"
    }
  }

  if [kubernetes][namespace] == "kube-system" {
    elasticsearch {
      hosts => ["10.10.0.17:9200","10.10.0.18:9200","10.10.0.19:9200"]
      index => "%{[kubernetes][container][name]}-dev-%{+YYYY.MM.dd}"
    }
  }
}

Other configuration tweaks

-Xms and -Xmx in jvm.options need to be set according to the system's memory.

pipeline.batch.size in logstash.yml needs to be set according to the system's resources; this parameter can significantly improve Logstash throughput.

Validating the Logstash config file

logstash -f logstash.conf -t

elasticsearch.yml

Data node

cluster.name: xd-tsp-log-es
node.name: ${HOSTNAME}

node.master: false
node.data: true
node.ingest: true

path.data: /data/elk7.6
path.logs: /data/elk7.6/log
network.host: 10.20.11.2
http.port: 9200

#discovery.seed_hosts: ["10.20.11.2","10.20.11.16","10.20.11.3"]
#cluster.initial_master_nodes: ["xd-tsp-log-es-01","xd-tsp-log-es-02","xd-tsp-log-es-03"]

discovery.zen.ping.unicast.hosts: ["10.20.1.4","10.20.1.6","10.20.1.8"]
discovery.zen.minimum_master_nodes: 2

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

thread_pool.get.queue_size: 2000
thread_pool.write.queue_size: 2000

xpack:
  security:
    authc:
      realms:
        active_directory:
           beantechs:
             order: 0
             domain_name: domain.com
             url: ldap://ad.domain.com:3268
             bind_dn: rancher
             bind_password: 123456
        native:
          native1:
            order: 1

The IPs in discovery.zen.ping.unicast.hosts above are the master nodes; pointing data nodes at the masters makes it easy to add or remove data nodes.

Master node

cluster.name: xd-tsp-log-es
node.name: ${HOSTNAME}

node.master: true
node.data: false
node.ingest: false

path.data: /data/elk7.6
path.logs: /data/elk7.6/log
network.host: 10.20.1.4
http.port: 9200

discovery.zen.ping.unicast.hosts: ["10.20.1.4","10.20.1.6","10.20.1.8"]
discovery.zen.minimum_master_nodes: 2

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

xpack:
  security:
    authc:
      realms:
        active_directory:
           beantechs:
             order: 0
             domain_name: beantechs.com
             url: ldap://ad.damoin.com:3268
             bind_dn: rancher
             bind_password: 123456
        native:
          native1:
            order: 1

Other configuration tweaks

-Xms and -Xmx (the heap size) in jvm.options need to be adjusted to the system configuration; a common rule is half of total memory.

Creating and deleting index patterns via the Kibana API

Create

curl --fail --user httpuser:httppass --request POST --header 'Content-Type: application/json' --header 'kbn-xsrf: this_is_required_header' --data '{"attributes":{"title":"apache-*","timeFieldName":"@timestamp"}}' 'https://kibana.example.org/api/saved_objects/index-pattern/apache-*?overwrite=true'

Delete

curl --fail --user httpuser:httppass --request DELETE --header 'kbn-xsrf: this_is_required_header' 'https://kibana.example.org/api/saved_objects/index-pattern/index-*'

Index names lagging 8 hours behind

# 1. Add a field holding @timestamp shifted by +8 hours
ruby {
    code => "event.set('index_date', event.get('@timestamp').time.localtime + 8*60*60)"
}
# 2. Convert it to a string first (gsub only operates on strings), then regex-trim it into the desired date
mutate {
    convert => ["index_date", "string"]
    gsub => ["index_date", "T([\S\s]*?)Z", ""]
    gsub => ["index_date", "-", "."]
}
# 3. Use the field in the output
elasticsearch {
  hosts => ["localhost:9200"]
  index => "myindex_%{index_date}"
}

Logstash on Kubernetes

logstash.conf

The ConfigMap manifest

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-conf-piple
  namespace: ops
  labels:
    app: logstash
    version: 7.5.1
data:
  logstash.conf: |
    input{
          kafka{
            bootstrap_servers => ["10.10.3.6:9092"]
            topics => ["k8s-pre-logs"]
            group_id => "k8s-pre-logs"
            auto_offset_reset => "latest"
            consumer_threads => 1
            decorate_events => true
            auto_commit_interval_ms => "5000"
            connections_max_idle_ms => "60000"
            enable_auto_commit => "true"
            fetch_max_wait_ms => "2000"
            max_partition_fetch_bytes => "10000000"
            max_poll_records => "10"
            poll_timeout_ms => "10000"
            session_timeout_ms => "60000"
            request_timeout_ms => "70000"
          }
    }

    filter {

        json{
            source => "message"
        }

        mutate {
            gsub => ["message","\s?(Z|[+-]\d+:?\d+)", "\1"]
        }

        grok {
            match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[TID:%{NOTSPACE:trace_id}\] \[%{DATA:thread_name}\] %{LOGLEVEL:log_level}\s+%{DATA:logger} - %{GREEDYDATA:message}" }
        }

        mutate {
            convert => ["upstream_response_time","float"]
            convert => ["request_time","float"]
        }

        ruby {
            code => "event.set('index_date', event.get('@timestamp').time.localtime + 8*60*60)"
        }
	  
        mutate {
            convert => ["index_date", "string"]
            gsub => ["index_date", "T([\S\s]*?)Z", ""]
            gsub => ["index_date", "-", "."]
        }

        mutate {
          remove_field => "[kubernetes][labels][pod-template-generation]"
          remove_field => "[kubernetes][labels][k8s-app]"
          remove_field => "[kubernetes][labels][version]"
          remove_field => "[kubernetes][labels][pod-template-hash]"
          remove_field => "[kubernetes][labels][field][cattle][io/podName]"
          #remove_field => "[kubernetes][pod][name]"
          remove_field => "[kubernetes][replicaset][name]"
          remove_field => "[host][name]"
          remove_field => "[beat][name]"
          remove_field => "[beat][hostname]"
          remove_field => "[beat][version]"
          remove_field => "[prospector][type]"
          remove_field => "[input][type]"
          remove_field => "[stream]"
          remove_field => "[kubernetes][labels][release]"
          remove_field => "[kubernetes][pod][uid]"
          remove_field => "[log][flags]"
          remove_field => "[log]"
          remove_field => "[source]"
          remove_field => "[kubernetes][labels][qcloud-app]"
          remove_field => "[kubernetes][labels][app]"
          remove_field => "[kubernetes][labels][date]"
          remove_field => "[kubernetes][labels][center]"
          remove_field => "[kubernetes][labels][deus][deployment][name]"
          remove_field => "[kubernetes][labels][deus][managed-by]"
          remove_field => "[kubernetes][labels][workload][user][cattle][io][workloadselector]"
          remove_field => "[log][file][path]"
        }

    }
    output {

        if [kubernetes][namespace] == "app-dev" or [kubernetes][namespace] == "ap-test"{
            elasticsearch {
            hosts => ["10.10.5.60:9200"]
            index => "%{[kubernetes][container][name]}-pre-%{index_date}"
            user => "elastic"
            password => "123456"
            }
        }
	  
        if [kubernetes][namespace] == "ingress-nginx" {
            elasticsearch {
            hosts => ["10.10.5.60:9200"]
            index => "%{[kubernetes][container][name]}-pre-%{+YYYY.MM.dd}"
            user => "elastic"
            password => "123456"
            }
        }

    }

logstash.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-yml
  namespace: ops
  labels:
    app: logstash
    version: 7.5.1
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    pipeline.workers: 8
    pipeline.batch.size: 2000
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.username: elastic
    xpack.monitoring.elasticsearch.password: 123456
    xpack.monitoring.elasticsearch.hosts: ["10.10.5.60:9200"]

jvm.options

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-jvm
  namespace: ops
  labels:
    app: logstash
    version: 7.5.1
data:
  jvm.options: |
    -Xms4g
    -Xmx8g
    -XX:+UseConcMarkSweepGC
    -XX:CMSInitiatingOccupancyFraction=75
    -XX:+UseCMSInitiatingOccupancyOnly
    -Djava.awt.headless=true
    -Dfile.encoding=UTF-8
    -Djruby.compile.invokedynamic=true
    -Djruby.jit.threshold=0
    -Djruby.regexp.interruptible=true
    -XX:+HeapDumpOnOutOfMemoryError
    -Djava.security.egd=file:/dev/urandom
    -Dlog4j2.isThreadContextMapInheritable=true

logstash-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pre-logstash
  namespace: ops
  labels:
    app: logstash
    version: 7.5.1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: logstash
      version: 7.5.1
  template:
    metadata:
      name: logstash
      labels:
        app: logstash
        version: 7.5.1
    spec:
      volumes:
      - name: logstash-conf-piple
        configMap:
          name: logstash-conf-piple
      - name: logstash-jvm
        configMap:
          name: logstash-jvm
      - name: logstash-yml
        configMap:
          name: logstash-yml
      containers:
      - name: pre-logstash
        image: elastic/logstash:7.5.1
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0
        resources:
          limits:
            cpu: 2000m
            memory: 4096Mi
          requests:
            cpu: 1000m
            memory: 2048Mi
        volumeMounts:
        - name: logstash-conf-piple
          mountPath: /usr/share/logstash/pipeline/logstash.conf
          readOnly: true
          subPath: logstash.conf
        - name: logstash-jvm
          mountPath: /usr/share/logstash/config/jvm.options
          readOnly: true
          subPath: jvm.options
        - name: logstash-yml
          mountPath: /usr/share/logstash/config/logstash.yml
          readOnly: true
          subPath: logstash.yml

Note: logstash.conf must be mounted under /usr/share/logstash/pipeline/, otherwise Logstash will not find the config file.

Kubernetes Event log
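The manifests below deploy eventrouter, which watches the cluster's Event objects (hence the RBAC rule on events) and forwards them to the Kafka topic configured in config.json, letting cluster events flow through the same Kafka pipeline as container logs.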

apiVersion: v1
kind: ServiceAccount
metadata:
  name: eventrouter 
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eventrouter 
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eventrouter 
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eventrouter
subjects:
- kind: ServiceAccount
  name: eventrouter
  namespace: kube-system
---
apiVersion: v1
data:
  config.json: |-
    {
      "sink": "kafka",
      "kafkaBrokers": "10.10.3.6:9092",
      "kafkaTopic": "K8S_dev_eventlog"
    }
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: gcr-eventrouter
    meta.helm.sh/release-namespace: kube-system
  labels:
    app.kubernetes.io/managed-by: Helm
  name: eventrouter-cm
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventrouter
  namespace: kube-system
  labels:
    app: eventrouter
spec:
  replicas: 2
  selector:
    matchLabels:
      app: eventrouter
  template:
    metadata:
      labels:
        app: eventrouter
        tier: control-plane-addons
    spec:
      containers:
        - name: kube-eventrouter
          image: fastop/eventrouter:kafka
          imagePullPolicy: IfNotPresent
          volumeMounts:
          - name: config-volume
            mountPath: /etc/eventrouter
      serviceAccount: eventrouter
      volumes:
        - name: config-volume
          configMap:
            name: eventrouter-cm
