maosheng

Deploying a node-exporter + alertmanager + prometheus + grafana monitoring stack on Kubernetes 1.16.3

Preparation

It is recommended to keep all the YAML files in the following directory:
# mkdir /script/prometheus -p && cd /script/prometheus

For NFS setup, see: Linux NFS setup and configuration (https://www.iteye.com/blog/maosheng-2517254)


I. Deploying node-exporter

To collect resource usage from every node in the cluster, we need a Node Exporter instance on each node. Unlike the Prometheus deployment, Node Exporter needs exactly one instance per node, which calls for a different Kubernetes controller: the DaemonSet. As the name suggests, a DaemonSet manages Pods much like an operating system manages daemons: it ensures that a single Pod instance runs on every node in the cluster (or on a specified subset), and it automatically adjusts when nodes are added to or removed from the cluster.

# cat >>prometheus-node-exporter.yaml<<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    name: node-exporter
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9100'
        prometheus.io/path: 'metrics'
      labels:
        name: node-exporter
        app: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.16.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9100
        resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '^/(sys|proc|dev|host|etc)($|/)'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: dev
          hostPath:
            path: /dev
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /

EOF

Note: because Node Exporter needs access to the host, we set:
      hostPID: true
      hostIPC: true
      hostNetwork: true
so that the Pod runs on the host's network and can see the host's processes.

These three settings give the container the host's PID namespace, IPC namespace, and network. Note that namespaces here refer to the kernel feature used for container isolation; they are a completely different concept from Kubernetes namespaces.

We also mount the host's /dev, /proc, and /sys directories into the container, because most of the node metrics we collect are read from these filesystems.

hostNetwork: true exposes the container's port 9100 directly on the host, so no Service is needed: port 9100 in the container maps straight to port 9100 on the host.

If the cluster was set up with kubeadm and the master node should be monitored as well, add the corresponding toleration:
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"

Create node-exporter and check the Pods:
# kubectl create -f prometheus-node-exporter.yaml
daemonset.extensions/node-exporter created

Check the DaemonSet and Pod status:
# kubectl get daemonsets -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
node-exporter             3         3         3       3            3           <none>                        3d18h

# kubectl get pod -n kube-system -o wide|grep node-exporter
node-exporter-cmjkc           1/1     Running     0          33h    192.168.29.176   k8s-node2   <none>           <none>
node-exporter-wl5lx           1/1     Running     0          27h    192.168.29.182   k8s-node3   <none>           <none>
node-exporter-xsv9z           1/1     Running     0          33h    192.168.29.175   k8s-node1   <none>           <none>

Here we can see that we have 3 nodes, and a Pod has been started on each of them to collect data.

Next we check the Pod logs as well as the metrics exposed by node-exporter.

Use kubectl logs -n <namespace> <node-exporter pod name> to check the Pod logs for errors:

# kubectl logs -n kube-system node-exporter-22vkv
time="2020-10-23T07:58:22Z" level=info msg="Starting node_exporter (version=0.16.0, branch=HEAD, revision=d42bd70f4363dced6b77d8fc311ea57b63387e4f)" source="node_exporter.go:82"
time="2020-10-23T07:58:22Z" level=info msg="Build context (go=go1.9.6, user=root@a67a9bc13a69, date=20180515-15:52:42)" source="node_exporter.go:83"
time="2020-10-23T07:58:22Z" level=info msg="Enabled collectors:" source="node_exporter.go:90"
time="2020-10-23T07:58:22Z" level=info msg=" - arp" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - bcache" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - bonding" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - conntrack" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - cpu" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - diskstats" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - edac" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - entropy" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - filefd" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - filesystem" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - hwmon" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - infiniband" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - ipvs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - loadavg" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - mdadm" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - meminfo" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - netdev" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - netstat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - nfs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - nfsd" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - sockstat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - stat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - textfile" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - time" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - timex" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - uname" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - vmstat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - wifi" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - xfs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - zfs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg="Listening on :9100" source="node_exporter.go:111"

Next, curl ip:9100/metrics on any cluster node:
# curl 127.0.0.1:9100/metrics|head
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  4 84864    4  3961    0     0  35179      0  0:00:02 --:--:--  0:00:02 35053# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
100 84864  100 84864    0     0   723k      0 --:--:-- --:--:-- --:--:--  720k
curl: (23) Failed writing body (135 != 15367)


As long as /metrics returns data, node-exporter is working correctly.

We then add the following service-discovery job to the Prometheus configuration file (prometheus.configmap.yaml):
    - job_name: 'kubernetes-node'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
 
# By setting the kubernetes_sd_config role to node, Prometheus automatically discovers all nodes in the cluster and adds them as targets for this job; the discovered /metrics endpoint is the kubelet's default HTTP port.
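The port rewrite in this relabel rule can be sanity-checked outside Prometheus. A minimal Python sketch of the same substitution (the node address below is just an example):

```python
import re

def relabel_address(address: str) -> str:
    # Mirrors the relabel rule: regex '(.*):10250', replacement '${1}:9100'.
    # Prometheus anchors relabel regexes, so fullmatch semantics apply.
    m = re.fullmatch(r"(.*):10250", address)
    return f"{m.group(1)}:9100" if m else address

print(relabel_address("192.168.29.175:10250"))  # -> 192.168.29.175:9100
print(relabel_address("192.168.29.175:9100"))   # unmatched, left as-is
```

Addresses discovered with role: node come in as kubelet endpoints on port 10250, which is why the rule rewrites exactly that port to node-exporter's 9100.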


II. Deploying Alertmanager

1. Create the ConfigMap for Prometheus alerting rules

# cat >>prometheus-rules.yaml <<EOF 
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules
  namespace: kube-system
data:
  general.rules: |
    groups:
    - name: general.rules
      rules:
      - alert: InstanceDown
        expr: up == 0
        for: 1m
        labels:
          severity: error
        annotations:
          summary: "Instance {{ $labels.instance }} down"
          description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute."
  node.rules: |
    groups:
    - name: node.rules
      rules:
      - alert: NodeFilesystemUsage
        expr: 100 - (node_filesystem_free_bytes{fstype=~"ext4|xfs"} / node_filesystem_size_bytes{fstype=~"ext4|xfs"} * 100) > 80
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "Instance {{ $labels.instance }} : {{ $labels.mountpoint }} partition usage too high"
          description: "{{ $labels.instance }}: {{ $labels.mountpoint }} partition usage is above 80% (current value: {{ $value }})"

      - alert: NodeMemoryUsage
        expr: 100 - (node_memory_MemFree_bytes+node_memory_Cached_bytes+node_memory_Buffers_bytes) / node_memory_MemTotal_bytes * 100 > 80
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "Instance {{ $labels.instance }} memory usage too high"
          description: "{{ $labels.instance }} memory usage is above 80% (current value: {{ $value }})"

      - alert: NodeCPUUsage
        expr: 100 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance) * 100) > 60
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "Instance {{ $labels.instance }} CPU usage too high"
          description: "{{ $labels.instance }} CPU usage is above 60% (current value: {{ $value }})"
EOF
 
# kubectl apply -f prometheus-rules.yaml
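The NodeFilesystemUsage and NodeMemoryUsage expressions are plain gauge arithmetic. A small Python sketch of the filesystem formula (the byte counts are invented for illustration):

```python
def filesystem_usage_percent(free_bytes: float, size_bytes: float) -> float:
    # Same formula as the alert expr:
    # 100 - (node_filesystem_free_bytes / node_filesystem_size_bytes * 100)
    return 100 - (free_bytes / size_bytes * 100)

usage = filesystem_usage_percent(free_bytes=16 * 1024**3, size_bytes=128 * 1024**3)
print(usage)       # 87.5
print(usage > 80)  # True -> with for: 1m, the alert fires after one minute
```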


2. Create the Alertmanager ConfigMap

# cat >>alertmanager-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  alertmanager.yml: |
    global:
      # How long to wait before declaring an alert resolved when it stops firing
      resolve_timeout: 5m
      # Email delivery settings
      smtp_smarthost: 'smtp.163.com:465'
      smtp_from: '1234567@163.com'
      smtp_auth_username: '1234567@163.com'
      smtp_auth_password: 'ACFDSWWXENPVHRDHTBPHC'
      smtp_hello: '163.com'
      smtp_require_tls: false
    receivers:
    - name: 'default'
      email_configs:
      - to: '45665464456@qq.com'
        send_resolved: true
    - name: 'email'
      email_configs:
      - to: '45665464456@qq.com'
        send_resolved: true
    # Root route for all incoming alerts; it defines the dispatch policy
    route:
      # Labels used to regroup incoming alerts; e.g. alerts carrying cluster=A and alertname=LatencyHigh will be aggregated into a single group
      group_by: ['alertname', 'cluster']
      # After a new alert group is created, wait at least group_wait before sending the initial notification, so that multiple alerts for the same group can be collected and fired together
      group_wait: 30s

      # After the first notification has been sent, wait group_interval before notifying about new alerts added to the group
      group_interval: 5m

      # If an alert has already been sent successfully, wait repeat_interval before resending it
      repeat_interval: 5m

      # Default receiver: alerts not matched by any sub-route are sent here
      receiver: default

      # All attributes above are inherited by sub-routes and can be overridden per sub-route
      routes:
      - receiver: email
        group_wait: 10s
        match:
          team: node  
EOF
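The routing section above boils down to label matching: an alert carrying team: node goes to the email receiver, everything else falls through to default. A rough Python sketch of that decision (a deliberate simplification of Alertmanager's real route tree):

```python
def pick_receiver(alert_labels: dict) -> str:
    # Sub-routes from the config above; the first route whose match labels
    # are all present on the alert wins, otherwise the root receiver is used.
    routes = [{"receiver": "email", "match": {"team": "node"}}]
    for route in routes:
        if all(alert_labels.get(k) == v for k, v in route["match"].items()):
            return route["receiver"]
    return "default"

print(pick_receiver({"alertname": "InstanceDown", "team": "node"}))  # email
print(pick_receiver({"alertname": "InstanceDown"}))                  # default
```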


# kubectl create -f alertmanager-configmap.yaml

# kubectl get cm -n kube-system
NAME                                 DATA   AGE
alertmanager-config                  1      19h
coredns                              1      90d
extension-apiserver-authentication   6      90d
kube-flannel-cfg                     2      90d
kube-proxy                           2      90d
kubeadm-config                       2      90d
kubelet-config-1.16                  1      90d
prometheus-config                    1      8d
prometheus-rules                     2      19h


3. Create a PV and PVC for data persistence

# cat >>alertmanager-volume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: alertmanager
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 192.101.11.156
    path: /app/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: alertmanager
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
 
# kubectl apply -f alertmanager-volume.yaml

4. Create the Alertmanager Deployment and Service

# cat >>alertmanager-deployment.yaml <<EOF  
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alertmanager
  namespace: kube-system
  labels:
    k8s-app: alertmanager
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.15.3
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: alertmanager
      version: v0.15.3
  template:
    metadata:
      labels:
        k8s-app: alertmanager
        version: v0.15.3
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
        - name: prometheus-alertmanager
          image: "prom/alertmanager:v0.15.3"
          imagePullPolicy: "IfNotPresent"
          args:
            - --config.file=/etc/config/alertmanager.yml
            - --storage.path=/data
            - --web.external-url=/
          ports:
            - containerPort: 9093
          readinessProbe:
            httpGet:
              path: /#/status
              port: 9093
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            - name: storage-volume
              mountPath: "/data"
              subPath: ""
          resources:
            limits:
              cpu: 10m
              memory: 50Mi
            requests:
              cpu: 10m
              memory: 50Mi
        - name: prometheus-alertmanager-configmap-reload
          image: "jimmidyson/configmap-reload:v0.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - --volume-dir=/etc/config
            - --webhook-url=http://localhost:9093/-/reload
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
              readOnly: true
          resources:
            limits:
              cpu: 10m
              memory: 10Mi
            requests:
              cpu: 10m
              memory: 10Mi
      volumes:
        - name: config-volume
          configMap:
            name: alertmanager-config
        - name: storage-volume
          persistentVolumeClaim:
            claimName: alertmanager
---
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Alertmanager"
spec:
  type: NodePort
  ports:
    - name: http
      port: 9093
      protocol: TCP
      targetPort: 9093
      nodePort: 30093
  selector:
    k8s-app: alertmanager
 
EOF

# kubectl apply -f alertmanager-deployment.yaml

# kubectl get pod,svc -n kube-system -o wide

We then add the Alertmanager address and the rule files to the Prometheus configuration file (prometheus.configmap.yaml), so that Prometheus can reach Alertmanager and load the alerting rules:

    alerting:
      alertmanagers:
      - static_configs:
        - targets: ['localhost:9093']

    rule_files:
      - /etc/config/rules/*.rules

III. Deploying Blackbox Exporter

To probe Ingresses and Services, we deploy a Blackbox Exporter instance in the cluster. Create blackbox-exporter.yaml with the following content:

# cat >>blackbox-exporter.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: blackbox-exporter
  name: blackbox-exporter
spec:
  ports:
  - name: blackbox
    port: 9115
    protocol: TCP
  selector:
    app: blackbox-exporter
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: blackbox-exporter
  name: blackbox-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blackbox-exporter
  template:
    metadata:
      labels:
        app: blackbox-exporter
    spec:
      containers:
      - image: prom/blackbox-exporter
        imagePullPolicy: IfNotPresent
        name: blackbox-exporter
EOF


# kubectl create -f blackbox-exporter.yaml

This deploys a single Blackbox Exporter Pod and exposes it inside the cluster through the blackbox-exporter Service at blackbox-exporter.default.svc.cluster.local; any workload in the cluster can reach the Blackbox Exporter instance via that internal DNS name.

To let Prometheus probe Services automatically, we use service discovery to find all Service objects. Add a scrape job named kubernetes-services to the Prometheus configuration:

    - job_name: 'kubernetes-services'
      metrics_path: /probe
      params:
        module: [http_2xx]
      kubernetes_sd_configs:
      - role: service
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.default.svc.cluster.local:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name


To single out the Services that need probing, we filter on the annotation prometheus.io/probe: true, keeping only Services that carry it. The Service address obtained from service discovery (__address__) is rewritten into the request parameter used to fetch probe data (__param_target); __address__ itself is then pointed at the Blackbox Exporter instance, and the instance label is rewritten to the original target.
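The net effect of this relabeling chain is that every probed Service is scraped through a Blackbox Exporter URL carrying the original address as a query parameter. A Python sketch of the resulting URL (the Service address is a made-up example):

```python
from urllib.parse import urlencode

def probe_url(service_address: str) -> str:
    # __address__ becomes the target query parameter, while __address__ itself
    # is rewritten to the Blackbox Exporter and the scrape path to /probe
    query = urlencode({"module": "http_2xx", "target": service_address})
    return f"http://blackbox-exporter.default.svc.cluster.local:9115/probe?{query}"

print(probe_url("my-svc.default.svc.cluster.local:8080"))
```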

Ingresses work much the same way; here is the Prometheus job configuration for probing Ingresses:

    - job_name: 'kubernetes-ingresses'
      metrics_path: /probe
      params:
        module: [http_2xx]
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.default.svc.cluster.local:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
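The three-way join in the Ingress job deserves a closer look: Prometheus concatenates the source_labels with ';' and applies the regex (.+);(.+);(.+) with replacement ${1}://${2}${3}. A quick Python check of that rewrite (hostname and path are invented):

```python
import re

def ingress_target(scheme: str, address: str, path: str) -> str:
    # source_labels are joined with ';' before the regex is applied
    joined = ";".join([scheme, address, path])
    return re.sub(r"^(.+);(.+);(.+)$", r"\1://\2\3", joined)

print(ingress_target("https", "shop.example.com", "/health"))  # https://shop.example.com/health
```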


IV. Deploying Prometheus

Prometheus stores its data on an NFS mount and manages its configuration with a ConfigMap. All Prometheus resources live in the kube-system namespace.

When an application is managed and deployed with a Deployment, it can easily be scaled up or down, producing multiple Pod instances. To manage the configuration of these Pods uniformly, Kubernetes provides ConfigMap resources, which containers consume through environment variables or file-system mounts. Here we use a ConfigMap for the Prometheus configuration file: create prometheus.configmap.yaml with the following content.

1. Create the Prometheus ConfigMap

# cat >> prometheus.configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 15s

    alerting:
      alertmanagers:
      - static_configs:
        - targets: ['localhost:9093']

    rule_files:
      - /etc/config/rules/*.rules

    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']

    - job_name: 'kubernetes-node'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)

    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    - job_name: 'kubernetes-kubelet'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    - job_name: kubernetes-apiservers
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - action: keep        
        regex: default;kubernetes;https 
        source_labels:
        - __meta_kubernetes_namespace
        - __meta_kubernetes_service_name
        - __meta_kubernetes_endpoint_port_name
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

    - job_name: 'kubernetes-services'
      metrics_path: /probe
      params:
        module: [http_2xx]
      kubernetes_sd_configs:
      - role: service
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.default.svc.cluster.local:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-ingresses'
      metrics_path: /probe
      params:
        module: [http_2xx]
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.default.svc.cluster.local:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name

EOF

# kubectl apply -f prometheus.configmap.yaml
configmap/prometheus-config created

# kubectl get configmaps -n kube-system |grep prometheus
prometheus-config                    1      25s
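One relabel rule in the kubernetes-service-endpoints and kubernetes-pods jobs is worth unpacking: it joins __address__ with the prometheus.io/port annotation and rewrites the port using the regex ([^:]+)(?::\d+)?;(\d+). A Python check of that rewrite (the addresses are examples):

```python
import re

def rewrite_address(address: str, port_annotation: str) -> str:
    # source_labels [__address__, port annotation] are joined with ';';
    # the optional (?::\d+)? drops any port already present on the address
    joined = f"{address};{port_annotation}"
    return re.sub(r"^([^:]+)(?::\d+)?;(\d+)$", r"\1:\2", joined)

print(rewrite_address("10.244.1.12", "8080"))       # -> 10.244.1.12:8080
print(rewrite_address("10.244.1.12:9100", "8080"))  # -> 10.244.1.12:8080
```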


2. Create a PV and PVC for data persistence

The ConfigMap holding prometheus.yml is mounted into the Pod as a volume, so when the ConfigMap is updated the file in the Pod is refreshed; after we then send a reload request, the new Prometheus configuration takes effect. In addition, to persist the time-series data, the data directory is bound to a PVC, so we need to create the PVC object first:

# cat >prometheus-volume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 192.101.11.156
    path: /app/k8s

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

Create the PV & PVC backed by a simple NFS server:
# kubectl create -f prometheus-volume.yaml
persistentvolume/prometheus created
persistentvolumeclaim/prometheus created

[
To delete the PersistentVolume and PersistentVolumeClaim, first delete the deployment that uses them:
kubectl delete -f prometheus.deployment.yaml

kubectl get PersistentVolume
kubectl get PersistentVolumeClaim -n kube-system

kubectl delete PersistentVolume prometheus
kubectl delete PersistentVolumeClaim prometheus -n kube-system
or:
kubectl delete -f prometheus-volume.yaml
]


3. Create RBAC resources

Prometheus needs to access resources inside the Kubernetes cluster, so it requires Kubernetes access authorization.

To let Prometheus access the authentication-protected Kubernetes API, we first need to grant it access. Kubernetes uses Role-Based Access Control (RBAC) to manage access to cluster resources. We define a ClusterRole with the required permissions, create a ServiceAccount for Prometheus, and bind the two together with a ClusterRoleBinding. All of these are ordinary Kubernetes resources that can be described and created with YAML. Create prometheus-rbac.yaml with the following content:

# cat >>prometheus-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  - nodes/proxy
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
EOF


Note: a ClusterRole is cluster-scoped and takes no namespace, while a ServiceAccount is a namespaced resource.

Create:
# kubectl create -f prometheus-rbac.yaml
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created

[
Inspect:
kubectl get serviceaccount -n kube-system | grep prometheus
kubectl get clusterrole | grep prometheus
kubectl get clusterrolebinding | grep prometheus
Delete:
kubectl delete clusterrolebinding prometheus
kubectl delete clusterrole prometheus
kubectl delete serviceaccount prometheus -n kube-system
]


4. Create the Prometheus Deployment

Once the ConfigMap resource exists, we can mount the Prometheus configuration file into the container as a volume. Here we deploy the Prometheus server instance with a Deployment: create prometheus.deployment.yaml with the following content:

# cat > prometheus.deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: kube-system
  labels:
    app: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus    ## with the role bound, Prometheus Pods are created under this ServiceAccount
      containers:
      - image: prom/prometheus:v2.4.3
        imagePullPolicy: IfNotPresent
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"  ## data storage path
        - "--storage.tsdb.retention=7d"   ## data retention period; 15 days is a reasonable production value
        - "--web.enable-admin-api"  ## enables the admin HTTP API, including features such as deleting time series
        - "--web.enable-lifecycle"  ## enables hot reload via curl -X POST http://localhost:9090/-/reload
        ports:
        - containerPort: 9090
          protocol: TCP
          name: http
        volumeMounts:
        - mountPath: "/prometheus"
          subPath: prometheus
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        - mountPath: /etc/config/rules
          name: prometheus-rules
          subPath: ""
        resources:
          requests:
            cpu: 100m
            memory: 512Mi
          limits:
            cpu: 100m
            memory: 512Mi
      securityContext:          ## runAsUser is set to 0 because Prometheus runs as the nobody user; without this you may hit permission problems on the volume
        runAsUser: 0
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: prometheus
      - name: config-volume
        configMap:
          name: prometheus-config
      - name: prometheus-rules
        configMap:
          name: prometheus-rules

---
apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: prometheus
  labels:
    app: prometheus
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
  - name: http
    port: 9090
    protocol: TCP
    targetPort: 9090
    nodePort: 30090
EOF


Once the ConfigMap, volume, and RBAC resources have been created, create prometheus.deployment.yaml to run the Prometheus service:

#  kubectl create -f prometheus.deployment.yaml
deployment.extensions/prometheus created

# kubectl get pod -n kube-system |grep prometheus
prometheus-847494df74-zbz9v      1/1     Running   0          148m

# 1/1 with status Running means Prometheus is up

In Pods created under the given ServiceAccount, the CA certificate for accessing the Kubernetes API and the account's access token are automatically mounted under /var/run/secrets/kubernetes.io/serviceaccount/ inside the Pod; this can be verified with:

# kubectl exec -it prometheus-847494df74-zbz9v -- ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt     namespace  token

Access the UI via any node IP plus the NodePort: http://NodeIP:30090


V. Deploying Grafana

1. Create a PV and PVC for data persistence

# cat >>grafana_volume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 192.101.11.156
    path: /app/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

# kubectl create -f grafana_volume.yaml

2. Create a permissions Job

Grafana changed its group ID starting with 5.1 (choosing a pre-5.1 image avoids this problem), and once /var/lib/grafana is mounted from the PVC the directory owner may not be the grafana user, so we also need a Job to chown the directory:

cat > grafana_job.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: grafana-chown
  namespace: kube-system
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: grafana-chown
        command: ["chown", "-R", "472:472", "/var/lib/grafana"]
        image: busybox
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: storage
          subPath: grafana
          mountPath: /var/lib/grafana
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: grafana
EOF

# kubectl create -f grafana_job.yaml


3. Create the Grafana Pod resources


# cat >>grafana_deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    k8s-app: grafana
spec:
  selector:
    matchLabels:
      k8s-app: grafana
      app: grafana
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        app: grafana
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:5.3.4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          name: grafana
        env:
        - name: GF_SECURITY_ADMIN_USER
          value: admin
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: 12345@com
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 300m
            memory: 1024Mi
          requests:
            cpu: 300m
            memory: 1024Mi
        volumeMounts:
        - mountPath: /var/lib/grafana
          subPath: grafana
          name: storage
      securityContext:
        fsGroup: 472
        runAsUser: 472
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: grafana
 
---

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30091
  selector:
    app: grafana
 
EOF

# kubectl apply -f grafana_deployment.yaml

Note: the important variables are GF_SECURITY_ADMIN_USER and GF_SECURITY_ADMIN_PASSWORD, which set the Grafana account and password.
Grafana keeps dashboards, plugins, and other data under /var/lib/grafana, so that directory needs persistence and a corresponding volume mount. Because the user ID and group ID changed in 5.3.4, a securityContext is added to set the user ID.

# kubectl get pod,svc -n kube-system |grep grafana
pod/grafana-54f6755f88-5dwl7      1/1     Running     0          27h
pod/grafana-chown-lcw2v           0/1     Completed   0          27h
service/grafana      NodePort    10.0.0.202   <none>        3000:30091/TCP   27h

Access Grafana through any node IP plus the NodePort: http://NodeIP:30091. The default credentials are admin/12345@com.


Note: set the URL of Grafana's Prometheus data source to:
http://prometheus.kube-system.svc.cluster.local:9090
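
Instead of adding the data source by hand in the UI, Grafana (5.x and later) can also provision it from a file in its provisioning directory. A minimal sketch, assuming the standard provisioning path (the file name and data source name are illustrative):

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml (illustrative path)
apiVersion: 1
datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  url: http://prometheus.kube-system.svc.cluster.local:9090
  isDefault: true
```

In a Kubernetes deployment this file would typically be delivered via a ConfigMap mounted into the Grafana container.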


Grafana defaults to the browser's time zone, while Prometheus uses UTC:
under Configuration -> Preferences, set Timezone to UTC.


6. Prometheus hot reload

Edit the configuration
# vi prometheus.configmap.yaml

Apply the updated configuration
# kubectl apply -f prometheus.configmap.yaml
configmap/prometheus-config configured

# kubectl get pod -n kube-system -o wide|grep prometheus
prometheus-65b89bf89d-695bh         1/1     Running   0          15m     10.244.1.8       hadoop009   <none>           <none>

Trigger the reload
# curl -X POST http://10.244.1.8:9090/-/reload

7. Problem: Prometheus killed by OOM

Prometheus Pods in the cluster are frequently OOMKilled:

# kubectl get pod -n kube-system -o wide
NAME                                READY   STATUS      RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
prometheus-5b97f7496b-gkg2c         0/1     CrashLoopBackOff     106          14h   10.244.2.25      hadoop007   <none>           <none>

# kubectl describe pod -n kube-system prometheus-5b97f7496b-gkg2c
Name:         prometheus-5b97f7496b-gkg2c
Namespace:    kube-system
Priority:     0
Node:         hadoop007/192.101.11.159
Start Time:   Tue, 27 Oct 2020 17:49:49 +0800
Labels:       app=prometheus
              pod-template-hash=5b97f7496b
Annotations:  <none>
Status:       Running
IP:           10.244.2.25
IPs:
  IP:           10.244.2.25
Controlled By:  ReplicaSet/prometheus-5b97f7496b
Containers:
  prometheus:
    Container ID:  docker://1236f8c07f51eeb2b6589a7505b38b6c1f68e64d7237b56cdc46ddf210a921c9
    Image:         prom/prometheus:v2.4.3
    Image ID:      docker://sha256:f92db1f1a7ce28d5bb2b473022e62fd940b23bf54835a297daf1de4fd34e029d
    Port:          9090/TCP
    Host Port:     0/TCP
    Command:
      /bin/prometheus
    Args:
      --config.file=/etc/prometheus/prometheus.yml
      --storage.tsdb.path=/prometheus
      --storage.tsdb.retention=7d
      --web.enable-admin-api
      --web.enable-lifecycle
    State:          Running
      Started:      Tue, 27 Oct 2020 17:49:50 +0800
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Tue, 27 Oct 2020 17:49:50 +0800
      Finished:     Tue, 27 Oct 2020 17:51:50 +0800
    Ready:          True
    Restart Count:  106
    Limits:
      cpu:     100m
      memory:  512Mi
    Requests:
      cpu:        100m
      memory:     512Mi
    Environment:  <none>
    Mounts:
      /etc/config/rules from prometheus-rules (rw)
      /etc/prometheus from config-volume (rw)
      /prometheus from data (rw,path="prometheus")
      /var/run/secrets/kubernetes.io/serviceaccount from prometheus-token-rvnwz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus
    ReadOnly:   false
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-config
    Optional:  false
  prometheus-rules:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-rules
    Optional:  false
  prometheus-token-rvnwz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-token-rvnwz
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:         
...............

Solution:

1. Modify prometheus.deployment.yaml as follows:

      containers:
      - name: prometheus
        image: prom/prometheus:v2.4.3
        imagePullPolicy: IfNotPresent
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus" 
        - "--storage.tsdb.retention=7d"  
        - "--web.enable-admin-api" 
        - "--web.enable-lifecycle"

Change to:

      containers:
      - name: prometheus
        image: prom/prometheus:v2.20.0
        imagePullPolicy: IfNotPresent
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=5d"
        - "--web.enable-admin-api" 
        - "--web.enable-lifecycle"
        - "--storage.tsdb.min-block-duration=1h"
        - "--storage.tsdb.max-block-duration=1h"
        - "--query.max-samples=30000000"


...............................

       resources:
          requests:
            cpu: 100m
            memory: 512Mi
          limits:
            cpu: 100m
            memory: 512Mi
Change to:
       resources:
          requests:
            cpu: 100m
            memory: 512Mi
          limits:
            cpu: 500m
            memory: 4Gi

2. Modify prometheus.configmap.yaml as follows:

    global:
      scrape_interval: 15s
      scrape_timeout: 15s

Change to:

    global:
      scrape_interval: 1m
      scrape_timeout: 1m

References:

https://www.ibm.com/support/pages/node/882172

https://www.robustperception.io/new-features-in-prometheus-2-5-0

./prometheus --help
usage: prometheus [<flags>]
      --query.max-samples=50000000 Maximum number of samples a single query can load into memory. Note that queries will fail if they would load more samples than this into memory, so this also limits the number of samples a query can return.

New Features in Prometheus 2.5.0:
The second feature is that there is now a limit on the number of samples a query can have in memory at once, making it possible to stop massive queries that take too much RAM and threaten to OOM your Prometheus.
This can be adjusted with the --query.max-samples flag. Each sample uses 16 bytes of memory, however keep in mind there's more than just active samples in memory for a query.
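
The 16-bytes-per-sample figure gives a quick upper bound on the memory a single query can hold just for samples. A small sketch of the arithmetic (a rough lower bound only; actual usage is higher, as noted above):

```python
# Rough memory held by samples for one query, at 16 bytes per sample.
# Real usage is higher: more than just active samples live in memory.
def query_sample_memory_mib(max_samples: int, bytes_per_sample: int = 16) -> float:
    return max_samples * bytes_per_sample / (1024 ** 2)

# Default --query.max-samples=50000000 -> ~762.9 MiB of samples alone.
print(round(query_sample_memory_mib(50_000_000), 1))
# The lowered --query.max-samples=30000000 used above -> ~457.8 MiB.
print(round(query_sample_memory_mib(30_000_000), 1))
```

This is why lowering --query.max-samples (together with a realistic memory limit) helps keep heavy queries from OOMing the Pod.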


8. Appendix

Docker container monitoring:

Besides its own metrics, the kubelet on each node has built-in support for cAdvisor. cAdvisor reports resource usage for every container running on the node, and its metrics are exposed at the kubelet's /metrics/cadvisor endpoint. As with the kubelet metrics, we use the node discovery role to find all kubelets automatically and apply a suitable relabel step. Modify prometheus.configmap.yaml to add the following job:

    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor


Node (kubelet) monitoring:

The kubelet runs on every node of the Kubernetes cluster and maintains the running state of the Pods on that node. A healthy kubelet directly determines whether the node can be used by the cluster.

With the node discovery role, Prometheus automatically discovers every node in the cluster as a scrape target. These targets' addresses are in fact the kubelet's addresses, and the kubelet has built-in Prometheus support. Modify prometheus.configmap.yaml to add the following job:

    - job_name: 'kubernetes-kubelet'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics


API server monitoring:

kube-apiserver is the entry point for managing the whole Kubernetes cluster and exposes the Kubernetes API. Since kube-apiserver typically runs outside the cluster, Kubernetes creates a Service named kubernetes in the default namespace so that in-cluster applications (Kubernetes add-ons or user workloads) can talk to it, as shown below:

# kubectl get svc kubernetes -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP   29d   <none>

The actual backend addresses that the kubernetes Service proxies to are maintained through Endpoints, as shown below:

#  kubectl get endpoints kubernetes
NAME         ENDPOINTS            AGE
kubernetes   192.168.122.2:6443   29d

Through this mechanism, in-cluster applications can reach the externally deployed kube-apiserver instance via the cluster-internal DNS name kubernetes.default.svc.

Therefore, to monitor kube-apiserver metrics we only need to find all backend addresses of the kubernetes Service through its Endpoints resource.

As shown below, we create a scrape job named kubernetes-apiservers using the endpoints discovery role. Prometheus enumerates all Endpoints in the cluster and uses relabeling to keep only the addresses belonging to the apiserver.

    - job_name: kubernetes-apiservers
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - action: keep        
        regex: default;kubernetes;https 
        source_labels:
        - __meta_kubernetes_namespace
        - __meta_kubernetes_service_name
        - __meta_kubernetes_endpoint_port_name
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token


Kubernetes Pod monitoring:

We use Prometheus's pod discovery role to find the Node Exporter instances deployed in the cluster. Note that not every Pod in Kubernetes exposes Prometheus metrics; some are plain user applications. To mark which Pods can be scraped by Prometheus, we added an annotation to the Node Exporter:
prometheus.io/scrape: 'true'
Since a Pod may contain multiple containers, an annotation also specifies the port serving the metrics:
prometheus.io/port: '9100'
And because some containers do not expose metrics at the default /metrics path, the scrape path can be specified as well:
prometheus.io/path: 'metrics'

See the annotation configuration of these three items in prometheus-node-exporter.yaml.
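
Put together, a Pod that should be scraped carries the three annotations in its template metadata. A minimal sketch (only the annotations matter; the values shown match the Node Exporter above):

```yaml
# Illustrative Pod template fragment
metadata:
  annotations:
    prometheus.io/scrape: 'true'   # opt this Pod in to scraping
    prometheus.io/port: '9100'     # container port serving metrics
    prometheus.io/path: 'metrics'  # scrape path, if not the default /metrics
```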

Create a scrape job named kubernetes-pods for Prometheus, as shown below:

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

The relabel steps above filter Pod instances and rewrite the scrape address, so that metrics are collected only from the intended Pods. Note that kubernetes-pods is not specific to Node Exporter: any user-deployed Pod can opt in to metrics collection simply by adding these annotations, as long as it exposes Prometheus-compatible metrics.
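
The address rewrite in the job above can be checked outside Prometheus: relabeling joins the source labels with ';', matches the regex against the result, and substitutes the replacement. A sketch in Python (Prometheus uses RE2, but this particular pattern behaves the same under Python's re):

```python
import re

# Same pattern/replacement as the kubernetes-pods __address__ relabel rule.
pattern = re.compile(r'([^:]+)(?::\d+)?;(\d+)')

def rewrite_address(address: str, annotation_port: str) -> str:
    # Prometheus concatenates source_labels with ';' before matching.
    joined = f"{address};{annotation_port}"
    return pattern.sub(r'\1:\2', joined)

# The discovered address may or may not already carry a port;
# either way the annotated prometheus.io/port wins.
print(rewrite_address('10.244.1.8:9100', '9100'))  # 10.244.1.8:9100
print(rewrite_address('10.244.1.8', '9100'))       # 10.244.1.8:9100
```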

Service monitoring:

The apiserver is really just a special kind of Service; now we configure a job that discovers ordinary Services.
Here we filter Services, keeping only those annotated with prometheus.io/scrape: "true".

Notes on the Service auto-discovery parameters (not every Service you create can be discovered by Prometheus):
#1. Parameter explanation
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
  action: keep
  regex: true  # keep only matching targets

This rule keeps only Services that carry the __meta_kubernetes_service_annotation_prometheus_io_scrape meta label; only Services that declare this annotation can be auto-discovered.

#2. Parameter explanation
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
# Specify the scrape port explicitly: a Service may expose several ports (like the earlier redis example). By default the port the Service was created with is used.

#3. Parameter explanation
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
# If the scheme is https, a certificate and token must also be configured.

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
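
For the kubernetes-service-endpoints job above to pick a Service up, the Service itself must carry the annotations. A minimal sketch (the Service name and port are illustrative):

```yaml
# Illustrative Service; discovery keys off the annotations.
apiVersion: v1
kind: Service
metadata:
  name: my-app                      # hypothetical Service name
  namespace: default
  annotations:
    prometheus.io/scrape: 'true'    # required for auto-discovery
    prometheus.io/port: '8080'      # metrics port, if not the default
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080
```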














