Preparation
It is recommended to keep all of the YAML files in a directory such as:
# mkdir /script/prometheus -p && cd /script/prometheus
For setting up NFS, see: Linux NFS Setup and Configuration (https://www.iteye.com/blog/maosheng-2517254)
I. Deploying node-exporter
To collect resource usage from every node in the cluster, a Node Exporter instance has to run on each node. Unlike the Prometheus deployment itself, Node Exporter needs exactly one instance per node, which is what the Kubernetes DaemonSet controller is for. As the name suggests, a DaemonSet is managed much like an operating-system daemon: it ensures that exactly one Pod runs on every node in the cluster (or on a specified subset of nodes), and it automatically adds or removes these Pods as nodes join or leave the cluster.
# cat >>prometheus-node-exporter.yaml<<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: node-exporter
namespace: kube-system
labels:
name: node-exporter
k8s-app: node-exporter
spec:
selector:
matchLabels:
name: node-exporter
template:
metadata:
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9100'
prometheus.io/path: 'metrics'
labels:
name: node-exporter
app: node-exporter
spec:
hostPID: true
hostIPC: true
hostNetwork: true
containers:
- name: node-exporter
image: prom/node-exporter:v0.16.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9100
resources:
requests:
cpu: 0.15
securityContext:
privileged: true
args:
- --path.procfs
- /host/proc
- --path.sysfs
- /host/sys
- --collector.filesystem.ignored-mount-points
- '"^/(sys|proc|dev|host|etc)($|/)"'
volumeMounts:
- name: dev
mountPath: /host/dev
- name: proc
mountPath: /host/proc
- name: sys
mountPath: /host/sys
- name: rootfs
mountPath: /rootfs
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
volumes:
- name: proc
hostPath:
path: /proc
- name: dev
hostPath:
path: /dev
- name: sys
hostPath:
path: /sys
- name: rootfs
hostPath:
path: /
EOF
Note: because Node Exporter needs access to the host itself, the manifest sets:
hostPID: true
hostIPC: true
hostNetwork: true
so that the Pod runs with the host's network and can see the host's processes.
These three settings expose the host's PID namespace, IPC namespace, and network to the Pod. Keep in mind that namespaces here are the kernel isolation mechanism used by containers; they are a completely different concept from Kubernetes namespaces in the cluster.
We also mount the host's /dev, /proc, and /sys directories into the container, because much of the node-level data we collect is read from these filesystems.
hostNetwork: true exposes the host's port 9100 directly, so no Service is needed: the container's port 9100 maps straight onto port 9100 of the host.
If the cluster was built with kubeadm and the master node should be monitored as well, add the corresponding toleration:
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
Create the node-exporter DaemonSet and check the Pods
# kubectl create -f prometheus-node-exporter.yaml
daemonset.extensions/node-exporter created
Check the status of the DaemonSet and its Pods
# kubectl get daemonsets -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
node-exporter 3 3 3 3 3 <none> 3d18h
# kubectl get pod -n kube-system -o wide|grep node-exporter
node-exporter-cmjkc 1/1 Running 0 33h 192.168.29.176 k8s-node2 <none> <none>
node-exporter-wl5lx 1/1 Running 0 27h 192.168.29.182 k8s-node3 <none> <none>
node-exporter-xsv9z 1/1 Running 0 33h 192.168.29.175 k8s-node1 <none> <none>
As shown above, the cluster has 3 nodes and a Pod has been started on each of them to collect data.
Next, check the Pod logs and the metrics exposed by node-exporter.
Use kubectl logs -n <namespace> <node-exporter Pod name> to check whether the Pod logs contain any errors:
# kubectl logs -n kube-system node-exporter-22vkv
time="2020-10-23T07:58:22Z" level=info msg="Starting node_exporter (version=0.16.0, branch=HEAD, revision=d42bd70f4363dced6b77d8fc311ea57b63387e4f)" source="node_exporter.go:82"
time="2020-10-23T07:58:22Z" level=info msg="Build context (go=go1.9.6, user=root@a67a9bc13a69, date=20180515-15:52:42)" source="node_exporter.go:83"
time="2020-10-23T07:58:22Z" level=info msg="Enabled collectors:" source="node_exporter.go:90"
time="2020-10-23T07:58:22Z" level=info msg=" - arp" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - bcache" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - bonding" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - conntrack" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - cpu" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - diskstats" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - edac" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - entropy" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - filefd" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - filesystem" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - hwmon" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - infiniband" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - ipvs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - loadavg" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - mdadm" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - meminfo" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - netdev" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - netstat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - nfs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - nfsd" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - sockstat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - stat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - textfile" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - time" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - timex" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - uname" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - vmstat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - wifi" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - xfs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - zfs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg="Listening on :9100" source="node_exporter.go:111"
Next, on any cluster node, curl ip:9100/metrics:
# curl 127.0.0.1:9100/metrics|head
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
4 84864 4 3961 0 0 35179 0 0:00:02 --:--:-- 0:00:02 35053# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
100 84864 100 84864 0 0 723k 0 --:--:-- --:--:-- --:--:-- 720k
curl: (23) Failed writing body (135 != 15367)
As long as the metrics endpoint returns data, node-exporter is working.
Next, add the following job to the Prometheus configuration file (prometheus.configmap.yaml), using Kubernetes service discovery:
- job_name: 'kubernetes-node'
kubernetes_sd_configs:
- role: node
relabel_configs:
- source_labels: [__address__]
regex: '(.*):10250'
replacement: '${1}:9100'
target_label: __address__
action: replace
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
#By setting the kubernetes_sd_config role to node, Prometheus automatically discovers every node in the Kubernetes cluster and adds it as a target of this job; the /metrics endpoint discovered is the kubelet's default HTTP interface.
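To verify that the new job is actually picking up the nodes after Prometheus reloads its configuration, the Prometheus HTTP API can be queried directly (a minimal sketch; the Pod IP and port 9090 are placeholders for your Prometheus instance):
# curl -s http://<Prometheus-Pod-IP>:9090/api/v1/targets | grep kubernetes-node
# curl -s -g 'http://<Prometheus-Pod-IP>:9090/api/v1/query?query=up{job="kubernetes-node"}'
Every discovered node should appear with up == 1.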
II. Deploying alertmanager
1. Create the ConfigMap holding the Prometheus alerting rules
# cat >>prometheus-rules.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-rules
namespace: kube-system
data:
general.rules: |
groups:
- name: general.rules
rules:
- alert: InstanceDown
expr: up == 0
for: 1m
labels:
severity: error
annotations:
summary: "Instance {{ $labels.instance }} 停止工作"
description: "{{ $labels.instance }} job {{ $labels.job }} 已经停止5分钟以上."
node.rules: |
groups:
- name: node.rules
rules:
- alert: NodeFilesystemUsage
expr: 100 - (node_filesystem_free_bytes{fstype=~"ext4|xfs"} / node_filesystem_size_bytes{fstype=~"ext4|xfs"} * 100) > 80
for: 1m
labels:
severity: warning
annotations:
summary: "Instance {{ $labels.instance }} : {{ $labels.mountpoint }} 分区使用率过高"
description: "{{ $labels.instance }}: {{ $labels.mountpoint }} 分区使用大于80% (当前值: {{ $value }})"
- alert: NodeMemoryUsage
expr: 100 - (node_memory_MemFree_bytes+node_memory_Cached_bytes+node_memory_Buffers_bytes) / node_memory_MemTotal_bytes * 100 > 80
for: 1m
labels:
severity: warning
annotations:
summary: "Instance {{ $labels.instance }} 内存使用率过高"
description: "{{ $labels.instance }}内存使用大于80% (当前值: {{ $value }})"
- alert: NodeCPUUsage
expr: 100 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance) * 100) > 60
for: 1m
labels:
severity: warning
annotations:
summary: "Instance {{ $labels.instance }} CPU使用率过高"
description: "{{ $labels.instance }}CPU使用大于60% (当前值: {{ $value }})"
EOF
# kubectl apply -f prometheus-rules.yaml
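If the promtool binary is available locally, the PromQL in these rules can be sanity-checked before the ConfigMap is applied (a sketch, assuming the two rule groups above have first been saved out as plain general.rules and node.rules files):
# promtool check rules general.rules node.rules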
2. Create the AlertManager ConfigMap
# cat >>alertmanager-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: alertmanager-config
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: EnsureExists
data:
alertmanager.yml: |
global:
# how long to wait before marking an alert as resolved when it is no longer firing
resolve_timeout: 5m
# e-mail (SMTP) sending settings
smtp_smarthost: 'smtp.163.com:465'
smtp_from: '1234567@163.com'
smtp_auth_username: '1234567@163.com'
smtp_auth_password: 'ACFDSWWXENPVHRDHTBPHC'
smtp_hello: '163.com'
smtp_require_tls: false
receivers:
- name: 'default'
email_configs:
- to: '45665464456@qq.com'
send_resolved: true
- name: 'email'
email_configs:
- to: '45665464456@qq.com'
send_resolved: true
# the root route that every alert enters; it defines how alerts are dispatched
route:
# labels used to regroup incoming alerts; for example, alerts that share cluster=A and alertname=LatencyHigh will be aggregated into one group
group_by: ['alertname', 'cluster']
# after a new alert group is created, wait at least group_wait before the first notification, so that several alerts for the same group can be collected and fired together
group_wait: 30s
# after the first notification has been sent, wait group_interval before sending notifications for new alerts added to this group
group_interval: 5m
# if an alert has already been sent successfully, wait repeat_interval before re-sending it
repeat_interval: 5m
# the default receiver: any alert not matched by a route is sent to this receiver
receiver: default
# all attributes above are inherited by sub-routes and can be overridden per sub-route
routes:
- receiver: email
group_wait: 10s
match:
team: node
EOF
# kubectl create -f alertmanager-configmap.yaml
# kubectl get cm -n kube-system
NAME DATA AGE
alertmanager-config 1 19h
coredns 1 90d
extension-apiserver-authentication 6 90d
kube-flannel-cfg 2 90d
kube-proxy 2 90d
kubeadm-config 2 90d
kubelet-config-1.16 1 90d
prometheus-config 1 8d
prometheus-rules 2 19h
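Optionally, if the amtool binary is available, the embedded alertmanager.yml can be validated before the Pod mounts it (a sketch, assuming the data block above has been saved to a local alertmanager.yml file first):
# amtool check-config alertmanager.yml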
3. Create a PV and PVC for data persistence
cat >>alertmanager-volume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
name: alertmanager
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
nfs:
server: 192.101.11.156
path: /app/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: alertmanager
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: EnsureExists
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
EOF
# kubectl apply -f alertmanager-volume.yaml
4. Create the AlertManager Pod resources
# cat >>alertmanager-deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: alertmanager
namespace: kube-system
labels:
k8s-app: alertmanager
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
version: v0.15.3
spec:
replicas: 1
selector:
matchLabels:
k8s-app: alertmanager
version: v0.15.3
template:
metadata:
labels:
k8s-app: alertmanager
version: v0.15.3
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
priorityClassName: system-cluster-critical
containers:
- name: prometheus-alertmanager
image: "prom/alertmanager:v0.15.3"
imagePullPolicy: "IfNotPresent"
args:
- --config.file=/etc/config/alertmanager.yml
- --storage.path=/data
- --web.external-url=/
ports:
- containerPort: 9093
readinessProbe:
httpGet:
path: /#/status
port: 9093
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- name: config-volume
mountPath: /etc/config
- name: storage-volume
mountPath: "/data"
subPath: ""
resources:
limits:
cpu: 10m
memory: 50Mi
requests:
cpu: 10m
memory: 50Mi
- name: prometheus-alertmanager-configmap-reload
image: "jimmidyson/configmap-reload:v0.1"
imagePullPolicy: "IfNotPresent"
args:
- --volume-dir=/etc/config
- --webhook-url=http://localhost:9093/-/reload
volumeMounts:
- name: config-volume
mountPath: /etc/config
readOnly: true
resources:
limits:
cpu: 10m
memory: 10Mi
requests:
cpu: 10m
memory: 10Mi
volumes:
- name: config-volume
configMap:
name: alertmanager-config
- name: storage-volume
persistentVolumeClaim:
claimName: alertmanager
---
apiVersion: v1
kind: Service
metadata:
name: alertmanager
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "Alertmanager"
spec:
ports:
- name: http
port: 9093
protocol: TCP
targetPort: 9093
nodePort: 30093
selector:
k8s-app: alertmanager
type: NodePort
EOF
# kubectl apply -f alertmanager-deployment.yaml
# kubectl get pod,svc -n kube-system -o wide
Next, in the Prometheus configuration file (prometheus.configmap.yaml), add the AlertManager address so that Prometheus can reach AlertManager, and point it at the alerting rules:
alerting:
alertmanagers:
- static_configs:
- targets: ['localhost:9093']
rule_files:
- /etc/config/rules/*.rules
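Once the Deployment and Service are up, Alertmanager's built-in health endpoint can be reached through the NodePort to confirm it is running (a quick sketch; substitute any real node IP):
# curl http://<NodeIP>:30093/-/healthy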
III. Deploying blackbox exporter
To probe Ingresses and Services, a Blackbox Exporter instance has to be deployed in the cluster. Create blackbox-exporter.yaml as shown below to describe the deployment:
# cat >>blackbox-exporter.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
labels:
app: blackbox-exporter
name: blackbox-exporter
spec:
ports:
- name: blackbox
port: 9115
protocol: TCP
selector:
app: blackbox-exporter
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: blackbox-exporter
name: blackbox-exporter
spec:
replicas: 1
selector:
matchLabels:
app: blackbox-exporter
template:
metadata:
labels:
app: blackbox-exporter
spec:
containers:
- image: prom/blackbox-exporter
imagePullPolicy: IfNotPresent
name: blackbox-exporter
EOF
# kubectl create -f blackbox-exporter.yaml
The kubectl command above deploys a single Blackbox Exporter Pod and, through the blackbox-exporter Service, exposes it inside the cluster at blackbox-exporter.default.svc.cluster.local; any workload in the cluster can reach the Blackbox Exporter instance via that internal DNS name.
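The exporter's /probe endpoint can be exercised directly from inside the cluster to confirm it is answering (a sketch; http://example.com is just a placeholder probe target):
# curl "http://blackbox-exporter.default.svc.cluster.local:9115/probe?module=http_2xx&target=http://example.com"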
To let Prometheus probe Services automatically, we use service discovery to find all Service objects. Add a scrape job named kubernetes-services to the Prometheus configuration file, as shown below:
- job_name: 'kubernetes-services'
metrics_path: /probe
params:
module: [http_2xx]
kubernetes_sd_configs:
- role: service
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__]
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.default.svc.cluster.local:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
To distinguish which Service instances in the cluster should be probed, the job filters on the annotation prometheus.io/probe: "true": only Services carrying this annotation are kept, as sketched below.
The Service address obtained from service discovery (__address__) is copied into the probe request parameter __param_target, __address__ itself is rewritten to point at the Blackbox Exporter instance, and the instance label is rewritten to the original target.
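For reference, a Service opts in to probing simply by carrying that annotation in its metadata; a minimal sketch (the name my-web and port 80 are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-web
  annotations:
    prometheus.io/probe: "true"
spec:
  selector:
    app: my-web
  ports:
  - port: 80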
Probing Ingresses is a very similar process; the corresponding Prometheus job configuration is:
- job_name: 'kubernetes-ingresses'
metrics_path: /probe
params:
module: [http_2xx]
kubernetes_sd_configs:
- role: ingress
relabel_configs:
- source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
regex: (.+);(.+);(.+)
replacement: ${1}://${2}${3}
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.default.svc.cluster.local:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_ingress_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_ingress_name]
target_label: kubernetes_name
IV. Deploying prometheus
Prometheus stores its data on an NFS-backed volume and keeps its configuration in a ConfigMap; all of the Prometheus resources live in the kube-system namespace.
When an application is managed with a Deployment, it can easily be scaled up or down, producing multiple Pod instances. To manage the configuration of these Pods in one place, Kubernetes provides the ConfigMap resource, whose contents can be consumed by containers through environment variables or file-system mounts. Here we use a ConfigMap to manage the Prometheus configuration file: create prometheus.configmap.yaml with the following content.
1. Create the Prometheus ConfigMap
# cat >> prometheus.configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: kube-system
data:
prometheus.yml: |
global:
scrape_interval: 15s
scrape_timeout: 15s
alerting:
alertmanagers:
- static_configs:
- targets: ['localhost:9093']
rule_files:
- /etc/config/rules/*.rules
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'kubernetes-node'
kubernetes_sd_configs:
- role: node
relabel_configs:
- source_labels: [__address__]
regex: '(.*):10250'
replacement: '${1}:9100'
target_label: __address__
action: replace
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- job_name: 'kubernetes-cadvisor'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
- job_name: 'kubernetes-kubelet'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
- job_name: kubernetes-apiservers
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: default;kubernetes;https
source_labels:
- __meta_kubernetes_namespace
- __meta_kubernetes_service_name
- __meta_kubernetes_endpoint_port_name
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- job_name: 'kubernetes-services'
metrics_path: /probe
params:
module: [http_2xx]
kubernetes_sd_configs:
- role: service
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__]
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.default.svc.cluster.local:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
- job_name: 'kubernetes-ingresses'
metrics_path: /probe
params:
module: [http_2xx]
kubernetes_sd_configs:
- role: ingress
relabel_configs:
- source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
regex: (.+);(.+);(.+)
replacement: ${1}://${2}${3}
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.default.svc.cluster.local:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_ingress_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_ingress_name]
target_label: kubernetes_name
EOF
# kubectl apply -f prometheus.configmap.yaml
configmap/prometheus-config created
# kubectl get configmaps -n kube-system |grep prometheus
prometheus-config 1 25s
2. Create a PV and PVC for data persistence
The ConfigMap that holds prometheus.yml is mounted into the Pod as a volume, so when the ConfigMap is updated the mounted file is refreshed as well; after that we only need to send a reload request and the new Prometheus configuration takes effect. In addition, to persist the time-series data, the data directory is bound to a PVC, so the PVC object has to be created in advance:
# cat >prometheus-volume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
name: prometheus
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
nfs:
server: 192.101.11.156
path: /app/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: prometheus
namespace: kube-system
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
EOF
Create the PV and PVC, using a simple NFS share as the storage backend:
# kubectl create -f prometheus-volume.yaml
persistentvolume/prometheus created
persistentvolumeclaim/prometheus created
[
To delete the PersistentVolume and PersistentVolumeClaim, first delete the Deployment that uses them:
kubectl delete -f prometheus.deployment.yaml
kubectl get PersistentVolume
kubectl get PersistentVolumeClaim -n kube-system
kubectl delete PersistentVolume prometheus
kubectl delete PersistentVolumeClaim prometheus -n kube-system
or simply:
kubectl delete -f prometheus-volume.yaml
]
3. Create the RBAC objects
Prometheus needs to access resources inside the Kubernetes cluster, so it requires Kubernetes access authorization.
To let Prometheus access the authentication-protected Kubernetes API, we first have to grant it access. Kubernetes uses Role-Based Access Control (RBAC) to manage access to cluster resources: we define a ClusterRole with the required permissions, create the ServiceAccount that Prometheus will run as, and then bind the two together with a ClusterRoleBinding. All of these are ordinary Kubernetes resources that can be described in YAML; create prometheus-rbac.yaml with the following content:
# cat >>prometheus-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups:
- ""
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
- nodes/metrics
verbs:
- get
- nonResourceURLs:
- /metrics
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: kube-system
EOF
Note: a ClusterRole is cluster-wide and does not take a namespace, whereas a ServiceAccount is a namespaced resource.
Create the resources:
# kubectl create -f prometheus-rbac.yaml
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
[
Inspect:
kubectl get serviceaccount -n kube-system | grep prometheus
kubectl get clusterrole | grep prometheus
kubectl get clusterrolebinding | grep prometheus
Delete:
kubectl delete clusterrolebinding prometheus
kubectl delete clusterrole prometheus
kubectl delete serviceaccount prometheus -n kube-system
]
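To double-check that the binding really grants the permissions Prometheus needs, kubectl can impersonate the ServiceAccount (a sketch):
# kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:prometheus
# kubectl auth can-i get configmaps -n kube-system --as=system:serviceaccount:kube-system:prometheus
Both commands should answer yes.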
4. Create the Prometheus Pod resources
Once the ConfigMap has been created, the Prometheus configuration file can be mounted into the container as a volume. Here we deploy the Prometheus server with a Deployment: create prometheus.deployment.yaml with the following content:
# cat > prometheus.deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus
namespace: kube-system
labels:
app: prometheus
spec:
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
serviceAccountName: prometheus ##once the role, its permissions and the account have been bound, Prometheus can be told to create its Pod under this specific ServiceAccount
containers:
- image: prom/prometheus:v2.4.3
imagePullPolicy: IfNotPresent
name: prometheus
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus" ##数据存储路径
- "--storage.tsdb.retention=7d" ##数据保留期限的设置,企业中设置15天为宜
- "--web.enable-admin-api" # 控制对admin HTTP API的访问,其中包括删除时间序列等功能
- "--web.enable-lifecycle" # 支持热更新,直接执行 curl -X POST
curl -X POST http://localhost:9090/-/reload 立即生效
ports:
- containerPort: 9090
protocol: TCP
name: http
volumeMounts:
- mountPath: "/prometheus"
subPath: prometheus
name: data
- mountPath: "/etc/prometheus"
name: config-volume
- mountPath: /etc/config/rules
name: prometheus-rules
subPath: ""
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 100m
memory: 512Mi
securityContext: ##a securityContext with runAsUser set to 0 is added because Prometheus runs as the nobody user, and without it permission problems can occur
runAsUser: 0
volumes:
- name: data
persistentVolumeClaim:
claimName: prometheus
- name: config-volume
configMap:
name: prometheus-config
- name: prometheus-rules
configMap:
name: prometheus-rules
---
apiVersion: v1
kind: Service
metadata:
namespace: kube-system
name: prometheus
labels:
app: prometheus
spec:
type: NodePort
selector:
app: prometheus
ports:
- name: http
port: 9090
protocol: TCP
targetPort: 9090
nodePort: 30090
EOF
With the ConfigMap, volume, and RBAC objects in place, prometheus.deployment.yaml can be created to run the Prometheus service:
# kubectl create -f prometheus.deployment.yaml
deployment.extensions/prometheus created
# kubectl get pod -n kube-system |grep prometheus
prometheus-847494df74-zbz9v 1/1 Running 0 148m
#the Pod is ready once it shows 1/1 with status Running
In a Pod created under a specific ServiceAccount, the CA certificate used to access the Kubernetes API and the access token of that account are automatically mounted under /var/run/secrets/kubernetes.io/serviceaccount/ inside the Pod, which can be verified with:
# kubectl exec -it prometheus-847494df74-zbz9v -n kube-system -- ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace token
The web UI can be reached on any node's IP at the NodePort: http://NodeIP:30090
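Prometheus also exposes simple health and readiness endpoints, which make for a quick command-line check (substitute a real node IP):
# curl http://<NodeIP>:30090/-/healthy
# curl http://<NodeIP>:30090/-/ready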
V. Deploying grafana
1. Create a PV and PVC for data persistence
# cat >>grafana_volume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
name: grafana
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
nfs:
server: 192.101.11.156
path: /app/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: grafana
namespace: kube-system
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
EOF
# kubectl create -f grafana_volume.yaml
2. Create a one-off chown Job
Starting with Grafana 5.1 the image's group id changed (choosing a pre-5.1 image avoids the issue), and once /var/lib/grafana is mounted from the PVC its owner may not be the grafana user, so we add a Job that fixes the ownership of the directory:
cat > grafana_job.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
name: grafana-chown
namespace: kube-system
spec:
template:
spec:
restartPolicy: Never
containers:
- name: grafana-chown
command: ["chown", "-R", "472:472", "/var/lib/grafana"]
image: busybox
imagePullPolicy: IfNotPresent
volumeMounts:
- name: storage
subPath: grafana
mountPath: /var/lib/grafana
volumes:
- name: storage
persistentVolumeClaim:
claimName: grafana
EOF
# kubectl create -f grafana_job.yaml
3. Create the Grafana Pod resources
# cat >>grafana_deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana
namespace: kube-system
labels:
app: grafana
k8s-app: grafana
spec:
selector:
matchLabels:
k8s-app: grafana
app: grafana
revisionHistoryLimit: 10
template:
metadata:
labels:
app: grafana
k8s-app: grafana
spec:
containers:
- name: grafana
image: grafana/grafana:5.3.4
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
name: grafana
env:
- name: GF_SECURITY_ADMIN_USER
value: admin
- name: GF_SECURITY_ADMIN_PASSWORD
value: 12345@com
readinessProbe:
failureThreshold: 10
httpGet:
path: /api/health
port: 3000
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
livenessProbe:
failureThreshold: 3
httpGet:
path: /api/health
port: 3000
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: 300m
memory: 1024Mi
requests:
cpu: 300m
memory: 1024Mi
volumeMounts:
- mountPath: /var/lib/grafana
subPath: grafana
name: storage
securityContext:
fsGroup: 472
runAsUser: 472
volumes:
- name: storage
persistentVolumeClaim:
claimName: grafana
---
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: kube-system
labels:
app: grafana
spec:
type: NodePort
ports:
- port: 3000
targetPort: 3000
nodePort: 30091
selector:
app: grafana
EOF
# kubectl apply -f grafana_deployment.yaml
Note: the important environment variables are GF_SECURITY_ADMIN_USER and GF_SECURITY_ADMIN_PASSWORD, which set the Grafana admin account and password.
Grafana keeps its dashboards, plugins and other data under /var/lib/grafana, so that directory is persisted and declared as a volume mount. Because the user id and group id changed in version 5.3.4, a securityContext is added here to set the user and group IDs.
# kubectl get pod,svc -n kube-system |grep grafana
pod/grafana-54f6755f88-5dwl7 1/1 Running 0 27h
pod/grafana-chown-lcw2v 0/1 Completed 0 27h
service/grafana NodePort 10.0.0.202 <none> 3000:9006/TCP 27h
The UI can be reached on any node's IP at the NodePort, http://NodeIP:30091; the default account/password is admin/12345@com.
Note: when adding the Prometheus data source in Grafana, set the URL to:
http://prometheus.kube-system.svc.cluster.local:9090
Grafana defaults to the browser's time zone, while Prometheus uses UTC, so set:
Configuration -> Preferences -> Timezone to UTC.
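As an alternative to adding the data source by hand in the UI, Grafana 5.x can also provision it from a file placed under /etc/grafana/provisioning/datasources; a minimal sketch of such a provisioning file (not part of the manifests above, shown only as an option):
apiVersion: 1
datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  url: http://prometheus.kube-system.svc.cluster.local:9090
  isDefault: true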
VI. Hot-reloading the Prometheus configuration
Edit the configuration:
# vi prometheus.configmap.yaml
Apply the updated configuration:
# kubectl apply -f prometheus.configmap.yaml
configmap/prometheus-config configured
# kubectl get pod -n kube-system -o wide|grep prometheus
prometheus-65b89bf89d-695bh 1/1 Running 0 15m 10.244.1.8 hadoop009 <none> <none>
Trigger the hot reload:
# curl -X POST http://10.244.1.8:9090/-/reload
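Whether the new configuration has actually been loaded can be confirmed through the Prometheus status API (a quick sketch against the same Pod IP):
# curl -s http://10.244.1.8:9090/api/v1/status/config | head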
VII. Problem: Prometheus is OOM-killed
The Prometheus Pod in the cluster keeps getting OOMKilled:
# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
prometheus-5b97f7496b-gkg2c 0/1 CrashLoopBackOff 106 14h 10.244.2.25 hadoop007 <none> <none>
# kubectl describe pod -n kube-system prometheus-5b97f7496b-gkg2c
Name: prometheus-5b97f7496b-gkg2c
Namespace: kube-system
Priority: 0
Node: hadoop007/192.101.11.159
Start Time: Tue, 27 Oct 2020 17:49:49 +0800
Labels: app=prometheus
pod-template-hash=5b97f7496b
Annotations: <none>
Status: Running
IP: 10.244.2.25
IPs:
IP: 10.244.2.25
Controlled By: ReplicaSet/prometheus-5b97f7496b
Containers:
prometheus:
Container ID: docker://1236f8c07f51eeb2b6589a7505b38b6c1f68e64d7237b56cdc46ddf210a921c9
Image: prom/prometheus:v2.4.3
Image ID: docker://sha256:f92db1f1a7ce28d5bb2b473022e62fd940b23bf54835a297daf1de4fd34e029d
Port: 9090/TCP
Host Port: 0/TCP
Command:
/bin/prometheus
Args:
--config.file=/etc/prometheus/prometheus.yml
--storage.tsdb.path=/prometheus
--storage.tsdb.retention=7d
--web.enable-admin-api
--web.enable-lifecycle
State: Running
Started: Tue, 27 Oct 2020 17:49:50 +0800
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Tue, 27 Oct 2020 17:49:50 +0800
Finished: Tue, 27 Oct 2020 17:51:50 +0800
Ready: True
Restart Count: 106
Limits:
cpu: 100m
memory: 512Mi
Requests:
cpu: 100m
memory: 512Mi
Environment: <none>
Mounts:
/etc/config/rules from prometheus-rules (rw)
/etc/prometheus from config-volume (rw)
/prometheus from data (rw,path="prometheus")
/var/run/secrets/kubernetes.io/serviceaccount from prometheus-token-rvnwz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: prometheus
ReadOnly: false
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-config
Optional: false
prometheus-rules:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-rules
Optional: false
prometheus-token-rvnwz:
Type: Secret (a volume populated by a Secret)
SecretName: prometheus-token-rvnwz
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
...............
Solution:
1. In prometheus.deployment.yaml, change the following:
containers:
- image: prom/prometheus:v2.4.3
imagePullPolicy: IfNotPresent
name: prometheus
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention=7d"
- "--web.enable-admin-api"
- "--web.enable-lifecycle"
Change it to:
containers:
- name: prometheus
image: prom/prometheus:v2.20.0
imagePullPolicy: IfNotPresent
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention=5d"
- "--web.enable-admin-api"
- "--web.enable-lifecycle"
- "--storage.tsdb.min-block-duration=1h"
- "--storage.tsdb.max-block-duration=1h"
- "--query.max-samples=30000000"
...............................
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 100m
memory: 512Mi
Change it to:
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 500m
memory: 4Gi
2. In prometheus.configmap.yaml, change the following:
global:
scrape_interval: 15s
scrape_timeout: 15s
Change it to:
global:
scrape_interval: 1m
scrape_timeout: 1m
References:
https://www.ibm.com/support/pages/node/882172
https://www.robustperception.io/new-features-in-prometheus-2-5-0
./prometheus --help
usage: prometheus [<flags>]
--query.max-samples=50000000 Maximum number of samples a single query can load into memory. Note that queries will fail if they would load more samples than this into memory, so this also limits the number of samples a query can return.
New Features in Prometheus 2.5.0:
The second feature is that there is now a limit on the number of samples a query can have in memory at once, making it possible to stop massive queries that take too much RAM and threaten to OOM your Prometheus.
This can be adjusted with the --query.max-samples flag. Each sample uses 16 bytes of memory, however keep in mind there's more than just active samples in memory for a query.
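As a rough worked example based on the 16-bytes-per-sample figure quoted above, the limit translates into an upper bound on per-query sample memory:
50,000,000 samples x 16 bytes = 800 MB (the default --query.max-samples)
30,000,000 samples x 16 bytes = 480 MB (the value used in the fix above)
Actual memory use is higher, since a query holds more than just the active samples.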
VIII. Appendix
Monitoring Docker containers:
Besides its own metrics, the kubelet on each node has built-in support for cAdvisor. cAdvisor reports the resource usage of every container running on the node, and its metrics are exposed through the kubelet's /metrics/cadvisor endpoint. So, just as for the kubelet metrics, we use node-mode service discovery to find every kubelet and apply the appropriate relabeling. Edit prometheus.configmap.yaml and add the following job:
- job_name: 'kubernetes-cadvisor'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
Monitoring the kubelet:
The kubelet runs on every node of the Kubernetes cluster and is responsible for maintaining and managing the Pods on that node; whether the kubelet is healthy directly determines whether the node can be used by the cluster at all.
With node-mode service discovery, Prometheus automatically discovers every node in the cluster as a monitoring target. The address of each target is in fact the kubelet's address, and the kubelet has built-in support for Prometheus. Edit prometheus.configmap.yaml and add the following job:
- job_name: 'kubernetes-kubelet'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
Monitoring the kube-apiserver:
kube-apiserver is the management entry point of the whole Kubernetes cluster and exposes the Kubernetes API. It is generally deployed independently of the cluster workloads, so that applications running inside the cluster (Kubernetes add-ons or user applications) can interact with it, Kubernetes creates a Service named kubernetes in the default namespace, as shown below:
# kubectl get svc kubernetes -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 29d <none>
The actual backend addresses behind this kubernetes Service are maintained as endpoints, as shown below:
# kubectl get endpoints kubernetes
NAME ENDPOINTS AGE
kubernetes 192.168.122.2:6443 29d
In this way, applications inside the cluster can reach the externally deployed kube-apiserver instance through the internal DNS name kubernetes.default.svc.
Therefore, to monitor kube-apiserver metrics we only need to find all backend addresses of the kubernetes Service through its endpoints.
As shown below, create a scrape job named kubernetes-apiservers with the service-discovery role set to endpoints. Prometheus will look up every endpoints object in the cluster and use relabeling to keep only the ones that belong to the apiserver:
- job_name: kubernetes-apiservers
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: default;kubernetes;https
source_labels:
- __meta_kubernetes_namespace
- __meta_kubernetes_service_name
- __meta_kubernetes_endpoint_port_name
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
Monitoring Kubernetes Pods:
The Node Exporter instances deployed in the cluster are found through Prometheus's pod service-discovery mode. Note that not every Pod in Kubernetes exposes Prometheus metrics; some are plain user applications. To mark which Pods can be scraped by Prometheus, the Node Exporter carries the annotation:
prometheus.io/scrape: 'true'
Because a Pod may contain several containers, an annotation is also used to declare the port on which metrics are served:
prometheus.io/port: '9100'
And since a container may not expose its metrics on the default /metrics path, the scrape path can be declared as well:
prometheus.io/path: 'metrics'
See the three annotations configured in prometheus-node-exporter.yaml above.
Create a scrape job named kubernetes-pods for Prometheus, as shown below:
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
The relabeling above filters the Pod instances and rewrites the scrape address, so that metrics are collected only from the Pods we care about. Note that kubernetes-pods is not specific to Node Exporter: any user-deployed Pod that exposes Prometheus metrics can opt in to scraping simply by adding these annotations.
Monitoring Services:
The apiserver is really just a special Service, so we now configure a job that discovers ordinary Services.
Here Services are filtered as well: only Services annotated with prometheus.io/scrape: "true" are kept.
Service auto-discovery parameters explained (not every Service that is created will be discovered by Prometheus):
#1. Explanation
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true   # keep only matching targets
This configuration keeps only Services that carry the __meta_kubernetes_service_annotation_prometheus_io_scrape annotation; only Services that declare it are discovered automatically.
#2. Explanation
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
#This selects the port to scrape; a Service may expose several ports (as with the earlier redis example). By default the port defined on the Kubernetes Service is used.
#3. Explanation
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
#If the scheme is https, a certificate and token also have to be configured.
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
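Putting the three annotations together, a Service that wants to be scraped through this job would declare something like the following in its metadata (a minimal sketch; the name my-svc and port 8080 are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
spec:
  selector:
    app: my-svc
  ports:
  - port: 8080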
建议将所有的yaml文件存在如下目录:
# mkdir /script/prometheus -p && cd /script/prometheus
NFS搭建见: Linux NFS搭建与配置(https://www.iteye.com/blog/maosheng-2517254)
一、部署node-exporter
为了能够采集集群中各个节点的资源使用情况,我们需要在各节点中部署一个Node Exporter实例。与Prometheus的部署不同的是,对于Node Exporter而言每个节点只需要运行一个唯一的实例,此时,就需要使用Kubernetes的另外一种控制 器Daemonset。顾名思义,Daemonset的管理方式类似于操作系统中的守护进程。Daemonset会确保在集群中所有 (也可以指定)节点上运行一个唯一的Pod实例,这样每一个节点都会运行一个Pod,如果我们从集群中删除或添加节点后,也会进行自动扩展。
# cat >>prometheus-node-exporter.yaml<<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: node-exporter
namespace: kube-system
labels:
name: node-exporter
k8s-app: node-exporter
spec:
selector:
matchLabels:
name: node-exporter
template:
metadata:
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9100'
prometheus.io/path: 'metrics'
labels:
name: node-exporter
app: node-exporter
spec:
hostPID: true
hostIPC: true
hostNetwork: true
containers:
- name: node-exporter
image: prom/node-exporter:v0.16.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9100
resources:
requests:
cpu: 0.15
securityContext:
privileged: true
args:
- --path.procfs
- /host/proc
- --path.sysfs
- /host/sys
- --collector.filesystem.ignored-mount-points
- '"^/(sys|proc|dev|host|etc)($|/)"'
volumeMounts:
- name: dev
mountPath: /host/dev
- name: proc
mountPath: /host/proc
- name: sys
mountPath: /host/sys
- name: rootfs
mountPath: /rootfs
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
volumes:
- name: proc
hostPath:
path: /proc
- name: dev
hostPath:
path: /dev
- name: sys
hostPath:
path: /sys
- name: rootfs
hostPath:
path: /
EOF
注意:由于Node Exporter需要能够访问宿主机,因此这里指定了:
hostPID: true
hostIPC: true
hostNetwork: true
让Pod实例能够以主机网络以及系统进程的形式运行。
这三个配置主要用于主机的PID namespace、IPC namespace以及主机网络,这里需要注意的是namespace是用于容器隔离的关键技术,这里的namespace和集群中的namespace是两个完全不同的概念。
另外我们还需要将主机/dev、/proc、/sys这些目录挂在到容器中,这些因为我们采集的很多节点数据都是通过这些文件来获取系统信息。
hostNetwork:true:会直接将我们的宿主机的9100端口映射出来,从而不需要创建service在我们的宿主机上就会有一个9100的端口 容器的9100--->映射到宿主机9100
如果是使用kubeadm搭建的,同时需要监控master节点的,则需要添加下方的相应容忍:
- key:"node-role.kubernetes.io/master"
operator:"Exists"
effect:"NoSchedule
创建node-exporter并检查pod
# kubectl create -f prometheus-node-exporter.yaml
daemonset.extensions/node-exporter created
查看Daemonset以及Pod的运行状态
# kubectl get daemonsets -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
node-exporter 3 3 3 3 3 <none> 3d18h
# kubectl get pod -n kube-system -o wide|grep node-exporter
node-exporter-cmjkc 1/1 Running 0 33h 192.168.29.176 k8s-node2 <none> <none>
node-exporter-wl5lx 1/1 Running 0 27h 192.168.29.182 k8s-node3 <none> <none>
node-exporter-xsv9z 1/1 Running 0 33h 192.168.29.175 k8s-node1 <none> <none>
这里我们可以看到,我们有3个节点,在所有的节点上都启动了一个对应Pod进行获取数据
我们要查看一下Pod日志,以及node-exporter中的metrics
使用命令kubectl logs -n 命名空间 node-exporter中Pod名称检查Pod日志是否有报错
# kubectl logs -n kube-system node-exporter-22vkv
time="2020-10-23T07:58:22Z" level=info msg="Starting node_exporter (version=0.16.0, branch=HEAD, revision=d42bd70f4363dced6b77d8fc311ea57b63387e4f)" source="node_exporter.go:82"
time="2020-10-23T07:58:22Z" level=info msg="Build context (go=go1.9.6, user=root@a67a9bc13a69, date=20180515-15:52:42)" source="node_exporter.go:83"
time="2020-10-23T07:58:22Z" level=info msg="Enabled collectors:" source="node_exporter.go:90"
time="2020-10-23T07:58:22Z" level=info msg=" - arp" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - bcache" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - bonding" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - conntrack" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - cpu" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - diskstats" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - edac" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - entropy" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - filefd" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - filesystem" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - hwmon" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - infiniband" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - ipvs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - loadavg" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - mdadm" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - meminfo" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - netdev" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - netstat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - nfs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - nfsd" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - sockstat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - stat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - textfile" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - time" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - timex" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - uname" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - vmstat" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - wifi" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - xfs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg=" - zfs" source="node_exporter.go:97"
time="2020-10-23T07:58:22Z" level=info msg="Listening on :9100" source="node_exporter.go:111"
接下来,我们在任意集群节点curl ip:9100/metrics
# curl 127.0.0.1:9100/metrics|head
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
4 84864 4 3961 0 0 35179 0 0:00:02 --:--:-- 0:00:02 35053# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
100 84864 100 84864 0 0 723k 0 --:--:-- --:--:-- --:--:-- 720k
curl: (23) Failed writing body (135 != 15367)
只要metrics可以获取到数据说明node-exporter没有问题。
需要我们在Prometheus配置文件(prometheus.configmap.yaml)中,采用服务发现,添加如下信息:
- job_name: 'kubernetes-node'
kubernetes_sd_configs:
- role: node
relabel_configs:
- source_labels: [__address__]
regex: '(.*):10250'
replacement: '${1}:9100'
target_label: __address__
action: replace
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
#通过制定Kubernetes_sd_config的模式为node,prometheus就会自动从Kubernetes中发现所有的node节点并作为当前job监控的目标实例,发现的节点/metrics接口是默认的kubelet的HTTP接口。
二、部署alertmanager
1、创建Prometheus报警规则ConfigMap资源对象
# cat >>prometheus-rules.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-rules
namespace: kube-system
data:
general.rules: |
groups:
- name: general.rules
rules:
- alert: InstanceDown
expr: up == 0
for: 1m
labels:
severity: error
annotations:
summary: "Instance {{ $labels.instance }} 停止工作"
description: "{{ $labels.instance }} job {{ $labels.job }} 已经停止5分钟以上."
node.rules: |
groups:
- name: node.rules
rules:
- alert: NodeFilesystemUsage
expr: 100 - (node_filesystem_free_bytes{fstype=~"ext4|xfs"} / node_filesystem_size_bytes{fstype=~"ext4|xfs"} * 100) > 80
for: 1m
labels:
severity: warning
annotations:
summary: "Instance {{ $labels.instance }} : {{ $labels.mountpoint }} 分区使用率过高"
description: "{{ $labels.instance }}: {{ $labels.mountpoint }} 分区使用大于80% (当前值: {{ $value }})"
- alert: NodeMemoryUsage
expr: 100 - (node_memory_MemFree_bytes+node_memory_Cached_bytes+node_memory_Buffers_bytes) / node_memory_MemTotal_bytes * 100 > 80
for: 1m
labels:
severity: warning
annotations:
summary: "Instance {{ $labels.instance }} 内存使用率过高"
description: "{{ $labels.instance }}内存使用大于80% (当前值: {{ $value }})"
- alert: NodeCPUUsage
expr: 100 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance) * 100) > 60
for: 1m
labels:
severity: warning
annotations:
summary: "Instance {{ $labels.instance }} CPU使用率过高"
description: "{{ $labels.instance }}CPU使用大于60% (当前值: {{ $value }})"
EOF
# kubectl apply -f prometheus-rules.yaml
2、创建AlertManager的ConfigMap资源对象
# cat >>alertmanager-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: alertmanager-config
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: EnsureExists
data:
alertmanager.yml: |
global:
# 在没有报警的情况下声明为已解决的时间
resolve_timeout: 5m
# 配置邮件发送信息
smtp_smarthost: 'smtp.163.com:465'
smtp_from: '1234567@163.com'
smtp_auth_username: '1234567@163.com'
smtp_auth_password: 'ACFDSWWXENPVHRDHTBPHC'
smtp_hello: '163.com'
smtp_require_tls: false
receivers:
- name: 'default'
email_configs:
- to: '45665464456@qq.com'
send_resolved: true
- name: 'email'
email_configs:
- to: '45665464456@qq.com'
send_resolved: true
# 所有报警信息进入后的根路由,用来设置报警的分发策略
route:
# 这里的标签列表是接收到报警信息后的重新分组标签,例如,接收到的报警信息里面有许多具有 cluster=A 和 alertname=LatncyHigh 这样的标签的报警信息将会批量被聚合到一个分组里面
group_by: ['alertname', 'cluster']
# 当一个新的报警分组被创建后,需要等待至少group_wait时间来初始化通知,这种方式可以确保您能有足够的时间为同一分组来获取多个警报,然后一起触发这个报警信息。
group_wait: 30s
# 当第一个报警发送后,等待'group_interval'时间来发送新的一组报警信息。
group_interval: 5m
# 如果一个报警信息已经发送成功了,等待'repeat_interval'时间来重新发送他们
repeat_interval: 5m
# 默认的receiver:如果一个报警没有被一个route匹配,则发送给默认的接收器
receiver: default
# 上面所有的属性都由所有子路由继承,并且可以在每个子路由上进行覆盖。
routes:
- receiver: email
group_wait: 10s
match:
team: node
EOF
# kubectl create -f alertmanager-configmap.yaml
# kubectl get cm -n kube-system
NAME DATA AGE
alertmanager-config 1 19h
coredns 1 90d
extension-apiserver-authentication 6 90d
kube-flannel-cfg 2 90d
kube-proxy 2 90d
kubeadm-config 2 90d
kubelet-config-1.16 1 90d
prometheus-config 1 8d
prometheus-rules 2 19h
3、创建PV、PVC进行数据持久化
cat >>alertmanager-volume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
name: alertmanager
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
nfs:
server: 192.101.11.156
path: /app/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: alertmanager
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: EnsureExists
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
EOF
# kubectl apply -f alertmanager-volume.yaml
4、创建AlertManager的Pod资源
# cat >>alertmanager-deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: alertmanager
namespace: kube-system
labels:
k8s-app: alertmanager
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
version: v0.15.3
spec:
replicas: 1
selector:
matchLabels:
k8s-app: alertmanager
version: v0.15.3
template:
metadata:
labels:
k8s-app: alertmanager
version: v0.15.3
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
priorityClassName: system-cluster-critical
containers:
- name: prometheus-alertmanager
image: "prom/alertmanager:v0.15.3"
imagePullPolicy: "IfNotPresent"
args:
- --config.file=/etc/config/alertmanager.yml
- --storage.path=/data
- --web.external-url=/
ports:
- containerPort: 9093
readinessProbe:
httpGet:
path: /#/status
port: 9093
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- name: config-volume
mountPath: /etc/config
- name: storage-volume
mountPath: "/data"
subPath: ""
resources:
limits:
cpu: 10m
memory: 50Mi
requests:
cpu: 10m
memory: 50Mi
- name: prometheus-alertmanager-configmap-reload
image: "jimmidyson/configmap-reload:v0.1"
imagePullPolicy: "IfNotPresent"
args:
- --volume-dir=/etc/config
- --webhook-url=http://localhost:9093/-/reload
volumeMounts:
- name: config-volume
mountPath: /etc/config
readOnly: true
resources:
limits:
cpu: 10m
memory: 10Mi
requests:
cpu: 10m
memory: 10Mi
volumes:
- name: config-volume
configMap:
name: alertmanager-config
- name: storage-volume
persistentVolumeClaim:
claimName: alertmanager
---
apiVersion: v1
kind: Service
metadata:
name: alertmanager
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "Alertmanager"
spec:
ports:
- name: http
port: 9093
protocol: TCP
targetPort: 9093
nodePort: 30093
selector:
k8s-app: alertmanager
ttype: NodePort
EOF
# kubectl apply -f alertmanager-deployment.yaml
# kubectl get pod,svc -n kube-system -o wide
需要我们在Prometheus配置文件(prometheus.configmap.yaml)中,添加AlertManager的地址,让Prometheus能够访问AlertManager,添加报警规则:
alerting:
alertmanagers:
- static_configs:
- targets: ['localhost:9093']
rule_files:
- /etc/config/rules/*.rules
三、部署blackbox exporter
为了能够对Ingress和Service进行探测,我们需要在集群部署Blackbox Exporter实例。 如下所示,创建 blackbox-exporter.yaml用于描述部署相关的内容:
# cat >>blackbox-exporter.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
labels:
app: blackbox-exporter
name: blackbox-exporter
spec:
ports:
- name: blackbox
port: 9115
protocol: TCP
selector:
app: blackbox-exporter
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: blackbox-exporter
name: blackbox-exporter
spec:
replicas: 1
selector:
matchLabels:
app: blackbox-exporter
template:
metadata:
labels:
app: blackbox-exporter
spec:
containers:
- image: prom/blackbox-exporter
imagePullPolicy: IfNotPresent
name: blackbox-exporter
EOF
# kubectl create -f blackbox-exporter.yaml
通过kubectl命令部署Blackbox Exporter实例,这里将部署一个Blackbox Exporter的Pod实例,同时通过服务blackbox-exporter在集群内暴露访问地址blackbox-exporter.default.svc.cluster.local,对于集群内的任意服务都可以通过该内部DNS域名访问Blackbox Exporter实例。
为了能够让Prometheus能够自动的对Service进行探测,我们需要通过服务发现自动找到所有的Service信息。 如下所示,在Prometheus的配置文件中添加名为kubernetes-services的监控采集任务:
- job_name: 'kubernetes-services'
metrics_path: /probe
params:
module: [http_2xx]
kubernetes_sd_configs:
- role: service
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__]
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.default.svc.cluster.local:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
为了区分集群中需要进行探测的Service实例,我们通过标签‘prometheus.io/probe: true’进行判断,从而过滤出需要探测的所有Service实例:
并且将通过服务发现获取到的Service实例地址 __address__ 转换为获取监控数据的请求参数。同时 将 __address 执行Blackbox Exporter实例的访问地址,并且重写了标签instance的内容:
对于Ingress而言,也是一个相对类似的过程,这里给出对Ingress探测的Promthues任务配置:
- job_name: 'kubernetes-ingresses'
metrics_path: /probe
params:
module: [http_2xx]
kubernetes_sd_configs:
- role: ingress
relabel_configs:
- source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
regex: (.+);(.+);(.+)
replacement: ${1}://${2}${3}
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.default.svc.cluster.local:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_ingress_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_ingress_name]
target_label: kubernetes_name
IV. Deploying Prometheus
Prometheus stores its data on an NFS mount and manages its configuration file with a ConfigMap; all Prometheus resources are created in the kube-system namespace.
When an application is managed and deployed with a Deployment, it can easily be scaled up or down, producing multiple Pod instances. To manage the configuration of these Pods in one place, Kubernetes provides ConfigMaps, which containers consume through environment variables or file-system mounts. Here we use a ConfigMap to manage the Prometheus configuration file: create prometheus.configmap.yaml with the following content.
1. Create the Prometheus ConfigMap resource
# cat >> prometheus.configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: kube-system
data:
prometheus.yml: |
global:
scrape_interval: 15s
scrape_timeout: 15s
alerting:
alertmanagers:
- static_configs:
- targets: ['localhost:9093']
rule_files:
- /etc/config/rules/*.rules
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'kubernetes-node'
kubernetes_sd_configs:
- role: node
relabel_configs:
- source_labels: [__address__]
regex: '(.*):10250'
replacement: '${1}:9100'
target_label: __address__
action: replace
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- job_name: 'kubernetes-cadvisor'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
- job_name: 'kubernetes-kubelet'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
- job_name: kubernetes-apiservers
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: default;kubernetes;https
source_labels:
- __meta_kubernetes_namespace
- __meta_kubernetes_service_name
- __meta_kubernetes_endpoint_port_name
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- job_name: 'kubernetes-services'
metrics_path: /probe
params:
module: [http_2xx]
kubernetes_sd_configs:
- role: service
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__]
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.default.svc.cluster.local:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
- job_name: 'kubernetes-ingresses'
metrics_path: /probe
params:
module: [http_2xx]
kubernetes_sd_configs:
- role: ingress
relabel_configs:
- source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
regex: (.+);(.+);(.+)
replacement: ${1}://${2}${3}
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.default.svc.cluster.local:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_ingress_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_ingress_name]
target_label: kubernetes_name
EOF
# kubectl apply -f prometheus.configmap.yaml
configmap/prometheus-config created
# kubectl get configmaps -n kube-system |grep prometheus
prometheus-config 1 25s
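Before relying on the embedded prometheus.yml, it can be validated with promtool (shipped with the Prometheus release tarball); a sketch, run wherever promtool is installed:
# kubectl get configmap prometheus-config -n kube-system -o jsonpath='{.data.prometheus\.yml}' > /tmp/prometheus.yml
# promtool check config /tmp/prometheus.yml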
2. Create a PV and PVC for data persistence
The ConfigMap holding prometheus.yml is mounted into the Pod as a volume, so when the ConfigMap is updated the file inside the Pod is refreshed as well; after that we send a reload request and the new Prometheus configuration takes effect. In addition, to persist the time-series data, the data directory is bound to a PVC, so the PVC object has to be created first:
# cat >prometheus-volume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
name: prometheus
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
nfs:
server: 192.101.11.156
path: /app/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: prometheus
namespace: kube-system
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
EOF
Create the PV and PVC, using a simple NFS share as the storage backend:
# kubectl create -f prometheus-volume.yaml
persistentvolume/prometheus created
persistentvolumeclaim/prometheus created
[
To delete the PersistentVolume and PersistentVolumeClaim, first delete the deployment that uses them:
kubectl delete -f prometheus.deployment.yaml
kubectl get PersistentVolume
kubectl get PersistentVolumeClaim -n kube-system
kubectl delete PersistentVolume prometheus
kubectl delete PersistentVolumeClaim prometheus -n kube-system
or
kubectl delete -f prometheus-volume.yaml
]
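A quick sanity check that the NFS export is visible (if the NFS client utilities are installed) and that the claim binds; the server address and path come from the PV definition above:
# showmount -e 192.101.11.156
# kubectl get pv prometheus
# kubectl get pvc prometheus -n kube-system
The PVC STATUS should be Bound before moving on.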
3. Create RBAC authorization
Prometheus needs to access resources inside the Kubernetes cluster, so it requires Kubernetes access authorization.
To let Prometheus access the authentication-protected Kubernetes API, we first have to grant it access. Kubernetes uses Role-Based Access Control (RBAC) to manage access to cluster resources. We define a ClusterRole and grant it the required permissions, create the ServiceAccount that Prometheus will run as, and bind the two with a ClusterRoleBinding. All of these are ordinary Kubernetes resources that can be described and created from YAML; create prometheus-rbac.yaml with the following content:
# cat >>prometheus-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups:
- ""
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
- nodes/metrics
verbs:
- get
- nonResourceURLs:
- /metrics
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: kube-system
EOF
Note: a ClusterRole is cluster-wide and does not take a namespace, while a ServiceAccount is a namespaced resource.
Create the resources:
# kubectl create -f prometheus-rbac.yaml
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
[
Inspect:
kubectl get serviceaccount -n kube-system | grep prometheus
kubectl get clusterrole | grep prometheus
kubectl get clusterrolebinding | grep prometheus
Delete:
kubectl delete clusterrolebinding prometheus
kubectl delete clusterrole prometheus
kubectl delete serviceaccount prometheus -n kube-system
]
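The granted permissions can be verified with kubectl auth can-i, impersonating the ServiceAccount; a sketch:
# kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:prometheus
# kubectl auth can-i get nodes/metrics --as=system:serviceaccount:kube-system:prometheus
# kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:kube-system:prometheus
Each command should print yes.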
4. Create the Prometheus Pod resources
Once the ConfigMap has been created, the Prometheus configuration file can be mounted into the container via a volume. Here the Prometheus server is deployed with a Deployment; create prometheus.deployment.yaml with the following content:
# cat > prometheus.deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus
namespace: kube-system
labels:
app: prometheus
spec:
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
serviceAccountName: prometheus ## with the role, permissions and binding in place, the Pod is created under this ServiceAccount
containers:
- image: prom/prometheus:v2.4.3
imagePullPolicy: IfNotPresent
name: prometheus
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus" ## data storage path
- "--storage.tsdb.retention=7d" ## data retention period; 15 days is a common choice in production
- "--web.enable-admin-api" ## enables the admin HTTP API, including features such as deleting time series
- "--web.enable-lifecycle" ## enables hot reload: curl -X POST http://localhost:9090/-/reload applies config changes immediately
ports:
- containerPort: 9090
protocol: TCP
name: http
volumeMounts:
- mountPath: "/prometheus"
subPath: prometheus
name: data
- mountPath: "/etc/prometheus"
name: config-volume
- mountPath: /etc/config/rules
name: prometheus-rules
subPath: ""
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 100m
memory: 512Mi
securityContext: ## runAsUser is set to 0 because the prometheus image runs as the nobody user; without this, writes to the NFS-backed volume may fail with permission errors
runAsUser: 0
volumes:
- name: data
persistentVolumeClaim:
claimName: prometheus
- name: config-volume
configMap:
name: prometheus-config
- name: prometheus-rules
configMap:
name: prometheus-rules
---
apiVersion: v1
kind: Service
metadata:
namespace: kube-system
name: prometheus
labels:
app: prometheus
spec:
type: NodePort
selector:
app: prometheus
ports:
- name: http
port: 9090
protocol: TCP
targetPort: 9090
nodePort: 30090
EOF
With the ConfigMap, volume and RBAC objects created, apply prometheus.deployment.yaml to run the Prometheus service:
# kubectl create -f prometheus.deployment.yaml
deployment.extensions/prometheus created
# kubectl get pod -n kube-system |grep prometheus
prometheus-847494df74-zbz9v 1/1 Running 0 148m
# a READY value of 1/1 and a STATUS of Running mean the Pod is up
In a Pod created under this ServiceAccount, the CA certificate used to access the Kubernetes API and the account's access token are automatically mounted under /var/run/secrets/kubernetes.io/serviceaccount/; this can be checked with:
# kubectl exec -it prometheus-847494df74-zbz9v -n kube-system -- ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace token
Use any NodeIP plus the NodePort to reach the web UI: http://NodeIP:30090
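The discovered scrape targets can also be checked through the HTTP API (NodeIP is a placeholder for any node address); a sketch:
# curl -s http://NodeIP:30090/api/v1/targets | grep -o '"health":"up"' | wc -l
The count should roughly match the number of targets shown as UP on the Status -> Targets page.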
V. Deploying Grafana
1. Create a PV and PVC for data persistence
# cat >>grafana_volume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
name: grafana
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
nfs:
server: 192.101.11.156
path: /app/k8s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: grafana
namespace: kube-system
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
EOF
# kubectl create -f grafana_volume.yaml
2. Create an ownership-fixing Job
Since Grafana 5.1 the group id used by the image changed (choosing an image older than 5.1 avoids this class of error), and once /var/lib/grafana is mounted from the PVC the directory owner may not be the grafana user, so we add a Job that fixes the directory ownership:
cat > grafana_job.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
name: grafana-chown
namespace: kube-system
spec:
template:
spec:
restartPolicy: Never
containers:
- name: grafana-chown
command: ["chown", "-R", "472:472", "/var/lib/grafana"]
image: busybox
imagePullPolicy: IfNotPresent
volumeMounts:
- name: storage
subPath: grafana
mountPath: /var/lib/grafana
volumes:
- name: storage
persistentVolumeClaim:
claimName: grafana
EOF
# kubectl create -f grafana_job.yaml
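Make sure the Job ran to completion before Grafana starts writing to the volume; a sketch:
# kubectl get job grafana-chown -n kube-system
# kubectl logs job/grafana-chown -n kube-system
COMPLETIONS should read 1/1 and the log should be empty on success.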
3. Create the Grafana Pod resources
# cat >>grafana_deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana
namespace: kube-system
labels:
app: grafana
k8s-app: grafana
spec:
selector:
matchLabels:
k8s-app: grafana
app: grafana
revisionHistoryLimit: 10
template:
metadata:
labels:
app: grafana
k8s-app: grafana
spec:
containers:
- name: grafana
image: grafana/grafana:5.3.4
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
name: grafana
env:
- name: GF_SECURITY_ADMIN_USER
value: admin
- name: GF_SECURITY_ADMIN_PASSWORD
value: 12345@com
readinessProbe:
failureThreshold: 10
httpGet:
path: /api/health
port: 3000
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
livenessProbe:
failureThreshold: 3
httpGet:
path: /api/health
port: 3000
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: 300m
memory: 1024Mi
requests:
cpu: 300m
memory: 1024Mi
volumeMounts:
- mountPath: /var/lib/grafana
subPath: grafana
name: storage
securityContext:
fsGroup: 472
runAsUser: 472
volumes:
- name: storage
persistentVolumeClaim:
claimName: grafana
---
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: kube-system
labels:
app: grafana
spec:
type: NodePort
ports:
- port: 3000
targetPort: 3000
nodePort: 30091
selector:
app: grafana
EOF
# kubectl apply -f grafana_deployment.yaml
Note: the important environment variables are GF_SECURITY_ADMIN_USER and GF_SECURITY_ADMIN_PASSWORD, which set the Grafana admin account and password.
Grafana keeps dashboards and plugins under /var/lib/grafana, so that directory needs to be persisted and declared as a volume mount; because the user id and group id changed in 5.3.4, a securityContext is added to set the user and group IDs.
# kubectl get pod,svc -n kube-system |grep grafana
pod/grafana-54f6755f88-5dwl7 1/1 Running 0 27h
pod/grafana-chown-lcw2v 0/1 Completed 0 27h
service/grafana NodePort 10.0.0.202 <none> 3000:9006/TCP 27h
Use any NodeIP plus the NodePort to reach Grafana: http://NodeIP:30091; the default credentials are admin/12345@com.
Note: set the URL of the Prometheus data source in Grafana to:
http://prometheus.kube-system.svc.cluster.local:9090
Grafana defaults to the browser's time zone, while Prometheus uses UTC:
Configuration -> Preferences -> Timezone: set to UTC
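Whether the data source URL is reachable from inside the cluster can be checked with a throwaway curl Pod (the Pod name is a placeholder); a sketch:
# kubectl run dns-test --rm -it --restart=Never --image=curlimages/curl -- \
    -s http://prometheus.kube-system.svc.cluster.local:9090/-/healthy
The command should print a healthy message from Prometheus.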
VI. Prometheus hot reload
Edit the configuration:
# vi prometheus.configmap.yaml
Apply the updated configuration:
# kubectl apply -f prometheus.configmap.yaml
configmap/prometheus-config configured
# kubectl get pod -n kube-system -o wide|grep prometheus
prometheus-65b89bf89d-695bh 1/1 Running 0 15m 10.244.1.8 hadoop009 <none> <none>
Trigger the hot reload:
# curl -X POST http://10.244.1.8:9090/-/reload
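To confirm the running server is actually using the new configuration, the currently loaded config can be fetched back through the API (the Pod IP comes from the output above); a sketch:
# curl -s http://10.244.1.8:9090/api/v1/status/config
The response contains the full prometheus.yml as loaded, so any setting you just changed should appear in it.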
VII. Problem: Prometheus killed by OOM
Prometheus inside the k8s cluster gets OOMKilled repeatedly:
# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
prometheus-5b97f7496b-gkg2c 0/1 CrashLoopBackOff 106 14h 10.244.2.25 hadoop007 <none> <none>
# kubectl describe pod -n kube-system prometheus-5b97f7496b-gkg2c
Name: prometheus-5b97f7496b-gkg2c
Namespace: kube-system
Priority: 0
Node: hadoop007/192.101.11.159
Start Time: Tue, 27 Oct 2020 17:49:49 +0800
Labels: app=prometheus
pod-template-hash=5b97f7496b
Annotations: <none>
Status: Running
IP: 10.244.2.25
IPs:
IP: 10.244.2.25
Controlled By: ReplicaSet/prometheus-5b97f7496b
Containers:
prometheus:
Container ID: docker://1236f8c07f51eeb2b6589a7505b38b6c1f68e64d7237b56cdc46ddf210a921c9
Image: prom/prometheus:v2.4.3
Image ID: docker://sha256:f92db1f1a7ce28d5bb2b473022e62fd940b23bf54835a297daf1de4fd34e029d
Port: 9090/TCP
Host Port: 0/TCP
Command:
/bin/prometheus
Args:
--config.file=/etc/prometheus/prometheus.yml
--storage.tsdb.path=/prometheus
--storage.tsdb.retention=7d
--web.enable-admin-api
--web.enable-lifecycle
State: Running
Started: Tue, 27 Oct 2020 17:49:50 +0800
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Tue, 27 Oct 2020 17:49:50 +0800
Finished: Tue, 27 Oct 2020 17:51:50 +0800
Ready: True
Restart Count: 106
Limits:
cpu: 100m
memory: 512Mi
Requests:
cpu: 100m
memory: 512Mi
Environment: <none>
Mounts:
/etc/config/rules from prometheus-rules (rw)
/etc/prometheus from config-volume (rw)
/prometheus from data (rw,path="prometheus")
/var/run/secrets/kubernetes.io/serviceaccount from prometheus-token-rvnwz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: prometheus
ReadOnly: false
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-config
Optional: false
prometheus-rules:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-rules
Optional: false
prometheus-token-rvnwz:
Type: Secret (a volume populated by a Secret)
SecretName: prometheus-token-rvnwz
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
...............
Solution:
1. Modify prometheus.deployment.yaml as follows:
containers:
- image: prom/prometheus:v2.4.3
imagePullPolicy: IfNotPresent
name: prometheus
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention=7d"
- "--web.enable-admin-api"
- "--web.enable-lifecycle"
Change to:
containers:
- name: prometheus
image: prom/prometheus:v2.20.0
imagePullPolicy: IfNotPresent
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention=5d"
- "--web.enable-admin-api"
- "--web.enable-lifecycle"
- "--storage.tsdb.min-block-duration=1h"
- "--storage.tsdb.max-block-duration=1h"
- "--query.max-samples=30000000"
...............................
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 100m
memory: 512Mi
Change to:
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 500m
memory: 4Gi
2. Modify prometheus.configmap.yaml as follows:
global:
scrape_interval: 15s
scrape_timeout: 15s
Change to:
global:
scrape_interval: 1m
scrape_timeout: 1m
References:
https://www.ibm.com/support/pages/node/882172
https://www.robustperception.io/new-features-in-prometheus-2-5-0
./prometheus --help
usage: prometheus [<flags>]
--query.max-samples=50000000 Maximum number of samples a single query can load into memory. Note that queries will fail if they would load more samples than this into memory, so this also limits the number of samples a query can return.
New Features in Prometheus 2.5.0:
The second feature is that there is now a limit on the number of samples a query can have in memory at once, making it possible to stop massive queries that take too much RAM and threaten to OOM your Prometheus.
This can be adjusted with the --query.max-samples flag. Each sample uses 16 bytes of memory, however keep in mind there's more than just active samples in memory for a query.
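To judge whether the new limits are adequate, Prometheus's own memory footprint and series count can be queried through the HTTP API (NodeIP is a placeholder); a sketch:
# curl -sG 'http://NodeIP:30090/api/v1/query' --data-urlencode 'query=process_resident_memory_bytes{job="prometheus"}'
# curl -sG 'http://NodeIP:30090/api/v1/query' --data-urlencode 'query=prometheus_tsdb_head_series'
If resident memory keeps approaching the container limit, either raise the limit or reduce retention, scrape frequency, or the number of series.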
VIII. Appendix
Docker container monitoring:
Besides its own metrics, the kubelet on each node has built-in support for cAdvisor. cAdvisor collects resource usage for all containers running on the node, and its metrics are exposed at the kubelet's /metrics/cadvisor endpoint. So, just as when scraping the kubelet itself, we use node-mode service discovery to find every kubelet and apply the appropriate relabeling. Modify prometheus.configmap.yaml and add the following job (an example query follows the snippet):
- job_name: 'kubernetes-cadvisor'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
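A sample query over the cAdvisor metrics gathered by this job, per-Pod CPU usage over the last 5 minutes (NodeIP is a placeholder; on older cAdvisor versions the label is pod_name rather than pod):
# curl -sG 'http://NodeIP:30090/api/v1/query' \
    --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (namespace, pod)'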
Kubelet monitoring:
The kubelet runs on every node of the Kubernetes cluster and is responsible for maintaining and managing the Pods on that node; whether the kubelet is healthy directly determines whether the node can be used by the cluster at all.
With node-mode service discovery, Prometheus automatically discovers every node in the cluster and treats it as a scrape target. The target address is in fact the kubelet's address, and the kubelet has built-in Prometheus support. Modify prometheus.configmap.yaml and add the following job:
- job_name: 'kubernetes-kubelet'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
API server monitoring:
kube-apiserver is the management entry point of the whole Kubernetes cluster and exposes the Kubernetes API. The kube-apiserver component is generally deployed outside the cluster; to let in-cluster applications (Kubernetes add-ons or user workloads) talk to it, Kubernetes creates a Service named kubernetes in the default namespace, as shown below:
# kubectl get svc kubernetes -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 29d <none>
The actual backend addresses behind the kubernetes Service are maintained through its Endpoints object, as shown below:
# kubectl get endpoints kubernetes
NAME ENDPOINTS AGE
kubernetes 192.168.122.2:6443 29d
In this way, in-cluster applications and system components can reach the externally deployed kube-apiserver through the internal DNS name kubernetes.default.svc.
Therefore, to monitor kube-apiserver metrics, we only need to find all backend addresses of the kubernetes Service via the Endpoints resource.
As shown below, create the scrape job kubernetes-apiservers using the endpoints service-discovery mode. Prometheus looks up every Endpoints object in the cluster and uses relabeling to keep only the addresses that belong to the apiserver (a quick up-check follows the snippet).
- job_name: kubernetes-apiservers
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: default;kubernetes;https
source_labels:
- __meta_kubernetes_namespace
- __meta_kubernetes_service_name
- __meta_kubernetes_endpoint_port_name
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
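A quick check that the apiserver target is being scraped successfully (NodeIP is a placeholder); a sketch:
# curl -sG 'http://NodeIP:30090/api/v1/query' --data-urlencode 'query=up{job="kubernetes-apiservers"}'
The result should contain a sample with value 1.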
Kubernetes Pod monitoring:
Prometheus's pod service-discovery mode is used to find the Node Exporter instances deployed in the cluster. Note that not every Pod in Kubernetes exposes Prometheus metrics; some are just plain user applications. To mark which Pods can be scraped by Prometheus, the following annotation was added to the Node Exporter:
prometheus.io/scrape: 'true'
Because a Pod may contain multiple containers, the port that serves the metrics also has to be specified through an annotation:
prometheus.io/port: '9100'
In some cases the container does not use the default /metrics path for metrics collection, so the scrape path can also be specified:
prometheus.io/path: 'metrics'
See the commented configuration of these three items in prometheus-node-exporter.yaml; a sketch for opting an arbitrary workload in is shown below.
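Any Deployment can be opted in the same way by patching the annotations on its Pod template; a sketch with placeholder names and values (my-app, my-namespace, port 8080):
# kubectl patch deployment my-app -n my-namespace --type merge -p \
    '{"spec":{"template":{"metadata":{"annotations":{"prometheus.io/scrape":"true","prometheus.io/port":"8080","prometheus.io/path":"/metrics"}}}}}'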
Create the scrape job kubernetes-pods for Prometheus, as shown below:
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
The relabeling above filters the Pod instances and rewrites the scrape address, so that metrics are collected only from the Pods that opted in. Note that kubernetes-pods is not limited to the Node Exporter: any Pod the user deploys can have its metrics scraped simply by adding these annotations, as long as it exposes Prometheus metrics.
Service monitoring:
The apiserver is really a special kind of Service; now we configure discovery for ordinary Services.
Here Services are filtered so that only those annotated with prometheus.io/scrape: "true" are kept.
Explanation of the Service auto-discovery parameters (not every Service that is created can be discovered by Prometheus; an annotation example follows the full job config below):
#1. Parameter explanation
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
  action: keep
  regex: true   # keep only matching targets
This configuration keeps only the Services that carry the __meta_kubernetes_service_annotation_prometheus_io_scrape label (i.e. the prometheus.io/scrape annotation); only Services that declare it are auto-discovered.
#2. Parameter explanation
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
# Specifies the port to scrape, since a Service may expose multiple ports (such as the earlier redis example); by default the port declared on the Service itself is used.
#3. Parameter explanation
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
# If the scheme is https, the certificate and token also need to be configured.
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
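Opting an existing Service in to this job is again just a matter of annotations; a sketch with placeholder names and port:
# kubectl annotate service my-exporter -n my-namespace prometheus.io/scrape="true" prometheus.io/port="9121"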