Deploying Kubernetes with kubeadm

I. Prerequisites

0. Update the system first (do not skip this step; skipping it has caused problems before)
$ yum -y update

Note: if you hit the error "protected multilib versions", run:
yum -y  update  --setopt=protected_multilib=false

1. Disable the firewall
$ systemctl status firewalld
$ systemctl stop firewalld
$ systemctl disable firewalld


2. Disable SELinux
## Check the SELinux status
$ getenforce

## Disable SELinux temporarily
$ setenforce 0

## Disable SELinux permanently
$ cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

$ sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
$ cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

3. Disable swap
## Disable swap temporarily
$ swapoff -a  

## Disable swap permanently
$ cat /etc/fstab

# /etc/fstab
# Created by anaconda on Sat May 30 16:59:33 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=a046783d-73e8-4780-94b6-c4a309971353 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0

## Disable swap permanently: comment out the swap line
$ vi /etc/fstab
$ cat /etc/fstab

# /etc/fstab
# Created by anaconda on Sat May 30 16:59:33 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=a046783d-73e8-4780-94b6-c4a309971353 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
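
Instead of editing the file by hand, a one-line sketch that comments out any swap entry in /etc/fstab (it prefixes every line containing a swap mount with '#'; run it only once):

$ sed -ri '/\sswap\s/s/^/#/' /etc/fstab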

4. Set the hostname
$ hostnamectl --help
$ hostnamectl set-hostname k8s-master
$ hostname
$ hostnamectl

5. Add hostname-to-IP mappings
$ cat /etc/hosts
192.168.18.180  k8s-master
192.168.18.181  k8s-node1
192.168.18.182  k8s-node2
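
The IPs above are the ones used in this setup; a sketch that appends the mappings on every node (adjust the addresses to your environment):

$ cat >> /etc/hosts << EOF
192.168.18.180  k8s-master
192.168.18.181  k8s-node1
192.168.18.182  k8s-node2
EOF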

6. Pass bridged IPv4 traffic to iptables chains and enable IP forwarding
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

# Apply the settings
$ sysctl -p /etc/sysctl.d/k8s.conf
$ sysctl --system
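
If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded; a sketch that loads it now and persists it across reboots:

$ modprobe br_netfilter
$ lsmod | grep br_netfilter
$ echo "br_netfilter" > /etc/modules-load.d/k8s.conf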


7. Install Docker, kubeadm, and kubelet on all nodes

II. Install Docker

1. Install Docker

Step 1: Install the required system utilities
$ yum install -y yum-utils device-mapper-persistent-data lvm2

Step 2: Configure the Docker repository
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
(or:
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
)
$ cd /etc/yum.repos.d/
$ ls

Step 3: List the available Docker CE versions:
$ yum list docker-ce.x86_64 --showduplicates | sort -r

Step 4: Install a specific Docker CE version
$ yum makecache fast       ## caches package metadata locally to speed up searching and installing packages
$ yum install -y docker-ce-18.06.1.ce-3.el7

[
If yum fails with:
http://192.101.11.8/centos7.3/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
work around it with:
yum-config-manager --save --setopt=Centos7_3.skip_if_unavailable=true
]

Step 5: Enable Docker at boot and start it
$ systemctl enable docker && systemctl start docker
$ systemctl status docker
$ docker version
Client:
Version:           18.06.1-ce
API version:       1.38
Go version:        go1.10.3
Git commit:        e68fc7a
Built:             Tue Aug 21 17:23:03 2018
OS/Arch:           linux/amd64
Experimental:      false

Server:
Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:25:29 2018
  OS/Arch:          linux/amd64
  Experimental:     false

2. Configure Docker

The kubelet's cgroup driver must match Docker's cgroup driver.

$ docker info | grep -i cgroup

Docker's cgroup driver is cgroupfs, while the driver recommended for Kubernetes is systemd, so switch Docker's cgroup driver to systemd.

Edit Docker's configuration file as below; the official Kubernetes documentation describes this recommended setup.

$ vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}


Fix the firewall FORWARD chain policy by adding the following line to /usr/lib/systemd/system/docker.service:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
or:
sed -i '20i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service



3. Start Docker

# Reload systemd, restart Docker, and enable it at boot
$ systemctl daemon-reload  && systemctl restart docker && systemctl enable docker

# Check the status
$ systemctl status docker
# Check the version
$ docker version
Client:
Version:           18.06.1-ce
API version:       1.38
Go version:        go1.10.3
Git commit:        e68fc7a
Built:             Tue Aug 21 17:23:03 2018
OS/Arch:           linux/amd64
Experimental:      false

Server:
Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:25:29 2018
  OS/Arch:          linux/amd64
  Experimental:     false


III. Install kubeadm, kubelet, and kubectl

Note: kubectl is needed on the master node; worker nodes can skip it.

1. Add the Aliyun YUM repository

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpm --import yum-key.gpg

wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import rpm-package-key.gpg        


$ yum makecache fast

2. Install the packages

$ yum install -y kubectl-1.18.2 kubelet-1.18.2 kubeadm-1.18.2
$ yum list installed
$ systemctl enable kubelet

[
To uninstall and reinstall:
rpm -qa | grep kube
yum remove kubeadm-1.18.2-0.x86_64
]

## List the images this kubeadm version requires
$ kubeadm config images list
  
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
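
If the node can reach the Aliyun mirror directly, kubeadm can also pre-pull these images itself, instead of the manual export/import in the next section; a sketch:

$ kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.2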


IV. Install the master

1. Import the images:

Because the k8s.gcr.io registry is not reachable from this network, download the images in a local VM first and then import them on the master node.

(1) Pull the images (in the local VM):
$ docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.18.2

$ docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.2

$ docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.2

$ docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2

Repeat for the other images...

(2) Export the images (in the local VM):
$ docker save -o /opt/images/kube-proxy-v1.18.2.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.18.2
# or
$ docker save registry.aliyuncs.com/google_containers/kube-proxy:v1.18.2 > /opt/images/kube-proxy-v1.18.2.tar

$ docker save -o /opt/images/kube-apiserver-v1.18.2.tar registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.2

$ docker save -o /opt/images/kube-scheduler-v1.18.2.tar registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.2

$ docker save -o /opt/images/kube-controller-manager-v1.18.2.tar registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2

Repeat for the other images...

[root@VServer1 images]# ls
coredns-1.6.7.tar          kube-apiserver-v1.18.2.tar           kube-scheduler-v1.18.2.tar
dashboard-v2.0.3.tar       kube-controller-manager-v1.18.2.tar  metrics-scraper-v1.0.4.tar
etcd-3.4.3-0.tar           kube-flannel(v0.11.0).yaml           pause-3.2.tar
flannel-v0.11.0-amd64.tar  kube-proxy-v1.18.2.tar               recommended(v2.0.3).yaml


The script below pulls the images, retags them, and exports them in one pass:

$ vim kubeadm-pull-images.sh

#!/bin/bash
## Pull the images from an Aliyun mirror and retag them with the k8s.gcr.io names (so no retagging is needed after import)

set -e

KUBE_VERSION=v1.18.2
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.3-0
CORE_DNS_VERSION=1.6.7

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
  docker save $GCR_URL/$imageName > /opt/images/${imageName%:*}.tar
done

Run the script to pull the images:

$ sh ./kubeadm-pull-images.sh

(3) Import the images (in the K8s environment):
# Copy from the local VM down to Windows:
>pscp -r -l root -pw maosheng123 192.168.18.181:/opt/images/kube-apiserver-v1.18.2.tar D:\CentOS\images

# Upload from Windows to the K8s environment:
>pscp D:\CentOS\images\kube-apiserver-v1.18.2.tar root@192.101.11.233:/opt/images

Import the images on the master node:
cd /opt/images/

[root@hadoop010 images]# docker load < kube-apiserver-v1.18.2.tar
fc4976bd934b: Loading layer [==================================================>]  53.88MB/53.88MB
884dffcfc972: Loading layer [==================================================>]  120.7MB/120.7MB
Loaded image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.2

[root@hadoop010 images]# docker load < kube-controller-manager-v1.18.2.tar

[root@hadoop010 images]# docker load < kube-proxy-v1.18.2.tar      

[root@hadoop010 images]# docker load < kube-scheduler-v1.18.2.tar 

Repeat for the other images...
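
Instead of loading the archives one at a time, a small loop sketch that imports every tar under /opt/images (it assumes everything in that directory is a docker image archive):

$ for f in /opt/images/*.tar; do docker load < "$f"; done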

[root@k8s-node1 images]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
kubernetesui/dashboard                                            v2.0.3              503bc4b7440b        5 weeks ago         225MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.2             0d40868643c6        3 months ago        117MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.2             a3099161e137        3 months ago        95.3MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.2             6ed75ad404bd        3 months ago        173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.2             ace0a8c17ba9        3 months ago        162MB
kubernetesui/metrics-scraper                                      v1.0.4              86262685d9ab        4 months ago        36.9MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        5 months ago        683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        6 months ago        43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        9 months ago        288MB
quay.io/coreos/flannel                                            v0.11.0-amd64       ff281650a721        18 months ago       52.6MB


2. Initialize the master

Edit the kubelet configuration file /etc/sysconfig/kubelet so that kubelet ignores the swap-enabled preflight error, with the following content: KUBELET_EXTRA_ARGS="--fail-swap-on=false"

$ vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"


## Pass --image-repository="registry.aliyuncs.com/google_containers" so that
## the imported Aliyun images are used as-is, with no retagging to k8s.gcr.io

$ kubeadm init --apiserver-advertise-address=192.101.11.162 --control-plane-endpoint="192.101.11.162:6443" \
--kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16  \
--image-repository="registry.aliyuncs.com/google_containers" --upload-certs  --token-ttl 0  \
--ignore-preflight-errors=Swap | tee kubeadm-init.log

## Without --image-repository, the imported Aliyun images must first be retagged to k8s.gcr.io
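
A sketch for that retagging, assuming the seven images listed by 'kubeadm config images list' above were imported under their Aliyun names:

for img in kube-apiserver:v1.18.2 kube-controller-manager:v1.18.2 kube-scheduler:v1.18.2 \
           kube-proxy:v1.18.2 pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  # retag each Aliyun image to the k8s.gcr.io name kubeadm expects
  docker tag registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
done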

$ kubeadm init --apiserver-advertise-address=192.101.11.162 --control-plane-endpoint="192.101.11.162:6443" \
--kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16 --upload-certs  --token-ttl 0  \
--ignore-preflight-errors=Swap | tee kubeadm-init.log


W0725 15:22:04.835583    9386 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.18.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.18.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.18.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0725 15:22:09.089328    9386 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0725 15:22:09.089982    9386 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.010248 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
7b1c938a54e0b3d0a9620667801cb2c4797f90720bb29ab4ff8e492797e50284
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lktech.yoqdq7mvvuur5aad
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.18.100:6443 --token lktech.yoqdq7mvvuur5aad \
    --discovery-token-ca-cert-hash sha256:23bab27fcdcc857a93fd0161d26ac97d56050fc70975cc18fa0c03c293cbb58e

View the kubelet logs:
$ journalctl -xefu kubelet

View the system logs:
tail -f /var/log/messages


3. Set up kubectl on the master node
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes

If you have run kubeadm reset, redo the following:
$ rm -rf $HOME/.kube
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes


4. Deploy the flannel network (run the kubectl step on the master only; the image script below must run on every node)

# Download the flannel manifest
$ mkdir -p /opt/flannel
$ cd /opt/flannel/
$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The images referenced in kube-flannel.yml live on quay.io, which cannot be pulled from inside China, so download them from a domestic mirror first and retag them, using the script below:
$ vim flanneld.sh

#!/bin/bash
set -e

FLANNEL_VERSION=v0.11.0

# The mirror sources; change them here if needed
QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos

images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)

for imageName in ${images[@]} ; do
  docker pull $QINIU_URL/$imageName
  docker tag  $QINIU_URL/$imageName $QUAY_URL/$imageName
  docker rmi $QINIU_URL/$imageName
  # save under the quay.io tag; name each archive flannel-<version>-<arch>.tar so the arch variants don't overwrite each other
  docker save $QUAY_URL/$imageName > /opt/images/${imageName//:/-}.tar
done

Run the script; it must be executed on every node:

$ sh flanneld.sh

$ kubectl create -f kube-flannel.yml
$ systemctl restart network
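
To verify the network came up, check that the flannel pods reach Running and the nodes turn Ready (the app=flannel label is set by the kube-flannel.yml manifest):

$ kubectl get pods -n kube-system -l app=flannel -o wide
$ kubectl get nodes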


5. Join worker nodes to the cluster

## Run the join command printed by kubeadm init on k8s-master. If you lost it, regenerate it on k8s-master with:
$ kubeadm token create --print-join-command
kubeadm join 192.168.18.100:6443 --token lktech.yoqdq7mvvuur5aad --discovery-token-ca-cert-hash sha256:23bab27fcdcc857a93fd0161d26ac97d56050fc70975cc18fa0c03c293cbb58e

## Join the cluster
$ kubeadm join 192.168.18.100:6443 --token lktech.yoqdq7mvvuur5aad \
    --discovery-token-ca-cert-hash sha256:23bab27fcdcc857a93fd0161d26ac97d56050fc70975cc18fa0c03c293cbb58e --ignore-preflight-errors=all

W0725 15:53:38.909055   10778 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


6. Growing the cluster later
By default the join token expires after 24 hours. If a new node needs to join after that, generate a new token:

# List the existing tokens
$ kubeadm token list
# Generate a new token and print the full join command
$ kubeadm token create --print-join-command
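
If only the CA hash is missing, it can be recomputed on the master: it is the SHA-256 digest of the cluster CA's public key.

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'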


7. Shrinking the cluster

On the master node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

On the node being removed:

kubeadm reset
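
As the kubeadm reset output warns (see Problem 1 below), reset does not touch iptables or IPVS state. A cleanup sketch to run as root on the removed node (the ipvsadm line applies only if kube-proxy ran in IPVS mode and ipvsadm is installed):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear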


8. Deploy the Dashboard
Project: https://github.com/kubernetes/dashboard
Docs: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
Deploy the latest version, v2.0.3; download its YAML from:
https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
kubernetesui/dashboard:v2.0.3
kubernetesui/metrics-scraper:v1.0.4

$ mkdir dashboard
$ cd dashboard/
$ wget -c https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

# Change the Service type to NodePort
$ vim recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
...

$ kubectl apply -f recommended.yaml
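
A quick check that the Dashboard pods are Running and the Service exposes NodePort 30001:

$ kubectl get pods -n kubernetes-dashboard
$ kubectl get svc -n kubernetes-dashboard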


9. Create a service account and bind it to the built-in cluster-admin cluster role

$ vim dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

$ kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
Get the login token:

$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-hb5vs
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: d699cd10-82cb-48ac-af7e-e8eea540b46e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ing5T2gwbFR2Wk56SG9rR2xVck5BOFhVRnRWVE0wdHhSdndyOXZ3Uk5vYkUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWhiNXZzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkNjk5Y2QxMC04MmNiLTQ4YWMtYWY3ZS1lOGVlYTU0MGI0NmUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.OkhaAJ5wLhQA2oR8wNIvEW9UYYtwEOuGQIMa281f42SD5UrJzHBxk1_YeNbTQFKMJHcgeRpLxCy7PyZotLq7S_x_lhrVtg82MPbagu3ofDjlXLKc3pU9R9DqCHyid1rGXA94muNJRRWuI4Vq4DaPEnZ0xjfkep4AVPiOjFTlHXuBa68qRc-XK4dhs95BozVIHwir1W2CWhlNdfgTEY2QYJX0N1WqBQu_UWi3ay3NDLQR6pn1OcsG4xCemHjjsMmrKElZthAAc3r1aUQdCV7YNpSBajCPSSyfbMiU3mOjy1xLipEijFditif3HGXpKyYLkbuOY4dYtZHocWK7bfgGDQ


10. Access the Dashboard
URL: https://NodeIP:30001 (the Dashboard serves HTTPS behind the NodePort, so use https)


V. Problems and solutions

1. Problem 1:
On the master node:
# kubeadm init --kubernetes-version=v1.16.3 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap  | tee kubeadm-init.log
On the worker node:
# kubeadm join 192.101.10.82:6443 --token 61ncjb.7omeewgf9q2gl6r2  --discovery-token-ca-cert-hash sha256:14246e460482fee37b9de5d37e46e3bcea6c7f5ab303d733fc62ea2f0055897b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
error execution phase kubelet-start: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-node0 pki]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Wed 2020-10-21 12:38:57 CST; 22ms ago
     Docs: https://kubernetes.io/docs/
  Process: 5476 ExecStart=/usr/bin/kubelet (code=exited, status=255)
Main PID: 5476 (code=exited, status=255)

Oct 21 12:38:57 k8s-node0 systemd[1]: Unit kubelet.service entered failed state.
Oct 21 12:38:57 k8s-node0 systemd[1]: kubelet.service failed.
[root@k8s-node0 pki]# journalctl -xeu kubelet
Oct 21 12:38:47 k8s-node0 kubelet[5414]: W1021 12:38:47.695115    5414 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-
Oct 21 12:38:47 k8s-node0 kubelet[5414]: I1021 12:38:47.695144    5414 docker_service.go:240] Hairpin mode set to "hairpin-veth"
Oct 21 12:38:47 k8s-node0 kubelet[5414]: W1021 12:38:47.695326    5414 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Oct 21 12:38:47 k8s-node0 kubelet[5414]: I1021 12:38:47.699338    5414 docker_service.go:255] Docker cri networking managed by kubernetes.io/no-op
Oct 21 12:38:47 k8s-node0 kubelet[5414]: I1021 12:38:47.707187    5414 docker_service.go:260] Docker Info: &{ID:NBHL:T3BF:XU7B:6DVA:HHPR:E35Y:OOLD:Q7FT:J2S4:UI56:LCLO:FGFI Containers:1 Con
Oct 21 12:38:47 k8s-node0 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Oct 21 12:38:47 k8s-node0 kubelet[5414]: F1021 12:38:47.707351    5414 server.go:271] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" i
Oct 21 12:38:47 k8s-node0 systemd[1]: Unit kubelet.service entered failed state.
Oct 21 12:38:47 k8s-node0 systemd[1]: kubelet.service failed.
Oct 21 12:38:57 k8s-node0 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Oct 21 12:38:57 k8s-node0 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Oct 21 12:38:57 k8s-node0 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.900719    5476 server.go:410] Version: v1.16.3
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.900994    5476 plugins.go:100] No cloud provider specified.
Oct 21 12:38:57 k8s-node0 kubelet[5476]: W1021 12:38:57.901013    5476 server.go:549] standalone mode, no API client
Oct 21 12:38:57 k8s-node0 kubelet[5476]: W1021 12:38:57.939289    5476 server.go:467] No api server defined - no events will be sent to API server.
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.939314    5476 server.go:636] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.939743    5476 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: []
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.939763    5476 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCg
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.939907    5476 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.939917    5476 container_manager_linux.go:305] Creating device plugin manager: true
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.939947    5476 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/kubelet/de
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.940003    5476 state_mem.go:36] [cpumanager] initializing new in-memory state store
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.940131    5476 state_mem.go:84] [cpumanager] updated default cpuset: ""
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.940145    5476 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.940158    5476 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x79a0338 10000000000 0xc0
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.942192    5476 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.942216    5476 client.go:104] Start docker client with request timeout=2m0s
Oct 21 12:38:57 k8s-node0 kubelet[5476]: W1021 12:38:57.943590    5476 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.943628    5476 docker_service.go:240] Hairpin mode set to "hairpin-veth"
Oct 21 12:38:57 k8s-node0 kubelet[5476]: W1021 12:38:57.943784    5476 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.947984    5476 docker_service.go:255] Docker cri networking managed by kubernetes.io/no-op
Oct 21 12:38:57 k8s-node0 kubelet[5476]: I1021 12:38:57.957296    5476 docker_service.go:260] Docker Info: &{ID:NBHL:T3BF:XU7B:6DVA:HHPR:E35Y:OOLD:Q7FT:J2S4:UI56:LCLO:FGFI Containers:1 Con
Oct 21 12:38:57 k8s-node0 kubelet[5476]: F1021 12:38:57.957405    5476 server.go:271] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" i
Oct 21 12:38:57 k8s-node0 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Oct 21 12:38:57 k8s-node0 systemd[1]: Unit kubelet.service entered failed state.
Oct 21 12:38:57 k8s-node0 systemd[1]: kubelet.service failed.

Solution:

[root@k8s-node0 net.d]# cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

[root@k8s-node0 net.d]# cat /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
[root@k8s-node0 net.d]# sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
[root@k8s-node0 net.d]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
* Applying /etc/sysctl.d/k8s.conf ...
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
* Applying /etc/sysctl.conf ...
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@k8s-node0 net.d]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed Nov  7 16:37:14 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/asianux-root /                       xfs     defaults        0 0
UUID=42c0548a-a0b2-46a5-817c-ef58c8712457 /boot                   ext4    defaults        1 2
/dev/mapper/asianux-home /home                   xfs     defaults        0 0
#/dev/mapper/asianux-swap swap                    swap    defaults        0 0
[root@k8s-node0 net.d]# swapoff -a
[root@k8s-node0 net.d]# setenforce 0
setenforce: SELinux is disabled
[root@k8s-node0 net.d]# getenforce
Disabled
[root@k8s-node0 net.d]# systemctl stop firewalld
[root@k8s-node0 net.d]# sed -i '20i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
[root@k8s-node0 net.d]# systemctl daemon-reload  && systemctl restart docker && systemctl enable docker && systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-10-21 12:56:39 CST; 62ms ago
     Docs: https://docs.docker.com
Main PID: 8314 (dockerd)
   CGroup: /system.slice/docker.service
           ├─8314 /usr/bin/dockerd
           └─8328 docker-containerd --config /var/run/docker/containerd/containerd.toml

Oct 21 12:56:38 k8s-node0 dockerd[8314]: time="2020-10-21T12:56:38.950084536+08:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201c6c60, READY" module=grpc
Oct 21 12:56:38 k8s-node0 dockerd[8314]: time="2020-10-21T12:56:38.950109780+08:00" level=info msg="Loading containers: start."
Oct 21 12:56:39 k8s-node0 dockerd[8314]: time="2020-10-21T12:56:39.075270454+08:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. D... IP address"
Oct 21 12:56:39 k8s-node0 dockerd[8314]: time="2020-10-21T12:56:39.101219616+08:00" level=info msg="Loading containers: done."
Oct 21 12:56:39 k8s-node0 dockerd[8314]: time="2020-10-21T12:56:39.102406347+08:00" level=warning msg="Not using native diff for overlay2, this may cause degraded performan...ver=overlay2
Oct 21 12:56:39 k8s-node0 dockerd[8314]: time="2020-10-21T12:56:39.109926124+08:00" level=info msg="Docker daemon" commit=e68fc7a graphdriver(s)=overlay2 version=18.06.1-ce
Oct 21 12:56:39 k8s-node0 dockerd[8314]: time="2020-10-21T12:56:39.109990232+08:00" level=info msg="Daemon has completed initialization"
Oct 21 12:56:39 k8s-node0 dockerd[8314]: time="2020-10-21T12:56:39.110372171+08:00" level=warning msg="Could not register builder git source: failed to find git binary: exe...nd in $PATH"
Oct 21 12:56:39 k8s-node0 dockerd[8314]: time="2020-10-21T12:56:39.115692449+08:00" level=info msg="API listen on /var/run/docker.sock"
Oct 21 12:56:39 k8s-node0 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS


# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
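
Note the actual root cause shown in the journalctl output above: the kubelet's cgroup driver ("cgroupfs") did not match Docker's. After fixing the Docker configuration as in section II.2, confirm the drivers agree before retrying; the expected output assumes Docker was configured with native.cgroupdriver=systemd:

$ docker info | grep -i 'cgroup driver'
Cgroup Driver: systemd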


# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1021 12:57:00.140019    8576 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s-node0 net.d]# kubeadm join 192.101.10.82:6443 --token 61ncjb.7omeewgf9q2gl6r2 \
>     --discovery-token-ca-cert-hash sha256:14246e460482fee37b9de5d37e46e3bcea6c7f5ab303d733fc62ea2f0055897b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


2. Problem 2:

1) kubectl get node

NAME        STATUS     ROLES    AGE   VERSION
ebspxapp2   NotReady   <none>   70m   v1.18.2
ebspxapp3   NotReady   <none>   71m   v1.18.2
ebspxapp4   Ready      master   71m   v1.18.2


2) kubectl get pod -o wide -n kube-system

kube-flannel-ds-amd64-jx5gn         0/1     Init:0/1            0          26m   10.2.72.51    ebspxapp2   <none>           <none>
kube-flannel-ds-amd64-spxs9         0/1     Init:0/1            0          30m   10.2.72.52    ebspxapp3   <none>           <none>
kube-proxy-5p6kk                    0/1     ContainerCreating   0          27m   10.2.72.52    ebspxapp3   <none>           <none>
kube-proxy-8ccrx                    1/1     Running             2          34m   10.2.72.53    ebspxapp4   <none>           <none>
kube-proxy-q9vp5                    0/1     ContainerCreating   0          33m   10.2.72.51    ebspxapp2   <none>           <none>

3) kubectl describe pod kube-proxy-5p6kk -n kube-system

Events:
  Type     Reason                  Age                  From                Message
  ----     ------                  ----                 ----                -------
  Normal   Scheduled               26m                  default-scheduler   Successfully assigned kube-system/kube-proxy-5p6kk to ebspxapp3
  Warning  FailedCreatePodSandBox  67s (x116 over 26m)  kubelet, ebspxapp3  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to inspect sandbox image "k8s.gcr.io/pause:3.2": Error response from daemon: stat /var/lib/docker/overlay2/a72c6915b9101d6d84cc97ccbec0a4948a9d1d1284be54bb2de581920a683bc0: no such file or directory

kubectl describe pod kube-flannel-ds-amd64-spxs9 -n kube-system

Events:
  Type     Reason                  Age                    From                Message
  ----     ------                  ----                   ----                -------
  Normal   Scheduled               4m43s                  default-scheduler   Successfully assigned kube-system/kube-flannel-ds-amd64-spxs9 to ebspxapp3
  Warning  FailedCreatePodSandBox  3m47s (x5 over 4m43s)  kubelet, ebspxapp3  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to inspect sandbox image "k8s.gcr.io/pause:3.2": Error response from daemon: stat /var/lib/docker/overlay2/a72c6915b9101d6d84cc97ccbec0a4948a9d1d1284be54bb2de581920a683bc0: no such file or directory
  Warning  FailedCreatePodSandBox  0s (x14 over 2m53s)    kubelet, ebspxapp3  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to inspect sandbox image "k8s.gcr.io/pause:3.2": Error response from daemon: stat /var/lib/docker/overlay2/a72c6915b9101d6d84cc97ccbec0a4948a9d1d1284be54bb2de581920a683bc0: no such file or directory


Solution:

Note: run the following on every NotReady node

systemctl daemon-reload
systemctl stop docker

rm -rf /var/lib/docker/

systemctl restart docker
systemctl enable docker
systemctl status docker

Re-import the images:

docker load < kube-apiserver-v1.18.2.tar

docker load < kube-controller-manager-v1.18.2.tar

docker load < kube-proxy-v1.18.2.tar      

docker load < kube-scheduler-v1.18.2.tar 

.............

docker images

REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.2             0d40868643c6        3 months ago        117MB
k8s.gcr.io/kube-apiserver            v1.18.2             6ed75ad404bd        3 months ago        173MB
k8s.gcr.io/kube-scheduler            v1.18.2             a3099161e137        3 months ago        95.3MB
k8s.gcr.io/kube-controller-manager   v1.18.2             ace0a8c17ba9        3 months ago        162MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        5 months ago        683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        5 months ago        43.8MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        9 months ago        288MB
quay.io/coreos/flannel               v0.11.0-amd64       ff281650a721        18 months ago       52.6MB

docker ps -a

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                      PORTS               NAMES
a1d8a18f2205        ff281650a721           "/opt/bin/flanneld -…"   44 minutes ago      Up 44 minutes                                   k8s_kube-flannel_kube-flannel-ds-amd64-jx5gn_kube-system_9f348af4-f4c5-4602-9fea-9af9d6c621aa_0
bf1ca4a71ba7        ff281650a721           "cp -f /etc/kube-fla…"   44 minutes ago      Exited (0) 44 minutes ago                       k8s_install-cni_kube-flannel-ds-amd64-jx5gn_kube-system_9f348af4-f4c5-4602-9fea-9af9d6c621aa_0
794f438f8050        k8s.gcr.io/pause:3.2   "/pause"                 44 minutes ago      Up 44 minutes                                   k8s_POD_kube-flannel-ds-amd64-jx5gn_kube-system_9f348af4-f4c5-4602-9fea-9af9d6c621aa_0
75ad5663ecac        0d40868643c6           "/usr/local/bin/kube…"   44 minutes ago      Up 44 minutes                                   k8s_kube-proxy_kube-proxy-q9vp5_kube-system_e7ef96ba-a125-4b59-a52c-353f458eb0b2_0
5fb865000895        k8s.gcr.io/pause:3.2   "/pause"                 44 minutes ago      Up 44 minutes                                   k8s_POD_kube-proxy-q9vp5_kube-system_e7ef96ba-a125-4b59-a52c-353f458eb0b2_0


VI. Master node checks:

[root@ebspxapp4 net.d]# kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
ebspxapp2   Ready    <none>   70m   v1.18.2
ebspxapp3   Ready    <none>   71m   v1.18.2
ebspxapp4   Ready    master   71m   v1.18.2


[root@ebspxapp4 net.d]# kubectl get  pod  -o wide --all-namespaces
NAMESPACE              NAME                                    READY   STATUS             RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
kube-system            coredns-66bff467f8-2tnk9                1/1     Running            3          90m   10.244.0.12   ebspxapp4   <none>           <none>
kube-system            coredns-66bff467f8-4n8jg                1/1     Running            2          90m   10.244.0.10   ebspxapp4   <none>           <none>
kube-system            etcd-ebspxapp4                          1/1     Running            2          90m   10.2.72.53    ebspxapp4   <none>           <none>
kube-system            kube-apiserver-ebspxapp4                1/1     Running            2          90m   10.2.72.53    ebspxapp4   <none>           <none>
kube-system            kube-controller-manager-ebspxapp4       1/1     Running            3          90m   10.2.72.53    ebspxapp4   <none>           <none>
kube-system            kube-flannel-ds-amd64-6n97g             1/1     Running            3          85m   10.2.72.53    ebspxapp4   <none>           <none>
kube-system            kube-flannel-ds-amd64-jx5gn             1/1     Running            0          82m   10.2.72.51    ebspxapp2   <none>           <none>
kube-system            kube-flannel-ds-amd64-spxs9             1/1     Running            0          85m   10.2.72.52    ebspxapp3   <none>           <none>
kube-system            kube-proxy-5p6kk                        1/1     Running            0          82m   10.2.72.52    ebspxapp3   <none>           <none>
kube-system            kube-proxy-8ccrx                        1/1     Running            2          90m   10.2.72.53    ebspxapp4   <none>           <none>
kube-system            kube-proxy-q9vp5                        1/1     Running            0          89m   10.2.72.51    ebspxapp2   <none>           <none>
kube-system            kube-scheduler-ebspxapp4                1/1     Running            2          90m   10.2.72.53    ebspxapp4   <none>           <none>


[root@ebspxapp4 net.d]# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
25b3b90a7ff7        ff281650a721           "/opt/bin/flanneld -…"   57 minutes ago      Up 57 minutes                           k8s_kube-flannel_kube-flannel-ds-amd64-6n97g_kube-system_c4a539c7-9c9d-488a-931f-c2c64f459ccb_3
20bc1bded117        67da37a9a360           "/coredns -conf /etc…"   57 minutes ago      Up 57 minutes                           k8s_coredns_coredns-66bff467f8-2tnk9_kube-system_fe7cbb8a-1e95-4691-bf09-9382030b7c53_3
aa5686078bd7        ace0a8c17ba9           "kube-controller-man…"   57 minutes ago      Up 57 minutes                           k8s_kube-controller-manager_kube-controller-manager-ebspxapp4_kube-system_bde38af668115eac9d0a0ed7d36ade15_3
96068279b078        67da37a9a360           "/coredns -conf /etc…"   58 minutes ago      Up 58 minutes                           k8s_coredns_coredns-66bff467f8-4n8jg_kube-system_58bd31b9-d20e-4540-a203-856ef6d59c1d_2
93f9a68a7a3a        k8s.gcr.io/pause:3.2   "/pause"                 58 minutes ago      Up 58 minutes                           k8s_POD_coredns-66bff467f8-2tnk9_kube-system_fe7cbb8a-1e95-4691-bf09-9382030b7c53_3
a5378f3c0a0a        a3099161e137           "kube-scheduler --au…"   58 minutes ago      Up 58 minutes                           k8s_kube-scheduler_kube-scheduler-ebspxapp4_kube-system_155707e0c19147c8dc5e997f089c0ad1_2
01bfcdbc94a7        k8s.gcr.io/pause:3.2   "/pause"                 58 minutes ago      Up 58 minutes                           k8s_POD_kubernetes-dashboard-7f99b75bf4-s2q2k_kubernetes-dashboard_e10f41f2-e707-4efa-ac75-25f8d59a4eab_3
c201cec81b8d        k8s.gcr.io/pause:3.2   "/pause"                 58 minutes ago      Up 58 minutes                           k8s_POD_kube-scheduler-ebspxapp4_kube-system_155707e0c19147c8dc5e997f089c0ad1_3
55271fe8b5fa        k8s.gcr.io/pause:3.2   "/pause"                 58 minutes ago      Up 58 minutes                           k8s_POD_kube-flannel-ds-amd64-6n97g_kube-system_c4a539c7-9c9d-488a-931f-c2c64f459ccb_2
3a75573420e2        303ce5db0e90           "etcd --advertise-cl…"   58 minutes ago      Up 58 minutes                           k8s_etcd_etcd-ebspxapp4_kube-system_1084eb6e962f4c712997ccd7714a8fdd_2
8e5a745351fb        6ed75ad404bd           "kube-apiserver --ad…"   58 minutes ago      Up 58 minutes                           k8s_kube-apiserver_kube-apiserver-ebspxapp4_kube-system_877cd5a6384470588a55f3c4fb1bb6f1_2
baf18478db6e        0d40868643c6           "/usr/local/bin/kube…"   58 minutes ago      Up 58 minutes                           k8s_kube-proxy_kube-proxy-8ccrx_kube-system_87480089-5266-4a12-9181-f2631225821f_2
c11eab2740d8        k8s.gcr.io/pause:3.2   "/pause"                 58 minutes ago      Up 58 minutes                           k8s_POD_coredns-66bff467f8-4n8jg_kube-system_58bd31b9-d20e-4540-a203-856ef6d59c1d_2
d95dbacee6a8        k8s.gcr.io/pause:3.2   "/pause"                 58 minutes ago      Up 58 minutes                           k8s_POD_etcd-ebspxapp4_kube-system_1084eb6e962f4c712997ccd7714a8fdd_2
c68e3c811dcd        k8s.gcr.io/pause:3.2   "/pause"                 58 minutes ago      Up 58 minutes                           k8s_POD_kube-controller-manager-ebspxapp4_kube-system_bde38af668115eac9d0a0ed7d36ade15_3
06dacaf5bb7a        k8s.gcr.io/pause:3.2   "/pause"                 58 minutes ago      Up 58 minutes                           k8s_POD_kube-proxy-8ccrx_kube-system_87480089-5266-4a12-9181-f2631225821f_2
0f529e4cc07a        k8s.gcr.io/pause:3.2   "/pause"                 58 minutes ago      Up 58 minutes                           k8s_POD_kube-apiserver-ebspxapp4_kube-system_877cd5a6384470588a55f3c4fb1bb6f1_2



Node configuration:

# tree /etc/kubernetes/
/etc/kubernetes/
├── kubelet.conf
├── manifests
└── pki
    └── ca.crt

2 directories, 2 files

# tree /var/lib/etcd/
/var/lib/etcd/

0 directories, 0 files

# tree /var/lib/kubelet/
/var/lib/kubelet/
├── config.yaml
├── cpu_manager_state
├── device-plugins
│   ├── DEPRECATION
│   ├── kubelet_internal_checkpoint
│   └── kubelet.sock
├── kubeadm-flags.env
├── pki
│   ├── kubelet-client-2020-10-21-12-57-13.pem
│   ├── kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2020-10-21-12-57-13.pem
│   ├── kubelet.crt
│   └── kubelet.key
├── plugin-containers
├── plugins
├── plugins_registry
├── pod-resources
│   └── kubelet.sock
└── pods
    ├── 36061b6e-499d-4796-a4cc-a7f9cab27a33
    │   ├── containers
    │   │   └── consul
    │   │       └── e21c74f0
    │   ├── etc-hosts
    │   ├── plugins
    │   │   └── kubernetes.io~empty-dir
    │   │       └── wrapped_default-token-sszr4
    │   │           └── ready
    │   └── volumes
    │       └── kubernetes.io~secret
    │           └── default-token-sszr4
    │               ├── ca.crt -> ..data/ca.crt
    │               ├── namespace -> ..data/namespace
    │               └── token -> ..data/token
    ├── 3efe9dd6-62f3-4404-af58-559d3d9c76d3
    │   ├── containers
    │   │   ├── install-cni
    │   │   │   └── 4831cc9a
    │   │   └── kube-flannel
    │   │       └── 6abf664b
    │   ├── etc-hosts
    │   ├── plugins
    │   │   └── kubernetes.io~empty-dir
    │   │       ├── wrapped_flannel-cfg
    │   │       │   └── ready
    │   │       └── wrapped_flannel-token-ltds5
    │   │           └── ready
    │   └── volumes
    │       ├── kubernetes.io~configmap
    │       │   └── flannel-cfg
    │       │       ├── cni-conf.json -> ..data/cni-conf.json
    │       │       └── net-conf.json -> ..data/net-conf.json
    │       └── kubernetes.io~secret
    │           └── flannel-token-ltds5
    │               ├── ca.crt -> ..data/ca.crt
    │               ├── namespace -> ..data/namespace
    │               └── token -> ..data/token
    ├── 4cdb5e39-2d59-4336-a2be-085cfffd1179
    │   ├── containers
    │   │   └── kube-proxy
    │   │       └── d7951e7e
    │   ├── etc-hosts
    │   ├── plugins
    │   │   └── kubernetes.io~empty-dir
    │   │       ├── wrapped_kube-proxy
    │   │       │   └── ready
    │   │       └── wrapped_kube-proxy-token-26dst
    │   │           └── ready
    │   └── volumes
    │       ├── kubernetes.io~configmap
    │       │   └── kube-proxy
    │       │       ├── config.conf -> ..data/config.conf
    │       │       └── kubeconfig.conf -> ..data/kubeconfig.conf
    │       └── kubernetes.io~secret
    │           └── kube-proxy-token-26dst
    │               ├── ca.crt -> ..data/ca.crt
    │               ├── namespace -> ..data/namespace
    │               └── token -> ..data/token
    └── 7a824811-b9aa-4937-84c8-7477ac79c358
        ├── containers
        │   └── consul-client
        │       └── 0f940373
        ├── etc-hosts
        ├── plugins
        │   └── kubernetes.io~empty-dir
        │       └── wrapped_default-token-sszr4
        │           └── ready
        └── volumes
            └── kubernetes.io~secret
                └── default-token-sszr4
                    ├── ca.crt -> ..data/ca.crt
                    ├── namespace -> ..data/namespace
                    └── token -> ..data/token

50 directories, 42 files

Master node configuration:

# tree /etc/kubernetes/
/etc/kubernetes/
├── admin.conf
├── controller-manager.conf
├── kubeadm-init.log
├── kubelet.conf
├── manifests
│   ├── etcd.yaml
│   ├── kube-apiserver.yaml
│   ├── kube-controller-manager.yaml
│   └── kube-scheduler.yaml
├── pki
│   ├── apiserver.crt
│   ├── apiserver-etcd-client.crt
│   ├── apiserver-etcd-client.key
│   ├── apiserver.key
│   ├── apiserver-kubelet-client.crt
│   ├── apiserver-kubelet-client.key
│   ├── ca.crt
│   ├── ca.key
│   ├── etcd
│   │   ├── ca.crt
│   │   ├── ca.key
│   │   ├── healthcheck-client.crt
│   │   ├── healthcheck-client.key
│   │   ├── peer.crt
│   │   ├── peer.key
│   │   ├── server.crt
│   │   └── server.key
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   ├── front-proxy-client.key
│   ├── sa.key
│   └── sa.pub
└── scheduler.conf

3 directories, 31 files

tree /var/lib/etcd/
/var/lib/etcd/
└── member
    ├── snap
    │   ├── 0000000000000002-0000000000002711.snap
    │   ├── 0000000000000002-0000000000004e22.snap
    │   └── db
    └── wal
        ├── 0000000000000000-0000000000000000.wal
        └── 0.tmp

3 directories, 5 files

tree /var/lib/kubelet/
/var/lib/kubelet/
├── config.yaml
├── cpu_manager_state
├── device-plugins
│   ├── DEPRECATION
│   ├── kubelet_internal_checkpoint
│   └── kubelet.sock
├── kubeadm-flags.env
├── pki
│   ├── kubelet-client-2020-10-21-10-42-31.pem
│   ├── kubelet-client-2020-10-21-10-43-00.pem
│   ├── kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2020-10-21-10-43-00.pem
│   ├── kubelet.crt
│   └── kubelet.key
├── plugin-containers
├── plugins
├── plugins_registry
├── pod-resources
│   └── kubelet.sock
└── pods
    ├── 09b14979c022b93fad3690c8a2084f70
    │   ├── containers
    │   │   └── etcd
    │   │       └── ef4e4db4
    │   ├── etc-hosts
    │   ├── plugins
    │   └── volumes
    ├── 0a430757-12ef-410b-9069-9854a08eb17a
    │   ├── containers
    │   │   └── kube-proxy
    │   │       └── bdd76235
    │   ├── etc-hosts
    │   ├── plugins
    │   │   └── kubernetes.io~empty-dir
    │   │       ├── wrapped_kube-proxy
    │   │       │   └── ready
    │   │       └── wrapped_kube-proxy-token-26dst
    │   │           └── ready
    │   └── volumes
    │       ├── kubernetes.io~configmap
    │       │   └── kube-proxy
    │       │       ├── config.conf -> ..data/config.conf
    │       │       └── kubeconfig.conf -> ..data/kubeconfig.conf
    │       └── kubernetes.io~secret
    │           └── kube-proxy-token-26dst
    │               ├── ca.crt -> ..data/ca.crt
    │               ├── namespace -> ..data/namespace
    │               └── token -> ..data/token
    ├── 324fa25dc1c8c4033391819aaa687bf8
    │   ├── containers
    │   │   └── kube-controller-manager
    │   │       └── 604e94f7
    │   ├── etc-hosts
    │   ├── plugins
    │   └── volumes
    ├── 4e04118d-fadb-46f9-a678-cf07f922be65
    │   ├── containers
    │   │   ├── install-cni
    │   │   │   └── 84a1b318
    │   │   └── kube-flannel
    │   │       └── 7921931f
    │   ├── etc-hosts
    │   ├── plugins
    │   │   └── kubernetes.io~empty-dir
    │   │       ├── wrapped_flannel-cfg
    │   │       │   └── ready
    │   │       └── wrapped_flannel-token-ltds5
    │   │           └── ready
    │   └── volumes
    │       ├── kubernetes.io~configmap
    │       │   └── flannel-cfg
    │       │       ├── cni-conf.json -> ..data/cni-conf.json
    │       │       └── net-conf.json -> ..data/net-conf.json
    │       └── kubernetes.io~secret
    │           └── flannel-token-ltds5
    │               ├── ca.crt -> ..data/ca.crt
    │               ├── namespace -> ..data/namespace
    │               └── token -> ..data/token
    ├── 4e1bd6e5b41d60d131353157588ab020
    │   ├── containers
    │   │   └── kube-scheduler
    │   │       └── 7e9889f7
    │   ├── etc-hosts
    │   ├── plugins
    │   └── volumes
    └── 62acd5d6f848e7e15f07ee8968b42de5
        ├── containers
        │   └── kube-apiserver
        │       └── 5befa250
        ├── etc-hosts
        ├── plugins
        └── volumes

52 directories, 39 files




        
         
