Prerequisites
Environment: VirtualBox, CentOS 7
Physical host IP: 192.168.18.8
VM 1 IP: 192.168.18.100 (VMaster, master)
VM 2 IP: 192.168.18.101 (VServer1, node1)
VM 3 IP: 192.168.18.102 (VServer2, node2)
VM 4 IP: 192.168.18.103 (VServer3, node3)
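The machines are addressed by IP throughout this guide, so name resolution is optional; if you prefer hostnames, a minimal /etc/hosts mapping (a sketch assuming the hostnames above) can be added on every machine:
192.168.18.100 VMaster
192.168.18.101 VServer1
192.168.18.102 VServer2
192.168.18.103 VServer3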
I. CentOS 7 VM IP Configuration
1. # cd /etc/sysconfig/network-scripts
2. # vi ifcfg-enp0s3
TYPE=Ethernet
DEVICE=enp0s3
NAME=enp0s3
ONBOOT=yes
DEFROUTE=yes
BOOTPROTO=static
IPADDR=192.168.18.101
NETMASK=255.255.255.0
DNS1=192.168.18.1
GATEWAY=192.168.18.1
BROADCAST=192.168.18.255
3. # service network restart
4. # ip address
II. VM Hostname Setup (takes effect after reboot)
1. # hostname
or
# hostnamectl
2. # vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=VServer1
3. # vi /etc/hosts
Append the new IP address and its hostname as the last line:
192.168.18.101 VServer1
4. # vi /etc/hostname
Change the contents to VServer1
5. # reboot ## reboot the VM
On CentOS 7 there is a systemd-hostnamed service; restarting it applies the change without a full reboot
# systemctl restart systemd-hostnamed
6. # hostname
or
# hostnamectl
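On CentOS 7, hostnamectl can also set the name in a single step (it rewrites /etc/hostname and notifies systemd-hostnamed), which covers steps 2 through 5 above; for example:
# hostnamectl set-hostname VServer1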
7. # yum update
Upgrades CentOS (both system packages and the kernel)
8. # reboot ## reboot the VM
III. K8S Pre-installation Preparation
1. Stop firewalld and disable it at boot
# systemctl stop firewalld.service # stop firewalld
# systemctl disable firewalld.service # keep firewalld from starting at boot
2. Install the NTP service
# yum install -y ntp wget net-tools
# systemctl start ntpd
# systemctl enable ntpd
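To confirm ntpd has actually synchronized, query its peers; an asterisk in the first column marks the server currently selected for synchronization:
# ntpq -p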
IV. K8S Master Installation and Configuration
1. Install the Kubernetes master
# yum install -y kubernetes etcd
2. Edit /etc/etcd/etcd.conf so that etcd listens on all IP addresses; make sure the following lines are uncommented and set to the values below
# vi /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.18.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.18.100:2379"
ETCD_INITIAL_CLUSTER="default=http://192.168.18.100:2380"
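Once etcd has been started (step 4 below), the etcd v2 client that ships with this package can verify the configuration took effect:
# etcdctl cluster-health
# etcdctl member list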
3. Edit the Kubernetes API server configuration file /etc/kubernetes/apiserver; make sure the following lines are uncommented and set to the values below
# vi /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.18.100:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
4. Start the etcd, kube-apiserver, kube-controller-manager and kube-scheduler services and enable them at boot
# mkdir /script
# cd /script
# touch kubernetes_service.sh
# vi kubernetes_service.sh
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
# chmod +x kubernetes_service.sh
# sh /script/kubernetes_service.sh
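With the services running, a quick smoke test against the API server's insecure port (the master IP from this guide) should return ok:
# curl http://192.168.18.100:8080/healthz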
5. Define the flannel network configuration in etcd; the flannel service on each node will pick this configuration up:
# etcdctl mk /centos.com/network/config '{"Network":"172.17.0.0/16"}'
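Read the key back to confirm it was written; it should echo the JSON passed to etcdctl mk:
# etcdctl get /centos.com/network/config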
6. Add iptables rules to open the required ports
# iptables -I INPUT -p tcp --dport 2379 -j ACCEPT
# iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
# iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
# iptables-save
# Take a quick look at the current iptables rules
# iptables -L -n
# netstat -lnpt | grep kube-apiserver
Check the k8s version information
#kubectl api-versions
#kubectl version
V. K8S Node Installation and Configuration
1. Install kubernetes and flannel with yum
# yum install -y flannel kubernetes
2. Point the flannel service at the etcd server; edit the following lines in /etc/sysconfig/flanneld to connect to the master
# vi /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.18.100:2379" # set to the etcd server's IP
FLANNEL_ETCD_PREFIX="/centos.com/network"
3. Edit the Kubernetes defaults in /etc/kubernetes/config and make sure KUBE_MASTER points at the Kubernetes master API server:
#vi /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.18.100:8080"
4. Edit the following lines in /etc/kubernetes/kubelet:
node1:
# vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.18.101"
KUBELET_API_SERVER="--api_servers=http://192.168.18.100:8080"
KUBELET_ARGS=""
node2:
# vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.18.102"
KUBELET_API_SERVER="--api_servers=http://192.168.18.100:8080"
KUBELET_ARGS=""
node3:
# vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=192.168.18.103"
KUBELET_API_SERVER="--api_servers=http://192.168.18.100:8080"
KUBELET_ARGS=""
5. Start the kube-proxy, kubelet, docker and flanneld services and enable them at boot
# mkdir /script
# cd /script
# touch kubernetes_node_service.sh
# vi kubernetes_node_service.sh
for SERVICES in kube-proxy kubelet docker flanneld; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
# chmod +x kubernetes_node_service.sh
# sh /script/kubernetes_node_service.sh
6. Add iptables rules:
# iptables -I INPUT -p tcp --dport 2379 -j ACCEPT
# iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
# iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
# iptables-save
# Check service status
# systemctl status kubelet
# systemctl status docker
(check the remaining services the same way)
# Take a quick look at the current iptables rules
# iptables -L -n
#netstat -lnpt|grep kubelet
#ss -tunlp | grep 8080
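At this point each kubelet should have registered itself with the API server; from the master, the nodes should be listed as Ready:
# kubectl get nodes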
VI. Troubleshooting
(1) yum on CentOS 7 reports: cannot find a valid baseurl for repo: base/7/x86_64
Solution:
1. Back up CentOS-Base.repo first
# sudo cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
2. Replace the yum repository file /etc/yum.repos.d/CentOS-Base.repo with the Aliyun mirror configuration below:
# vi /etc/yum.repos.d/CentOS-Base.repo
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#
[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.aliyun.com/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
3. Clear the cache
# yum clean all
# rm -rf /var/cache/yum/
4. Rebuild the cache
# yum makecache fast
(2) Containers created through a Deployment.yaml stay stuck in ContainerCreating:
1. Inspect with:
#kubectl get pod [pod-name] -o wide
#kubectl describe pod [pod-name]
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
30m 30m 1 {default-scheduler } Normal Scheduled Successfully assigned nginx-deployment-67353951-4b29t to 192.168.18.184
30m 3m 10 {kubelet 192.168.18.184} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
30m 9s 128 {kubelet 192.168.18.184} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
2. Solution:
Run on each node:
#yum install -y *rhsm*
#wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
#rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
The wget and rpm2cpio commands above produce the file /etc/rhsm/ca/redhat-uep.pem
#docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
#systemctl restart kubelet
#systemctl status kubelet
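To double-check that the certificate landed where docker looks for it, inspect the file produced above:
# ls -l /etc/rhsm/ca/redhat-uep.pem
# openssl x509 -in /etc/rhsm/ca/redhat-uep.pem -noout -subject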
Run on the master:
#kubectl get pod -o wide
(3) Docker startup warning "IPv4 forwarding is disabled. Networking will not work.":
1. Problem:
A web service started under docker on CentOS 7 reports the following at startup:
WARNING: IPv4 forwarding is disabled. Networking will not work.
2. Fix:
# vi /etc/sysctl.conf
net.ipv4.ip_forward=1 # add this line
Restart the network service
# systemctl restart network && systemctl restart docker
Verify the change took effect (a value of 1 means success)
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
(4) Containers created through a Deployment.yaml run briefly and then enter CrashLoopBackOff:
Inspect with:
#kubectl get pod --namespace=kube-system
#kubectl describe pod --namespace=kube-system [pod-name]
Events (excerpt; the Created/Started/Killing cycle below repeated roughly a dozen times with different docker ids):
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- ---- ------ -------
15m 15m 1 {default-scheduler } Normal Scheduled Successfully assigned kubernetes-dashboard-latest-1032225734-t7h1k to 192.168.18.182
15m 1m 10 {kubelet 192.168.18.182} spec.containers{kubernetes-dashboard} Normal Pulled Container image "registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.5.0" already present on machine
15m 15m 1 {kubelet 192.168.18.182} spec.containers{kubernetes-dashboard} Normal Created Created container with docker id 49eb3e178a31; Security:[seccomp=unconfined]
15m 15m 1 {kubelet 192.168.18.182} spec.containers{kubernetes-dashboard} Normal Started Started container with docker id 49eb3e178a31
14m 14m 1 {kubelet 192.168.18.182} spec.containers{kubernetes-dashboard} Normal Killing Killing container with docker id 49eb3e178a31: pod "kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
14m 52s 12 {kubelet 192.168.18.182} spec.containers{kubernetes-dashboard} Warning Unhealthy Liveness probe failed: Get http://172.17.0.2:9180/: dial tcp 172.17.0.2:9180: getsockopt: connection refused
15m 1m 11 {kubelet 192.168.18.182} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
7m 11s 29 {kubelet 192.168.18.182} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-latest-1032225734-t7h1k_kube-system(9f253538-ba6b-11ea-a541-080027f52697)"
12m 11s 48 {kubelet 192.168.18.182} spec.containers{kubernetes-dashboard} Warning BackOff Back-off restarting failed docker container
Solution:
#iptables -P FORWARD ACCEPT
(5) An nginx pod deploys successfully, but its exposed port cannot be reached from outside
It is reachable from inside the cluster:
# curl http://192.168.18.180:30080
Solution:
Run on the server hosting the container to enable forwarding
#iptables -P FORWARD ACCEPT
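Note that a policy set with iptables -P does not survive a reboot. If the iptables-services package is installed (an assumption; this guide only disabled firewalld), the current rules and policies can be persisted with:
# iptables-save > /etc/sysconfig/iptables
# systemctl enable iptables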
(6) Force-delete a pod stuck in the Terminating state
# kubectl delete pod ${pod_name} --grace-period=0 --force
Force-delete an rc
# kubectl delete rc nginx-controller --force --cascade=false
Force-delete a deployment
#kubectl delete deployment kubernetes-dashboard-latest --namespace=kube-system --force=true --cascade=false
VII. Common Commands:
#kubectl get svc --namespace=kube-system
#kubectl delete svc --namespace=kube-system kubernetes-dashboard
#kubectl get deployment --namespace=kube-system
#kubectl delete deployment --namespace=kube-system kubernetes-dashboard-latest
#kubectl get pod --namespace=kube-system
#kubectl delete pod --namespace=kube-system kubernetes-dashboard-latest-3665071062-t6sgk
#kubectl create --validate -f dashboard-deployment.yaml
#kubectl create --validate -f dashboard-service.yaml
#kubectl describe pod --namespace=kube-system kubernetes-dashboard-latest-3665071062-b5k84
#kubectl --namespace=kube-system get pod kubernetes-dashboard-latest-3665071062-b5k84 -o yaml
Check kubelet logs
#journalctl -u kubelet
#kubectl logs <pod-name>
#kubectl logs --previous <pod-name>
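kubectl logs can also stream and trim output; both flags below are standard options:
# kubectl logs -f --tail=50 <pod-name>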
Enter a running Pod
#kubectl exec -it pod-name /bin/bash
Enter a specific container in a running multi-container Pod
#kubectl exec -it pod-name -c container-name /bin/bash