sillycat
K8S(1) Set Up a Custom Cluster

Following this document:
https://jimmysong.io/kubernetes-handbook/practice/install-kubernetes-on-ubuntu-server-16.04-with-kubeadm.html
I have 2 Ubuntu machines:
192.168.56.101 ubuntu-master
192.168.56.103 ubuntu-dev4
Make Sure Docker Is Installed
> sudo apt-get install docker.io
Check version
> docker --version
Docker version 18.06.1-ce, build e68fc7a
Try to install the tools directly, which fails:
> sudo apt-get install kubelet kubeadm kubectl
No apt package "kubeadm", but there is a snap with that name.
Try "snap install kubeadm"
Solution:
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl
> sudo apt-get update && sudo apt-get install -y apt-transport-https
> curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
> echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
> sudo apt-get update
> sudo apt-get install kubelet kubeadm kubectl
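Right after the install it is worth pinning these three packages, so a routine `apt-get upgrade` cannot move kubelet/kubeadm/kubectl out of step with the running cluster. A small sketch (the `apt-mark` lines need to run on every node; the final `printf` only simulates what `apt-mark showhold` would list):

```shell
# Pin the Kubernetes packages so they only change when we upgrade on purpose
# (run on every node):
#   sudo apt-mark hold kubelet kubeadm kubectl
# 'apt-mark showhold' then lists the held packages; simulated here:
printf '%s\n' kubeadm kubectl kubelet
```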
Use kubeadm to Install the K8S Cluster
Set Up the Master First
> sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.101
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
Disable swap and retry:
> sudo swapoff -a
> sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.101
[init] Using Kubernetes version: v1.13.3
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
  kubeadm join 192.168.56.101:6443 --token hwg09t.gmd6t2f7s3f18z2t --discovery-token-ca-cert-hash sha256:9eab712056ad59e628e852e40ab5889071865f6f925c8ef54c1a257eaf02f3d9
Follow the instructions in the output:
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) /home/carl/.kube/config
Set Up Slaves
> sudo swapoff -a
> sudo kubeadm join 192.168.56.101:6443 --token hwg09t.gmd6t2f7s3f18z2t --discovery-token-ca-cert-hash sha256:9eab712056ad59e628e852e40ab5889071865f6f925c8ef54c1a257eaf02f3d9
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
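One thing to know for later: the token printed by `kubeadm init` expires after 24 hours by default. If it has expired when another node needs to join, a fresh join command can be generated on the master. A sketch (the `kubeadm` line needs to run on ubuntu-master; the `echo` only shows the shape of what it prints, with the hash elided):

```shell
# On the master, mint a new token and print the full join command:
#   kubeadm token create --print-join-command
# It prints a command of this shape, matching the one used above:
MASTER_IP=192.168.56.101
TOKEN=hwg09t.gmd6t2f7s3f18z2t   # the token from the init output above
echo "kubeadm join ${MASTER_IP}:6443 --token ${TOKEN} --discovery-token-ca-cert-hash sha256:<hash>"
```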
We can run this command on the master to see the nodes:
> kubectl get nodes
NAME            STATUS     ROLES    AGE    VERSION
ubuntu-dev4     NotReady   <none>   75s    v1.13.3
ubuntu-master   NotReady   master   164m   v1.13.3
We can also check the system pods from the master:
> kubectl get pod -n kube-system -o wide
Install the network plugin canal:
> kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
clusterrole.rbac.authorization.k8s.io/calico created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
> kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/canal/canal.yaml
configmap/canal-config created
daemonset.extensions/canal created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
serviceaccount/canal created
After applying these two manifests, we check the status again; everything is running:
> kubectl get pod -n kube-system -o wide
Check node status
> kubectl get node
NAME            STATUS   ROLES    AGE    VERSION
ubuntu-dev4     Ready    <none>   20m    v1.13.3
ubuntu-master   Ready    master   3h3m   v1.13.3
Allow pods to run on the master as well (the trailing '-' removes the taint):
> kubectl taint nodes --all node-role.kubernetes.io/master-
Keep swap off even after a restart:
> sudo swapoff -a
> cat /etc/fstab | grep -v '^#' | grep -v 'swap' | sudo tee /etc/fstab
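Note that the tee pipeline above rewrites /etc/fstab and also drops every comment line from it. A gentler way to keep swap off across reboots is to comment out only the swap entries with sed; a sketch, demonstrated on a sample file (on a real node, run the same sed with sudo against /etc/fstab):

```shell
# Build a sample fstab with one root mount and one swap entry:
cat > fstab.sample <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF
# Prefix every line containing the swap keyword with '#', leaving the rest intact:
sed -i '/\sswap\s/ s/^/#/' fstab.sample
cat fstab.sample
```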
Checking this document:
https://juejin.im/post/5bb45d63f265da0a9e532128
Set Up the UI
> kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
> kubectl get pods --namespace=kube-system
kubernetes-dashboard-57df4db6b-qs87d    1/1     Running   0          69s
Check that all the pods are running:
> kubectl get pods --namespace=kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
canal-cfbbs                             3/3     Running   6          125m    192.168.56.103   ubuntu-dev4     <none>           <none>
canal-vsg6n                             3/3     Running   7          125m    192.168.56.101   ubuntu-master   <none>           <none>
coredns-86c58d9df4-9gs9l                1/1     Running   1          4h53m   10.244.0.4       ubuntu-master   <none>           <none>
coredns-86c58d9df4-ch28b                1/1     Running   1          4h53m   10.244.0.5       ubuntu-master   <none>           <none>
etcd-ubuntu-master                      1/1     Running   4          4h52m   192.168.56.101   ubuntu-master   <none>           <none>
kube-apiserver-ubuntu-master            1/1     Running   4          4h52m   192.168.56.101   ubuntu-master   <none>           <none>
kube-controller-manager-ubuntu-master   1/1     Running   4          4h52m   192.168.56.101   ubuntu-master   <none>           <none>
kube-proxy-hp9n2                        1/1     Running   4          4h53m   192.168.56.101   ubuntu-master   <none>           <none>
kube-proxy-nmmvz                        1/1     Running   2          130m    192.168.56.103   ubuntu-dev4     <none>           <none>
kube-scheduler-ubuntu-master            1/1     Running   4          4h52m   192.168.56.101   ubuntu-master   <none>           <none>
kubernetes-dashboard-57df4db6b-qs87d    1/1     Running   0          2m5s    10.244.1.2       ubuntu-dev4     <none>           <none>
Start a Proxy on ubuntu-master
> kubectl proxy --address='0.0.0.0' --accept-hosts='^*'
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
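The dashboard is reached through the apiserver proxy, and the path follows the pattern /api/v1/namespaces/&lt;namespace&gt;/services/[https:]&lt;service&gt;[:&lt;port&gt;]/proxy/. A small sketch that assembles the URL used later in this post (ubuntu-master:8001 is where `kubectl proxy` listens above):

```shell
# Assemble the apiserver-proxy URL for the dashboard service:
HOST="ubuntu-master:8001"
NS="kube-system"
SVC="https:kubernetes-dashboard:"   # scheme:name:port, port left empty for the default
echo "http://${HOST}/api/v1/namespaces/${NS}/services/${SVC}/proxy/"
```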
Checking the network:
https://www.cnblogs.com/xzkzzz/p/9952716.html
> kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
> kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml

Copy the kubeconfig file to my host machine:
> scp ubuntu-master:/home/carl/.kube/config ~/.kube/config-ubuntu
> export KUBECONFIG=~/.kube/config-ubuntu
Then I can run these commands on my host machine as well
> kubectl get pods --namespace=kube-system -o wide
> kubectl proxy
Starting to serve on 127.0.0.1:8001
This command needs to run on ubuntu-master:
> kubectl proxy --address='0.0.0.0' --accept-hosts='^*'
These URLs do not work.
http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/overview?namespace=default
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default
I think it is related to the network settings, since I cannot ping 10.244.1.2, where the admin console is running.
https://stackoverflow.com/questions/50401355/requests-timing-out-when-accesing-a-kubernetes-clusterip-service
Download the two setting files (only canal.yaml needs changing):
> wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml
Change the canal_iface setting in that file:
  canal_iface: "enp0s8"
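For context, canal_iface lives in the canal-config ConfigMap near the top of canal.yaml; the relevant excerpt looks roughly like this (enp0s8 is the VirtualBox host-only adapter carrying the 192.168.56.x addresses on these machines; substitute the interface name shown by `ip addr` on your own hosts):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: canal-config
  namespace: kube-system
data:
  # Bind canal/flannel to the host-only interface instead of the default route
  canal_iface: "enp0s8"
```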
> kubectl apply -f ./canal.yaml
Reboot the ubuntu-master, then it works.
> ping 10.244.1.4
PING 10.244.1.4 (10.244.1.4) 56(84) bytes of data.
64 bytes from 10.244.1.4: icmp_seq=1 ttl=63 time=0.739 ms
Run the proxy on master
> kubectl proxy --address='0.0.0.0' --accept-hosts='^*'
YEAH, this UI works:
http://ubuntu-master:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Check TOKEN
> kubectl get secret -n kube-system | grep dashboard
kubernetes-dashboard-certs                       Opaque                                0         1h
kubernetes-dashboard-csrf                        Opaque                                1         1h
kubernetes-dashboard-key-holder                  Opaque                                2         1h
kubernetes-dashboard-token-d4p5x                 kubernetes.io/service-account-token   3         1h
Check the Dashboard manifest I installed:
> wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
I used this command to find out the token:
> kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token|awk '{print $1}')|grep token:|awk '{print $2}'
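An equivalent lookup with jsonpath instead of grep/awk: the token is stored base64-encoded under .data.token in the service-account secret. A sketch (the kubectl lines need the running cluster from above; the runnable part below only illustrates the base64 round trip locally):

```shell
# On a machine with cluster access:
#   SECRET=$(kubectl -n kube-system get secret -o name | grep dashboard-token)
#   kubectl -n kube-system get "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
# The base64 decode step itself, illustrated with a placeholder value:
ENC=$(printf '%s' 'example-token' | base64)
printf '%s' "$ENC" | base64 --decode
echo
```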
Grant rights so that the login can be skipped:
> cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
Still not working; checking this:
https://www.cnblogs.com/RainingNight/p/deploying-k8s-dashboard-ui.html
It seems the latest documents are here:
https://github.com/kubernetes/dashboard
https://github.com/kubernetes/dashboard/wiki/Creating-sample-user
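That sample-user guide boils down to roughly this manifest (a sketch based on the guide; `admin-user` is the name it suggests, and kube-system matches the namespace the dashboard was deployed into above). The idea is to log in with this dedicated service account's token rather than binding cluster-admin to the dashboard's own account:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
```

Apply it with kubectl apply -f, then fetch the token the same way as above, with kubernetes-dashboard-token replaced by admin-user-token.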

References:
https://kubernetes.io/docs/setup/scratch/
http://docs.kubernetes.org.cn/774.html
https://www.jianshu.com/p/7d1fb03b8925
https://juejin.im/post/5bb45d63f265da0a9e532128
https://my.oschina.net/u/1013857/blog/2991314
https://zhanghongtong.github.io/2018/10/09/ubuntu-%E4%BD%BF%E7%94%A8kubeadm%E5%AE%89%E8%A3%85kubernetes/
https://jimmysong.io/kubernetes-handbook/practice/install-kubernetes-on-ubuntu-server-16.04-with-kubeadm.html