K8S Helm(1)Understand YAML and Kubectl Pod and Deployment

 


In K8S, we usually work with resources such as Pod, Service, Volume, Namespace, ReplicaSet, Deployment, Job, etc. We define all these resources in YAML files, then kubectl calls the Kubernetes API to deploy them.

Helm manages Charts, similar to what APT is to Ubuntu or YUM is to CentOS.
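
As a quick sketch of that workflow (assuming Helm 2 with Tiller, which was current at the time; the chart and release names here are only illustrative):

Search for a chart and install it as a named release
> helm search nginx
> helm install stable/nginx-ingress --name my-ingress

List and remove releases
> helm list
> helm delete my-ingress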

Understand YAML Files in K8S
https://www.qikqiak.com/k8s-book/docs/18.YAML%20%E6%96%87%E4%BB%B6.html
YAML Basics
Only spaces are allowed for indentation, no tabs
# stands for a comment
The two core structures are Lists and Maps

Maps - key-value pairs
---
apiVersion: v1
kind: Pod

Lists
args:
  - Cat
  - Dog
  - Fish

Equal to

{
    "args": [ "Cat", "Dog", "Fish" ]
}
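
The two structures nest freely. For example, a map whose value is a list of maps (the key names here are only illustrative):

containers:
  - name: front-end
    image: nginx

Equal to

{
    "containers": [ { "name": "front-end", "image": "nginx" } ]
}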

Use YAML to Create Pod
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/

apiVersion: v1 - the K8S API version
kind: Pod - other options include Deployment, Job, Ingress, Service
metadata: - meta information, such as name, namespace, labels
spec: - containers, storage, volumes, etc.

A simple pod definition file is as follows (the same nginxpod.yaml is shown in full again later in this post):
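---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80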


Set Up the Kubectl Command Line on Rancher Home or My Local Mac
On CentOS 7
> kubectl version
-bash: kubectl: command not found

> curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

Remove the old version if there is one
> sudo rm -fr /usr/local/bin/kubectl

Give permission to the local file
> chmod a+x ./kubectl

Move the file to the PATH
> sudo mv ./kubectl /usr/local/bin/kubectl

Check version
> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

It cannot connect to the server because the configuration does not point to my local Rancher server.
In my Rancher server cluster (home), click on the button [Kubeconfig File]
> mkdir ~/.kube
> vi ~/.kube/config

Paste the configuration content there
> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

> kubectl cluster-info
Kubernetes master is running at https://rancher-home/k8s/clusters/c-2ldm9
CoreDNS is running at https://rancher-home/k8s/clusters/c-2ldm9/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

If we need this on my MacBook:
> curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl
> chmod a+x ./kubectl
> sudo mv ./kubectl /usr/local/bin/kubectl
> vi ~/.kube/config

> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

> kubectl cluster-info
Kubernetes master is running at https://rancher-home/k8s/clusters/c-2ldm9
CoreDNS is running at https://rancher-home/k8s/clusters/c-2ldm9/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Choose which configuration file to use
> export KUBECONFIG=~/.kube/config-dev
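
If several clusters live in one kubeconfig file instead, kubectl contexts do the same job (standard kubectl subcommands; the context name here is illustrative):

> kubectl config get-contexts
> kubectl config use-context home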

Check the running pods
> kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
sillycatnginx-775bf7d556-pg2fl   1/1     Running   1          2d7h
sillycatnginx-775bf7d556-sqd5x   1/1     Running   1          2d7h

Check the running services
> kubectl get svc
NAME                                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
ingress-5ed19ad916d01260cef53a965736a2ff   ClusterIP   10.43.122.67   <none>        80/TCP    2d7h
kubernetes                                 ClusterIP   10.43.0.1      <none>        443/TCP   8d
sillycatnginx                                 ClusterIP   10.43.14.70    <none>        80/TCP    2d7h

Check namespace
> kubectl get namespaces
NAME              STATUS   AGE
cattle-system     Active   9d
default           Active   9d
ingress-nginx     Active   9d
kube-node-lease   Active   9d
kube-public       Active   9d
kube-system       Active   9d

List the Nodes
> kubectl get node
NAME              STATUS   ROLES               AGE   VERSION
rancher-home      Ready    controlplane,etcd   9d    v1.14.6
rancher-worker1   Ready    worker              9d    v1.14.6
rancher-worker2   Ready    worker              9d    v1.14.6

List the Nodes with IP
> kubectl get node -o wide
NAME              STATUS   ROLES               AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
rancher-home      Ready    controlplane,etcd   9d    v1.14.6   192.168.56.110   <none>        CentOS Linux 7 (Core)   3.10.0-1062.1.1.el7.x86_64   docker://19.3.2
rancher-worker1   Ready    worker              9d    v1.14.6   192.168.56.111   <none>        CentOS Linux 7 (Core)   3.10.0-1062.1.1.el7.x86_64   docker://19.3.2
rancher-worker2   Ready    worker              9d    v1.14.6   10.0.3.15        <none>        CentOS Linux 7 (Core)   3.10.0-1062.1.1.el7.x86_64   docker://19.3.2

Open a shell in the pod (the closest equivalent to SSH)
> kubectl exec -it sillycatnginx-775bf7d556-pg2fl -- /bin/bash

Or
> kubectl exec -it sillycatnginx-775bf7d556-pg2fl -- /bin/sh
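
Relatedly, we can read a pod's stdout without opening a shell at all (standard kubectl commands; -f follows the log like tail -f):

> kubectl logs sillycatnginx-775bf7d556-pg2fl
> kubectl logs -f sillycatnginx-775bf7d556-pg2fl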

First simple YAML to create Pod
> cat nginxpod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80

Create the pod when we need it
> kubectl create -f nginxpod.yaml
pod/nginx created
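
As an aside, kubectl apply -f is the declarative alternative to create: it creates the resource if it does not exist and patches it if it does, so the same command serves both the first deployment and later updates:

> kubectl apply -f nginxpod.yaml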

> kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m12s

> kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP           NODE              NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          3m17s   10.42.2.36   rancher-worker2   <none>           <none>

On the node rancher-worker2, we can reach it:
> curl -G http://10.42.2.36

Check the pod information
> kubectl describe pod nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         rancher-worker2/10.0.3.15
Start Time:   Sat, 28 Sep 2019 13:30:34 -0400
Labels:       app=web
Annotations:  cni.projectcalico.org/podIP: 10.42.2.36/32
Status:       Running
IP:           10.42.2.36
IPs:          <none>
Containers:
  front-end:
    Container ID:   docker://fedbe8f015fe8be1718cb80bf64f9e66689fdfe9cfb1a70915bc864f60f150dd
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1
    Port:           80/TCP
    Host Port:      0/TCP

Check pod nginx
> kubectl get pod/nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          30m

Check pod YAML
> kubectl get pod/nginx -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 10.42.2.36/32
  creationTimestamp: "2019-09-29T03:12:23Z"
  labels:
    app: web
  name: nginx
  namespace: default
  resourceVersion: "644756"
  selfLink: /api/v1/namespaces/default/pods/nginx
  uid: f5183fbd-e266-11e9-89f1-080027609f67
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: front-end
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-q8b8g
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: rancher-worker2
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-q8b8g
    secret:
      defaultMode: 420
      secretName: default-token-q8b8g
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-09-28T17:30:34Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-09-28T17:31:03Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-09-28T17:31:03Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-09-29T03:12:23Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://fedbe8f015fe8be1718cb80bf64f9e66689fdfe9cfb1a70915bc864f60f150dd
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1
    lastState: {}
    name: front-end
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-09-28T17:31:03Z"
  hostIP: 10.0.3.15
  phase: Running
  podIP: 10.42.2.36
  qosClass: BestEffort
  startTime: "2019-09-28T17:30:34Z"

We can delete it:
> kubectl delete -f nginxpod.yaml
pod "nginx" deleted

Create Deployment
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2

More on apiVersion: https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-apiversion-definition-guide.html
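
Note that extensions/v1beta1 still works against this v1.14 server, but Deployment is no longer served from that group in Kubernetes v1.16; there it must be written against apps/v1, which also makes spec.selector mandatory. A sketch of the same deployment in apps/v1 form:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80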

The basic deployment
> cat nginxdeployment.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80

Create the deployment
> kubectl create -f nginxdeployment.yaml
deployment.extensions/nginx-site created

> kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nginx-site   2/2     2            2           4m36s

> kubectl get deployments -o wide
NAME         READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR
nginx-site   2/2     2            2           4m54s   front-end    nginx    app=nginx

The workloads appear in the Rancher UI: nginx-site shows 2 nginx pods running, one on rancher-worker1 and one on rancher-worker2.
> kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE    IP           NODE              NOMINATED NODE   READINESS GATES
nginx-site-7ff6d945f6-lvcwp   1/1     Running   0          124m   10.42.1.65   rancher-worker1   <none>           <none>
nginx-site-7ff6d945f6-pdsng   1/1     Running   0          124m   10.42.2.38   rancher-worker2   <none>           <none>
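
The deployment can also be scaled after the fact without editing the YAML (standard kubectl subcommand):

> kubectl scale deployment nginx-site --replicas=3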

We can easily access these services. On rancher-worker1:
> curl -G http://10.42.1.65

On rancher-worker2:
> curl -G http://10.42.2.38

This website can validate your YAML file
http://www.yamllint.com/

Deploy from our Private Registry
https://blog.csdn.net/wucong60/article/details/81586272
I configured my Harbor private registry in the UI:
[Resources] -> [Registry] ->
Address: 192.168.56.110:8088
Username: sillycat
Password:

> kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-q8b8g   kubernetes.io/service-account-token   3      11d
sillycatharbor           kubernetes.io/dockerconfigjson        1      3d20h
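
The sillycatharbor secret above was presumably created by Rancher when the registry was configured in the UI; on a plain cluster the equivalent can be created from the command line (standard kubectl subcommand, using the registry values configured above):

> kubectl create secret docker-registry sillycatharbor \
    --docker-server=192.168.56.110:8088 \
    --docker-username=sillycat \
    --docker-password=<password>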

Latest deployment configuration
> cat nginxdeployment2.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-site
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: front-end
          image: rancher-home:8088/sillycat/nginx:v1
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: sillycatharbor

> kubectl create -f nginxdeployment2.yaml
deployment.extensions/nginx-site created

It works well.

Create Service
My basic configuration YAML is as follows:
> cat nginxservice.yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/targetWorkloadIds: '["deployment:default:nginx-site"]'
  name: nginx-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
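
Note that this Service has no spec.selector; the field.cattle.io/targetWorkloadIds annotation tells Rancher to manage the endpoints for the named workload itself. Outside of Rancher, a plain Kubernetes Service would instead select pods by label, roughly:

spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80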

Create the service
> kubectl create -f nginxservice.yaml
service/nginx-service created
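
To confirm the service exists and has endpoints behind it:

> kubectl get svc nginx-service
> kubectl describe svc nginx-service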

Create Load Balancer
My basic load balancer configuration is as follows:
> cat nginxloadbalancing.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxproxy
  namespace: default
spec:
  rules:
  - host: rancher.sillycat.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
status:
  loadBalancer: {}

Create the load balancer
> kubectl create -f nginxloadbalancing.yaml
ingress.extensions/nginxproxy created
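
To verify the ingress, list it and then curl a worker node with the configured host header (assuming the ingress controller listens on port 80 of the worker nodes, as Rancher's default nginx-ingress does; the node IP is taken from the node listings above):

> kubectl get ingress
> curl -H "Host: rancher.sillycat.com" http://192.168.56.111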



References:
https://www.hi-linux.com/posts/21466.html
https://www.qikqiak.com/k8s-book/docs/18.YAML%20%E6%96%87%E4%BB%B6.html
https://stackoverflow.com/questions/45714658/need-to-do-ssh-to-kubernetes-pod
https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-apiversion-definition-guide.html
https://blog.csdn.net/wucong60/article/details/81586272