Kubernetes 1.5.1 Deployment

 

> kubernetes 1.5.1, configuration document


# 1 Initialize the environment


## 1.1 Environment

 

| Node   | IP         |
|--------|------------|
| node-1 | 10.6.0.140 |
| node-2 | 10.6.0.187 |
| node-3 | 10.6.0.188 |


## 1.2 Set the hostname

```
hostnamectl --static set-hostname <hostname>
```

 

| IP         | hostname   |
|------------|------------|
| 10.6.0.140 | k8s-node-1 |
| 10.6.0.187 | k8s-node-2 |
| 10.6.0.188 | k8s-node-3 |
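Under the mapping above, that amounts to one command per node; a minimal sketch:

```
# On 10.6.0.140
hostnamectl --static set-hostname k8s-node-1

# On 10.6.0.187
hostnamectl --static set-hostname k8s-node-2

# On 10.6.0.188
hostnamectl --static set-hostname k8s-node-3
```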

 


## 1.3 Configure hosts

```
vi /etc/hosts
```

| IP         | hostname   |
|------------|------------|
| 10.6.0.140 | k8s-node-1 |
| 10.6.0.187 | k8s-node-2 |
| 10.6.0.188 | k8s-node-3 |
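Per the table, the entries appended to /etc/hosts on every node would look like this (a sketch):

```
cat >> /etc/hosts <<EOF
10.6.0.140 k8s-node-1
10.6.0.187 k8s-node-2
10.6.0.188 k8s-node-3
EOF
```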

 

# 2.0 Deploy the kubernetes master

 

## 2.1 Add the yum repo

 

```
# Use a friend's yum mirror for the kubernetes packages
cat <<EOF> /etc/yum.repos.d/kubernetes.repo
[mritdrepo]
name=Mritd Repository
baseurl=https://yum.mritd.me/centos/7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://cdn.mritd.me/keys/rpm.public.key
EOF

yum makecache

yum install -y socat kubelet kubeadm kubectl kubernetes-cni
```

 


## 2.2 Install docker

 

```
wget -qO- https://get.docker.com/ | sh

systemctl enable docker
systemctl start docker
```

 


## 2.3 Install the etcd cluster

```
yum -y install etcd

# Create the etcd data directory
mkdir -p /opt/etcd/data

chown -R etcd:etcd /opt/etcd/


# Edit the config file; /etc/etcd/etcd.conf needs the following parameters changed:

ETCD_NAME=etcd1
ETCD_DATA_DIR="/opt/etcd/data/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.140:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.140:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.140:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.6.0.140:2380,etcd2=http://10.6.0.187:2380,etcd3=http://10.6.0.188:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.140:2379"
```
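The values above are for node-1 (etcd1). On the other two nodes only the member name, data dir, and local IP change; a sketch for 10.6.0.187 (etcd2), derived from the ETCD_INITIAL_CLUSTER list above:

```
# /etc/etcd/etcd.conf on 10.6.0.187; 10.6.0.188 uses etcd3 analogously
ETCD_NAME=etcd2
ETCD_DATA_DIR="/opt/etcd/data/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.187:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.187:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.187:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.187:2379"
# ETCD_INITIAL_CLUSTER, ETCD_INITIAL_CLUSTER_STATE and ETCD_INITIAL_CLUSTER_TOKEN
# are identical on all three nodes
```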

 

 

```
# Modify the etcd startup file
sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service
```
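Because this edits a systemd unit file, systemd has to re-read it before the change takes effect; a quick sanity check:

```
# Re-read unit files after editing etcd.service
systemctl daemon-reload

# Confirm the extra flags were appended to ExecStart
grep ExecStart /usr/lib/systemd/system/etcd.service
```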

 


 

```
# Start etcd
systemctl enable etcd
systemctl start etcd
systemctl status etcd

# Check cluster health
etcdctl cluster-health
```

 

 

## 2.4 Pull the images

 

```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
```

```
# If pulls are slow, configure a registry mirror:
# add --registry-mirror="http://b438f72b.m.daocloud.io" to the docker startup file
```
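Rather than editing the docker startup file by hand, the mirror can also go into /etc/docker/daemon.json (a sketch, assuming a docker version recent enough to read daemon.json):

```
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["http://b438f72b.m.daocloud.io"]
}
EOF

systemctl restart docker
```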

 

 

 


## 2.5 Start kubernetes

 

```
systemctl enable kubelet
systemctl start kubelet
```

 

 

## 2.6 Create the cluster

 

```
kubeadm init --api-advertise-addresses=10.6.0.140 \
--external-etcd-endpoints=http://10.6.0.140:2379,http://10.6.0.187:2379,http://10.6.0.188:2379 \
--use-kubernetes-version v1.5.1 \
--pod-network-cidr 10.244.0.0/16

```

```
Flag --external-etcd-endpoints has been deprecated, this flag will be removed when componentconfig exists
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "c53ef2.d257d49589d634f0"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 15.299235 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.002937 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 2.502881 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140


```

 

 

## 2.7 Record the token

You can now join any number of machines by running the following on each node:

```
kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

 

 

## 2.8 Configure the network

 

 

```
# Pull the image beforehand; otherwise the in-cluster pull easily fails

docker pull quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64

# Or like this

docker pull jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
docker tag jicki/flannel-git:v0.6.1-28-g5dde68d-amd64 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64
docker rmi jicki/flannel-git:v0.6.1-28-g5dde68d-amd64


```


```
# http://kubernetes.io/docs/admin/addons/ lists several network add-ons; pick one

# Flannel is used here; with Flannel, kubeadm init must be run with --pod-network-cidr

kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

```
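A quick way to confirm the flannel pods came up (the pod names carry generated suffixes, so grep rather than naming them):

```
kubectl get pods --all-namespaces | grep flannel
```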

 

 

## 2.9 Check kubelet status

 

```
systemctl status kubelet
```

 

 

 


# 3.0 Deploy the kubernetes nodes


## 3.1 Install docker

 

```
wget -qO- https://get.docker.com/ | sh


systemctl enable docker
systemctl start docker
```

 

 

## 3.2 Pull the images

```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done

```

 


## 3.3 Start kubernetes

 

```
systemctl enable kubelet
systemctl start kubelet
```

 


## 3.4 Join the cluster

 

```
kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

```
Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
```

 

 

## 3.5 Check cluster status

 

```
[root@k8s-node-1 ~]#kubectl get node
NAME         STATUS         AGE
k8s-node-1   Ready,master   27m
k8s-node-2   Ready          6s
k8s-node-3   Ready          9s
```

 


## 3.6 Check service status

 

```
[root@k8s-node-1 ~]#kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-qrp68               1/1       Running   1          1h
kube-system   kube-apiserver-k8s-node-1            1/1       Running   2          1h
kube-system   kube-controller-manager-k8s-node-1   1/1       Running   2          1h
kube-system   kube-discovery-1769846148-g2lpc      1/1       Running   1          1h
kube-system   kube-dns-2924299975-xbhv4            4/4       Running   3          1h
kube-system   kube-flannel-ds-39g5n                2/2       Running   2          1h
kube-system   kube-flannel-ds-dwc82                2/2       Running   2          1h
kube-system   kube-flannel-ds-qpkm0                2/2       Running   2          1h
kube-system   kube-proxy-16c50                     1/1       Running   2          1h
kube-system   kube-proxy-5rkc8                     1/1       Running   2          1h
kube-system   kube-proxy-xwrq0                     1/1       Running   2          1h
kube-system   kube-scheduler-k8s-node-1            1/1       Running   2          1h
```

 

 

 


# 4.0 Configure kubernetes

## 4.1 Control the cluster from another host

 

```
# Back up the master node's config file

/etc/kubernetes/admin.conf

# Save it to another machine and use it to control the cluster

kubectl --kubeconfig ./admin.conf get nodes
```
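A minimal sketch of the copy step, assuming SSH access to the master from the workstation:

```
# Run on the workstation; pulls the kubeconfig off the master
scp root@10.6.0.140:/etc/kubernetes/admin.conf ./admin.conf

# Then point kubectl at it
kubectl --kubeconfig ./admin.conf get nodes
```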

 


## 4.2 Configure the dashboard

 

```
# Download the yaml file; importing it as-is would pull the image from the official registry

curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml


# Edit the yaml file

vi kubernetes-dashboard.yaml

image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0

change to

image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0


imagePullPolicy: Always

change to

imagePullPolicy: IfNotPresent
```
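The two edits can also be scripted instead of done in vi; a sketch with sed:

```
sed -i 's|kubernetes-dashboard-amd64:v1.4.0|kubernetes-dashboard-amd64:v1.5.0|' kubernetes-dashboard.yaml
sed -i 's|imagePullPolicy: Always|imagePullPolicy: IfNotPresent|' kubernetes-dashboard.yaml
```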


```
kubectl create -f ./kubernetes-dashboard.yaml

deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
```


```
# Check the NodePort, i.e. the externally reachable port

kubectl describe svc kubernetes-dashboard --namespace=kube-system

NodePort:               <unset> 31736/TCP

```


```
# Access the dashboard

http://10.6.0.140:31736

```

 

 

 

 


# 5.0 Deploy applications on kubernetes


## 5.1 Deploy an nginx rc


> Write an nginx yaml

 

```
apiVersion: v1 
kind: ReplicationController 
metadata: 
  name: nginx-rc 
spec: 
  replicas: 2 
  selector: 
    name: nginx 
  template: 
    metadata: 
      labels: 
        name: nginx 
    spec: 
      containers: 
        - name: nginx 
          image: nginx:alpine
          imagePullPolicy: IfNotPresent
          ports: 
            - containerPort: 80
```
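The manifest then has to be loaded before the rc shows up; a sketch, assuming it was saved as nginx-rc.yaml (the filename is not given above):

```
kubectl create -f nginx-rc.yaml
```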

```
[root@k8s-node-1 ~]#kubectl get rc
NAME       DESIRED   CURRENT   READY     AGE
nginx-rc   2         2         2         2m


[root@k8s-node-1 ~]#kubectl get pod -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-rc-2s8k9   1/1       Running   0          10m       10.32.0.3   k8s-node-1
nginx-rc-s16cm   1/1       Running   0          10m       10.40.0.1   k8s-node-2
```

 

 

> Write an nginx service so containers inside the cluster can reach it (ClusterIP)

 

```
apiVersion: v1 
kind: Service 
metadata: 
  name: nginx-svc 
spec: 
  ports: 
    - port: 80
      targetPort: 80
      protocol: TCP 
  selector: 
    name: nginx
```


```
[root@k8s-node-1 ~]#kubectl create -f nginx-svc.yaml 
service "nginx-svc" created


[root@k8s-node-1 ~]#kubectl get svc -o wide
NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE       SELECTOR
kubernetes   10.0.0.1      <none>        443/TCP   2d        <none>
nginx-svc    10.6.164.79   <none>        80/TCP    29s       name=nginx

```



> Write a curl pod

```
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: radial/busyboxplus:curl
    command:
    - sh
    - -c
    - while true; do sleep 1; done
```
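As with the other manifests, create the pod first (a sketch; curl-pod.yaml is an assumed filename):

```
kubectl create -f curl-pod.yaml
```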


```
# Test communication between pods (nginx-svc is the service created above)
[root@k8s-node-1 ~]#kubectl exec curl curl nginx-svc
```



```
# From any node, the service is reachable via its cluster IP

[root@k8s-node-2 ~]# curl 10.6.164.79
[root@k8s-node-3 ~]# curl 10.6.164.79

```

 

 

> Write an nginx service that is reachable from outside (NodePort)

 

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-node
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
```


```
[root@k8s-node-1 ~]#kubectl get svc -o wide
NAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE       SELECTOR
kubernetes       10.0.0.1       <none>        443/TCP   2d        <none>
nginx-svc        10.6.164.79    <none>        80/TCP    29m       name=nginx
nginx-svc-node   10.12.95.227   <nodes>       80/TCP    17s       name=nginx


[root@k8s-node-1 ~]#kubectl describe svc nginx-svc-node |grep NodePort
Type:                   NodePort
NodePort:               <unset> 32669/TCP
```



```
# Reachable via any node's physical IP plus the NodePort

http://10.6.0.140:32669

http://10.6.0.187:32669

http://10.6.0.188:32669
```

 

 

 


## 5.2 Deploy a zookeeper cluster


> Write a zookeeper-cluster.yaml

 

```
apiVersion: extensions/v1beta1
kind: Deployment 
metadata: 
  name: zookeeper-1
spec: 
  replicas: 1
  template: 
    metadata: 
      labels: 
        name: zookeeper-1 
    spec: 
      containers: 
        - name: zookeeper-1
          image: zk:alpine 
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "1"
          - name: NODES
            value: "0.0.0.0,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 2181

---

apiVersion: extensions/v1beta1 
kind: Deployment
metadata:
  name: zookeeper-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-2
    spec:
      containers:
        - name: zookeeper-2
          image: zk:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "2"
          - name: NODES
            value: "zookeeper-1,0.0.0.0,zookeeper-3"
          ports:
          - containerPort: 2181

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-3
    spec:
      containers:
        - name: zookeeper-3
          image: zk:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "3"
          - name: NODES
            value: "zookeeper-1,zookeeper-2,0.0.0.0"
          ports:
          - containerPort: 2181
---

apiVersion: v1 
kind: Service 
metadata: 
  name: zookeeper-1 
  labels:
    name: zookeeper-1
spec: 
  ports: 
    - name: client
      port: 2181
      protocol: TCP
    - name: followers
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
  selector: 
    name: zookeeper-1

---

apiVersion: v1 
kind: Service 
metadata: 
  name: zookeeper-2
  labels:
    name: zookeeper-2
spec: 
  ports: 
    - name: client
      port: 2181
      protocol: TCP
    - name: followers
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
  selector: 
    name: zookeeper-2

---

apiVersion: v1 
kind: Service 
metadata: 
  name: zookeeper-3
  labels:
    name: zookeeper-3
spec: 
  ports: 
    - name: client
      port: 2181
      protocol: TCP
    - name: followers
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
  selector: 
    name: zookeeper-3
    
```


```
[root@k8s-node-1 ~]#kubectl create -f zookeeper-cluster.yaml --record



[root@k8s-node-1 ~]#kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP          NODE
zookeeper-1-2149121414-cfyt4   1/1       Running   0          51m       10.32.0.3   k8s-node-2
zookeeper-2-2653289864-0bxee   1/1       Running   0          51m       10.40.0.1   k8s-node-3
zookeeper-3-3158769034-5csqy   1/1       Running   0          51m       10.40.0.2   k8s-node-3


[root@k8s-node-1 ~]#kubectl get deployment -o wide    
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
zookeeper-1   1         1         1            1           51m
zookeeper-2   1         1         1            1           51m
zookeeper-3   1         1         1            1           51m


[root@k8s-node-1 ~]#kubectl get svc -o wide
NAME          CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE       SELECTOR
zookeeper-1   10.8.111.19    <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-1
zookeeper-2   10.6.10.124    <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-2
zookeeper-3   10.0.146.143   <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-3
```
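A hedged spot-check of the ensemble from the curl pod created in 5.1, assuming the zk:alpine image leaves ZooKeeper's four-letter commands enabled:

```
# "stat" reports the server's mode (leader/follower) when it is healthy
kubectl exec curl -- sh -c 'echo stat | nc zookeeper-1 2181'
```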

 

 

## 5.3 Deploy a kafka cluster


> Write a kafka-cluster.yaml

 

```

apiVersion: extensions/v1beta1
kind: Deployment 
metadata: 
  name: kafka-deployment-1
spec: 
  replicas: 1
  template: 
    metadata: 
      labels: 
        name: kafka-1 
    spec: 
      containers: 
        - name: kafka-1
          image: kafka:alpine 
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "1"
          - name: ZK_NODES
            value: "zookeeper-1,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 9092

---

apiVersion: extensions/v1beta1 
kind: Deployment
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-2  
    spec:
      containers:
        - name: kafka-2
          image: kafka:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "2"
          - name: ZK_NODES
            value: "zookeeper-1,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 9092

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-3  
    spec:
      containers:
        - name: kafka-3
          image: kafka:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "3"
          - name: ZK_NODES
            value: "zookeeper-1,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 9092

---

apiVersion: v1 
kind: Service 
metadata: 
  name: kafka-1 
  labels:
    name: kafka-1
spec: 
  ports: 
    - name: client
      port: 9092
      protocol: TCP
  selector: 
    name: kafka-1

---

apiVersion: v1 
kind: Service 
metadata: 
  name: kafka-2
  labels:
    name: kafka-2
spec: 
  ports: 
    - name: client
      port: 9092
      protocol: TCP
  selector: 
    name: kafka-2

---

apiVersion: v1 
kind: Service 
metadata: 
  name: kafka-3
  labels:
    name: kafka-3
spec: 
  ports: 
    - name: client
      port: 9092
      protocol: TCP
  selector: 
    name: kafka-3

```
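The original stops at the manifest; loading and verifying it would mirror the zookeeper steps (a sketch, assuming the file is saved as kafka-cluster.yaml):

```
kubectl create -f kafka-cluster.yaml --record

kubectl get pods -o wide
kubectl get svc -o wide
```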

 


 

 


# FAQ:


## kube-discovery error

 

    failed to create "kube-discovery" deployment [deployments.extensions "kube-discovery" already exists]
    

Reset and re-initialize (re-run `kubeadm init` with the same flags as in section 2.6):

```
kubeadm reset

kubeadm init
```