ElasticSearch(3)Version Upgrade and Cluster
Haha, the first version of Elasticsearch I started with was 1.4.0.
Last time I installed version 6.2.4. Right now it is 7.0.1. Haha. Let's try the latest on my systems.
First of all, on macOS
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.1-darwin-x86_64.tar.gz
Unzip the file and install it in the default directory
> sudo ln -s /Users/hluo/tool/elasticsearch-7.0.1 /opt/elasticsearch-7.0.1
> sudo ln -s /opt/elasticsearch-7.0.1 /opt/elasticsearch
It is already in my PATH
export PATH=/opt/elasticsearch/bin:$PATH
This command comes from the old version; it is not needed here anymore.
> bin/elasticsearch-plugin install x-pack
ERROR: this distribution of Elasticsearch contains X-Pack by default
Start Elasticsearch with this command
> bin/elasticsearch
After it starts, visit the web page
http://localhost:9200/
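If you prefer to script this check instead of opening a browser, here is a minimal Python sketch. The `summarize_cluster_info` helper name is my own; it only assumes the standard JSON document that Elasticsearch serves at its root endpoint.

```python
import json
from urllib.request import urlopen

def summarize_cluster_info(payload):
    """Condense the JSON served at / into a one-line summary."""
    return "{0} ({1}) running Elasticsearch {2}, Lucene {3}".format(
        payload["name"],
        payload["cluster_name"],
        payload["version"]["number"],
        payload["version"]["lucene_version"],
    )

if __name__ == "__main__":
    # Assumes a local node is listening on the default HTTP port 9200.
    with urlopen("http://localhost:9200/") as resp:
        print(summarize_cluster_info(json.load(resp)))
```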
Download and install Kibana
https://artifacts.elastic.co/downloads/kibana/kibana-7.0.1-darwin-x86_64.tar.gz
> sudo ln -s /Users/hluo/tool/kibana-7.0.1 /opt/kibana-7.0.1
> sudo ln -s /opt/kibana-7.0.1 /opt/kibana
Kibana is added to my PATH as well
export PATH=/opt/kibana/bin:$PATH
Edit and Check the configuration
> vi config/kibana.yml
elasticsearch.hosts: ["http://localhost:9200"]
Start Kibana
> bin/kibana
Visit the web page
http://localhost:5601/app/kibana
Elasticsearch and Kibana on Ubuntu
Find the Elasticsearch download here
https://www.elastic.co/downloads/elasticsearch
> wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.0.1-linux-x86_64.tar.gz
Find the Kibana download here
https://www.elastic.co/downloads/kibana
> wget https://artifacts.elastic.co/downloads/kibana/kibana-7.0.1-linux-x86_64.tar.gz
Unzip these two files and place them in the working directory
> sudo ln -s /home/carl/tool/elasticsearch-7.0.1 /opt/elasticsearch-7.0.1
> sudo ln -s /home/carl/tool/kibana-7.0.1 /opt/kibana-7.0.1
> sudo ln -s /opt/elasticsearch-7.0.1 /opt/elasticsearch
> sudo ln -s /opt/kibana-7.0.1 /opt/kibana
When I change the binding IP for Elasticsearch, I get an error when I start the instance.
> cat config/elasticsearch.yml
network.host: 192.168.56.101
http.port: 9200
The error message:
"bound or publishing to a non-loopback address, enforcing bootstrap checks"
Adding these lines to the configuration works:
transport.host: localhost
transport.tcp.port: 9300
http.port: 9200
network.host: 0.0.0.0
Kibana Network Configuration
The configuration file is as follows
> cat config/kibana.yml
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
After that, we can visit these pages:
http://ubuntu-master:9200/
http://ubuntu-master:5601
Here is a Compose configuration to run elasticsearch1, elasticsearch2, elasticsearch3, and Kibana.
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: elasticsearch1
    environment:
      - node.name=elasticsearch1
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512M -Xmx512M"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    # privileged: true
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '1'
          memory: 512M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      # - "./data/logs:/var/log"
      # - "./data/loc_esdata1:/usr/share/elasticsearch/data"
      - type: volume
        source: logs
        target: /var/log
      - type: volume
        source: loc_esdata1
        target: /usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9200:9200
      - 9300:9300
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: elasticsearch2
    environment:
      - node.name=elasticsearch2
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512M -Xmx512M"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    # privileged: true
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '1'
          memory: 512M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      # - "./data/logs:/var/log"
      # - "./data/loc_esdata2:/usr/share/elasticsearch/data"
      - type: volume
        source: logs
        target: /var/log
      - type: volume
        source: loc_esdata2
        target: /usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9201:9200
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: elasticsearch3
    environment:
      - node.name=elasticsearch3
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512M -Xmx512M"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    # privileged: true
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '1'
          memory: 512M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      # - "./data/logs:/var/log"
      # - "./data/loc_esdata3:/usr/share/elasticsearch/data"
      - type: volume
        source: logs
        target: /var/log
      - type: volume
        source: loc_esdata3
        target: /usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9202:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.0.1
    container_name: kibana
    environment:
      SERVER_NAME: localhost
      # elasticsearch.url was replaced by elasticsearch.hosts in Kibana 7
      ELASTICSEARCH_HOSTS: http://elasticsearch1:9200/
    ports:
      - 5601:5601
    volumes:
      - type: volume
        source: logs
        target: /var/log
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '1'
          memory: 512M
      restart_policy:
        condition: on-failure
        delay: 30s
        max_attempts: 3
        window: 120s
    networks:
      - elastic
      - ingress
  headPlugin:
    image: 'mobz/elasticsearch-head:5'
    container_name: head
    ports:
      - '9100:9100'
    networks:
      - elastic
volumes:
  loc_esdata1:
  loc_esdata2:
  loc_esdata3:
  logs:
networks:
  elastic:
  ingress:
Then we can build and run
> docker-compose up -d
Start the service
> docker-compose start
Some other commands
# up
docker-compose up -d
# down
docker-compose down
# down and remove volume
docker-compose down -v
# stop
docker-compose stop
# pause
docker-compose pause
# start
docker-compose start
# remove
docker-compose rm
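After `docker-compose up -d`, the three Elasticsearch containers take a while to form the cluster. Here is a small Python sketch of my own that polls the standard `_cluster/health` endpoint until all three nodes have joined; the helper names are mine, and the URL assumes the port mapping from the Compose file above.

```python
import json
import time
from urllib.request import urlopen

def cluster_ready(health, expected_nodes=3):
    """True once the health document reports all nodes and a usable status."""
    return (health.get("number_of_nodes") == expected_nodes
            and health.get("status") in ("yellow", "green"))

def wait_for_cluster(url="http://localhost:9200/_cluster/health",
                     expected_nodes=3, timeout=120):
    """Poll the health endpoint until the cluster is ready or we time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urlopen(url) as resp:
                if cluster_ready(json.load(resp), expected_nodes):
                    return True
        except OSError:
            pass  # container still starting; HTTP port not open yet
        time.sleep(2)
    return False
```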
Then we can visit
http://localhost:5601/app/kibana#/home?_g=()
Cluster Info
http://localhost:9100/
Elasticsearch node
http://localhost:9201/
Near Realtime (NRT) - Elasticsearch is a near-realtime search platform: there is a slight latency (normally one second) from the time you index a document until the time it becomes searchable.
Cluster - multiple nodes sharing a cluster name.
Node - its name is a UUID (a random Universally Unique Identifier) by default. A node can be configured to join a specific cluster by the cluster name.
If you start the first node, it will by default form a new single-node cluster named elasticsearch.
Index - An index is a collection of documents. An index is identified by a name (which must be all lowercase).
Document - A document is the basic unit of information that can be indexed.
Shards & Replicas - Elasticsearch can subdivide your index into multiple pieces called shards. Replicas provide high availability in case a shard/node fails. The number of shards and replicas can be defined per index at the time the index is created. You can change the number of shards for an existing index using the _shrink and _split APIs.
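The reason the primary shard count matters at creation time is the routing formula: Elasticsearch places a document on shard `hash(_routing) % number_of_primary_shards`, so changing the shard count would move documents around. A toy illustration, using CRC32 as a stand-in for Elasticsearch's real murmur3 hash (so the actual shard numbers differ from a real cluster):

```python
import zlib

def shard_for(routing_value, primary_shards):
    """Pick a primary shard for a document, mimicking
    shard = hash(_routing) % number_of_primary_shards.
    CRC32 stands in for Elasticsearch's murmur3, so this is
    illustrative only, not the real placement."""
    return zlib.crc32(routing_value.encode("utf-8")) % primary_shards

# The same routing value (by default the document id) always
# lands on the same shard, which is how lookups by id stay O(1):
shards = [shard_for("doc-1", 5) for _ in range(3)]
print(shards)
```

The takeaway: the placement is deterministic for a fixed shard count, which is why resizing goes through the dedicated `_shrink`/`_split` APIs rather than a simple settings change.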
This is the latest version I get from the document, but I think we can only download the 7.0.1 version.
> curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
I should use this document instead
https://www.elastic.co/guide/en/elasticsearch/reference/7.0/getting-started-install.html
Start the First Node
> bin/elasticsearch -Ecluster.name=sillycatcluster -Enode.name=elastic1
Here is the information after it starts
http://ubuntu-master:9200/
{
  "name": "elastic1",
  "cluster_name": "sillycatcluster",
  "cluster_uuid": "szdcUtvTQaK4Taz63-nKGQ",
  "version": {
    "number": "7.0.1",
    "build_flavor": "default",
    "build_type": "tar",
    "build_hash": "e4efcb5",
    "build_date": "2019-04-29T12:56:03.145736Z",
    "build_snapshot": false,
    "lucene_version": "8.0.0",
    "minimum_wire_compatibility_version": "6.7.0",
    "minimum_index_compatibility_version": "6.0.0-beta1"
  },
  "tagline": "You Know, for Search"
}
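The `minimum_wire_compatibility_version` field above is what matters for a rolling upgrade: this 7.0.1 node can still talk to nodes as old as 6.7.0. A simplified sketch of that comparison (helper names are my own, and real compatibility is also bounded above, which this ignores):

```python
def parse_version(text):
    """'6.7.0' -> (6, 7, 0); drops pre-release suffixes like '-beta1'."""
    return tuple(int(part) for part in text.split("-")[0].split("."))

def wire_compatible(candidate, minimum_wire):
    """A node can talk to this one over the transport protocol if it is
    at least the advertised minimum wire-compatible version."""
    return parse_version(candidate) >= parse_version(minimum_wire)

print(wire_compatible("6.7.0", "6.7.0"))  # a 6.7.0 node can join
print(wire_compatible("6.5.4", "6.7.0"))  # a 6.5.4 node cannot
```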
References:
https://www.elastic.co/downloads/elasticsearch
https://www.elastic.co/downloads/kibana
https://stackoverflow.com/questions/34661210/how-to-bind-kibana-to-multiple-host-names-ips
https://www.elastic.co/guide/en/elasticsearch/reference/7.x/index.html
https://www.elastic.co/guide/en/elasticsearch/reference/7.0/getting-started-install.html