Reference:
http://docs.openstack.org/liberty/install-guide-rdo/launch-instance-private.html
Installation notes based on that guide, with a few extra steps and fixes for problems I ran into.
I wrote a general-purpose database script, mysql_openstack.sh, to track how the databases change.
For example, to see what data the keystone database contains, run
./mysql_openstack.sh keystone
The script is as follows:
#!/bin/sh
#for i in `awk ' {if(NR>4 && NR<40)print $2};' a.log `
#sed -i '/^#/d' cinder.conf
#sed -i '/^$/d' cinder.conf
mysql_user=root
mysql_password=haoning
mysql_host=ocontrol
if [ "$1" = "" ]
then
    echo "please use ./mysql_openstack.sh [dbname], for example: ./mysql_openstack.sh keystone";
    echo "this will exit."
    exit 0;
fi
echo "use db " $1
for i in ` mysql -u$mysql_user -h$mysql_host -p$mysql_password $1 -e "show tables" |awk ' {if(NR>1)print $1};'|grep -v ml2_vxlan_allocations`
do
    echo "\"select * from \`$i\`\"";
    mysql -u$mysql_user -h$mysql_host -p$mysql_password $1 -e "select * from \`$i\`";
done
After CentOS 7.2 upgraded iproute to 3.10.0-54.el7, the output of ip netns gained a trailing "(id: 0)" suffix that the Liberty-release neutron cannot parse when creating a router, so it keeps failing with "Cannot create namespace file". Workaround:
https://bugzilla.redhat.com/show_bug.cgi?id=1292587
https://review.openstack.org/#/c/258493/1/neutron/agent/linux/ip_lib.py
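For reference, a shell illustration of what trips the agent up and the idea behind the upstream patch (a sketch only; the real fix goes into ip_lib.py as linked above — roughly, the suffix makes neutron think existing namespaces are new, so it tries to re-create them and hits "File exists"):
ip netns
#  qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106 (id: 2)   <- new "(id: N)" suffix
ip netns | awk '{print $1}'
#  qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106           <- plain name the agent expects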
Basics:
yum install centos-release-openstack-liberty
yum upgrade -y
yum install python-openstackclient openstack-selinux -y
rm -f /etc/localtime
cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
#ntpdate ntp.ubuntu.com
Configure /etc/hostname and /etc/hosts:
192.168.139.193 controller
192.168.139.192 compute
192.168.139.191 net
In fact .191 (the network node) was never used;
the network node services were installed on the controller node as well.
systemctl stop firewalld.service
systemctl disable firewalld.service
yum install mariadb mariadb-server MySQL-python -y
systemctl start mariadb.service
systemctl enable mariadb.service
Install MySQL (MariaDB):
vim /etc/my.cnf.d/mariadb_openstack.cnf
[mysqld]
bind-address = 192.168.139.193
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
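Note that the [mysqld] block above goes into /etc/my.cnf.d/mariadb_openstack.cnf, while the GRANT statements are SQL run in the mysql client. A minimal sketch of applying them, assuming a fresh MariaDB install whose root account still has an empty password and the haoning password used throughout this post:
systemctl restart mariadb.service
mysql -uroot <<'EOF'
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller' IDENTIFIED BY 'haoning';
FLUSH PRIVILEGES;
EOF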
Message queue:
yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
#to reset:
#rabbitmqctl stop_app
#rabbitmqctl reset
rabbitmqctl add_user openstack haoning
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
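A quick sanity check that the openstack user and its permissions were created (a sketch; these are standard rabbitmqctl subcommands, nothing specific to this setup):
rabbitmqctl list_users
rabbitmqctl list_permissions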
Install MongoDB (it turned out not to be needed):
yum install mongodb-server mongodb -y
vim /etc/mongod.conf
bind_ip = 192.168.139.193
systemctl enable mongod.service
systemctl start mongod.service
All of the passwords below are set to haoning.
----------------------------------------------------------------------
■■■■■■■■■■■■■■■■■■keystone begin■■■■■■■■■■■■■■■■■■
Install keystone, on the controller node:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
$ openssl rand -hex 10
06a0afd32e5265a9eba8
Memcached also needs to be installed, as keystone's token cache.
Keystone is served here through Apache (httpd).
yum install openstack-keystone httpd mod_wsgi memcached python-memcached -y
systemctl enable memcached.service
systemctl start memcached.service
To flush memcached:
#echo "flush_all" | nc 127.0.0.1 11211
After editing the configuration file, strip comments and blank lines to see the effective settings:
sed -i '/^#/d' /etc/keystone/keystone.conf
sed -i '/^$/d' /etc/keystone/keystone.conf
---------------------------
/etc/keystone/keystone.conf
[DEFAULT]
admin_token=26d3c805d5033f6052b9
verbose=True
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection=mysql://keystone:haoning@controller/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[matchmaker_ring]
[memcache]
servers=localhost:11211
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
driver=sql
[role]
[saml]
[signing]
[ssl]
[token]
provider=uuid
driver=memcache
[tokenless_auth]
[trust]
Sync the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
-------------------------
Set up the HTTP server:
/etc/httpd/conf/httpd.conf
ServerName controller

/etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
systemctl enable httpd.service
systemctl start httpd.service
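A quick check that Apache is actually serving keystone on both ports (a sketch; assumes curl is available on the controller — each request should return a JSON version document):
curl http://controller:5000/v3
curl http://controller:35357/v3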
###Create the service:
For this first use keystone is not fully set up yet, so authenticate with a hand-set token.
Once keystone is installed, switch to the other method: source admin-openrc.sh ...
export OS_TOKEN=e9fc0e473e1b3072fc66
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
Add keystone's endpoints to the database.
You can use the script above, ./mysql_openstack.sh keystone, to watch the database change.
openstack service create --name keystone --description "OpenStack Identity" identity
openstack endpoint create --region wuhan identity public http://controller:5000/v2.0
openstack endpoint create --region wuhan identity internal http://controller:5000/v2.0
openstack endpoint create --region wuhan identity admin http://controller:35357/v2.0
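Optionally, list what was just registered while the temporary OS_TOKEN/OS_URL environment is still in effect (a sketch; both subcommands are standard python-openstackclient commands):
openstack service list
openstack endpoint list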
###Create projects, users, and roles
openstack project create --domain default --description "Admin Project" admin
#openstack user create --domain default --password-prompt admin
openstack user create --domain default --password haoning admin
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
#openstack user create --domain default --password-prompt demo
openstack user create --domain default --password haoning demo
openstack role create user
openstack role add --project demo --user demo user

###Verify operation
Edit the /usr/share/keystone/keystone-dist-paste.ini file and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections
#unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name demo --os-username demo --os-auth-type password token issue

###Create OpenStack client environment scripts
unset OS_TOKEN OS_URL
[root@controller ~]# cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=haoning
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone_admin_v3)]\$ '
[root@controller ~]# cat demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=haoning
export OS_AUTH_URL=http://controller:5000/v3
export OS_IMAGE_API_VERSION=2
export OS_IDENTITY_API_VERSION=3
export PS1='[\u@\h \W(keystone_demo_v3)]\$ '

source admin-openrc.sh
unset OS_TOKEN OS_URL
openstack token issue
After this succeeds, commands such as openstack user list can be used.
■■■■■■■■■■■■■■■■■■keystone end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■glance begin■■■■■■■■■■■■■■■■■■
#In another terminal, install glance
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY 'haoning';
flush privileges;

source admin-openrc.sh
openstack user create --domain default --password haoning glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image service" image
openstack endpoint create --region wuhan image public http://controller:9292
openstack endpoint create --region wuhan image internal http://controller:9292
openstack endpoint create --region wuhan image admin http://controller:9292
yum install openstack-glance python-glance python-glanceclient -y
---------------------------
Configure /etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver noop
openstack-config --set /etc/glance/glance-api.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:haoning@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password haoning
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
------------------------------------
Configure /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf DEFAULT notification_driver noop
openstack-config --set /etc/glance/glance-registry.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:haoning@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password haoning
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
#★★★★★ Note: some of the default options have to be removed by hand
#Comment out or remove any other options in the [keystone_authtoken] section
Sync the database, then start and verify:
su -s /bin/sh -c "glance-manage db_sync" glance systemctl enable openstack-glance-api.service openstack-glance-registry.service systemctl start openstack-glance-api.service openstack-glance-registry.service #Verify operation echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress glance image-list
■■■■■■■■■■■■■■■■■■glance end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■nova begin■■■■■■■■■■■■■■■■■■
#★★★★ On the controller node, install nova's API/control services; the part that actually runs instances lives on the compute node
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'haoning';
flush privileges;

source admin-openrc.sh
openstack user create --domain default --password haoning nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region wuhan compute public http://controller:8774/v2/%\(tenant_id\)s
openstack endpoint create --region wuhan compute internal http://controller:8774/v2/%\(tenant_id\)s
openstack endpoint create --region wuhan compute admin http://controller:8774/v2/%\(tenant_id\)s
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y
-------------
Configure /etc/nova/nova.conf
-----------------------------------
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.139.193
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:haoning@controller/nova
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password haoning
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen $my_ip
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $my_ip
openstack-config --set /etc/nova/nova.conf glance host controller
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
-----------------------------------------------
Sync the database and start the services on the controller node:
su -s /bin/sh -c "nova-manage db sync" nova #/var/log/nova 检查log是否成功 systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service systemctl restart openstack-nova-api.service systemctl restart openstack-nova-cert.service systemctl restart openstack-nova-consoleauth.service systemctl restart openstack-nova-scheduler.service systemctl restart openstack-nova-conductor.service systemctl restart openstack-nova-novncproxy.service
#★★★ On the compute node, install the nova-compute service
Install:
yum install openstack-nova-compute sysfsutils openstack-utils -y
#openstack-config comes from openstack-utils; check what is installed with rpm -qa 'openstack*'
Configure /etc/nova/nova.conf
----------------------------------------
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.139.192
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password haoning
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $my_ip
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance host controller
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
-------------------
/etc/nova/nova.conf
If nested KVM passthrough is not enabled, the command below returns 0 and virt_type must be set to qemu.
How to enable nested KVM: google "KVM nested virtualization" (嵌套虚拟化 nested).
The passthrough is enabled by changing the kvm/libvirt configuration on the host; see other posts for details.
#egrep -c '(vmx|svm)' /proc/cpuinfo
#virt_type qemu
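If that egrep really does return 0, a minimal sketch of switching nova to the qemu driver, in the same openstack-config style used throughout this post (the [libvirt] virt_type option is the standard Liberty nova.conf setting):
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu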
Start libvirtd and nova:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
#Verify operation
source admin-openrc.sh
nova service-list
nova endpoints
nova image-list
■■■■■■■■■■■■■■■■■■nova end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■neutron begin■■■■■■■■■■■■■■■■■■
Install neutron on the controller node.
Database:
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
After each step you can run ./mysql_openstack.sh neutron to see how the database changes.
openstack user create --domain default --password haoning neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region wuhan network public http://controller:9696
openstack endpoint create --region wuhan network internal http://controller:9696
openstack endpoint create --region wuhan network admin http://controller:9696
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
Following the official docs, this uses linuxbridge with vxlan.
Create a public (provider) network and a private (self-service) network.
VMs on the public network use flat mode and behave just like hosts on the local LAN.
The private network works like NAT;
you can create a floating IP and attach it to a VM on the private network, so a single VM is reachable on both the private and the public network.
Install:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y
---------------------
Configuration file
/etc/neutron/neutron.conf
neutron's base settings
openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:haoning@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password haoning
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_plugin password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova region_name wuhan
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password haoning
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
---------------
Configure
/etc/neutron/plugins/ml2/ml2_conf.ini
the layer-2 (ML2) plugin, used here to configure vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks public
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
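To double-check what was just written, the same comment-stripping trick used earlier for keystone.conf can be applied here (a sketch, read-only this time so the file is not modified):
sed -e '/^#/d' -e '/^$/d' /etc/neutron/plugins/ml2/ml2_conf.ini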
--------------------
Configure
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
the linuxbridge agent settings; the physical network label in physical_interface_mappings (public) must match the flat_networks name set in ml2_conf.ini above.
Inspect the result with brctl show and ip netns.
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings public:eth1
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.139.193
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
--------------------
Configure
/etc/neutron/l3_agent.ini
the layer-3 (routing) agent
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT verbose True
#The external_network_bridge option intentionally lacks a value to enable multiple external networks on a single agent
#☆☆☆☆★★★★★
#Comment out or remove any other options in the [keystone_authtoken] section.
------------------------------------
Configure the DHCP service that hands out IPs:
/etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
-----------------------------------
The DHCP service itself is run by dnsmasq processes; the MTU of 1450 pushed below leaves room for the vxlan encapsulation overhead.
/etc/neutron/dnsmasq-neutron.conf
echo "dhcp-option-force=26,1450" >/etc/neutron/dnsmasq-neutron.conf
###★★★★★Networking Option 2: Self-service networks------end★★★★★★★★
-------------------------------
Configure the VM metadata agent:
/etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_uri http://controller:5000
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:35357
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region wuhan
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_plugin password
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT project_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT user_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT project_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT username neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT password haoning
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True
#★★★★★★★★★★ Note: part of metadata_agent.ini has to be deleted:
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
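METADATA_SECRET above is a placeholder: pick any string and use the exact same value for metadata_proxy_shared_secret in nova.conf below. One way to generate it, in the same style as the keystone admin token earlier (a sketch):
openssl rand -hex 10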
------------------------------------------------------------------------------------------------------------
Add the neutron settings to nova's configuration file;
this has to be done on every node.
/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_plugin password
openstack-config --set /etc/nova/nova.conf neutron project_domain_id default
openstack-config --set /etc/nova/nova.conf neutron user_domain_id default
openstack-config --set /etc/nova/nova.conf neutron region_name wuhan
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password haoning
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET
------------------------------------------------------------------------------------------------------------
Sync the database and start the services:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service
systemctl restart neutron-linuxbridge-agent.service
systemctl restart neutron-dhcp-agent.service
systemctl restart neutron-metadata-agent.service
##########For networking option 2, also enable and start the layer-3 service:
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
Check for errors:
cd /var/log/neutron
grep ERROR *
★★★★★★★★★★★★★★★ compute node ★☆★★★★★★★★★★★★★
On the compute node, add the neutron agent service,
which hooks VMs up to the network when they boot.
Install:
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
-------------------------------------
Configure
/etc/neutron/neutron.conf
-------------------
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password haoning
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
-----------------------
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
------------------------------
Configure
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings public:eth1
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.139.192
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
###★★★★★Networking Option 2: Self-service networks------end★★★★★★★★
---------------------- this has to be added on both the controller and the compute node ★★★★---------
As mentioned above, the compute node also needs the neutron settings added to nova's config file.
Note that the services must be restarted after changing the configuration.
/etc/nova/nova.conf
------------------
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_plugin password
openstack-config --set /etc/nova/nova.conf neutron project_domain_id default
openstack-config --set /etc/nova/nova.conf neutron user_domain_id default
openstack-config --set /etc/nova/nova.conf neutron region_name wuhan
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password haoning
-----------------
The compute node runs only the nova-compute service and the neutron agent.
Start them:
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
###Verify operation:
[root@controller neutron(keystone_admin_v3)]# neutron ext-list
+-----------------------+--------------------------+
| alias                 | name                     |
+-----------------------+--------------------------+
| flavors               | Neutron Service Flavors  |
| security-group        | security-group           |
| dns-integration       | DNS Integration          |
| net-mtu               | Network MTU              |
| port-security         | Port Security            |
| binding               | Port Binding             |
| provider              | Provider Network         |
| agent                 | agent                    |
| quotas                | Quota management support |
| subnet_allocation     | Subnet Allocation        |
| dhcp_agent_scheduler  | DHCP Agent Scheduler     |
| rbac-policies         | RBAC Policies            |
| external-net          | Neutron external network |
| multi-provider        | Multi Provider Network   |
| allowed-address-pairs | Allowed Address Pairs    |
| extra_dhcp_opt        | Neutron Extra DHCP opts  |
+-----------------------+--------------------------+
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
Now configure the neutron networks.
If /var/log/keystone, /var/log/nova and /var/log/neutron contain no errors from the installation, you can proceed.
source admin-openrc.sh
neutron agent-list
#public network — creating the public (provider) network is relatively simple:
neutron net-create public --shared --provider:physical_network public --provider:network_type flat
#neutron subnet-create public PUBLIC_NETWORK_CIDR --name public --allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS --dns-nameserver DNS_RESOLVER --gateway PUBLIC_NETWORK_GATEWAY
neutron subnet-create public 192.168.139.0/20 --name public --allocation-pool start=192.168.139.201,end=192.168.139.210 --dns-nameserver 8.8.4.4 --gateway 192.168.128.1
#private network — create a private network
neutron net-create private
neutron subnet-create private 172.16.1.0/24 --name private --dns-nameserver 8.8.4.4 --gateway 172.16.1.1
#Create a router
neutron net-update public --router:external
neutron router-create router
neutron router-interface-add router private
#Added interface 65b58347-09fa-43dd-914d-31b4885d84ef to router router.
neutron router-gateway-set router public
#Verify operation
ip netns
neutron router-port-list router
ping -c 4 192.168.139.202
brctl show
ip netns
#Create a VM; first create a key pair
source admin......(why doesn't demo work?)
ssh-keygen -q -N ""
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova keypair-list
#Security group rules
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
#Launch an instance on the public network--------begin----------
source admin-openrc.sh
nova flavor-list
nova image-list
neutron net-list
nova secgroup-list
#nova boot --flavor m1.tiny --image cirros --nic net-id=PUBLIC_NET_ID --security-group default --key-name mykey public-instance
#Replace PUBLIC_NET_ID with the ID of the public provider network.
nova boot --flavor m1.tiny --image cirros --nic net-id=89f9ac2b-d7cf-4f45-8819-487c3b1c4fc7 --security-group default --key-name mykey public-instance
nova list
nova get-vnc-console public-instance novnc
#Launch an instance on the public network--------end----------
#Deleting the networks
If you run into problems and need to delete the networks you created, do it one by one.
Order matters: delete the router-related objects first, then the subnets, then the networks.
neutron router-list
neutron router-gateway-clear f16bd408-181d-40d9-8998-5d556fec7e0f
neutron router-interface-delete f16bd408-181d-40d9-8998-5d556fec7e0f private
neutron router-delete f16bd408-181d-40d9-8998-5d556fec7e0f
neutron subnet-list
neutron subnet-delete 7b03ef7d-144f-479e-bf3c-4a880a48ac3d
neutron subnet-delete f3eb1841-6666-4821-8fea-0d8d98352c73
neutron net-list
neutron net-delete 89f9ac2b-d7cf-4f45-8819-487c3b1c4fc7
neutron net-delete 6ac13027-8e87-4696-b01f-5198a3ffa509
################public network begin#############
Creating the public network again, complete steps
★★★★★★★★★★★★★★★★★★★★★★★
neutron net-create public --shared --provider:physical_network public --provider:network_type flat
neutron net-list
neutron subnet-create public 192.168.139.0/20 --name public --allocation-pool start=192.168.139.221,end=192.168.139.230 --dns-nameserver 8.8.4.4 --gateway 192.168.128.1
neutron subnet-list
$ ssh-keygen -q -N ""
$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova keypair-list
nova secgroup-list
nova secgroup-list-rules 1f676a35-7a31-4265-aa2b-cc4317de8633
nova help|grep secgroup
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova flavor-list
nova image-list
neutron net-list
nova secgroup-list
#nova boot --flavor m1.tiny --image cirros --nic net-id=PUBLIC_NET_ID --security-group default --key-name mykey public-instance
#Replace PUBLIC_NET_ID with the ID of the public provider network.
nova boot --flavor m1.tiny --image cirros --nic net-id=79fa9460-1f32-4141-9a16-08dea9355e2a --security-group default --key-name mykey public-instance
nova list
#the instance's IP shows up here
nova get-vnc-console public-instance novnc
#default login: user cirros, password cubswin:)
ping 192.168.139.222
ssh cirros@192.168.139.222
ip netns
ifconfig
#Create a VM on the private network
This is the most important part and the easiest to get wrong.
neutron net-create private
#neutron subnet-create private PRIVATE_NETWORK_CIDR --name private --dns-nameserver DNS_RESOLVER --gateway PRIVATE_NETWORK_GATEWAY
neutron subnet-create private 172.16.1.0/24 --name private --dns-nameserver 8.8.4.4 --gateway 172.16.1.1
neutron net-list
neutron subnet-list
#Private project networks connect to public provider networks using a virtual router. Each router contains an interface to at least one private project network and a gateway on a public provider network.
#The public provider network must include the router: external option to enable project routers to use it for connectivity to external networks such as the Internet. The admin or other privileged user must include this option during network creation or add it later. In this case, we can add it to the existing public provider network.
#Add the router: external option to the public provider network:
neutron net-update public --router:external
[root@controller ~(keystone_admin_v3)]# neutron router-create router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | fc43e7ee-44d1-483b-a5b2-6622637bb106 |
| name                  | router                               |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | a847d63d35e54622b641ea6b74c3c126     |
+-----------------------+--------------------------------------+
[root@controller ~(keystone_admin_v3)]# neutron router-list
+--------------------------------------+--------+-----------------------+-------------+-------+
| id                                   | name   | external_gateway_info | distributed | ha    |
+--------------------------------------+--------+-----------------------+-------------+-------+
| fc43e7ee-44d1-483b-a5b2-6622637bb106 | router | null                  | False       | False |
+--------------------------------------+--------+-----------------------+-------------+-------+
[root@controller ~(keystone_admin_v3)]#
[root@controller ~(keystone_admin_v3)]# neutron router-interface-add router private
Added interface 9d9f73e7-a5e4-4d89-93c7-df135ac39a26 to router router.
[root@controller ~(keystone_admin_v3)]# neutron router-gateway-set router public
Set gateway for router router
[root@controller ~(keystone_admin_v3)]#
[root@controller ~(keystone_admin_v3)]# neutron router-list
+--------------------------------------+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name   | external_gateway_info                                                                                                                                                                        | distributed | ha    |
+--------------------------------------+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| fc43e7ee-44d1-483b-a5b2-6622637bb106 | router | {"network_id": "79fa9460-1f32-4141-9a16-08dea9355e2a", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "1403be6d-fb25-4789-80ce-d570f291c6e4", "ip_address": "192.168.139.223"}]} | False       | False |
+--------------------------------------+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
[root@controller ~(keystone_admin_v3)]# neutron net-list
+--------------------------------------+---------+-------------------------------------------------------+
| id                                   | name    | subnets                                               |
+--------------------------------------+---------+-------------------------------------------------------+
| 79fa9460-1f32-4141-9a16-08dea9355e2a | public  | 1403be6d-fb25-4789-80ce-d570f291c6e4 192.168.128.0/20 |
| 1fd72b95-0264-4fca-8173-f321239a55fa | private | 97d8a9a1-d1b3-4091-9ee0-51af01c84b4b 172.16.1.0/24    |
+--------------------------------------+---------+-------------------------------------------------------+
[root@controller ~(keystone_admin_v3)]# neutron subnet-list
+--------------------------------------+---------+------------------+--------------------------------------------------------+
| id                                   | name    | cidr             | allocation_pools                                       |
+--------------------------------------+---------+------------------+--------------------------------------------------------+
| 1403be6d-fb25-4789-80ce-d570f291c6e4 | public  | 192.168.128.0/20 | {"start": "192.168.139.221", "end": "192.168.139.230"} |
| 97d8a9a1-d1b3-4091-9ee0-51af01c84b4b | private | 172.16.1.0/24    | {"start": "172.16.1.2", "end": "172.16.1.254"}         |
+--------------------------------------+---------+------------------+--------------------------------------------------------+
[root@controller ~(keystone_admin_v3)]# ip netns
qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106 (id: 2)
qdhcp-1fd72b95-0264-4fca-8173-f321239a55fa (id: 1)
qdhcp-79fa9460-1f32-4141-9a16-08dea9355e2a (id: 0)
[root@controller ~(keystone_admin_v3)]# neutron router-port-list router
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                               |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------+
| 8b18370f-345e-42bc-b4eb-30391866e757 |      | fa:16:3e:78:33:40 | {"subnet_id": "1403be6d-fb25-4789-80ce-d570f291c6e4", "ip_address": "192.168.139.223"} |
| 9d9f73e7-a5e4-4d89-93c7-df135ac39a26 |      | fa:16:3e:6b:08:b5 | {"subnet_id": "97d8a9a1-d1b3-4091-9ee0-51af01c84b4b", "ip_address": "172.16.1.1"}      |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------+
[root@controller ~(keystone_admin_v3)]# brctl show
bridge name     bridge id               STP enabled     interfaces
brq1fd72b95-02  8000.063507f2cee3       no              tap65ad6fb9-ea
                                                        tap9d9f73e7-a5
                                                        vxlan-18
brq79fa9460-1f  8000.505112aa8214       no              eth1
                                                        tap4b63544c-9b
virbr0          8000.5254009c2b11       yes             virbr0-nic
[root@controller ~(keystone_admin_v3)]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 50:52:18:aa:81:11 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master brq79fa9460-1f state UP mode DEFAULT qlen 1000
    link/ether 50:51:12:aa:82:14 brd ff:ff:ff:ff:ff:ff
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
    link/ether 52:54:00:9c:2b:11 brd ff:ff:ff:ff:ff:ff
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 500
    link/ether 52:54:00:9c:2b:11 brd ff:ff:ff:ff:ff:ff
13: tap4b63544c-9b@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master brq79fa9460-1f state UP mode DEFAULT qlen 1000
    link/ether b6:67:ca:38:ff:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
14: brq79fa9460-1f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 50:51:12:aa:82:14 brd ff:ff:ff:ff:ff:ff
15: tap65ad6fb9-ea@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast master brq1fd72b95-02 state UP mode DEFAULT qlen 1000
    link/ether e6:b7:1f:bc:bb:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 1
16: vxlan-18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq1fd72b95-02 state UNKNOWN mode DEFAULT
    link/ether 06:35:07:f2:ce:e3 brd ff:ff:ff:ff:ff:ff
17: brq1fd72b95-02: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT
    link/ether 06:35:07:f2:ce:e3 brd ff:ff:ff:ff:ff:ff
18: tap9d9f73e7-a5@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast master brq1fd72b95-02 state UP mode DEFAULT qlen 1000
    link/ether 5e:84:a7:84:27:5b brd ff:ff:ff:ff:ff:ff link-netnsid 2
[root@controller ~(keystone_admin_v3)]#
Problems encountered
#If iproute was upgraded
#and l3-agent.log reports:
#2016-03-12 21:29:26.103 1170 ERROR neutron.agent.l3.agent Stderr: Cannot create namespace file "/var/run/netns/qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106": File exists
#the problem is
https://bugzilla.redhat.com/show_bug.cgi?id=1292587
#apply the change from https://review.openstack.org/#/c/258493/1/neutron/agent/linux/ip_lib.py
#to /usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py
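After patching ip_lib.py (a sketch; the service name and log path are the same ones used elsewhere in this post), restart the L3 agent and re-check the log before retrying:
systemctl restart neutron-l3-agent.service
grep ERROR /var/log/neutron/l3-agent.log | tail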
#Replace PRIVATE_NET_ID with the ID of the private project network
#nova boot --flavor m1.tiny --image cirros --nic net-id=PRIVATE_NET_ID --security-group default --key-name mykey private-instance
nova boot --flavor m1.tiny --image cirros --nic net-id=1fd72b95-0264-4fca-8173-f321239a55fa --security-group default --key-name mykey private-instance
[root@controller linux(keystone_admin_v3)]# nova list
+--------------------------------------+------------------+--------+------------+-------------+------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks               |
+--------------------------------------+------------------+--------+------------+-------------+------------------------+
| edd82475-1585-469f-aa43-95fa80f1f812 | private-instance | ACTIVE | -          | Running     | private=172.16.1.3     |
| bad46269-19d1-423a-88be-652a06c08723 | public-instance  | ACTIVE | -          | Running     | public=192.168.139.222 |
+--------------------------------------+------------------+--------+------------+-------------+------------------------+
nova get-vnc-console private-instance novnc
neutron floatingip-create public
[root@controller linux(keystone_admin_v3)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 2cf23d2c-748f-4242-89eb-1d53721560a1 |                  | 192.168.139.225     |         |
+--------------------------------------+------------------+---------------------+---------+
nova floating-ip-associate private-instance 192.168.139.225
neutron floatingip-create public
nova floating-ip-associate private-instance 203.0.113.104
[root@controller linux(keystone_admin_v3)]# nova list
+--------------------------------------+------------------+--------+------------+-------------+-------------------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks                            |
+--------------------------------------+------------------+--------+------------+-------------+-------------------------------------+
| edd82475-1585-469f-aa43-95fa80f1f812 | private-instance | ACTIVE | -          | Running     | private=172.16.1.3, 192.168.139.225 |
| bad46269-19d1-423a-88be-652a06c08723 | public-instance  | ACTIVE | -          | Running     | public=192.168.139.222              |
+--------------------------------------+------------------+--------+------------+-------------+-------------------------------------+
ssh root@192.168.139.225
[root@controller linux(keystone_admin_v3)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 2cf23d2c-748f-4242-89eb-1d53721560a1 | 172.16.1.3       | 192.168.139.225     | af79694b-2f59-4bf0-a0d0-6c619de49941 |
+--------------------------------------+------------------+---------------------+--------------------------------------+
[root@controller linux(keystone_admin_v3)]#
################public network end#############
###★★★★★Networking Option 2: Self-service networks-------end★★★★★★★★
■■■■■■■■■■■■■■■■■■neutron end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■horizon begin■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■horizon end■■■■■■■■■■■■■■■■■■