-------------------------------
Configure the VM metadata service
/etc/neutron/metadata_agent.ini
#★★★★★★★★★★ Note: part of metadata_agent.ini's default content must be deleted here
------------------------------------------------------------------------------------------------------------
Add the neutron settings to nova's configuration file;
this must be done on every node
/etc/nova/nova.conf
------------------------------------------------------------------------------------------------------------
Sync the database and start the services
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service
systemctl restart neutron-linuxbridge-agent.service
systemctl restart neutron-dhcp-agent.service
systemctl restart neutron-metadata-agent.service
##########For networking option 2, also enable and start the layer-3 service:
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
Check for errors:
cd /var/log/neutron
grep ERROR *
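The same quick check applies to every service, so a small helper is handy. A sketch, assuming only the log paths used elsewhere in these notes (scan_logs is a hypothetical name):

```shell
# scan_logs: print the last few ERROR lines from each log directory given.
# The directory list is just the set of services installed in these notes.
scan_logs() {
    for d in "$@"; do
        [ -d "$d" ] || continue
        echo "== $d =="
        grep -rn "ERROR" "$d" 2>/dev/null | tail -n 5
    done
}
scan_logs /var/log/keystone /var/log/glance /var/log/nova /var/log/neutron
```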
★★★★★★★★★★★★★★★ compute node ★★★★★★★★★★★★★★★
Add the neutron agent service on the compute node;
it attaches VMs to their networks when instances boot
Install:
-------------------------------------
Configure
/etc/neutron/neutron.conf
-------------------
-----------------------
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
------------------------------
Configure
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
###★★★★★Networking Option 2: Self-service networks------end★★★★★★★★
---------------------- this must be added on both the controller and compute nodes ★★★★---------
As mentioned above, the compute node's nova configuration also needs the neutron settings added;
note that the services must be restarted after changing the configuration
/etc/nova/nova.conf
------------------
-----------------
The compute node runs only nova's compute service and neutron's agent service
Start:
###Verify operation:
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
Start configuring the neutron networks.
If the installation left no errors in
/var/log/keystone, /var/log/nova and /var/log/neutron,
you can proceed
source admin-openrc.sh
neutron agent-list
#public network: creating the public network is relatively simple
#private network: create a private network
#Verify operation
#To launch a VM, first create a keypair
source admin...... (why doesn't the demo user work?)
#security group
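A sketch of the keypair and security-group step, following the official Liberty guide; the key name mykey matches the nova boot commands below, and gen_key is a hypothetical helper:

```shell
# gen_key: create an ssh keypair if one does not exist yet,
# and print the path of the public key
gen_key() {
    key=${1:-$HOME/.ssh/id_rsa}
    [ -f "$key" ] || ssh-keygen -q -N "" -f "$key"
    echo "$key.pub"
}
# register it with nova and open icmp + ssh in the default security group:
# nova keypair-add --pub-key "$(gen_key)" mykey
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
```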
#Launch an instance on the public network--------begin----------
source admin-openrc.sh
nova flavor-list
nova image-list
neutron net-list
nova secgroup-list
#nova boot --flavor m1.tiny --image cirros --nic net-id=PUBLIC_NET_ID --security-group default --key-name mykey public-instance
#Replace PUBLIC_NET_ID with the ID of the public provider network.
nova boot --flavor m1.tiny --image cirros --nic net-id=89f9ac2b-d7cf-4f45-8819-487c3b1c4fc7 --security-group default --key-name mykey public-instance
nova list
nova get-vnc-console public-instance novnc
#Launch an instance on the public network--------end----------
If you run into problems, delete the networks you created, one by one.
Order matters here: delete the router-related objects first, then the subnets, then the networks
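A hedged sketch of that teardown order; the object names router/private/public are only assumed from this guide, and the function echoes the commands until RUN is set to empty:

```shell
# teardown_demo_networks: delete in dependency order: router objects first,
# then subnets, then networks. Dry-run by default (commands are echoed);
# source admin-openrc.sh and set RUN= (empty) to really execute them.
teardown_demo_networks() {
    run=${RUN-echo}
    $run neutron router-gateway-clear router
    $run neutron router-interface-delete router private
    $run neutron router-delete router
    $run neutron subnet-delete private
    $run neutron net-delete private
    $run neutron subnet-delete public
    $run neutron net-delete public
}
teardown_demo_networks   # prints the commands in the safe order
```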
################public network begin#############
Complete creation of the public network
★★★★★★★★★★★★★★★★★★★★★★★
#Launch a VM on the private network
This is the most important part, and the easiest to get wrong.
Problems encountered:
#if you upgraded iproute
#error in l3-agent.log:
#2016-03-12 21:29:26.103 1170 ERROR neutron.agent.l3.agent Stderr: Cannot create namespace file "/var/run/netns/qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106": File exists
#the issue is
https://bugzilla.redhat.com/show_bug.cgi?id=1292587
#the fix from https://review.openstack.org/#/c/258493/1/neutron/agent/linux/ip_lib.py
#needs to be applied to /usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py
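The root cause can be illustrated without touching neutron: newer iproute appends "(id: N)" to each line of `ip netns list`, and keeping only the first field recovers the old format — essentially what the review above does inside ip_lib.py (strip_netns_ids is a hypothetical name):

```shell
# strip_netns_ids: reduce "name (id: N)" lines from `ip netns list`
# to the bare namespace names that Liberty neutron expects
strip_netns_ids() {
    awk '{print $1}'
}
# usage: ip netns list | strip_netns_ids
printf 'qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106 (id: 0)\n' | strip_netns_ids
# -> qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106
```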
#Replace PRIVATE_NET_ID with the ID of the private project network
#nova boot --flavor m1.tiny --image cirros --nic net-id=PRIVATE_NET_ID --security-group default --key-name mykey private-instance
################public network end#############
###★★★★★Networking Option 2: Self-service networks-------end★★★★★★★★
■■■■■■■■■■■■■■■■■■neutron end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■horizon begin■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■horizon end■■■■■■■■■■■■■■■■■■
http://docs.openstack.org/liberty/install-guide-rdo/launch-instance-private.html
Installation following the guide above, with a few extra problems handled along the way.
I wrote a general-purpose script, mysql_openstack.sh,
to track changes in the databases.
For example, to see what data the keystone database holds, run
./mysql_openstack.sh keystone
The script is as follows:
#!/bin/sh
#for i in `awk ' {if(NR>4 && NR<40)print $2};' a.log `
#sed -i '/^#/d' cinder.conf
#sed -i '/^$/d' cinder.conf
mysql_user=root
mysql_password=haoning
mysql_host=ocontrol
if [ "$1" = "" ]
then
    echo "please use ./mysql_openstack.sh [dbname], for example: ./mysql_openstack.sh keystone";
    echo "this will exit."
    exit 0;
fi
echo "use db " $1
for i in `mysql -u$mysql_user -h$mysql_host -p$mysql_password $1 -e "show tables" | awk ' {if(NR>1)print $1};' | grep -v ml2_vxlan_allocations`
do
    echo "\"select * from \`$i\`\"";
    mysql -u$mysql_user -h$mysql_host -p$mysql_password $1 -e "select * from \`$i\`";
done
After centos7.2 upgraded iproute to 3.10.0-54.el7, the output of ip netns gained a trailing "(id: 0)", which the OpenStack Liberty neutron cannot parse when creating a router, so it keeps failing with "Cannot create namespace file". Fix:
https://bugzilla.redhat.com/show_bug.cgi?id=1292587
https://review.openstack.org/#/c/258493/1/neutron/agent/linux/ip_lib.py
Basics:
yum install centos-release-openstack-liberty
yum upgrade -y
yum install python-openstackclient openstack-selinux -y
rm -f /etc/localtime
cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
#ntpdate ntp.ubuntu.com
Configure /etc/hostname and /etc/hosts:
192.168.139.193 controller
192.168.139.192 compute
192.168.139.191 net
(191 was not actually used;
the network node was also installed on the controller node)
Install mysql:
systemctl stop firewalld.service
systemctl disable firewalld.service
yum install mariadb mariadb-server MySQL-python -y
systemctl start mariadb.service
systemctl enable mariadb.service
vim /etc/my.cnf.d/mariadb_openstack.cnf
[mysqld]
bind-address = 192.168.139.193
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
Message queue
yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
#to reset:
#rabbitmqctl stop_app
#rabbitmqctl reset
rabbitmqctl add_user openstack haoning
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Install mongo (it was apparently never used)
yum install mongodb-server mongodb -y
vim /etc/mongod.conf
bind_ip = 192.168.139.193
systemctl enable mongod.service
systemctl start mongod.service
All passwords below are set to haoning
----------------------------------------------------------------------
■■■■■■■■■■■■■■■■■■keystone begin■■■■■■■■■■■■■■■■■■
Install keystone, on the controller node:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
$ openssl rand -hex 10
06a0afd32e5265a9eba8
memcache also needs to be installed, as keystone's cache;
keystone is served through Apache here
yum install openstack-keystone httpd mod_wsgi memcached python-memcached -y
systemctl enable memcached.service
systemctl start memcached.service
To flush memcache, use
#echo "flush_all" | nc 127.0.0.1 11211
The configuration shown below is what remains after stripping comments and blank lines:
sed -i '/^#/d' /etc/keystone/keystone.conf
sed -i '/^$/d' /etc/keystone/keystone.conf
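The same two seds recur for every service's config file in these notes (the script header mentions cinder.conf too), so they can be folded into a tiny helper; strip_conf is a hypothetical name:

```shell
# strip_conf: delete comment lines and blank lines from a config file
# in place, leaving only the effective settings (same as the two seds above)
strip_conf() {
    sed -i -e '/^#/d' -e '/^$/d' "$1"
}
# usage: strip_conf /etc/keystone/keystone.conf
```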
---------------------------
/etc/keystone/keystone.conf
[DEFAULT]
admin_token=26d3c805d5033f6052b9
verbose=True
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection=mysql://keystone:haoning@controller/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[matchmaker_ring]
[memcache]
servers=localhost:11211
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
driver=sql
[role]
[saml]
[signing]
[ssl]
[token]
provider=uuid
driver=memcache
[tokenless_auth]
[trust]
Sync the database
su -s /bin/sh -c "keystone-manage db_sync" keystone
-------------------------
Install the http server
/etc/httpd/conf/httpd.conf
ServerName controller

/etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
systemctl enable httpd.service
systemctl start httpd.service
###Create the service:
For the first use, since keystone is not fully set up yet, the admin token is supplied by hand;
once keystone is installed, the other method is used instead: source admin...
export OS_TOKEN=e9fc0e473e1b3072fc66
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
Add keystone's endpoint to the database;
you can use the script above, ./mysql_openstack.sh keystone, to watch the database change
openstack service create --name keystone --description "OpenStack Identity" identity
openstack endpoint create --region wuhan identity public http://controller:5000/v2.0
openstack endpoint create --region wuhan identity internal http://controller:5000/v2.0
openstack endpoint create --region wuhan identity admin http://controller:35357/v2.0
###Create projects, users, and roles
openstack project create --domain default --description "Admin Project" admin
#openstack user create --domain default --password-prompt admin
openstack user create --domain default --password haoning admin
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
#openstack user create --domain default --password-prompt demo
openstack user create --domain default --password haoning demo
openstack role create user
openstack role add --project demo --user demo user
###Verify operation
Edit the /usr/share/keystone/keystone-dist-paste.ini file and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections
#unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name demo --os-username demo --os-auth-type password token issue
###Create OpenStack client environment scripts
unset OS_TOKEN OS_URL
[root@controller ~]# cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=haoning
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone_admin_v3)]\$ '
[root@controller ~]# cat demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=haoning
export OS_AUTH_URL=http://controller:5000/v3
export OS_IMAGE_API_VERSION=2
export OS_IDENTITY_API_VERSION=3
export PS1='[\u@\h \W(keystone_demo_v3)]\$ '
source admin-openrc.sh
unset OS_TOKEN OS_URL
openstack token issue
After running this, commands such as openstack user list become usable
■■■■■■■■■■■■■■■■■■keystone end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■glance begin■■■■■■■■■■■■■■■■■■
#In a new window, install glance
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
source admin-openrc.sh
openstack user create --domain default --password haoning glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image service" image
openstack endpoint create --region wuhan image public http://controller:9292
openstack endpoint create --region wuhan image internal http://controller:9292
openstack endpoint create --region wuhan image admin http://controller:9292
yum install openstack-glance python-glance python-glanceclient -y
---------------------------
Configure /etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver noop
openstack-config --set /etc/glance/glance-api.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:haoning@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password haoning
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
------------------------------------
Configure /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf DEFAULT notification_driver noop
openstack-config --set /etc/glance/glance-registry.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:haoning@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password haoning
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
#★★★★★ Note: some of the default options must be removed by hand
#Comment out or remove any other options in the [keystone_authtoken] section
Write to the database, then start and verify
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
#Verify operation
echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
glance image-list
■■■■■■■■■■■■■■■■■■glance end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■nova begin■■■■■■■■■■■■■■■■■■
#★★★★ Install nova's client-facing services on the controller node; the part that does the real work goes on the compute node
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'haoning';
flush privileges;

source admin-openrc.sh
openstack user create --domain default --password haoning nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region wuhan compute public http://controller:8774/v2/%\(tenant_id\)s
openstack endpoint create --region wuhan compute internal http://controller:8774/v2/%\(tenant_id\)s
openstack endpoint create --region wuhan compute admin http://controller:8774/v2/%\(tenant_id\)s
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y
-------------
配置 /etc/nova/nova.conf
-----------------------------------
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.139.193
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:haoning@controller/nova
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password haoning
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen $my_ip
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $my_ip
openstack-config --set /etc/nova/nova.conf glance host controller
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
-----------------------------------------------
Sync the database and start the services on the controller node
su -s /bin/sh -c "nova-manage db sync" nova
#check the logs under /var/log/nova for success
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service
systemctl restart openstack-nova-cert.service
systemctl restart openstack-nova-consoleauth.service
systemctl restart openstack-nova-scheduler.service
systemctl restart openstack-nova-conductor.service
systemctl restart openstack-nova-novncproxy.service
#★★★ On the compute node: install the nova-compute service
Install
yum install openstack-nova-compute sysfsutils openstack-utils -y
#openstack-config comes from openstack-utils; check with rpm -qa openstack*
Then configure
/etc/nova/nova.conf
----------------------------------------
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.139.192
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password haoning
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $my_ip
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance host controller
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
-------------------
/etc/nova/nova.conf
If KVM passthrough (nested virtualization) is not enabled, the command below returns 0,
and virt_type must be set to qemu.
How to enable KVM passthrough: google it
(search for KVM nested virtualization);
it is enabled via the libvirt configuration files
#egrep -c '(vmx|svm)' /proc/cpuinfo
#virt_type qemu
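That check can be folded into the compute-node setup. A sketch: choose_virt_type is a hypothetical helper, and the commented openstack-config line mirrors the style used above:

```shell
# choose_virt_type: print qemu when the given cpu flags show no vmx/svm
# support, kvm otherwise (the argument stands in for /proc/cpuinfo)
choose_virt_type() {
    if printf '%s\n' "$1" | grep -Eq '(vmx|svm)'; then
        echo kvm
    else
        echo qemu
    fi
}
# openstack-config --set /etc/nova/nova.conf libvirt virt_type "$(choose_virt_type "$(cat /proc/cpuinfo)")"
```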
Start libvirtd and nova
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
#Verify operation
source admin-openrc.sh
nova service-list
nova endpoints
nova image-list
■■■■■■■■■■■■■■■■■■nova end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■neutron begin■■■■■■■■■■■■■■■■■■
Install neutron on the controller node
Database
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'haoning';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller' IDENTIFIED BY 'haoning';
flush privileges;
After every step you can run ./mysql_openstack.sh neutron to watch the database change
openstack user create --domain default --password haoning neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region wuhan network public http://controller:9696
openstack endpoint create --region wuhan network internal http://controller:9696
openstack endpoint create --region wuhan network admin http://controller:9696
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
Use the linuxbridge and vxlan setup from the official documentation.
Create a public network and a private network.
VMs on the public network use flat mode and behave just like the local LAN.
The private network works like NAT;
you can attach a floating IP to a VM on the private network, so one VM has both private and public connectivity.
Install
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y
---------------------
Configuration file
/etc/neutron/neutron.conf
neutron's base network configuration
openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:haoning@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password haoning
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_plugin password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova region_name wuhan
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password haoning
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
---------------
Configure
/etc/neutron/plugins/ml2/ml2_conf.ini
the layer-2 configuration; vxlan is set up here
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks public
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
--------------------
Configure
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
Linux bridge agent settings.
Inspect the result with brctl show and ip netns:
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings public:eth1
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.139.193
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
--------------------
Configure
/etc/neutron/l3_agent.ini
The layer-3 (routing) agent:
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT verbose True
#The external_network_bridge option intentionally lacks a value to enable multiple external networks on a single agent
#☆☆☆☆★★★★★
#Comment out or remove any other options in the [keystone_authtoken] section.
------------------------------------
Configure the DHCP agent, which hands IP addresses to instances:
/etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
-----------------------------------
The DHCP service is run as a dnsmasq process; give it a custom config file:
echo "dhcp-option-force=26,1450" > /etc/neutron/dnsmasq-neutron.conf
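Option 26 is the DHCP interface-MTU option. VXLAN over IPv4 adds 50 bytes of encapsulation, so guests on a 1500-byte physical network should use an MTU of 1450. A quick sketch of the arithmetic:

```shell
# Why 1450: VXLAN encapsulation overhead on a standard 1500-byte MTU network.
phys_mtu=1500
vxlan_overhead=$((14 + 20 + 8 + 8))   # outer Ethernet + IPv4 + UDP + VXLAN header
guest_mtu=$((phys_mtu - vxlan_overhead))
echo "dhcp-option-force=26,${guest_mtu}"   # prints dhcp-option-force=26,1450
```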
###★★★★★Networking Option 2: Self-service networks------end★★★★★★★★
-------------------------------
Configure the metadata service for VMs:
/etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_uri http://controller:5000
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:35357
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region wuhan
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_plugin password
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT project_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT user_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT project_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT username neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT password haoning
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True
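METADATA_SECRET above is a placeholder; replace it with a real shared secret, and use the same value when configuring nova. `openssl rand -hex 10` (mentioned at the top of this post) works; a pure-coreutils alternative, for illustration:

```shell
# Generate a 10-byte (20 hex character) shared secret from /dev/urandom.
secret=$(od -An -N10 -tx1 /dev/urandom | tr -d ' \n')
echo "$secret"
```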
#★★★★★★★★★★ Note: remove these leftover template options from metadata_agent.ini:
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
------------------------------------------------------------------------------------------------------------
Hook nova up to neutron by adding the neutron settings to nova's config.
This must be done on all nodes:
/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_plugin password
openstack-config --set /etc/nova/nova.conf neutron project_domain_id default
openstack-config --set /etc/nova/nova.conf neutron user_domain_id default
openstack-config --set /etc/nova/nova.conf neutron region_name wuhan
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password haoning
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET
------------------------------------------------------------------------------------------------------------
Sync the database and start the services:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service
systemctl restart neutron-linuxbridge-agent.service
systemctl restart neutron-dhcp-agent.service
systemctl restart neutron-metadata-agent.service
##########For networking option 2, also enable and start the layer-3 service:
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
Check for errors:
cd /var/log/neutron
grep ERROR *
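The same ERROR check is worth running across every service's log directory. A small helper for that — our own convenience function, not part of OpenStack:

```shell
# Recursively grep each existing log directory for ERROR lines.
scan_errors() {
    local dir
    for dir in "$@"; do
        [ -d "$dir" ] && grep -rn "ERROR" "$dir"
    done
    return 0   # exit status is informational only
}

# e.g.: scan_errors /var/log/keystone /var/log/nova /var/log/neutron
```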
★★★★★★★★★★★★★★★ compute node ★★★★★★★★★★★★★★★
Add the neutron agent service on the compute node;
it attaches VMs to their networks when they boot.
Install:
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
-------------------------------------
Configure
/etc/neutron/neutron.conf
-------------------
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password haoning
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password haoning
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
-----------------------
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
------------------------------
Configure
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings public:eth1
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.139.192
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
###★★★★★Networking Option 2: Self-service networks------end★★★★★★★★
---------------------- Needed on both the controller and compute nodes ★★★★ ---------
As noted above, the compute node's nova.conf also needs the neutron settings.
Note: restart the services after changing the configuration.
/etc/nova/nova.conf
------------------
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_plugin password
openstack-config --set /etc/nova/nova.conf neutron project_domain_id default
openstack-config --set /etc/nova/nova.conf neutron user_domain_id default
openstack-config --set /etc/nova/nova.conf neutron region_name wuhan
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password haoning
-----------------
The compute node runs only the nova-compute service and the neutron agent.
Start them:
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
### Verify operation:
[root@controller neutron(keystone_admin_v3)]# neutron ext-list
+-----------------------+--------------------------+
| alias                 | name                     |
+-----------------------+--------------------------+
| flavors               | Neutron Service Flavors  |
| security-group        | security-group           |
| dns-integration       | DNS Integration          |
| net-mtu               | Network MTU              |
| port-security         | Port Security            |
| binding               | Port Binding             |
| provider              | Provider Network         |
| agent                 | agent                    |
| quotas                | Quota management support |
| subnet_allocation     | Subnet Allocation        |
| dhcp_agent_scheduler  | DHCP Agent Scheduler     |
| rbac-policies         | RBAC Policies            |
| external-net          | Neutron external network |
| multi-provider        | Multi Provider Network   |
| allowed-address-pairs | Allowed Address Pairs    |
| extra_dhcp_opt        | Neutron Extra DHCP opts  |
+-----------------------+--------------------------+
###★★★★★Networking Option 2: Self-service networks-------begin★★★★★★★★
Now configure the neutron networks.
If /var/log/keystone, /var/log/nova and /var/log/neutron showed no errors
during installation, proceed:
source admin-openrc.sh
neutron agent-list
#public network: creating the shared public network is the simpler case:
neutron net-create public --shared --provider:physical_network public --provider:network_type flat
#neutron subnet-create public PUBLIC_NETWORK_CIDR --name public --allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS --dns-nameserver DNS_RESOLVER --gateway PUBLIC_NETWORK_GATEWAY
neutron subnet-create public 192.168.139.0/20 --name public --allocation-pool start=192.168.139.201,end=192.168.139.210 --dns-nameserver 8.8.4.4 --gateway 192.168.128.1
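Note the gateway: the subnet is written as 192.168.139.0/20, but a /20 mask zeroes the low 4 bits of the third octet, so the actual network is 192.168.128.0/20. Its first host address, 192.168.128.1, is the gateway, while the allocation pool sits in 192.168.139.x. A sketch of that arithmetic (the helper names are ours):

```shell
# Normalize 192.168.139.0/20: mask off the host bits to find the real network.
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) | ($2<<16) | ($3<<8) | $4 )); }
int2ip() { echo "$(( ($1>>24)&255 )).$(( ($1>>16)&255 )).$(( ($1>>8)&255 )).$(( $1&255 ))"; }
prefix=20
mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
net=$(( $(ip2int 192.168.139.0) & mask ))
echo "network: $(int2ip "$net")"        # 192.168.128.0
echo "gateway: $(int2ip $((net + 1)))"  # 192.168.128.1
```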
#private network: create a private (project) network
neutron net-create private
neutron subnet-create private 172.16.1.0/24 --name private --dns-nameserver 8.8.4.4 --gateway 172.16.1.1
#Create a router
neutron net-update public --router:external
neutron router-create router
neutron router-interface-add router private
#Added interface 65b58347-09fa-43dd-914d-31b4885d84ef to router router.
neutron router-gateway-set router public
#Verify operation
ip netns
neutron router-port-list router
ping -c 4 192.168.139.202
brctl show
ip netns
#Create a VM; first create a keypair
source admin......(why doesn't the demo user work here?)
ssh-keygen -q -N ""
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova keypair-list
#security group
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
#Launch an instance on the public network--------begin----------
source admin-openrc.sh
nova flavor-list
nova image-list
neutron net-list
nova secgroup-list
#nova boot --flavor m1.tiny --image cirros --nic net-id=PUBLIC_NET_ID --security-group default --key-name mykey public-instance
#Replace PUBLIC_NET_ID with the ID of the public provider network.
nova boot --flavor m1.tiny --image cirros --nic net-id=89f9ac2b-d7cf-4f45-8819-487c3b1c4fc7 --security-group default --key-name mykey public-instance
nova list
nova get-vnc-console public-instance novnc
#Launch an instance on the public network--------end----------
#Deleting networks
If you hit problems and need to tear down the networks you created,
order matters: delete the router pieces first, then the subnets, then the networks.
neutron router-list
neutron router-gateway-clear f16bd408-181d-40d9-8998-5d556fec7e0f
neutron router-interface-delete f16bd408-181d-40d9-8998-5d556fec7e0f private
neutron router-delete f16bd408-181d-40d9-8998-5d556fec7e0f
neutron subnet-list
neutron subnet-delete 7b03ef7d-144f-479e-bf3c-4a880a48ac3d
neutron subnet-delete f3eb1841-6666-4821-8fea-0d8d98352c73
neutron net-list
neutron net-delete 89f9ac2b-d7cf-4f45-8819-487c3b1c4fc7
neutron net-delete 6ac13027-8e87-4696-b01f-5198a3ffa509
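That deletion order can be captured in a hypothetical teardown function (the function name and the NEUTRON override are ours, so the sketch can be dry-run with NEUTRON=echo):

```shell
# Tear down a router in the required order: clear the gateway, detach each
# subnet interface, then delete the router itself.
NEUTRON=${NEUTRON:-neutron}
teardown_router() {
    local router=$1; shift
    local subnet
    "$NEUTRON" router-gateway-clear "$router"
    for subnet in "$@"; do
        "$NEUTRON" router-interface-delete "$router" "$subnet"
    done
    "$NEUTRON" router-delete "$router"
}

# Dry run: print the commands that would be issued.
NEUTRON=echo teardown_router router private
```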
################public network begin#############
The complete procedure for creating the public network
★★★★★★★★★★★★★★★★★★★★★★★
neutron net-create public --shared --provider:physical_network public --provider:network_type flat
neutron net-list
neutron subnet-create public 192.168.139.0/20 --name public --allocation-pool start=192.168.139.221,end=192.168.139.230 --dns-nameserver 8.8.4.4 --gateway 192.168.128.1
neutron subnet-list
ssh-keygen -q -N ""
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova keypair-list
nova secgroup-list
nova secgroup-list-rules 1f676a35-7a31-4265-aa2b-cc4317de8633
nova help|grep secgroup
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova flavor-list
nova image-list
neutron net-list
nova secgroup-list
#nova boot --flavor m1.tiny --image cirros --nic net-id=PUBLIC_NET_ID --security-group default --key-name mykey public-instance
#Replace PUBLIC_NET_ID with the ID of the public provider network.
nova boot --flavor m1.tiny --image cirros --nic net-id=79fa9460-1f32-4141-9a16-08dea9355e2a --security-group default --key-name mykey public-instance
nova list
#the instance's IP shows up here
nova get-vnc-console public-instance novnc
#default credentials: cirros / cubswin:)
ping 192.168.139.222
ssh cirros@192.168.139.222
ip netns
ifconfig
#Create a VM on the private network
This is the most important part and the easiest to get wrong.
neutron net-create private
#neutron subnet-create private PRIVATE_NETWORK_CIDR --name private --dns-nameserver DNS_RESOLVER --gateway PRIVATE_NETWORK_GATEWAY
neutron subnet-create private 172.16.1.0/24 --name private --dns-nameserver 8.8.4.4 --gateway 172.16.1.1
neutron net-list
neutron subnet-list
#Private project networks connect to public provider networks using a virtual router. Each router contains an interface to at least one private project network and a gateway on a public provider network.
#The public provider network must include the router:external option to enable project routers to use it for connectivity to external networks such as the Internet. The admin or other privileged user must include this option during network creation or add it later. In this case, we can add it to the existing public provider network.
#Add the router:external option to the public provider network:
neutron net-update public --router:external
[root@controller ~(keystone_admin_v3)]# neutron router-create router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | fc43e7ee-44d1-483b-a5b2-6622637bb106 |
| name                  | router                               |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | a847d63d35e54622b641ea6b74c3c126     |
+-----------------------+--------------------------------------+
[root@controller ~(keystone_admin_v3)]# neutron router-list
| id                                   | name   | external_gateway_info | distributed | ha    |
| fc43e7ee-44d1-483b-a5b2-6622637bb106 | router | null                  | False       | False |
[root@controller ~(keystone_admin_v3)]# neutron router-interface-add router private
Added interface 9d9f73e7-a5e4-4d89-93c7-df135ac39a26 to router router.
[root@controller ~(keystone_admin_v3)]# neutron router-gateway-set router public
Set gateway for router router
[root@controller ~(keystone_admin_v3)]# neutron router-list
| id                                   | name   | external_gateway_info | distributed | ha    |
| fc43e7ee-44d1-483b-a5b2-6622637bb106 | router | {"network_id": "79fa9460-1f32-4141-9a16-08dea9355e2a", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "1403be6d-fb25-4789-80ce-d570f291c6e4", "ip_address": "192.168.139.223"}]} | False | False |
[root@controller ~(keystone_admin_v3)]# neutron net-list
| id                                   | name    | subnets                                               |
| 79fa9460-1f32-4141-9a16-08dea9355e2a | public  | 1403be6d-fb25-4789-80ce-d570f291c6e4 192.168.128.0/20 |
| 1fd72b95-0264-4fca-8173-f321239a55fa | private | 97d8a9a1-d1b3-4091-9ee0-51af01c84b4b 172.16.1.0/24    |
[root@controller ~(keystone_admin_v3)]# neutron subnet-list
| id                                   | name    | cidr             | allocation_pools                                       |
| 1403be6d-fb25-4789-80ce-d570f291c6e4 | public  | 192.168.128.0/20 | {"start": "192.168.139.221", "end": "192.168.139.230"} |
| 97d8a9a1-d1b3-4091-9ee0-51af01c84b4b | private | 172.16.1.0/24    | {"start": "172.16.1.2", "end": "172.16.1.254"}         |
[root@controller ~(keystone_admin_v3)]# ip netns
qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106 (id: 2)
qdhcp-1fd72b95-0264-4fca-8173-f321239a55fa (id: 1)
qdhcp-79fa9460-1f32-4141-9a16-08dea9355e2a (id: 0)
[root@controller ~(keystone_admin_v3)]# neutron router-port-list router
| id                                   | name | mac_address       | fixed_ips                                                                              |
| 8b18370f-345e-42bc-b4eb-30391866e757 |      | fa:16:3e:78:33:40 | {"subnet_id": "1403be6d-fb25-4789-80ce-d570f291c6e4", "ip_address": "192.168.139.223"} |
| 9d9f73e7-a5e4-4d89-93c7-df135ac39a26 |      | fa:16:3e:6b:08:b5 | {"subnet_id": "97d8a9a1-d1b3-4091-9ee0-51af01c84b4b", "ip_address": "172.16.1.1"}      |
[root@controller ~(keystone_admin_v3)]# brctl show
bridge name     bridge id          STP enabled  interfaces
brq1fd72b95-02  8000.063507f2cee3  no           tap65ad6fb9-ea
                                                tap9d9f73e7-a5
                                                vxlan-18
brq79fa9460-1f  8000.505112aa8214  no           eth1
                                                tap4b63544c-9b
virbr0          8000.5254009c2b11  yes          virbr0-nic
[root@controller ~(keystone_admin_v3)]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 50:52:18:aa:81:11 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master brq79fa9460-1f state UP mode DEFAULT qlen 1000
    link/ether 50:51:12:aa:82:14 brd ff:ff:ff:ff:ff:ff
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
    link/ether 52:54:00:9c:2b:11 brd ff:ff:ff:ff:ff:ff
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 500
    link/ether 52:54:00:9c:2b:11 brd ff:ff:ff:ff:ff:ff
13: tap4b63544c-9b@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master brq79fa9460-1f state UP mode DEFAULT qlen 1000
    link/ether b6:67:ca:38:ff:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
14: brq79fa9460-1f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 50:51:12:aa:82:14 brd ff:ff:ff:ff:ff:ff
15: tap65ad6fb9-ea@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast master brq1fd72b95-02 state UP mode DEFAULT qlen 1000
    link/ether e6:b7:1f:bc:bb:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 1
16: vxlan-18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq1fd72b95-02 state UNKNOWN mode DEFAULT
    link/ether 06:35:07:f2:ce:e3 brd ff:ff:ff:ff:ff:ff
17: brq1fd72b95-02: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT
    link/ether 06:35:07:f2:ce:e3 brd ff:ff:ff:ff:ff:ff
18: tap9d9f73e7-a5@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast master brq1fd72b95-02 state UP mode DEFAULT qlen 1000
    link/ether 5e:84:a7:84:27:5b brd ff:ff:ff:ff:ff:ff link-netnsid 2
[root@controller ~(keystone_admin_v3)]#
Problems encountered
#If you upgraded iproute
#and l3-agent.log reports:
#2016-03-12 21:29:26.103 1170 ERROR neutron.agent.l3.agent Stderr: Cannot create namespace file "/var/run/netns/qrouter-fc43e7ee-44d1-483b-a5b2-6622637bb106": File exists
#The bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1292587
#Apply the fix from https://review.openstack.org/#/c/258493/1/neutron/agent/linux/ip_lib.py
#to /usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py
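The root cause: the upgraded iproute appends " (id: N)" to each line of `ip netns` output (visible in the listing above), and the old neutron code took the whole line as the namespace name. The actual fix is in Python (ip_lib.py), but the parsing change boils down to keeping only the first field, which can be sketched in shell:

```shell
# Sketch of the parsing change: keep only the namespace name, dropping the
# " (id: N)" suffix that newer iproute2 appends to `ip netns` output.
ip_netns_names() { awk '{print $1}'; }

# Illustrative input (truncated names, not real UUIDs):
printf 'qrouter-fc43e7ee (id: 2)\nqdhcp-1fd72b95 (id: 1)\n' | ip_netns_names
```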
#Replace PRIVATE_NET_ID with the ID of the private project network
#nova boot --flavor m1.tiny --image cirros --nic net-id=PRIVATE_NET_ID --security-group default --key-name mykey private-instance
nova boot --flavor m1.tiny --image cirros --nic net-id=1fd72b95-0264-4fca-8173-f321239a55fa --security-group default --key-name mykey private-instance
[root@controller linux(keystone_admin_v3)]# nova list
+--------------------------------------+------------------+--------+------------+-------------+------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks               |
+--------------------------------------+------------------+--------+------------+-------------+------------------------+
| edd82475-1585-469f-aa43-95fa80f1f812 | private-instance | ACTIVE | -          | Running     | private=172.16.1.3     |
| bad46269-19d1-423a-88be-652a06c08723 | public-instance  | ACTIVE | -          | Running     | public=192.168.139.222 |
+--------------------------------------+------------------+--------+------------+-------------+------------------------+
nova get-vnc-console private-instance novnc
neutron floatingip-create public
[root@controller linux(keystone_admin_v3)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 2cf23d2c-748f-4242-89eb-1d53721560a1 |                  | 192.168.139.225     |         |
+--------------------------------------+------------------+---------------------+---------+
nova floating-ip-associate private-instance 192.168.139.225
neutron floatingip-create public
nova floating-ip-associate private-instance 203.0.113.104
[root@controller linux(keystone_admin_v3)]# nova list
| ID                                   | Name             | Status | Task State | Power State | Networks                            |
| edd82475-1585-469f-aa43-95fa80f1f812 | private-instance | ACTIVE | -          | Running     | private=172.16.1.3, 192.168.139.225 |
| bad46269-19d1-423a-88be-652a06c08723 | public-instance  | ACTIVE | -          | Running     | public=192.168.139.222              |
ssh root@192.168.139.225
[root@controller linux(keystone_admin_v3)]# neutron floatingip-list
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
| 2cf23d2c-748f-4242-89eb-1d53721560a1 | 172.16.1.3       | 192.168.139.225     | af79694b-2f59-4bf0-a0d0-6c619de49941 |
[root@controller linux(keystone_admin_v3)]#
################public network end#############
###★★★★★Networking Option 2: Self-service networks-------end★★★★★★★★
■■■■■■■■■■■■■■■■■■neutron end■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■horizon begin■■■■■■■■■■■■■■■■■■
■■■■■■■■■■■■■■■■■■horizon end■■■■■■■■■■■■■■■■■■