Installing OpenStack on VMware

 
Software:
VMware® Workstation 9.0
ubuntu-12.04.1-server-amd64.iso
Reference:
http://docs.openstack.org/essex/openstack-compute/starter/content/Server1-d1e537.html

I. Create the virtual machine
Note: the VM needs two virtual NICs and two hard disks, one 30 GB and one 10 GB.
These will be used for nova-volume and Swift.

II. Install ubuntu-server
Note: choose manual partitioning and partition the 30 GB disk as follows; leave the 10 GB disk untouched for now.
1. Create a root partition, 15 GB
2. Create a swap partition, 2 GB
3. With the remaining space, create a logical partition; for the file system choose the last option ("do not use"), leaving it as a physical volume for nova-volume.

III. Install OpenStack (run everything below as root)
The helper scripts are fairly long and are not listed inline; download them from the attachments at http://yuky1327.iteye.com/blog/1696604 and use them to follow along.
1. Enable and set the root password
sudo passwd root


2. Network Configuration
Edit the /etc/network/interfaces file so that it looks like this.
Note: set the static address from the VM console, not over an SSH session; editing it over SSH may leave the machine unable to ping external hosts.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.1.200
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 202.96.128.166

auto eth1
iface eth1 inet static
address 10.0.1.1
netmask 255.255.255.0
network 10.0.1.0
broadcast 10.0.1.255


Restart the network now
sudo /etc/init.d/networking restart


3.Install Base OS & bridge-utils
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install bridge-utils


4.NTP Server
sudo apt-get install ntp


Open the file /etc/ntp.conf and add the following lines to make sure that the time on the server stays in sync with an external server. If the Internet connectivity is down, the NTP server uses its own hardware clock as the fallback.
server ntp.ubuntu.com
server 127.127.1.0
fudge 127.127.1.0 stratum 10

Restart the NTP server
sudo service ntp restart


5.Install mysql-server and python-mysqldb package
During installation you will be prompted to set a root password for MySQL; the password used in this guide is "mygreatsecret".
sudo apt-get install mysql-server python-mysqldb


Change the bind address from 127.0.0.1 to 0.0.0.0 in /etc/mysql/my.cnf so that the line reads:
bind-address = 0.0.0.0


Restart MySQL server to ensure that it starts listening on all interfaces.
sudo restart mysql
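A quick hedged check that the new bind address took effect (netstat ships with Ubuntu 12.04; this only produces output on a machine with MySQL running):

```shell
# mysqld should now be listening on 0.0.0.0:3306 instead of 127.0.0.1:3306.
sudo netstat -ntlp | grep 3306
```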


Create MySQL databases to be used with nova, glance and keystone.
sudo mysql -uroot -pmygreatsecret -e 'CREATE DATABASE nova;'
sudo mysql -uroot -pmygreatsecret -e 'CREATE USER novadbadmin;'
sudo mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'%';"
sudo mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'novadbadmin'@'%' = PASSWORD('novasecret');"
sudo mysql -uroot -pmygreatsecret -e 'CREATE DATABASE glance;'
sudo mysql -uroot -pmygreatsecret -e 'CREATE USER glancedbadmin;'
sudo mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON glance.* TO 'glancedbadmin'@'%';"
sudo mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'glancedbadmin'@'%' = PASSWORD('glancesecret');"
sudo mysql -uroot -pmygreatsecret -e 'CREATE DATABASE keystone;'
sudo mysql -uroot -pmygreatsecret -e 'CREATE USER keystonedbadmin;'
sudo mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystonedbadmin'@'%';"
sudo mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'keystonedbadmin'@'%' = PASSWORD('keystonesecret');"
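Because the accounts above are granted at '%' and MySQL now listens on all interfaces, each service can connect from any host. A minimal sketch of such a check, assuming the controller's address is 192.168.1.200 as elsewhere in this guide (it requires the live MySQL server set up above):

```shell
# Connect to the nova database as novadbadmin from any reachable machine;
# a fresh database returns an empty table list.
mysql -h 192.168.1.200 -u novadbadmin -pnovasecret nova -e 'SHOW TABLES;'
```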


6.Install Keystone
sudo apt-get install keystone python-keystone python-keystoneclient

Open /etc/keystone/keystone.conf and change the line
admin_token = ADMIN
to
admin_token = admin

Since a MySQL database is used to store the Keystone data, replace the following line in /etc/keystone/keystone.conf
connection = sqlite:////var/lib/keystone/keystone.db
with
connection = mysql://keystonedbadmin:keystonesecret@192.168.1.200/keystone

Restart Keystone:
sudo service keystone restart

Run the following command to synchronise the database:
sudo keystone-manage db_sync

Add these variables to ~/.bashrc:
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=admin

source ~/.bashrc

Create tenants, users, and roles; list them; add roles to users within tenants; and create the services and endpoints. The script prompts for an email address and the local IP address.
./create_keystone_data.sh
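The script's contents are not reproduced here, but the Essex-era keystone CLI calls it wraps look roughly like the sketch below. The names, placeholder IDs, and exact flag spellings are illustrative assumptions (flag forms varied between client versions); download the script for the authoritative commands.

```shell
# Create a tenant, a user, and a role, then bind them together.
keystone tenant-create --name admin
keystone user-create --name admin --pass admin --email admin@example.com
keystone role-create --name admin
# user-role-add takes the IDs printed by the commands above.
keystone user-role-add --user <user-id> --role <role-id> --tenant_id <tenant-id>
# Register a service and its endpoint (repeated for nova, glance, swift, etc.).
keystone service-create --name keystone --type identity --description 'Identity Service'
keystone endpoint-create --region RegionOne --service_id <service-id> \
  --publicurl 'http://192.168.1.200:5000/v2.0' \
  --adminurl 'http://192.168.1.200:35357/v2.0' \
  --internalurl 'http://192.168.1.200:5000/v2.0'
```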

7.Install glance
sudo apt-get install glance glance-api glance-client glance-common glance-registry python-glance

Glance uses SQLite by default. MySQL and PostgreSQL can also be configured to work with Glance.


Edit /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini, changing
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
to
admin_tenant_name = service
admin_user = glance
admin_password = glance


Open the file /etc/glance/glance-registry.conf and edit the line which contains the option "sql_connection =" to this:
sql_connection = mysql://glancedbadmin:glancesecret@192.168.1.200/glance
Then append the following at the end of the file:
[paste_deploy]
flavor = keystone


Open /etc/glance/glance-api.conf and add the following lines at the end of the document.
[paste_deploy]
flavor = keystone

Create the glance schema in the MySQL database:
sudo glance-manage version_control 0
sudo glance-manage db_sync

Restart glance-api and glance-registry after making the above changes.
sudo restart glance-api
sudo restart glance-registry

Add these variables to ~/.bashrc:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL="http://localhost:5000/v2.0/"

source ~/.bashrc

To test if glance is set up correctly, execute the following command.
glance index

On success the command prints nothing; on failure it prints an error message.
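With the environment variables set, you can also try uploading a small test image. A hedged sketch using the Essex-era glance client syntax (the CirrOS image name and download URL are assumptions; any qcow2 image works, and the command needs the running Glance services):

```shell
# Fetch a tiny test image and register it with Glance.
wget http://download.cirros-cloud.net/0.3.0/cirros-0.3.0-x86_64-disk.img
glance add name="cirros-0.3.0" is_public=true \
  container_format=bare disk_format=qcow2 < cirros-0.3.0-x86_64-disk.img
# 'glance index' should now list the image.
glance index
```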

8.Install nova
sudo apt-get install nova-api nova-cert nova-compute nova-compute-kvm nova-doc nova-network nova-objectstore nova-scheduler nova-volume rabbitmq-server novnc nova-consoleauth

Run edit_nova_conf.sh to edit the /etc/nova/nova.conf file:
./edit_nova_conf.sh
The script prompts for the MySQL server address, the local IP address, and the start of the floating IP range (default 192.168.1.225).

Create a Physical Volume.
sudo pvcreate /dev/sda5

Create a Volume Group named nova-volumes.
sudo vgcreate nova-volumes /dev/sda5
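nova-volume carves LVM logical volumes out of this group on demand, so it is worth confirming the group exists before continuing (vgdisplay is part of the lvm2 tools; output appears only on the server where the group was created):

```shell
# Show the nova-volumes group; its free space is what nova-volume can allocate.
sudo vgdisplay nova-volumes
```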

Change the ownership of the /etc/nova folder and permissions for /etc/nova/nova.conf:
sudo chown -R nova:nova /etc/nova
sudo chmod 644 /etc/nova/nova.conf

Open /etc/nova/api-paste.ini and, near the end of the file, change the lines
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
to
admin_tenant_name = service
admin_user = nova
admin_password = nova

Enable IPv4 forwarding; without it, instances can be reached from outside but cannot themselves reach the external network:
sysctl -w net.ipv4.ip_forward=1
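The sysctl -w setting is lost on reboot; to make it permanent, the key also belongs in /etc/sysctl.conf. A minimal idempotent sketch, shown against a temporary file so it is safe to try anywhere (on the real server, point conf at /etc/sysctl.conf):

```shell
# Append net.ipv4.ip_forward=1 to the config file only if it is not already set.
conf=$(mktemp)                       # stand-in for /etc/sysctl.conf
printf '# existing settings...\n' > "$conf"
grep -q '^net.ipv4.ip_forward' "$conf" || echo 'net.ipv4.ip_forward=1' >> "$conf"
grep '^net.ipv4.ip_forward' "$conf"  # prints: net.ipv4.ip_forward=1
rm -f "$conf"
```

On the server itself, `sudo sysctl -p` reloads /etc/sysctl.conf after editing.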

Create nova schema in the MySQL database.
sudo nova-manage db sync

Create the network:
nova-manage network create private --fixed_range_v4=10.0.1.1/27 --num_networks=1 --bridge=br100 --bridge_interface=eth1 --network_size=32 

Create the floating IP range (it must match the floating_range value entered earlier):
nova-manage floating create --ip_range=192.168.1.225/27

Restart nova services.
sudo restart libvirt-bin; sudo restart nova-network; sudo restart nova-compute; sudo restart nova-api; sudo restart nova-objectstore; sudo restart nova-scheduler; sudo restart nova-volume; sudo restart nova-consoleauth;

To test if nova is set up correctly, run the following command.
sudo nova-manage service list
Binary           Host              Zone             Status     State Updated_At
nova-network     server1           nova             enabled    :-)   2012-04-20 08:58:43
nova-scheduler   server1           nova             enabled    :-)   2012-04-20 08:58:44
nova-volume      server1           nova             enabled    :-)   2012-04-20 08:58:44
nova-compute     server1           nova             enabled    :-)   2012-04-20 08:58:45
nova-cert        server1           nova             enabled    :-)   2012-04-20 08:58:43

9. Install OpenStack Dashboard
sudo apt-get install openstack-dashboard

Restart apache with the following command:
sudo service apache2 restart

Open a browser at http://192.168.1.200 and log in as admin with password admin.
10. Install Swift
sudo apt-get install swift swift-proxy swift-account swift-container swift-object
sudo apt-get install xfsprogs curl python-pastedeploy

Swift storage backend: using a partition as the storage device
If you set aside a partition for Swift during OS installation, you can use it directly. If you have unused/unpartitioned space on a physical disk (e.g. /dev/sdb), format it with the XFS filesystem using parted or fdisk and use it as the backend, specifying the mount point in /etc/fstab.
CAUTION: Replace /dev/sdb with your appropriate device. The following assumes there is unused, unformatted space on /dev/sdb.
root@bogon:/dev# sudo fdisk /dev/sdb

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): e
Partition number (1-4, default 1): 3
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): 
Using default value 20971519

Command (m for help): n
Partition type:
   p   primary (0 primary, 1 extended, 3 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (4096-20971519, default 4096): 
Using default value 4096
Last sector, +sectors or +size{K,M,G} (4096-20971519, default 20971519): 
Using default value 20971519

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Verify that the partitions were created:
root@bogon:/dev# fdisk /dev/sdb

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
107 heads, 17 sectors/track, 11529 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x937847e1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    20971519    10484736    5  Extended
/dev/sdb5            4096    20971519    10483712   83  Linux


This will have created a partition (something like /dev/sdb5) that we can now format with the XFS filesystem. Run 'sudo fdisk -l' in a terminal to view and verify the partition table, and make sure the partition you want to use is listed. The next step works only if xfsprogs is installed.
sudo mkfs.xfs -i size=1024 /dev/sdb5

Create a directory /mnt/swift_backend to be used as a mount point for the partition we created.
sudo mkdir /mnt/swift_backend

Add the following line to /etc/fstab:
/dev/sdb5 /mnt/swift_backend xfs noatime,nodiratime,nobarrier,logbufs=8 0 0

Now before mounting the backend that will be used, create some nodes to be used as storage devices and set ownership to 'swift' user and group.
sudo mount /mnt/swift_backend
pushd /mnt/swift_backend
sudo mkdir node1 node2 node3 node4
popd
sudo chown swift.swift /mnt/swift_backend/*
for i in {1..4}; do sudo ln -s /mnt/swift_backend/node$i /srv/node$i; done;
sudo mkdir -p /etc/swift/account-server /etc/swift/container-server /etc/swift/object-server /srv/node1/device /srv/node2/device /srv/node3/device /srv/node4/device
sudo mkdir /run/swift
sudo chown -L -R swift.swift /etc/swift /srv/node[1-4]/ /run/swift

Add the following to /etc/rc.local, before the "exit 0" line:
sudo mkdir /run/swift
sudo chown swift.swift /run/swift

Open /etc/default/rsync and set:
RSYNC_ENABLE=true

Create /etc/rsyncd.conf with the following contents:
# General stuff
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /run/rsyncd.pid
address = 127.0.0.1

# Account Server replication settings

[account6012]
max connections = 25
path = /srv/node1/
read only = false
lock file = /run/lock/account6012.lock

[account6022]
max connections = 25
path = /srv/node2/
read only = false
lock file = /run/lock/account6022.lock

[account6032]
max connections = 25
path = /srv/node3/
read only = false
lock file = /run/lock/account6032.lock

[account6042]
max connections = 25
path = /srv/node4/
read only = false
lock file = /run/lock/account6042.lock

# Container server replication settings

[container6011]
max connections = 25
path = /srv/node1/
read only = false
lock file = /run/lock/container6011.lock

[container6021]
max connections = 25
path = /srv/node2/
read only = false
lock file = /run/lock/container6021.lock

[container6031]
max connections = 25
path = /srv/node3/
read only = false
lock file = /run/lock/container6031.lock

[container6041]
max connections = 25
path = /srv/node4/
read only = false
lock file = /run/lock/container6041.lock

# Object Server replication settings

[object6010]
max connections = 25
path = /srv/node1/
read only = false
lock file = /run/lock/object6010.lock

[object6020]
max connections = 25
path = /srv/node2/
read only = false
lock file = /run/lock/object6020.lock

[object6030]
max connections = 25
path = /srv/node3/
read only = false
lock file = /run/lock/object6030.lock

[object6040]
max connections = 25
path = /srv/node4/
read only = false
lock file = /run/lock/object6040.lock

Restart rsync.
sudo service rsync restart


Configure Swift Components
Run the following command to generate a random string:
root@bogon:/srv# od -t x8 -N 8 -A n < /dev/random
7736e3116c693239

Create /etc/swift/swift.conf and write the random string into it:
[swift-hash]
# random unique string that can never change (DO NOT LOSE). I'm using 7736e3116c693239. 
# od -t x8 -N 8 -A n < /dev/random
# The above command can be used to generate a random string.
swift_hash_path_suffix = 7736e3116c693239

Write the following to /etc/swift/proxy-server.conf:
[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
# Order of execution of modules defined below
pipeline = catch_errors healthcheck cache authtoken keystone proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
set log_name = swift-proxy
set log_facility = LOG_LOCAL0
set log_level = INFO
set access_log_name = swift-proxy
set access_log_facility = SYSLOG
set access_log_level = INFO
set log_headers = True

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:cache]
use = egg:swift#memcache
set log_name = cache

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_protocol = http
auth_host = 127.0.0.1
auth_port = 35357
auth_token = admin
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
admin_token = admin
admin_tenant_name = service
admin_user = swift
admin_password = swift
delay_auth_decision = 0

[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator
is_admin = true

Configure Swift Account Server,Swift Container Server,Swift Object Server
./swift_account_server.sh
./swift_container_server.sh
./swift_object_server.sh

Edit /etc/swift/container-server.conf and append the following at the end:
[container-sync]

Configure Swift Rings
The three arguments to 'create' below are the partition power (18, i.e. 2^18 partitions), the replica count (3), and min_part_hours (1, the minimum number of hours between successive moves of any partition).
pushd /etc/swift
sudo swift-ring-builder object.builder create 18 3 1
sudo swift-ring-builder container.builder create 18 3 1
sudo swift-ring-builder account.builder create 18 3 1
sudo swift-ring-builder object.builder add z1-127.0.0.1:6010/device 1
sudo swift-ring-builder object.builder add z2-127.0.0.1:6020/device 1
sudo swift-ring-builder object.builder add z3-127.0.0.1:6030/device 1
sudo swift-ring-builder object.builder add z4-127.0.0.1:6040/device 1
sudo swift-ring-builder object.builder rebalance
sudo swift-ring-builder container.builder add z1-127.0.0.1:6011/device 1
sudo swift-ring-builder container.builder add z2-127.0.0.1:6021/device 1
sudo swift-ring-builder container.builder add z3-127.0.0.1:6031/device 1
sudo swift-ring-builder container.builder add z4-127.0.0.1:6041/device 1
sudo swift-ring-builder container.builder rebalance
sudo swift-ring-builder account.builder add z1-127.0.0.1:6012/device 1
sudo swift-ring-builder account.builder add z2-127.0.0.1:6022/device 1
sudo swift-ring-builder account.builder add z3-127.0.0.1:6032/device 1
sudo swift-ring-builder account.builder add z4-127.0.0.1:6042/device 1
sudo swift-ring-builder account.builder rebalance

To start swift and the REST API, run the following commands.
sudo swift-init main start
sudo swift-init rest start

Testing Swift
sudo chown -R swift.swift /etc/swift

Then run the following command and verify that you get the appropriate account information. The number of containers and objects stored within is displayed as well.
root@server1:~# swift -v -V 2.0 -A http://127.0.0.1:5000/v2.0/ -U service:swift -K swift stat
StorageURL: http://192.168.1.200:8080/v1/AUTH_4b0de95572044eb49345930225d81752
Auth Token: e6955ec2e6ca4059aba6bafc6c0d6473
   Account: AUTH_4b0de95572044eb49345930225d81752
Containers: 0
   Objects: 0
     Bytes: 0
Accept-Ranges: bytes
X-Trans-Id: tx051c25a362534266a4583f49fa44558d


This completes the OpenStack installation. The scripts mentioned above can be downloaded from the attachments. This walkthrough mainly follows the official guide, with a few small differences from the official site.
Open http://192.168.1.200 and log in as admin (password admin); from the dashboard you can create images, launch instances, and more.
Comments
#3 yuky1327 2012-10-16
Check whether everything still works after a system reboot. In our testing, after restarting Windows 7 and reopening the VM, the VM could no longer reach the network.
#2 dotapinkcat 2012-10-15

2. Network Configuration
Edit the /etc/network/interfaces file so that it looks like this.
Note: set the static address from the VM console, not over an SSH session; editing it over SSH may leave the machine unable to ping external hosts.

    # This file describes the network interfaces available on your system 
    # and how to activate them. For more information, see interfaces(5). 
     
    # The loopback network interface 
    auto lo 
    iface lo inet loopback 
     
    # The primary network interface 
    auto eth0 
    iface eth0 inet static 
    address 192.168.1.200 
    netmask 255.255.255.0 
    broadcast 192.168.1.255 
    gateway 192.168.1.1 
    dns-nameservers 192.168.1.1 

I edited this over SSH and it didn't cause any problems at all.
#1 daduedie 2012-10-12
Nice, well done!
