Installing OpenStack Folsom on Ubuntu

I was finally able to successfully deploy and run OpenStack Folsom on a single physical server for testing. It was a somewhat painful process, as there are so many moving parts and so many things can go wrong. I was lucky enough to attend a two-day class on OpenStack that really helped [1].

This post will demonstrate how to install and configure OpenStack on a single node. By the end you'll be able to set up networking and block storage and create VMs.

As a brief overview of OpenStack, here are all the parts that I've used [2]:

Object Store (codenamed "Swift") provides object storage. It allows you to store and retrieve files (but not mount directories like a fileserver). I won't be using it in this tutorial; it's a beast of its own, so I'll cover it in a separate post.

Image (codenamed "Glance") provides a catalog and repository for virtual disk images. These disk images are most commonly used in OpenStack Compute.

Compute (codenamed "Nova") provides virtual servers on demand, backed by KVM, Xen, LXC, etc.

Identity (codenamed "Keystone") provides authentication and authorization for all the OpenStack services. It also provides a service catalog of services within a particular OpenStack cloud.

Network (codenamed "Quantum") provides "network connectivity as a service" between interface devices managed by other OpenStack services (most likely Nova). The service works by allowing users to create their own networks and then attach interfaces to them. Quantum has a pluggable architecture to support many popular networking vendors and technologies. One example is Open vSwitch, which I'll use in this setup.

Block Storage (codenamed "Cinder") provides persistent block storage to guest VMs. This project was born from code originally in Nova (the nova-volume service). In the Folsom release, both the nova-volume service and the separate Cinder volume service are available. I'll use iSCSI over LVM to export a block device.

Dashboard (codenamed "Horizon") provides a modular web-based user interface, written in Django, for all the OpenStack services. With this web GUI you can perform operations on your cloud like launching an instance, assigning IP addresses, and setting access controls.

Here's a conceptual diagram for OpenStack Folsom and how all the pieces fit together:

[conceptual diagram]

And here's the logical architecture:

[logical architecture diagram]

For this example deployment I'll be using a single physical Ubuntu 12.04 server with hardware virtualization (VT-x/AMD-V) support enabled in the BIOS.
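You can confirm from Linux that the BIOS flag actually took effect; this quick sanity check (my addition, not part of the original walkthrough) inspects the CPU flags:

```shell
# Count CPU flags that indicate hardware virtualization support.
# A count of 0 means KVM will fall back to slow QEMU emulation.
egrep -c '(vmx|svm)' /proc/cpuinfo
```

A non-zero count means KVM acceleration is available; on 12.04 you can also install the cpu-checker package and run `sudo kvm-ok` for a friendlier verdict.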

1. Prerequisites

Make sure you have the correct repository from which to download all OpenStack components:

As root run:
[root@folsom:~]# apt-get install ubuntu-cloud-keyring
[root@folsom:~]# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main" >> /etc/apt/sources.list
[root@folsom:~]# apt-get update && apt-get upgrade
[root@folsom:~]# reboot

When the server comes back online execute (replace MY_IP with your IP address):
[root@folsom:~]# useradd -s /bin/bash -m openstack
[root@folsom:~]# echo "%openstack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@folsom:~]# su - openstack
[openstack@folsom:~]$ export MY_IP=10.177.129.121

Preseed the MySQL install
[openstack@folsom:~]$ cat <<EOF | sudo debconf-set-selections
mysql-server-5.5 mysql-server/root_password password notmysql
mysql-server-5.5 mysql-server/root_password_again password notmysql
mysql-server-5.5 mysql-server/start_on_boot boolean true
EOF

Install packages and dependencies
[openstack@folsom:~]$ sudo apt-get install -y rabbitmq-server mysql-server python-mysqldb

Configure MySQL to listen on all interfaces
[openstack@folsom:~]$ sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
[openstack@folsom:~]$ sudo service mysql restart

Synchronize date
[openstack@folsom:~]$ sudo ntpdate -u ntp.ubuntu.com
2. Installing the identity service - Keystone
sudo apt-get install -y keystone

Create a database for keystone
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE keystone;"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'notkeystone';"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'notkeystone';"
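The same three CREATE/GRANT statements repeat below for glance, quantum, nova, and cinder, so they can be factored into a small helper. This is only a convenience sketch of my own (the `make_db_sql` name and the convention that the DB name doubles as the user name are assumptions drawn from the pattern in this guide):

```shell
# Build the CREATE DATABASE + GRANT statements used for each OpenStack
# service database; arguments are the db/user name and its password.
make_db_sql() {
  local db=$1 pass=$2
  printf "CREATE DATABASE %s; " "$db"
  printf "GRANT ALL ON %s.* TO '%s'@'localhost' IDENTIFIED BY '%s'; " "$db" "$db" "$pass"
  printf "GRANT ALL ON %s.* TO '%s'@'%%' IDENTIFIED BY '%s';" "$db" "$db" "$pass"
}

# Example: mysql -u root -pnotmysql -e "$(make_db_sql glance notglance)"
```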

Configure keystone to use MySQL
[openstack@folsom:~]$ sudo sed -i "s|connection = sqlite:////var/lib/keystone/keystone.db|connection = mysql://keystone:notkeystone@$MY_IP/keystone|g" /etc/keystone/keystone.conf

Restart keystone service
[openstack@folsom:~]$ sudo service keystone restart

Verify keystone service successfully restarted
[openstack@folsom:~]$ pgrep -l keystone

Initialize the database schema
[openstack@folsom:~]$ sudo -u keystone keystone-manage db_sync

Add the 'keystone admin' credentials to .bashrc
[openstack@folsom:~]$ cat >> ~/.bashrc <<EOF
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://$MY_IP:35357/v2.0
EOF

Use the 'keystone admin' credentials
[openstack@folsom:~]$ source ~/.bashrc

Create new tenants (The services tenant will be used later when configuring services to use keystone)
[openstack@folsom:~]$ TENANT_ID=`keystone tenant-create --name MyProject | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ SERVICE_TENANT_ID=`keystone tenant-create --name Services | awk '/ id / { print $4 }'`
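Those backticked one-liners work because the keystone client prints an ASCII table, and `awk '/ id / { print $4 }'` plucks the fourth whitespace-separated field from the row containing " id ". A standalone illustration with a fabricated row (the `3f8c1a2b` value is made up; real IDs are full UUIDs):

```shell
# Simulate one row of keystone's output table and extract the value
# column; in the real commands this input comes from keystone itself.
printf '| %s | %s |\n' id 3f8c1a2b | awk '/ id / { print $4 }'
```

This prints `3f8c1a2b`: the pipes count as fields, so the value lands in `$4`.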

Create new roles
[openstack@folsom:~]$ MEMBER_ROLE_ID=`keystone role-create --name member | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ ADMIN_ROLE_ID=`keystone role-create --name admin | awk '/ id / { print $4 }'`

Create new users
[openstack@folsom:~]$ MEMBER_USER_ID=`keystone user-create --tenant-id $TENANT_ID --name myuser --pass mypassword | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ ADMIN_USER_ID=`keystone user-create --tenant-id $TENANT_ID --name myadmin --pass mypassword | awk '/ id / { print $4 }'`

Grant roles to users
[openstack@folsom:~]$ keystone user-role-add --user-id $MEMBER_USER_ID --tenant-id $TENANT_ID --role-id $MEMBER_ROLE_ID
[openstack@folsom:~]$ keystone user-role-add --user-id $ADMIN_USER_ID --tenant-id $TENANT_ID --role-id $ADMIN_ROLE_ID

List the new tenant, users, roles, and role assignments
[openstack@folsom:~]$ keystone tenant-list
[openstack@folsom:~]$ keystone role-list
[openstack@folsom:~]$ keystone user-list --tenant-id $TENANT_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $TENANT_ID --user-id $MEMBER_USER_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $TENANT_ID --user-id $ADMIN_USER_ID

Populate the services in the service catalog
[openstack@folsom:~]$ KEYSTONE_SVC_ID=`keystone service-create --name=keystone --type=identity --description="Keystone Identity Service" | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ GLANCE_SVC_ID=`keystone service-create --name=glance --type=image --description="Glance Image Service" | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ QUANTUM_SVC_ID=`keystone service-create --name=quantum --type=network --description="Quantum Network Service" | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ NOVA_SVC_ID=`keystone service-create --name=nova --type=compute --description="Nova Compute Service" | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ CINDER_SVC_ID=`keystone service-create --name=cinder --type=volume --description="Cinder Volume Service" | awk '/ id / { print $4 }'`

List the new services
[openstack@folsom:~]$ keystone service-list

Populate the endpoints in the service catalog
[openstack@folsom:~]$ keystone endpoint-create --region RegionOne --service-id=$KEYSTONE_SVC_ID --publicurl=http://$MY_IP:5000/v2.0 --internalurl=http://$MY_IP:5000/v2.0 --adminurl=http://$MY_IP:35357/v2.0
[openstack@folsom:~]$ keystone endpoint-create --region RegionOne --service-id=$GLANCE_SVC_ID --publicurl=http://$MY_IP:9292/v1 --internalurl=http://$MY_IP:9292/v1 --adminurl=http://$MY_IP:9292/v1
[openstack@folsom:~]$ keystone endpoint-create --region RegionOne --service-id=$QUANTUM_SVC_ID --publicurl=http://$MY_IP:9696/ --internalurl=http://$MY_IP:9696/ --adminurl=http://$MY_IP:9696/
[openstack@folsom:~]$ keystone endpoint-create --region RegionOne --service-id=$NOVA_SVC_ID --publicurl="http://$MY_IP:8774/v2/%(tenant_id)s" --internalurl="http://$MY_IP:8774/v2/%(tenant_id)s" --adminurl="http://$MY_IP:8774/v2/%(tenant_id)s"
[openstack@folsom:~]$ keystone endpoint-create --region RegionOne --service-id=$CINDER_SVC_ID --publicurl="http://$MY_IP:8776/v1/%(tenant_id)s" --internalurl="http://$MY_IP:8776/v1/%(tenant_id)s" --adminurl="http://$MY_IP:8776/v1/%(tenant_id)s"

List the new endpoints
[openstack@folsom:~]$ keystone endpoint-list

Verify identity service is functioning
[openstack@folsom:~]$ curl -d '{"auth": {"tenantName": "MyProject", "passwordCredentials": {"username": "myuser", "password": "mypassword"}}}' -H "Content-type: application/json" http://$MY_IP:5000/v2.0/tokens | python -m json.tool
[openstack@folsom:~]$ curl -d '{"auth": {"tenantName": "MyProject", "passwordCredentials": {"username": "myadmin", "password": "mypassword"}}}' -H "Content-type: application/json" http://$MY_IP:5000/v2.0/tokens | python -m json.tool
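If you'd rather grab just the token than eyeball the whole reply, a one-liner like this works; the echoed JSON below is a stub standing in for the real curl output (the v2.0 tokens reply nests the token under `access.token.id`):

```shell
# Parse the token id out of a (stubbed) v2.0 tokens response;
# in practice, pipe the curl command above into the python step.
echo '{"access": {"token": {"id": "abc123", "expires": "2013-01-01T00:00:00Z"}}}' |
  python -c 'import json,sys; print(json.load(sys.stdin)["access"]["token"]["id"])'
```

With the stubbed input this prints `abc123`.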

Create the 'user' and 'admin' credentials
[openstack@folsom:~]$ mkdir ~/credentials
[openstack@folsom:~]$ cat >> ~/credentials/user <<EOF
export OS_USERNAME=myuser
export OS_PASSWORD=mypassword
export OS_TENANT_NAME=MyProject
export OS_AUTH_URL=http://$MY_IP:5000/v2.0/
export OS_REGION_NAME=RegionOne
export OS_NO_CACHE=1
EOF
[openstack@folsom:~]$ cat >> ~/credentials/admin <<EOF
export OS_USERNAME=myadmin
export OS_PASSWORD=mypassword
export OS_TENANT_NAME=MyProject
export OS_AUTH_URL=http://$MY_IP:5000/v2.0/
export OS_REGION_NAME=RegionOne
export OS_NO_CACHE=1
EOF

Use the 'user' credentials
[openstack@folsom:~]$ source ~/credentials/user

3. Install the image service - Glance

[openstack@folsom:~]$ sudo apt-get install -y glance

Create glance service user in the services tenant
[openstack@folsom:~]$ GLANCE_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name glance --pass notglance | awk '/ id / { print $4 }'`

Grant admin role to glance service user
[openstack@folsom:~]$ keystone user-role-add --user-id $GLANCE_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID

List the new user and role assignment
[openstack@folsom:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $GLANCE_USER_ID

Create a database for glance
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE glance;"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'notglance';"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'notglance';"

Configure the glance-api service
[openstack@folsom:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/glance/glance.sqlite|sql_connection = mysql://glance:notglance@$MY_IP/glance|g" /etc/glance/glance-api.conf
[openstack@folsom:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/glance/glance-api.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/glance/glance-api.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_USER%/glance/g' /etc/glance/glance-api.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notglance/g' /etc/glance/glance-api.conf
[openstack@folsom:~]$ sudo sed -i 's/#flavor=/flavor = keystone/g' /etc/glance/glance-api.conf

Configure the glance-registry service
[openstack@folsom:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/glance/glance.sqlite|sql_connection = mysql://glance:notglance@$MY_IP/glance|g" /etc/glance/glance-registry.conf
[openstack@folsom:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/glance/glance-registry.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/glance/glance-registry.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_USER%/glance/g' /etc/glance/glance-registry.conf
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notglance/g' /etc/glance/glance-registry.conf
[openstack@folsom:~]$ sudo sed -i 's/#flavor=/flavor = keystone/g' /etc/glance/glance-registry.conf

Restart glance services
[openstack@folsom:~]$ sudo service glance-registry restart
[openstack@folsom:~]$ sudo service glance-api restart

Verify glance services successfully restarted
[openstack@folsom:~]$ pgrep -l glance

Initialize the database schema. Ignore the deprecation warning.
[openstack@folsom:~]$ sudo -u glance glance-manage db_sync

Download some images
[openstack@folsom:~]$ mkdir ~/images
[openstack@folsom:~]$ wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img -O ~/images/cirros-0.3.0-x86_64-disk.img

Register a qcow2 image
[openstack@folsom:~]$ IMAGE_ID_1=`glance image-create --name "cirros-qcow2" --disk-format qcow2 --container-format bare --is-public True --file ~/images/cirros-0.3.0-x86_64-disk.img | awk '/ id / { print $4 }'`

Verify the images exist in glance
[openstack@folsom:~]$ glance image-list

Examine details of the image
[openstack@folsom:~]$ glance image-show $IMAGE_ID_1

4. Install the network service - Quantum

Install dependencies
[openstack@folsom:~]$ sudo apt-get install -y openvswitch-switch

Install the network service
[openstack@folsom:~]$ sudo apt-get install -y quantum-server quantum-plugin-openvswitch

Install the network service agents
[openstack@folsom:~]$ sudo apt-get install -y quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent

Create a database for quantum
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE quantum;"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'notquantum';"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'notquantum';"

Configure the quantum OVS plugin (on a single-node setup the GRE tunnel endpoint local_ip is simply the host's own IP)
[openstack@folsom:~]$ sudo sed -i "s|sql_connection = sqlite:////var/lib/quantum/ovs.sqlite|sql_connection = mysql://quantum:notquantum@$MY_IP/quantum|g" /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@folsom:~]$ sudo sed -i 's/# Default: enable_tunneling = False/enable_tunneling = True/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@folsom:~]$ sudo sed -i 's/# Example: tenant_network_type = gre/tenant_network_type = gre/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@folsom:~]$ sudo sed -i 's/# Example: tunnel_id_ranges = 1:1000/tunnel_id_ranges = 1:1000/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[openstack@folsom:~]$ sudo sed -i "s/# Default: local_ip = 10.0.0.3/local_ip = $MY_IP/g" /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

Create quantum service user in the services tenant
[openstack@folsom:~]$ QUANTUM_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name quantum --pass notquantum | awk '/ id / { print $4 }'`

Grant admin role to quantum service user
[openstack@folsom:~]$ keystone user-role-add --user-id $QUANTUM_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID

List the new user and role assignment
[openstack@folsom:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $QUANTUM_USER_ID

Configure the quantum service to use keystone
[openstack@folsom:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/quantum/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/quantum/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_USER%/quantum/g' /etc/quantum/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notquantum/g' /etc/quantum/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/# auth_strategy = keystone/auth_strategy = keystone/g' /etc/quantum/quantum.conf

Configure the L3 agent to use keystone
[openstack@folsom:~]$ sudo sed -i "s|auth_url = http://localhost:35357/v2.0|auth_url = http://$MY_IP:35357/v2.0|g" /etc/quantum/l3_agent.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/quantum/l3_agent.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_USER%/quantum/g' /etc/quantum/l3_agent.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notquantum/g' /etc/quantum/l3_agent.ini

Start Open vSwitch
[openstack@folsom:~]$ sudo service openvswitch-switch restart

Create the integration and external bridges
[openstack@folsom:~]$ sudo ovs-vsctl add-br br-int
[openstack@folsom:~]$ sudo ovs-vsctl add-br br-ex

Restart the quantum services
[openstack@folsom:~]$ sudo service quantum-server restart
[openstack@folsom:~]$ sudo service quantum-plugin-openvswitch-agent restart
[openstack@folsom:~]$ sudo service quantum-dhcp-agent restart
[openstack@folsom:~]$ sudo service quantum-l3-agent restart

Create a network and subnet
[openstack@folsom:~]$ PRIVATE_NET_ID=`quantum net-create private | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ PRIVATE_SUBNET1_ID=`quantum subnet-create --name private-subnet1 $PRIVATE_NET_ID 10.0.0.0/29 | awk '/ id / { print $4 }'`

List network and subnet
[openstack@folsom:~]$ quantum net-list
[openstack@folsom:~]$ quantum subnet-list

Examine details of network and subnet
[openstack@folsom:~]$ quantum net-show $PRIVATE_NET_ID
[openstack@folsom:~]$ quantum subnet-show $PRIVATE_SUBNET1_ID

To add public connectivity to your VMs, perform the following:

Bring up eth1
[openstack@folsom:~]$ sudo ip link set dev eth1 up
Attach eth1 to br-ex
[openstack@folsom:~]$ sudo ovs-vsctl add-port br-ex eth1
[openstack@folsom:~]$ sudo ovs-vsctl show

As the admin user, create a provider-owned network and subnet; first set MY_PUBLIC_SUBNET_CIDR to your public CIDR and verify it with the echo
[openstack@folsom:~]$ source ~/credentials/admin
[openstack@folsom:~]$ echo $MY_PUBLIC_SUBNET_CIDR
[openstack@folsom:~]$ PUBLIC_NET_ID=`quantum net-create public --router:external=True | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ PUBLIC_SUBNET_ID=`quantum subnet-create --name public-subnet $PUBLIC_NET_ID $MY_PUBLIC_SUBNET_CIDR -- --enable_dhcp=False | awk '/ id / { print $4 }'`

Switch back to the 'user' credentials
[openstack@folsom:~]$ source ~/credentials/user
Create a router, attach it to the private subnet, and connect it to the public network
[openstack@folsom:~]$ ROUTER_ID=`quantum router-create router1 | awk '/ id / { print $4 }'`
[openstack@folsom:~]$ quantum router-interface-add $ROUTER_ID $PRIVATE_SUBNET1_ID
[openstack@folsom:~]$ quantum router-gateway-set $ROUTER_ID $PUBLIC_NET_ID

Examine details of the router
[openstack@folsom:~]$ quantum router-show $ROUTER_ID

Get the instance ID for the VM you want to expose (e.g. MyFirstInstance)
[openstack@folsom:~]$ nova show MyFirstInstance
[openstack@folsom:~]$ INSTANCE_ID=<the instance id from the output above>

Find the port id for instance
[openstack@folsom:~]$ INSTANCE_PORT_ID=`quantum port-list -f csv -c id -- --device_id=$INSTANCE_ID | awk 'END{print};{gsub(/[\"\r]/,"")}'`
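That awk incantation deserves a word: `quantum port-list -f csv` emits quoted CSV with CRLF line endings, the `gsub` strips the quotes and carriage returns from each line, and `END{print}` emits only the last line, leaving the bare port UUID. With canned input (the `3fd5-port` value is made up):

```shell
# Simulate quantum's CSV output (quoted, CRLF-terminated) and reduce
# it to the bare port id the way the port-list one-liner above does.
printf '"id"\r\n"3fd5-port"\r\n' | awk 'END{print};{gsub(/["\r]/,"")}'
```

This prints `3fd5-port`: the main rule cleans every line as it streams past, and the END block prints the last cleaned record.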

Create a floating IP and attach it to instance
[openstack@folsom:~]$ quantum floatingip-create --port_id=$INSTANCE_PORT_ID $PUBLIC_NET_ID

5. Install the compute service - Nova

[openstack@folsom:~]$ sudo apt-get install -y nova-api nova-scheduler nova-compute nova-cert nova-consoleauth genisoimage

Create nova service user in the services tenant
[openstack@folsom:~]$ NOVA_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name nova --pass notnova | awk '/ id / { print $4 }'`

Grant admin role to nova service user
[openstack@folsom:~]$ keystone user-role-add --user-id $NOVA_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID

List the new user and role assignment
[openstack@folsom:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $NOVA_USER_ID

Create a database for nova
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE nova;"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'notnova';"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'notnova';"

Configure nova
[openstack@folsom:~]$ cat <<EOF | sudo tee -a /etc/nova/nova.conf
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://$MY_IP:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=Services
quantum_admin_username=quantum
quantum_admin_password=notquantum
quantum_admin_auth_url=http://$MY_IP:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
sql_connection=mysql://nova:notnova@$MY_IP/nova
auth_strategy=keystone
my_ip=$MY_IP
force_config_drive=True
EOF

Disable verbose logging
[openstack@folsom:~]$ sudo sed -i 's/verbose=True/verbose=False/g' /etc/nova/nova.conf

Configure nova to use keystone
[openstack@folsom:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/nova/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_TENANT_NAME%/Services/g' /etc/nova/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_USER%/nova/g' /etc/nova/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/%SERVICE_PASSWORD%/notnova/g' /etc/nova/api-paste.ini

Initialize the nova database
[openstack@folsom:~]$ sudo -u nova nova-manage db sync

Restart nova services
[openstack@folsom:~]$ sudo service nova-api restart
[openstack@folsom:~]$ sudo service nova-scheduler restart
[openstack@folsom:~]$ sudo service nova-compute restart
[openstack@folsom:~]$ sudo service nova-cert restart
[openstack@folsom:~]$ sudo service nova-consoleauth restart

Verify nova services successfully restarted
[openstack@folsom:~]$ pgrep -l nova

Verify nova services are functioning
[openstack@folsom:~]$ sudo nova-manage service list

List images
[openstack@folsom:~]$ nova image-list

List flavors
[openstack@folsom:~]$ nova flavor-list

Boot an instance using flavor and image names (if names are unique)
[openstack@folsom:~]$ nova boot --image cirros-qcow2 --flavor m1.tiny MyFirstInstance

Boot an instance using flavor and image IDs
[openstack@folsom:~]$ nova boot --image $IMAGE_ID_1 --flavor 1 MySecondInstance

List instances, notice status of instance
[openstack@folsom:~]$ nova list

Show details of instance
[openstack@folsom:~]$ nova show MyFirstInstance

View console log of instance
[openstack@folsom:~]$ nova console-log MyFirstInstance

Get the network namespace (e.g. qdhcp-5ab46e23-118a-4cad-9ca8-51d56a5b6b8c)
[openstack@folsom:~]$ sudo ip netns
[openstack@folsom:~]$ NETNS_ID=qdhcp-$PRIVATE_NET_ID

Ping the first instance once its status is ACTIVE
[openstack@folsom:~]$ sudo ip netns exec $NETNS_ID ping -c 3 10.0.0.3

Log into first instance
[openstack@folsom:~]$ sudo ip netns exec $NETNS_ID ssh cirros@10.0.0.3

If you get a 'REMOTE HOST IDENTIFICATION HAS CHANGED' warning from the previous command, remove the stale host key
[openstack@folsom:~]$ sudo ip netns exec $NETNS_ID ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.3

Ping second instance from first instance
[openstack@host1:~]$ ping -c 3 10.0.0.4

Log into second instance from first instance
[openstack@host1:~]$ ssh cirros@10.0.0.4

Log out of second instance
[openstack@host2:~]$ exit

Log out of first instance
[openstack@host1:~]$ exit

Use virsh to talk directly to libvirt
[openstack@folsom:~]$ sudo virsh list --all

Delete instances
[openstack@folsom:~]$ nova delete MyFirstInstance
[openstack@folsom:~]$ nova delete MySecondInstance

List instances, notice status of instance
[openstack@folsom:~]$ nova list

To start an LXC container do the following (note that `sudo echo ... >> file` would fail, since the redirection runs as the unprivileged user; use tee instead):
[openstack@folsom:~]$ sudo apt-get install nova-compute-lxc lxctl
[openstack@folsom:~]$ echo "compute_driver=libvirt.LibvirtDriver" | sudo tee -a /etc/nova/nova.conf
[openstack@folsom:~]$ echo "libvirt_type=lxc" | sudo tee -a /etc/nova/nova.conf
[openstack@folsom:~]$ sudo cat /etc/nova/nova-compute.conf
[DEFAULT]
libvirt_type=lxc

You need to use a raw image:
[openstack@folsom:~]$ wget http://uec-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64.tar.gz -O images/ubuntu-12.04-server-cloudimg-amd64.tar.gz
[openstack@folsom:~]$ cd images; tar zxfv ubuntu-12.04-server-cloudimg-amd64.tar.gz; cd ..
[openstack@folsom:~]$ glance image-create --name "UbuntuLXC" --disk-format raw --container-format bare --is-public True --file images/precise-server-cloudimg-amd64.img
[openstack@folsom:~]$ glance image-update UbuntuLXC --property hypervisor_type=lxc
Now you can start the LXC container with nova:
[openstack@folsom:~]$ nova boot --image UbuntuLXC --flavor m1.tiny LXC

The instance files and rootfs will be located in /var/lib/nova/instances, and logs go to /var/log/nova/nova-compute.log.
VNC does not work with LXC, but the console and SSH do.

6. Install the dashboard - Horizon

[openstack@folsom:~]$ sudo apt-get install -y memcached novnc
[openstack@folsom:~]$ sudo apt-get install -y --no-install-recommends openstack-dashboard nova-novncproxy

Configure nova for VNC
[openstack@folsom:~]$ ( cat | sudo tee -a /etc/nova/nova.conf ) <<EOF
novncproxy_base_url=http://$MY_IP:6080/vnc_auto.html
vncserver_proxyclient_address=$MY_IP
vncserver_listen=0.0.0.0
EOF

Set default role
[openstack@folsom:~]$ sudo sed -i 's/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"/g' /etc/openstack-dashboard/local_settings.py

Restart the nova services
[openstack@folsom:~]$ sudo service nova-api restart
[openstack@folsom:~]$ sudo service nova-scheduler restart
[openstack@folsom:~]$ sudo service nova-compute restart
[openstack@folsom:~]$ sudo service nova-cert restart
[openstack@folsom:~]$ sudo service nova-consoleauth restart
[openstack@folsom:~]$ sudo service nova-novncproxy restart
[openstack@folsom:~]$ sudo service apache2 restart

Point your browser to http://$MY_IP/horizon.
The credentials we created earlier are myadmin/mypassword.

7. Install the volume service - Cinder

[openstack@folsom:~]$ sudo apt-get install -y cinder-api cinder-scheduler cinder-volume

Create cinder service user in the services tenant
[openstack@folsom:~]$ CINDER_USER_ID=`keystone user-create --tenant-id $SERVICE_TENANT_ID --name cinder --pass notcinder | awk '/ id / { print $4 }'`

Grant admin role to cinder service user
[openstack@folsom:~]$ keystone user-role-add --user-id $CINDER_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ADMIN_ROLE_ID

List the new user and role assignment
[openstack@folsom:~]$ keystone user-list --tenant-id $SERVICE_TENANT_ID
[openstack@folsom:~]$ keystone user-role-list --tenant-id $SERVICE_TENANT_ID --user-id $CINDER_USER_ID

Create a database for cinder
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "CREATE DATABASE cinder;"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'notcinder';"
[openstack@folsom:~]$ mysql -u root -pnotmysql -e "GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'notcinder';"

Configure cinder
[openstack@folsom:~]$ ( cat | sudo tee -a /etc/cinder/cinder.conf ) <<EOF
sql_connection = mysql://cinder:notcinder@$MY_IP/cinder
my_ip = $MY_IP
EOF

Configure cinder-api to use keystone
[openstack@folsom:~]$ sudo sed -i "s/service_host = 127.0.0.1/service_host = $MY_IP/g" /etc/cinder/api-paste.ini
[openstack@folsom:~]$ sudo sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /etc/cinder/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/admin_tenant_name = %SERVICE_TENANT_NAME%/admin_tenant_name = Services/g' /etc/cinder/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/admin_user = %SERVICE_USER%/admin_user = cinder/g' /etc/cinder/api-paste.ini
[openstack@folsom:~]$ sudo sed -i 's/admin_password = %SERVICE_PASSWORD%/admin_password = notcinder/g' /etc/cinder/api-paste.ini
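If you want to see what these substitutions do before touching the live file, run them against a throwaway copy. The [filter:authtoken] fragment below is a stand-in for the real /etc/cinder/api-paste.ini, and the IP is a placeholder. (The unescaped dots in 127.0.0.1 are technically regex wildcards, but they still match the literal address here.)

```shell
# Stand-in for /etc/cinder/api-paste.ini so the sed edits can be inspected.
MY_IP=192.168.206.130
cat > /tmp/api-paste.demo.ini <<'EOF'
[filter:authtoken]
service_host = 127.0.0.1
auth_host = 127.0.0.1
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
EOF

sed -i "s/service_host = 127.0.0.1/service_host = $MY_IP/g" /tmp/api-paste.demo.ini
sed -i "s/auth_host = 127.0.0.1/auth_host = $MY_IP/g" /tmp/api-paste.demo.ini
sed -i 's/admin_tenant_name = %SERVICE_TENANT_NAME%/admin_tenant_name = Services/g' /tmp/api-paste.demo.ini
sed -i 's/admin_user = %SERVICE_USER%/admin_user = cinder/g' /tmp/api-paste.demo.ini
sed -i 's/admin_password = %SERVICE_PASSWORD%/admin_password = notcinder/g' /tmp/api-paste.demo.ini
cat /tmp/api-paste.demo.ini
```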

Initialize the database schema
[openstack@folsom:~]$ sudo -u cinder cinder-manage db sync

Configure nova to use cinder
[openstack@folsom:~]$ ( cat | sudo tee -a /etc/nova/nova.conf ) <<EOF
volume_manager=cinder.volume.manager.VolumeManager
volume_api_class=nova.volume.cinder.API
enabled_apis=osapi_compute,metadata
EOF

Restart the nova services so nova-api picks up the change and the nova-volume API (osapi_volume) is disabled
[openstack@folsom:~]$ sudo service nova-api restart
[openstack@folsom:~]$ sudo service nova-scheduler restart
[openstack@folsom:~]$ sudo service nova-compute restart
[openstack@folsom:~]$ sudo service nova-cert restart
[openstack@folsom:~]$ sudo service nova-consoleauth restart
[openstack@folsom:~]$ sudo service nova-novncproxy restart

Configure tgt
[openstack@folsom:~]$ ( cat | sudo tee -a /etc/tgt/targets.conf ) <<EOF
default-driver iscsi
EOF

Restart tgt and open-iscsi
[openstack@folsom:~]$ sudo service tgt restart
[openstack@folsom:~]$ sudo service open-iscsi restart

Create the volume group
[openstack@folsom:~]$ sudo pvcreate /dev/sda4
[openstack@folsom:~]$ sudo vgcreate cinder-volumes /dev/sda4

Verify the volume group
[openstack@folsom:~]$ sudo vgdisplay

Restart the volume services
[openstack@folsom:~]$ sudo service cinder-volume restart
[openstack@folsom:~]$ sudo service cinder-scheduler restart
[openstack@folsom:~]$ sudo service cinder-api restart

Create a new volume
[openstack@folsom:~]$ cinder create 1 --display-name MyFirstVolume

Boot an instance to attach volume to
[openstack@folsom:~]$ nova boot --image cirros-qcow2 --flavor m1.tiny MyVolumeInstance

List instances, notice status of instance
[openstack@folsom:~]$ nova list

List volumes, notice status of volume
[openstack@folsom:~]$ cinder list
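The attach in the next step only succeeds once the volume's Status is "available", so in scripts it's handy to poll for that instead of eyeballing the table. In the sketch below, cinder_list is a stub that fakes the CLI's table output (hypothetical, with a made-up ID) so the loop can be tried without a running cluster; on a live node you'd replace it with the real `cinder list`.

```shell
# Stub standing in for the real 'cinder list' command (hypothetical output).
cinder_list() {
  cat <<'EOF'
+--------------------------------------+-----------+---------------+------+
|                  ID                  |   Status  |  Display Name | Size |
+--------------------------------------+-----------+---------------+------+
| 5a3b0f2e-1111-2222-3333-444455556666 | available | MyFirstVolume |  1   |
+--------------------------------------+-----------+---------------+------+
EOF
}

# Poll until the named volume's row reports 'available', or give up.
wait_for_volume() {
  name=$1
  tries=0
  while [ "$tries" -lt 30 ]; do
    if cinder_list | grep "$name" | grep -q 'available'; then
      echo "volume $name is available"
      return 0
    fi
    tries=$((tries + 1))
    sleep 1
  done
  echo "timed out waiting for $name" >&2
  return 1
}

wait_for_volume MyFirstVolume
```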

Attach the volume to the instance once the instance is ACTIVE and the volume is available
[openstack@folsom:~]$ nova volume-attach <instance-id> <volume-id> /dev/vdc

Log into first instance
[openstack@folsom:~]$ sudo ip netns exec $NETNS_ID ssh cirros@10.0.0.3

If you get a 'REMOTE HOST IDENTIFICATION HAS CHANGED' warning from previous command
[openstack@folsom:~]$ sudo ip netns exec $NETNS_ID ssh-keygen -f "/root/.ssh/known_hosts" -R 10.0.0.3

Make filesystem on volume
[openstack@folsom:~]$ sudo mkfs.ext3 /dev/vdc

Create a mountpoint
[openstack@folsom:~]$ sudo mkdir /extraspace

Mount the volume at the mountpoint
[openstack@folsom:~]$ sudo mount /dev/vdc /extraspace

Create a file on the volume
[openstack@folsom:~]$ sudo touch /extraspace/helloworld.txt
[openstack@folsom:~]$ sudo ls /extraspace

Unmount the volume
[openstack@folsom:~]$ sudo umount /extraspace

Log out of instance
[openstack@folsom:~]$ exit

Detach volume from instance
[openstack@folsom:~]$ nova volume-detach <instance-id> <volume-id>

List volumes, notice status of volume
[openstack@folsom:~]$ cinder list

Delete instance
[openstack@folsom:~]$ nova delete MyVolumeInstance


Resources:
[1] http://www.rackspace.com/cloud/private/training/
[2] http://docs.openstack.org/folsom/