OpenStack Grizzly – Creating a cinder ONLY (block storage) node – standalone
Binary Royale is an IT consultancy company based in the East Midlands. We spend all of our time with clients, helping them to make good decisions about their IT. When we come across issues that would be useful to others we “try” to post the answers on our website – www.binaryroyale.com. We cover Derby and Derbyshire, Nottingham and Nottinghamshire mainly, but do also have clients further afield. Please browse our website to see what we offer – thanks, and enjoy the blog.
I’ve been doing a fair amount of work with OpenStack recently. One of the first hurdles I encountered was trying to create a BLOCK (Cinder) Storage Node that wasn’t just the cinder modules installed alongside all of the other OpenStack services, which is how every piece of documentation out there at present (April 2013) seems to be written. What I needed was a BLOCK node that is totally separate and connected with iSCSI. This is what I needed to achieve.
If you wish to have PERSISTENT storage you’ll need a BLOCK storage (Cinder) node.
I’ve been following the instructions from here
As you can see from this set of instructions, the guide creates 3 different nodes: COMPUTE, CONTROLLER and NETWORK – see the diagram below
What it DOESN’T guide you through is installing your BLOCK/CINDER node on a separate box. If you look through the creation of the CONTROLLER NODE, you’ll see that the CINDER instructions are actually in there. They are NOT shown in the diagram but they are in the guide – slightly confusing.
The aim is to take a separate server, with decent Ethernet connectivity and bags of storage, and configure it to be a BLOCK/CINDER Storage Node connected to your setup with iSCSI; just how most full-blown OpenStack setups, in my opinion, would be created. Everyone needs persistent storage, right? And it needs to be centralised and available to all your COMPUTE nodes, right?
- Start by following the instructions in the Grizzly Multi Node guide (Link Above) to create your CONTROLLER NODE – When you get to section 2.10 “Cinder” – Skip it and continue.
- Now build yourself a BLOCK node, using the instructions 2.1 and 2.2 – just to get a box online, with the right repositories and networked.
- You need 3 network interfaces for this node
- Eth0 for OpenStack Management – 10.10.10.x has been used in the setup above
- Eth1 for Public facing API – Internet connection – 192.168.100.x has been used above
- Eth2 for iSCSI – I’m using 10.10.99.x
Your /etc/network/interfaces may look like this, for example:

# Not internet connected (used for OpenStack management)
auto eth0
iface eth0 inet static
    address 10.10.10.99
    netmask 255.255.255.0

# The primary network interface
auto eth1
iface eth1 inet static
    address 192.168.100.99
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 126.96.36.199 188.8.131.52
    dns-search camelot.int

# Not internet connected (used for iSCSI)
auto eth2
iface eth2 inet static
    address 10.10.99.99
    netmask 255.255.255.0
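Once the interfaces file is in place (after an ifup or a reboot), it’s worth confirming each NIC really landed in the expected subnet before going any further. This is only a sketch using the example addresses above; the prefix24 and check_iface helpers are my own throwaways, not part of any guide.

```shell
# Hypothetical sanity check for the three NICs (names/subnets from the example).
prefix24() {                 # 10.10.99.99 -> 10.10.99 (drop the last octet)
  echo "${1%.*}"
}

check_iface() {              # usage: check_iface eth2 10.10.99
  addr=$(ip -4 -o addr show "$1" 2>/dev/null | awk '{print $4}' | cut -d/ -f1)
  if [ "$(prefix24 "$addr")" = "$2" ]; then
    echo "$1 OK ($addr)"
  else
    echo "$1 unexpected: got '$addr', wanted $2.x" >&2
  fi
}

check_iface eth0 10.10.10     # OpenStack management
check_iface eth1 192.168.100  # public-facing API
check_iface eth2 10.10.99     # iSCSI traffic
```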
- Follow the instructions to get the grizzly repositories in place and the dist-upgrades all done. Now your new block node is ready for the Cinder modules
- apt-get install -y cinder-volume iscsitarget iscsitarget-dkms
- Edit /etc/cinder/cinder.conf to make it look like so
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@10.10.10.51/cinder
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_helper=tgtadm
rabbit_host=10.10.10.51
rabbit_password=guest
rabbit_port=5672
rabbit_userid=guest
rabbit_virtual_host=/
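Two values in that file have to line up with the rest of the build: sql_connection must point at the controller (10.10.10.51 in this walkthrough) and volume_group must match the LVM volume group created further down. A hedged way to eyeball them from the shell (get_opt is a throwaway helper, not a cinder tool):

```shell
# Pull a single option out of cinder.conf (last occurrence wins).
CONF=/etc/cinder/cinder.conf

get_opt() {                  # get_opt <key> [file]
  sed -n "s/^[[:space:]]*$1[[:space:]]*=[[:space:]]*//p" "${2:-$CONF}" 2>/dev/null | tail -n1
}

echo "volume_group  : $(get_opt volume_group)"
echo "sql_connection: $(get_opt sql_connection)"
echo "rabbit_host   : $(get_opt rabbit_host)"
```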
- Now edit /etc/cinder/api-paste.ini – scroll to the bottom and replace the [filter:authtoken] section with the following
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 192.168.1.51
service_port = 5000
auth_host = 10.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = service_pass
signing_dir = /var/lib/cinder
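Before restarting anything, a quick check that this node can actually reach the keystone admin endpoint named in [filter:authtoken] can save a confusing auth failure later. Host and port below are the example values from this guide; auth_url is just a throwaway helper of mine:

```shell
# Build the keystone v2.0 admin URL from auth_host/auth_port and probe it.
auth_url() {                 # auth_url <host> <port>
  echo "http://$1:$2/v2.0"
}

URL=$(auth_url 10.10.10.51 35357)
echo "probing $URL"
curl -s --connect-timeout 3 -o /dev/null -w '%{http_code}\n' "$URL" \
  || echo "keystone not reachable from this node" >&2
```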
- Now restart all the cinder services on your new CINDER/BLOCK node
cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done
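A hedged follow-up to that loop: ask each init script for its status and flag anything that didn’t come back up (is_running is my own helper; the log directory is the usual Ubuntu default):

```shell
# Report which cinder services are actually running after the restart.
is_running() {               # is_running "<status output>" -> 0 if it says running
  echo "$1" | grep -q running
}

cd /etc/init.d 2>/dev/null || true
for i in cinder-*; do
  [ -e "$i" ] || continue    # glob didn't match anything, skip
  out=$(service "$i" status 2>&1)
  if is_running "$out"; then
    echo "$i: running"
  else
    echo "$i: NOT running -- check /var/log/cinder/" >&2
  fi
done
```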
- Now synchronize the cinder settings with the MySQL db that lives over on your controller node
cinder-manage db sync
- Notice how you get an error about it NOT being able to connect to the MySQL database? No problem – you just need to install the Python MySQL client
apt-get install -y python-mysqldb
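With the client installed, you can prove the controller’s MySQL grant works from this node before re-running the sync. parse_sql_connection is a hypothetical helper that just splits the URI from cinder.conf; the mysql invocation assumes the command-line client is available:

```shell
# Split mysql://user:pass@host/db into its four parts.
parse_sql_connection() {
  echo "$1" | sed -n 's|^mysql://\([^:]*\):\([^@]*\)@\([^/]*\)/\(.*\)$|\1 \2 \3 \4|p'
}

set -- $(parse_sql_connection "mysql://cinderUser:cinderPass@10.10.10.51/cinder")
echo "testing MySQL on $3 as $1 ..."
mysql -h "$3" -u "$1" -p"$2" "$4" -e 'SELECT 1;' 2>/dev/null \
  || echo "cannot connect -- check the grant on the controller" >&2
```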
- Now try running the db sync command again to register your settings and presence on the controller node
- OK. Now it’s time to provision the storage you have earmarked for this Storage node
- Bring the storage online
- Build it in a RAID array of your choosing, remembering that RAID 1 is fast while RAID 5 is slow on writes (parity overhead)
- Restart your node if you have to so that fdisk -l can see it
- On my test platform I’m using a 20GB Drive
- When I run fdisk -l it appears as – Disk /dev/sdb: 21.5 GB, 21474836480 bytes
- OK. So we need to format this new piece of storage now and give it to CINDER
Type in the following to create a single LVM partition on the new disk, then prepare it as an LVM physical volume:

fdisk /dev/sdb
   n   (new partition)
   p   (primary)
   1   (partition number 1 – accept the default start/end)
   t   (change partition type)
   8e  (Linux LVM)
   w   (write changes and exit)

pvcreate /dev/sdb1
- Now give this new partition to LVM to manage and name the volume group correctly
vgcreate cinder-volumes /dev/sdb1
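Finally, a hedged check that LVM now reports the volume group under the exact name cinder-volume expects – it must match the volume_group setting in cinder.conf. vg_exists is another throwaway helper of mine:

```shell
# Confirm the cinder-volumes VG exists before pointing cinder-volume at it.
vg_exists() {                # vg_exists <name> "<vgs output>"
  echo "$2" | grep -qw "$1"
}

if vg_exists cinder-volumes "$(vgs --noheadings -o vg_name 2>/dev/null)"; then
  echo "cinder-volumes VG is in place"
  vgdisplay cinder-volumes
else
  echo "cinder-volumes VG missing -- re-check pvcreate/vgcreate" >&2
fi
```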
Sorry to do this to you, but I’ve given up on OpenStack for now; well the rolling out of it manually that is. I’ve decided to use the FuelWeb ISO, available from Mirantis. This builds a cluster for you in some very easy steps, and seeing as OpenStack is pretty complicated, I’m going to take all the help I can.