py-cinder
OpenStack Block Storage Service
Contents
Set up Cinder on FreeBSD
Prerequisites
- a configured and working instance of security/py-keystone
- net/py-python-openstackclient, required for the setup and administration tasks
- a configured message queueing service, e.g. net/rabbitmq
- an NFS server running the NFSv4.1 protocol
Prepare the system for Cinder
The following commands must be run as the OpenStack admin user. It is assumed that the environment variables (OS_AUTH_URL, OS_PASSWORD, etc.) are already set; for brevity, the commands below are listed with only the required parameters.
Create the user, role and service
$ openstack user create --domain default --password-prompt cinder
$ openstack role add --project service --user cinder admin
$ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
$ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the endpoints
Important: Note the difference between the volumev2 and volumev3 URLs.
$ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
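The created endpoints can be checked afterwards, for example for the v3 service:

```shell
$ openstack endpoint list --service volumev3
```

All three interfaces (public, internal, admin) should be listed with the URLs entered above.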
Configure the cinder service
Cinder works in three different modes:
- as a controller node for separate Cinder storage nodes
- as a storage node
- as a combined controller/storage node
Controller node
Edit /usr/local/etc/cinder/cinder.conf:

[DEFAULT]
# ...
auth_strategy = keystone
my_ip = 1.2.3.4
transport_url = rabbit://openstack:RABBIT_PASS@controller

[keystone_authtoken]
# ...
auth_uri = http://keystonehost:5000
auth_url = http://keystonehost:5000
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS

[database]
# ...
# Please make sure that you use an absolute path, otherwise Cinder won't work properly.
connection = sqlite:////var/lib/cinder/cinder.db

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
Storage node
Edit /usr/local/etc/cinder/cinder.conf:

[DEFAULT]
# ...
auth_strategy = keystone
my_ip = 1.2.3.4
transport_url = rabbit://openstack:RABBIT_PASS@controller
enabled_backends = nfs
glance_api_servers = http://glanceserver:9292

[backend_defaults]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
# Path to the list of NFS shares created below
nfs_shares_config = /usr/local/etc/cinder/nfs_shares

[keystone_authtoken]
# ...
auth_uri = http://keystonehost:5000
auth_url = http://keystonehost:5000
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS

[database]
# ...
# Please make sure that you use an absolute path, otherwise Cinder won't work properly.
connection = sqlite:////var/lib/cinder/cinder.db

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
Create a file nfs_shares in /usr/local/etc/cinder and enter the shares of the NFS server(s), one per line, in the following form:
HOSTNAME:/EXPORTNAME
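For example, with a hypothetical NFS server nfs1.example.org exporting /export/cinder, the file would contain the single line:

nfs1.example.org:/export/cinder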
Enable the NFS client in /etc/rc.conf:
# sysrc nfs_client_enable="YES"
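Before starting Cinder it can be useful to verify that the storage node can actually mount the export over NFSv4.1. A quick manual check (HOSTNAME and EXPORTNAME are placeholders for your server and share):

```shell
# mount -t nfs -o nfsv4,minorversion=1 HOSTNAME:/EXPORTNAME /mnt
# df -h /mnt
# umount /mnt
```

If the mount fails, check the server's NFSv4.1 support before continuing.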
Important: The NFS server must support the NFSv4.1 protocol!
Populate the Storage service database
# su -m cinder -c "cinder-manage db sync"
Enable the services
Enable and start the Cinder services:
On the controller node:
- cinder-api
- cinder-scheduler
On the storage node:
- cinder-volume
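Assuming the port installs rc scripts named cinder_api, cinder_scheduler and cinder_volume (check /usr/local/etc/rc.d for the actual names), the services can be enabled and started like this, with the first two on the controller node and the last on the storage node:

```shell
# On the controller node:
# sysrc cinder_api_enable="YES"
# sysrc cinder_scheduler_enable="YES"
# service cinder_api start
# service cinder_scheduler start

# On the storage node:
# sysrc cinder_volume_enable="YES"
# service cinder_volume start
```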
Verify operation
Verify that the services are reachable:
$ openstack volume service list
If the storage node is configured for NFS, the following command can be used to create a volume of 1 GB:
$ cinder create --display_name nfsvolume 1
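The same can be done with the unified OpenStack client, which also lets you confirm that the volume reaches the "available" state:

```shell
$ openstack volume create --size 1 nfsvolume
$ openstack volume list
```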
References
https://docs.openstack.org/cinder/queens/install/index.html