OpenStack Configuration Reference

Abstract

This document is for system administrators who want to look up configuration options. It lists the configuration options available for each OpenStack project; the options and their descriptions are generated automatically from each project's code. It also includes sample configuration files.

OpenStack configuration overview

Conventions

The OpenStack documentation uses several typesetting conventions.

Notices

Notices take these forms:

Note

A comment with additional information that explains a part of the text.

Important

Something you must be aware of before proceeding.

Tip

An extra but helpful piece of practical advice.

Caution

Helpful information that prevents the user from making mistakes.

Warning

Critical information about the risk of data loss or security issues.

Command prompts
$ command

Any user, including the root user, can run commands that are prefixed with the $ prompt.

# command

The root user must run commands that are prefixed with the # prompt. You can also prefix these commands with the sudo command, if available, to run them.

Configuration file format

OpenStack uses the INI file format for configuration files. An INI file is a simple text file that specifies options as key=value pairs, grouped into sections. The DEFAULT section contains most of the configuration options. Lines starting with a hash sign (#) are comment lines. For example:

[DEFAULT]
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
debug = true

[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

Options can have values of different types. The comments in the sample configuration files always state the type, and the tables list the option type as the first item of the description, for example (BoolOpt) Toggle.... OpenStack uses the following types:

boolean value (BoolOpt)

Enables or disables an option. The allowed values are true and false.

# Enable the experimental use of database reconnect on
# connection lost (boolean value)
use_db_reconnect = false

floating point value (FloatOpt)

A floating point number like 0.25 or 1000.

# Sleep time in seconds for polling an ongoing async task
# (floating point value)
task_poll_interval = 0.5

integer value (IntOpt)

An integer number is a number without fractional components, like 0 or 42.

# The port which the OpenStack Compute service listens on.
# (integer value)
compute_port = 8774

IP address (IPOpt)

An IPv4 or IPv6 address.

# Address to bind the server. Useful when selecting a particular network
# interface. (ip address value)
bind_host = 0.0.0.0

key-value pairs (DictOpt)

A set of key-value pairs, also known as a dictionary. The pairs are separated by commas, and a colon separates each key from its value. Example: key1:value1,key2:value2.

# Parameter for l2_l3 workflow setup. (dict value)
l2_l3_setup_params = data_ip_address:192.168.200.99, \
   data_ip_mask:255.255.255.0,data_port:1,gateway:192.168.200.1,ha_port:2

list value (ListOpt)

Represents values of other types, separated by commas. As an example, the following sets allowed_rpc_exception_modules to a list containing the four elements oslo.messaging.exceptions, nova.exception, cinder.exception, and exceptions:

# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call. (list value)
allowed_rpc_exception_modules = oslo.messaging.exceptions,nova.exception,cinder.exception,exceptions

multi valued (MultiStrOpt)

A multi-valued option is a string value that can be given more than once; all values are used.

# Driver or drivers to handle sending notifications. (multi valued)
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier

port value (PortOpt)

A TCP/IP port number. Ports can range from 1 to 65535.

# Port to which the UDP socket is bound. (port value)
# Minimum value: 1
# Maximum value: 65535
udp_port = 4952

string value (StrOpt)

Strings can be optionally enclosed with single or double quotes.

# The format for an instance that is passed with the log message.
# (string value)
instance_format = "[instance: %(uuid)s] "

Sections

Configuration options are grouped by section. Most configuration files support at least the following sections:

[DEFAULT]
Contains most configuration options. If the documentation for a configuration option does not specify its section, assume that it appears in this section.
[database]
Configuration options for the database that stores the state of the OpenStack service.
Substitution

The configuration file supports variable substitution. After you set a configuration option, it can be referenced in later configuration values when you precede it with a $, like $OPTION.

The following example uses the values of rabbit_host and rabbit_port to define the value of the rabbit_hosts option, in this case as controller:5672.

# The RabbitMQ broker address where a single node is used.
# (string value)
rabbit_host = controller

# The RabbitMQ broker port where a single node is used.
# (integer value)
rabbit_port = 5672

# RabbitMQ HA cluster host:port pairs. (list value)
rabbit_hosts = $rabbit_host:$rabbit_port

To avoid substitution, use $$; it is replaced by a single $. For example, if your LDAP DNS password is $xkj432, specify it as follows:

ldap_dns_password = $$xkj432

The code uses the Python string.Template.safe_substitute() method to implement variable substitution. For more details on how variable substitution is resolved, see http://docs.python.org/2/library/string.html#template-strings and PEP 292.

Whitespace

To include whitespace in a configuration value, use a quoted string. For example:

ldap_dns_password='a password with spaces'
Define an alternate location for a config file

Most services and the *-manage command-line clients load the configuration file. To define an alternate location for the configuration file, pass the --config-file CONFIG_FILE parameter when you start a service or call a *-manage command.
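
For example, to point a service or a *-manage client at a configuration file outside the default location (the service name and path below are illustrative):

# nova-api --config-file /etc/nova/custom/nova.conf
# nova-manage --config-file /etc/nova/custom/nova.conf db sync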

Changing config at runtime

OpenStack Newton introduces the ability to reload (or ‘mutate’) certain configuration options at runtime without a service restart. The following projects support this:

  • Compute (nova)

Check individual options to discover if they are mutable.

In practice

A common use case is to enable debug logging after a failure. Use the mutable config option called ‘debug’ to do this (provided that log_config_append has not been set). An admin user may perform the following steps, as illustrated in the example after this list:

  1. Log on to the compute node.
  2. Edit the config file (for example, nova.conf) and change ‘debug’ to True.
  3. Send a SIGHUP signal to the nova process (for example, pkill -HUP nova).
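
For example, the edited nova.conf would contain the following (only the relevant option is shown; the file path and process name are typical defaults and may differ in your deployment):

[DEFAULT]
debug = true

Then signal the running process so it re-reads the file:

# pkill -HUP nova-compute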

A log message is written to confirm that the option has been changed. If you use a configuration management system such as Ansible, Chef, or Puppet, we recommend scripting these steps through it.

OpenStack is a collection of open source project components that enable setting up cloud services. Each component uses similar configuration techniques and a common framework for INI file options.

This guide pulls together multiple references and configuration options for the following OpenStack components:

  • Bare Metal service
  • Block Storage service
  • Compute service
  • Dashboard
  • Database service
  • Data Processing service
  • Identity service
  • Image service
  • Message service
  • Networking service
  • Object Storage service
  • Orchestration service
  • Shared File Systems service
  • Telemetry service

Also, OpenStack uses many shared services and libraries, such as database connections and RPC messaging, whose configuration options are described at Common configurations.

Common configurations

This chapter describes the common configurations for shared services and libraries.

Authentication and authorization

Only authenticated agents may perform API requests.

The preferred authentication system is the Identity service.

Identity service authentication

To authenticate, an agent issues an authentication request to an Identity service endpoint. In response to valid credentials, Identity service responds with an authentication token and a service catalog that contains a list of all services and endpoints available for the given token.

Multiple endpoints may be returned for each OpenStack service according to physical locations and performance/availability characteristics of different deployments.

Normally, Identity service middleware provides the X-Project-Id header based on the authentication token submitted by the service client.

For this to work, clients must specify a valid authentication token in the X-Auth-Token header for each request to each OpenStack service API. The API validates authentication tokens against Identity service before servicing each request.

No authentication

If authentication is not enabled, clients must provide the X-Project-Id header themselves.

Options

Configure the authentication and authorization strategy through these options:

Description of authentication configuration options
Configuration option = Default value Description
[DEFAULT]  
auth_strategy = keystone (String) This determines the strategy to use for authentication: keystone or noauth2. ‘noauth2’ is designed for testing only, as it does no actual credential checking. ‘noauth2’ provides administrative credentials only if ‘admin’ is specified as the username.
Description of authorization token configuration options
Configuration option = Default value Description
[keystone_authtoken]  
admin_password = None (String) Service user password.
admin_tenant_name = admin (String) Service tenant name.
admin_token = None (String) This option is deprecated and may be removed in a future release. Single shared secret with the Keystone configuration used for bootstrapping a Keystone installation, or otherwise bypassing the normal authentication process. This option should not be used, use admin_user and admin_password instead.
admin_user = None (String) Service username.
auth_admin_prefix = (String) Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
auth_host = 127.0.0.1 (String) Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
auth_port = 35357 (Integer) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https (String) Protocol of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
auth_uri = None (String) Complete “public” Identity API endpoint. This endpoint should not be an “admin” endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint.
auth_version = None (String) API version of the admin Identity API endpoint.
cache = None (String) Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead.
cafile = None (String) A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs.
certfile = None (String) Required if identity server requires client certificate
check_revocations_for_cached = False (Boolean) If true, the revocation list will be checked for cached tokens. This requires that PKI tokens are configured on the identity server.
delay_auth_decision = False (Boolean) Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components.
enforce_token_bind = permissive (String) Used to control the use and type of token binding. Can be set to: “disabled” to not check token binding. “permissive” (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. “strict” like “permissive” but if the bind type is unknown the token will be rejected. “required” any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens.
hash_algorithms = md5 (List) Hash algorithms to use for hashing PKI tokens. This may be a single algorithm or multiple. The algorithms are those supported by Python standard hashlib.new(). The hashes will be tried in the order given, so put the preferred one first for performance. The result of the first hash will be stored in the cache. This will typically be set to multiple values only while migrating from a less secure algorithm to a more secure one. Once all the old tokens are expired this option should be set to a single value for better performance.
http_connect_timeout = None (Integer) Request timeout value for communicating with Identity API server.
http_request_max_retries = 3 (Integer) How many times are we trying to reconnect when communicating with Identity API Server.
identity_uri = None (String) Complete admin Identity API endpoint. This should specify the unversioned root endpoint e.g. https://localhost:35357/
include_service_catalog = True (Boolean) (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) Required if identity server requires client certificate
memcache_pool_conn_get_timeout = 10 (Integer) (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool.
memcache_pool_dead_retry = 300 (Integer) (Optional) Number of seconds memcached server is considered dead before it is tried again.
memcache_pool_maxsize = 10 (Integer) (Optional) Maximum total number of open connections to every memcached server.
memcache_pool_socket_timeout = 3 (Integer) (Optional) Socket timeout in seconds for communicating with a memcached server.
memcache_pool_unused_timeout = 60 (Integer) (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed.
memcache_secret_key = None (String) (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation.
memcache_security_strategy = None (String) (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
memcache_use_advanced_pool = False (Boolean) (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x.
memcached_servers = None (List) Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process.
region_name = None (String) The region in which the identity server can be found.
revocation_cache_time = 10 (Integer) Determines the frequency at which the list of revoked tokens is retrieved from the Identity service (in seconds). A high number of revocation events combined with a low cache duration may significantly reduce performance. Only valid for PKI tokens.
signing_dir = None (String) Directory used to cache files related to PKI tokens.
token_cache_time = 300 (Integer) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
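
As an illustration, a service might configure Identity service authentication with a [keystone_authtoken] section similar to the following sketch (host names and credentials are placeholders):

[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
identity_uri = http://controller:35357
admin_user = nova
admin_password = NOVA_PASS
admin_tenant_name = service
memcached_servers = controller:11211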

Cache configurations

The cache configuration options allow the deployer to control how an application uses this library.

These options are supported by:

  • Compute service
  • Identity service
  • Message service
  • Networking service
  • Orchestration service

For a complete list of all available cache configuration options, see oslo.cache configuration options.

Database configurations

You can configure OpenStack services to use any SQLAlchemy-compatible database.

To ensure that the database schema is current, run the following command:

# SERVICE-manage db sync

To configure the connection string for the database, use the configuration option settings documented in the table Description of database configuration options.

Description of database configuration options
Configuration option = Default value Description
[DEFAULT]  
db_driver = SERVICE.db (String) DEPRECATED: The driver to use for database access
[database]  
backend = sqlalchemy (String) The back end to use for the database.
connection = None (String) The SQLAlchemy connection string to use to connect to the database.
connection_debug = 0 (Integer) Verbosity of SQL debugging information: 0=None, 100=Everything.
connection_trace = False (Boolean) Add Python stack traces to SQL as comment strings.
db_inc_retry_interval = True (Boolean) If True, increases the interval between retries of a database operation up to db_max_retry_interval.
db_max_retries = 20 (Integer) Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10 (Integer) If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
db_retry_interval = 1 (Integer) Seconds between retries of a database transaction.
idle_timeout = 3600 (Integer) Timeout before idle SQL connections are reaped.
max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy.
max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool.
max_retries = 10 (Integer) Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool.
mysql_sql_mode = TRADITIONAL (String) The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
pool_timeout = None (Integer) If set, use this value for pool_timeout with SQLAlchemy.
retry_interval = 10 (Integer) Interval between retries of opening a SQL connection.
slave_connection = None (String) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite (String) The file name to use with SQLite.
sqlite_synchronous = True (Boolean) If True, SQLite uses synchronous mode.
use_db_reconnect = False (Boolean) Enable the experimental use of database reconnect on connection lost.
use_tpool = False (Boolean) Enable the experimental use of thread pooling for all DB API calls
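
For example, a service's [database] section might look like the following sketch (database name, account, and password are placeholders):

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
max_pool_size = 10
max_retries = -1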

Logging configurations

You can configure where the service logs events, the level of logging, and log formats.

To customize logging for the service, use the configuration option settings documented in the table Description of common logging configuration options.

Description of common logging configuration options
Configuration option = Default value Description
[DEFAULT]  
debug = False (Boolean) If set to true, the logging level will be set to DEBUG instead of the default INFO level.
default_log_levels = amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, requests.packages.urllib3.util.retry=WARN, urllib3.util.retry=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN, taskflow=WARN, keystoneauth=WARN, oslo.cache=INFO, dogpile.core.dogpile=INFO (List) List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set.
fatal_deprecations = False (Boolean) Enables or disables fatal status of deprecations.
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal
instance_format = "[instance: %(uuid)s] " (String) The format for an instance that is passed with the log message.
instance_uuid_format = "[instance: %(uuid)s] " (String) The format for an instance UUID that is passed with the log message.
log_config_append = None (String) The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, logging_context_format_string).
log_date_format = %Y-%m-%d %H:%M:%S (String) Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
log_dir = None (String) (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set.
log_file = None (String) (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s (String) Format string to use for log messages with context.
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d (String) Additional data to append to log message when logging level for the message is DEBUG.
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s (String) Format string to use for log messages when context is undefined.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s (String) Prefix each line of exception output with this format.
logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s (String) Defines the format string for %(user_identity)s that is used in logging_context_format_string.
publish_errors = False (Boolean) Enables or disables publication of error events.
syslog_log_facility = LOG_USER (String) Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_stderr = True (Boolean) Log output to standard error. This option is ignored if log_config_append is set.
use_syslog = False (Boolean) Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.
verbose = True (Boolean) DEPRECATED: If set to false, the logging level will be set to WARNING instead of the default INFO level.
watch_log_file = False (Boolean) Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set.
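
For example, to send DEBUG-level output to a dedicated log directory and to syslog (values are illustrative):

[DEFAULT]
debug = true
log_dir = /var/log/nova
use_syslog = true
syslog_log_facility = LOG_LOCAL0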

Policy configurations

The policy configuration options allow the deployer to control where the policy files are located and the default rule to apply when a requested rule is not found.

Description of policy configuration options
Configuration option = Default value Description
[oslo_policy]  
policy_default_rule = default (String) Default rule. Enforced when a requested rule is not found.
policy_dirs = ['policy.d'] (Multi-valued) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
policy_file = policy.json (String) The JSON file that defines policies.
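
For example, to point a service at a custom policy file (the file name is illustrative):

[oslo_policy]
policy_file = custom-policy.json
policy_default_rule = default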

RPC messaging configurations

OpenStack services use Advanced Message Queuing Protocol (AMQP), an open standard for messaging middleware. This messaging middleware enables the OpenStack services that run on multiple servers to talk to each other. OpenStack Oslo RPC supports two implementations of AMQP: RabbitMQ and ZeroMQ.

Configure messaging

Use these options to configure the RPC messaging driver.

Description of AMQP configuration options
Configuration option = Default value Description
[DEFAULT]  
control_exchange = openstack (String) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
default_publisher_id = None (String) Default publisher_id for outgoing notifications
transport_url = None (String) A URL representing the messaging driver to use and its full configuration. If not set, we fall back to the rpc_backend option and driver specific configuration.
Description of RPC configuration options
Configuration option = Default value Description
[DEFAULT]  
notification_format = both (String) Specifies which notification format shall be used by nova.
rpc_backend = rabbit (String) The messaging driver to use, defaults to rabbit. Other drivers include amqp and zmq.
rpc_cast_timeout = -1 (Integer) Seconds to wait before a cast expires (TTL). The default value of -1 specifies an infinite linger period. The value of 0 specifies no linger period. Pending messages shall be discarded immediately when the socket is closed. Only supported by impl_zmq.
rpc_conn_pool_size = 30 (Integer) Size of RPC connection pool.
rpc_poll_timeout = 1 (Integer) The default number of seconds that poll should wait. Poll raises timeout exception when timeout expired.
rpc_response_timeout = 60 (Integer) Seconds to wait for a response from a call.
[cells]  
rpc_driver_queue_base = cells.intercell (String) RPC driver queue base When sending a message to another cell by JSON-ifying the message and making an RPC cast to ‘process_message’, a base queue is used. This option defines the base queue name to be used when communicating between cells. Various topics by message type will be appended to this. Possible values: * The base queue name to be used when communicating between cells. Services which consume this: * nova-cells Related options: * None
[oslo_concurrency]  
disable_process_locking = False (Boolean) Enables or disables inter-process locks.
lock_path = None (String) Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set.
[oslo_messaging]  
event_stream_topic = neutron_lbaas_event (String) topic name for receiving events from a queue
[oslo_messaging_amqp]  
allow_insecure_clients = False (Boolean) Accept clients using either SSL or plain TCP
broadcast_prefix = broadcast (String) address prefix used when broadcasting to all servers
container_name = None (String) Name for the AMQP container
group_request_prefix = unicast (String) address prefix when sending to any server in group
idle_timeout = 0 (Integer) Timeout for inactive connections (in seconds)
password = (String) Password for message broker authentication
sasl_config_dir = (String) Path to directory that contains the SASL configuration
sasl_config_name = (String) Name of configuration file (without .conf suffix)
sasl_mechanisms = (String) Space separated list of acceptable SASL mechanisms
server_request_prefix = exclusive (String) address prefix used when sending to a specific server
ssl_ca_file = (String) CA certificate PEM file to verify server certificate
ssl_cert_file = (String) Identifying certificate PEM file to present to clients
ssl_key_file = (String) Private key PEM file used to sign cert_file certificate
ssl_key_password = None (String) Password for decrypting ssl_key_file (if encrypted)
trace = False (Boolean) Debug: dump AMQP frames to stdout
username = (String) User name for message broker authentication
[oslo_messaging_notifications]  
driver = [] (Multi-valued) The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop
topics = notifications (List) AMQP topic used for OpenStack notifications.
transport_url = None (String) A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC.
[upgrade_levels]  
baseapi = None (String) Set a version cap for messages sent to the base api in any service
Configure RabbitMQ

OpenStack Oslo RPC uses RabbitMQ by default. The rpc_backend option is not required as long as RabbitMQ is the default messaging system. However, if it is included in the configuration, you must set it to rabbit:

rpc_backend = rabbit

You can configure messaging communication for different installation scenarios, tune retries for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the notification_driver option to nova.openstack.common.notifier.rpc_notifier. The default interval for sending usage data is sixty seconds, plus a random number of seconds between zero and sixty.

Use the options described in the table below to configure the RabbitMQ message system.

Description of RabbitMQ configuration options
Configuration option = Default value Description
[oslo_messaging_rabbit]  
amqp_auto_delete = False (Boolean) Auto-delete queues in AMQP.
amqp_durable_queues = False (Boolean) Use durable queues in AMQP.
channel_max = None (Integer) Maximum number of channels to allow
default_notification_exchange = ${control_exchange}_notification (String) Exchange name for sending notifications
default_notification_retry_attempts = -1 (Integer) Reconnecting retry count in case of connectivity problem during sending notification, -1 means infinite retry.
default_rpc_exchange = ${control_exchange}_rpc (String) Exchange name for sending RPC messages
default_rpc_retry_attempts = -1 (Integer) Reconnecting retry count in case of connectivity problem during sending RPC message, -1 means infinite retry. If the actual retry attempts value is not 0, the RPC request could be processed more than one time
fake_rabbit = False (Boolean) Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
frame_max = None (Integer) The maximum byte size for an AMQP frame
heartbeat_interval = 1 (Integer) How often to send heartbeats for consumer’s connections
heartbeat_rate = 2 (Integer) How often times during the heartbeat_timeout_threshold we check the heartbeat.
heartbeat_timeout_threshold = 60 (Integer) Number of seconds after which the Rabbit broker is considered down if heartbeat’s keep-alive fails (0 disable the heartbeat). EXPERIMENTAL
host_connection_reconnect_delay = 0.25 (Floating point) Set delay for reconnection to some host which has connection error
kombu_compression = None (String) EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will not be used. This option may not be available in future versions.
kombu_failover_strategy = round-robin (String) Determines how the next RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.
kombu_missing_consumer_retry_timeout = 60 (Integer) How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout.
kombu_reconnect_delay = 1.0 (Floating point) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_ssl_ca_certs = (String) SSL certification authority file (valid only if SSL enabled).
kombu_ssl_certfile = (String) SSL cert file (valid only if SSL enabled).
kombu_ssl_keyfile = (String) SSL key file (valid only if SSL enabled).
kombu_ssl_version = (String) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
notification_listener_prefetch_count = 100 (Integer) Max number of not acknowledged message which RabbitMQ can send to notification listener.
notification_persistence = False (Boolean) Persist notification messages.
notification_retry_delay = 0.25 (Floating point) Reconnecting retry delay in case of connectivity problem during sending notification message
pool_max_overflow = 0 (Integer) Maximum number of connections to create above pool_max_size.
pool_max_size = 10 (Integer) Maximum number of connections to keep queued.
pool_recycle = 600 (Integer) Lifetime of a connection (since creation) in seconds or None for no recycling. Expired connections are closed on acquire.
pool_stale = 60 (Integer) Threshold at which inactive (since release) connections are considered stale in seconds or None for no staleness. Stale connections are closed on acquire.
pool_timeout = 30 (Integer) Default number of seconds to wait for a connection to become available
rabbit_ha_queues = False (Boolean) Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: rabbitmqctl set_policy HA '^(?!amq.).*' '{"ha-mode": "all"}'
rabbit_host = localhost (String) The RabbitMQ broker address where a single node is used.
rabbit_hosts = $rabbit_host:$rabbit_port (List) RabbitMQ HA cluster host:port pairs.
rabbit_interval_max = 30 (Integer) Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
rabbit_login_method = AMQPLAIN (String) The RabbitMQ login method.
rabbit_max_retries = 0 (Integer) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest (String) The RabbitMQ password.
rabbit_port = 5672 (Port number) The RabbitMQ broker port where a single node is used.
rabbit_qos_prefetch_count = 0 (Integer) Specifies the number of messages to prefetch. Setting to zero allows unlimited messages.
rabbit_retry_backoff = 2 (Integer) How long to backoff for between retries when connecting to RabbitMQ.
rabbit_retry_interval = 1 (Integer) How frequently to retry connecting with RabbitMQ.
rabbit_transient_queues_ttl = 1800 (Integer) Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues.
rabbit_use_ssl = False (Boolean) Connect over SSL for RabbitMQ.
rabbit_userid = guest (String) The RabbitMQ userid.
rabbit_virtual_host = / (String) The RabbitMQ virtual host.
rpc_listener_prefetch_count = 100 (Integer) Max number of not acknowledged message which RabbitMQ can send to rpc listener.
rpc_queue_expiration = 60 (Integer) Time to live for rpc queues without consumers in seconds.
rpc_reply_exchange = ${control_exchange}_rpc_reply (String) Exchange name for receiving RPC replies
rpc_reply_listener_prefetch_count = 100 (Integer) Max number of not acknowledged message which RabbitMQ can send to rpc reply listener.
rpc_reply_retry_attempts = -1 (Integer) Reconnecting retry count in case of connectivity problem during sending reply. -1 means infinite retry during rpc_timeout
rpc_reply_retry_delay = 0.25 (Floating point) Reconnecting retry delay in case of connectivity problem during sending reply.
rpc_retry_delay = 0.25 (Floating point) Reconnecting retry delay in case of connectivity problem during sending RPC message
socket_timeout = 0.25 (Floating point) Set socket timeout in seconds for connection’s socket
ssl = None (Boolean) Enable SSL
ssl_options = None (Dict) Arguments passed to ssl.wrap_socket
tcp_user_timeout = 0.25 (Floating point) Set TCP_USER_TIMEOUT in seconds for connection’s socket
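
For example, a typical RabbitMQ configuration might look like the following sketch (host name and credentials are placeholders):

[DEFAULT]
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
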
Configure ZeroMQ

Use these options to configure the ZeroMQ messaging system for OpenStack Oslo RPC. ZeroMQ is not the default messaging system, so you must enable it by setting the rpc_backend option.

Description of ZeroMQ configuration options
Configuration option = Default value Description
[DEFAULT]  
rpc_zmq_bind_address = * (String) ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. The “host” option should point or resolve to this address.
rpc_zmq_bind_port_retries = 100 (Integer) Number of retries to find free port number before fail with ZMQBindError.
rpc_zmq_concurrency = eventlet (String) Type of concurrency used. Either “native” or “eventlet”
rpc_zmq_contexts = 1 (Integer) Number of ZeroMQ contexts, defaults to 1.
rpc_zmq_host = localhost (String) Name of this node. Must be a valid hostname, FQDN, or IP address. Must match “host” option, if running Nova.
rpc_zmq_ipc_dir = /var/run/openstack (String) Directory for holding IPC sockets.
rpc_zmq_matchmaker = redis (String) MatchMaker driver.
rpc_zmq_max_port = 65536 (Integer) Maximal port number for random ports range.
rpc_zmq_min_port = 49152 (Port number) Minimal port number for random ports range.
rpc_zmq_topic_backlog = None (Integer) Maximum number of ingress messages to locally buffer per topic. Default is unlimited.
use_pub_sub = True (Boolean) Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
zmq_target_expire = 120 (Integer) Expiration timeout in seconds of a name service record about existing target ( < 0 means no timeout).
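
For example, to switch a node to the ZeroMQ driver (the host name is illustrative; the other defaults shown in the table usually need no change):

[DEFAULT]
rpc_backend = zmq
rpc_zmq_host = node-1.example.com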

Cross-origin resource sharing

Cross-Origin Resource Sharing (CORS) is a mechanism that allows code running in a browser (JavaScript, for example) to make requests to a domain other than the one it originated from. OpenStack services support CORS requests.

For more information, see cross-project features in OpenStack Administrator Guide, CORS in Dashboard, and CORS in Object Storage service.

For a complete list of all available CORS configuration options, see CORS configuration options.
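
As a sketch only (the [cors] option names below come from the oslo.middleware CORS support and are documented on the page referenced above), allowing a JavaScript client served from one origin to call a service API could look like this:

[cors]
allowed_origin = https://dashboard.example.com
allow_credentials = True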

Application Catalog service

Application Catalog API configuration

Configuration options

The Application Catalog service can be configured by changing the following options:

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
admin_role = admin (String) Role used to identify an authenticated user as administrator.
max_header_line = 16384 (Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
secure_proxy_ssl_header = X-Forwarded-Proto (String) The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was removed by an SSL terminating proxy.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[oslo_policy]  
policy_default_rule = default (String) Default rule. Enforced when a requested rule is not found.
policy_dirs = ['policy.d'] (Multi-valued) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
policy_file = policy.json (String) The JSON file that defines policies.
[paste_deploy]  
config_file = None (String) Path to Paste config file
flavor = None (String) Paste flavor
Description of CFAPI configuration options
Configuration option = Default value Description
[cfapi]  
auth_url = localhost:5000 (String) Authentication URL
bind_host = localhost (String) Host for service broker
bind_port = 8083 (String) Port for service broker
packages_service = murano (String) Package service which should be used by service broker
project_domain_name = default (String) Domain name of the project
tenant = admin (String) Project for service broker
user_domain_name = default (String) Domain name of the user

Additional configuration options for Application Catalog service

These options can also be set in the murano.conf file.

Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
backlog = 4096 (Integer) Number of backlog requests to configure the socket with
bind_host = 0.0.0.0 (String) Address to bind the Murano API server to.
bind_port = 8082 (Port number) Port to bind the Murano API server to.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
file_server = (String) Set a file server.
home_region = None (String) Default region name used to get services endpoints.
metadata_dir = ./meta (String) Metadata dir
publish_errors = False (Boolean) Enables or disables publication of error events.
tcp_keepidle = 600 (Integer) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
use_router_proxy = True (Boolean) Use ROUTER remote proxy.
[murano]  
api_limit_max = 100 (Integer) Maximum number of packages to be returned in a single pagination request
api_workers = None (Integer) Number of API workers
cacert = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Murano API used by Murano engine.
cert_file = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Murano used by Murano engine.
enabled_plugins = None (List) List of enabled Extension Plugins. Remove or leave commented to enable all installed plugins.
endpoint_type = publicURL (String) Murano endpoint type used by Murano engine.
insecure = False (Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers used by Murano engine.
key_file = None (String) (SSL/SSH) Private key file name to communicate with Murano API used by Murano engine.
limit_param_default = 20 (Integer) Default value for package pagination in API.
package_size_limit = 5 (Integer) Maximum application package size, Mb
url = None (String) Optional murano url in format like http://0.0.0.0:8082 used by Murano engine
[stats]  
period = 5 (Integer) Statistics collection interval in minutes. Default value is 5 minutes.
Description of engine configuration options
Configuration option = Default value Description
[engine]  
agent_timeout = 3600 (Integer) Time for waiting for a response from murano agent during the deployment
class_configs = /etc/murano/class-configs (String) Path to class configuration files
disable_murano_agent = False (Boolean) Disallow the use of murano-agent
enable_model_policy_enforcer = False (Boolean) Enable model policy enforcer using Congress
enable_packages_cache = True (Boolean) Enables murano-engine to persist on disk packages downloaded during deployments. The packages would be re-used for subsequent deployments.
engine_workers = None (Integer) Number of engine workers
load_packages_from = (List) List of directories to load local packages from. If not provided, packages will be loaded only from the API
packages_cache = None (String) Location (directory) for Murano package cache.
packages_service = murano (String) The service to store murano packages: murano (stands for legacy behavior using murano-api) or glance (stands for glance-glare artifact service)
use_trusts = True (Boolean) Create resources using trust token rather than user’s token
Description of glare configuration options
Configuration option = Default value Description
[glare]  
ca_file = None (String) (SSL) Tells Murano to use the specified certificate file to verify the peer running Glare API.
cert_file = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Glare.
endpoint_type = publicURL (String) Glare endpoint type.
insecure = False (Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Glare API.
key_file = None (String) (SSL/SSH) Private key file name to communicate with Glare API.
url = None (String) Optional glare url in format like http://0.0.0.0:9494 used by Glare API
Description of Orchestration service configuration options
Configuration option = Default value Description
[heat]  
ca_file = None (String) (SSL) Tells Murano to use the specified certificate file to verify the peer running Heat API.
cert_file = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Heat.
endpoint_type = publicURL (String) Heat endpoint type.
insecure = False (Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Heat API.
key_file = None (String) (SSL/SSH) Private key file name to communicate with Heat API.
stack_tags = murano (List) List of tags to be assigned to heat stacks created during environment deployment.
url = None (String) Optional heat endpoint override
Description of Workflow service configuration options
Configuration option = Default value Description
[mistral]  
ca_cert = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Mistral.
endpoint_type = publicURL (String) Mistral endpoint type.
insecure = False (Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Mistral.
service_type = workflowv2 (String) Mistral service type.
url = None (String) Optional mistral endpoint override
Description of Networking service configuration options
Configuration option = Default value Description
[networking]  
create_router = True (Boolean) This option will create a router when one with “router_name” does not exist
default_dns = (List) List of default DNS nameservers to be assigned to created Networks
driver = None (String) Network driver to use. Options are neutron or nova. If not provided, the driver will be detected.
env_ip_template = 10.0.0.0 (String) Template IP address for generating environment subnet cidrs
external_network = ext-net (String) ID or name of the external network for routers to connect to
max_environments = 250 (Integer) Maximum number of environments that use a single router per tenant
max_hosts = 250 (Integer) Maximum number of VMs per environment
network_config_file = netconfig.yaml (String) If provided networking configuration will be taken from this file
router_name = murano-default-router (String) Name of the router that is going to be used in order to join all networks created by Murano
[neutron]  
ca_cert = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Neutron.
endpoint_type = publicURL (String) Neutron endpoint type.
insecure = False (Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Neutron API.
url = None (String) Optional neutron endpoint override
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
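
Putting a few of these options together, a minimal murano.conf customization might look like the following sketch (network name and cache directory are placeholders):

[DEFAULT]
bind_host = 0.0.0.0
bind_port = 8082

[engine]
packages_cache = /var/cache/murano/packages

[networking]
external_network = public
default_dns = 8.8.8.8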

New, updated, and deprecated options in Newton for Application Catalog service

New options
Option = default value (Type) Help string
[cfapi] packages_service = murano (StrOpt) Package service which should be used by service broker
[engine] engine_workers = None (IntOpt) Number of engine workers
[murano] api_workers = None (IntOpt) Number of API workers
[networking] driver = None (StrOpt) Network driver to use. Options are neutron or nova. If not provided, the driver will be detected.
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None
[engine] workers [engine] engine_workers

This chapter describes the Application Catalog service configuration options.

Note

The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.

Bare Metal service

Bare Metal API configuration

Configuration options

The following options allow configuration of the APIs that the Bare Metal service supports.

Description of API configuration options
Configuration option = Default value Description
[api]  
api_workers = None (Integer) Number of workers for OpenStack Ironic API service. The default is equal to the number of CPUs available if that can be determined, else a default worker count of 1 is returned.
enable_ssl_api = False (Boolean) Enable the integrated stand-alone API to service requests via HTTPS instead of HTTP. If there is a front-end service performing HTTPS offloading from the service, this option should be False; note, you will want to change public API endpoint to represent SSL termination URL with ‘public_endpoint’ option.
host_ip = 0.0.0.0 (String) The IP address on which ironic-api listens.
max_limit = 1000 (Integer) The maximum number of items returned in a single response from a collection resource.
port = 6385 (Port number) The TCP port on which ironic-api listens.
public_endpoint = None (String) Public URL to use when building the links to the API resources (for example, “https://ironic.rocks:6384”). If None the links will be built using the request’s host URL. If the API is operating behind a proxy, you will want to change this to represent the proxy’s URL. Defaults to None.
ramdisk_heartbeat_timeout = 300 (Integer) Maximum interval (in seconds) for agent heartbeats.
restrict_lookup = True (Boolean) Whether to restrict the lookup API to only nodes in certain states.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[oslo_versionedobjects]  
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal
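
For example, to have ironic-api listen on a specific address and port behind an SSL-terminating proxy (the address and URL are placeholders):

[api]
host_ip = 10.0.0.11
port = 6385
public_endpoint = https://ironic.example.com:6385

[oslo_middleware]
enable_proxy_headers_parsing = True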

Additional configuration options for Bare Metal service

The following tables provide a comprehensive list of the Bare Metal service configuration options.

Description of agent configuration options
Configuration option = Default value Description
[agent]  
agent_api_version = v1 (String) API version to use for communicating with the ramdisk agent.
deploy_logs_collect = on_failure (String) Whether Ironic should collect the deployment logs on deployment failure (on_failure), always or never.
deploy_logs_local_path = /var/log/ironic/deploy (String) The path to the directory where the logs should be stored, used when the deploy_logs_storage_backend is configured to “local”.
deploy_logs_storage_backend = local (String) The name of the storage backend where the logs will be stored.
deploy_logs_swift_container = ironic_deploy_logs_container (String) The name of the Swift container to store the logs, used when the deploy_logs_storage_backend is configured to “swift”.
deploy_logs_swift_days_to_expire = 30 (Integer) Number of days before a log object is marked as expired in Swift. If None, the logs will be kept forever or until manually deleted. Used when the deploy_logs_storage_backend is configured to “swift”.
manage_agent_boot = True (Boolean) Whether Ironic will manage booting of the agent ramdisk. If set to False, you will need to configure your mechanism to allow booting the agent ramdisk.
memory_consumed_by_agent = 0 (Integer) The memory size in MiB consumed by agent when it is booted on a bare metal node. This is used for checking if the image can be downloaded and deployed on the bare metal node after booting agent ramdisk. This may be set according to the memory consumed by the agent ramdisk image.
post_deploy_get_power_state_retries = 6 (Integer) Number of times to retry getting power state to check if bare metal node has been powered off after a soft power off.
post_deploy_get_power_state_retry_interval = 5 (Integer) Amount of time (in seconds) to wait between polling power state after trigger soft poweroff.
stream_raw_images = True (Boolean) Whether the agent ramdisk should stream raw images directly onto the disk or not. By streaming raw images directly onto the disk the agent ramdisk will not spend time copying the image to a tmpfs partition (therefore consuming less memory) prior to writing it to the disk. Unless the disk where the image will be copied to is really slow, this option should be set to True. Defaults to True.
Description of AMT configuration options
Configuration option = Default value Description
[amt]  
action_wait = 10 (Integer) Amount of time (in seconds) to wait, before retrying an AMT operation
awake_interval = 60 (Integer) Time interval (in seconds) for successive awake call to AMT interface, this depends on the IdleTimeout setting on AMT interface. AMT Interface will go to sleep after 60 seconds of inactivity by default. IdleTimeout=0 means AMT will not go to sleep at all. Setting awake_interval=0 will disable awake call.
max_attempts = 3 (Integer) Maximum number of times to attempt an AMT operation, before failing
protocol = http (String) Protocol used for AMT endpoint
Description of audit configuration options
Configuration option = Default value Description
[audit]  
audit_map_file = /etc/ironic/ironic_api_audit_map.conf (String) Path to audit map file for ironic-api service. Used only when API audit is enabled.
enabled = False (Boolean) Enable auditing of API requests (for ironic-api service).
ignore_req_list = None (String) Comma separated list of Ironic REST API HTTP methods to be ignored during audit. For example: auditing will not be done on any GET or POST requests if this is set to “GET,POST”. It is used only when API audit is enabled.
namespace = openstack (String) namespace prefix for generated id
[audit_middleware_notifications]  
driver = None (String) The Driver to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop. If not specified, then value from oslo_messaging_notifications conf section is used.
topics = None (List) List of AMQP topics used for OpenStack notifications. If not specified, then value from oslo_messaging_notifications conf section is used.
transport_url = None (String) A URL representing messaging driver to use for notification. If not specified, we fall back to the same configuration used for RPC.
Description of Cisco UCS configuration options
Configuration option = Default value Description
[cimc]  
action_interval = 10 (Integer) Amount of time in seconds to wait in between power operations
max_retry = 6 (Integer) Number of times a power operation needs to be retried
[cisco_ucs]  
action_interval = 5 (Integer) Amount of time in seconds to wait in between power operations
max_retry = 6 (Integer) Number of times a power operation needs to be retried
Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
bindir = /usr/local/bin (String) Directory where ironic binaries are installed.
debug_tracebacks_in_api = False (Boolean) Return server tracebacks in the API response for any error responses. WARNING: this is insecure and should not be used in a production environment.
default_network_interface = None (String) Default network interface to be used for nodes that do not have network_interface field set. A complete list of network interfaces present on your system may be found by enumerating the “ironic.hardware.interfaces.network” entrypoint.
enabled_drivers = pxe_ipmitool (List) Specify the list of drivers to load during service initialization. Missing drivers, or drivers which fail to initialize, will prevent the conductor service from starting. The option default is a recommended set of production-oriented drivers. A complete list of drivers present on your system may be found by enumerating the “ironic.drivers” entrypoint. An example may be found in the developer documentation online.
enabled_network_interfaces = flat, noop (List) Specify the list of network interfaces to load during service initialization. Missing network interfaces, or network interfaces which fail to initialize, will prevent the conductor service from starting. The option default is a recommended set of production-oriented network interfaces. A complete list of network interfaces present on your system may be found by enumerating the “ironic.hardware.interfaces.network” entrypoint. This value must be the same on all ironic-conductor and ironic-api services, because it is used by ironic-api service to validate a new or updated node’s network_interface value.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
fatal_exception_format_errors = False (Boolean) Used if there is a formatting error when generating an exception message (a programming error). If True, raise an exception; if False, use the unformatted message.
force_raw_images = True (Boolean) If True, convert backing images to “raw” disk image format.
grub_config_template = $pybasedir/common/grub_conf.template (String) Template file for grub configuration file.
hash_distribution_replicas = 1 (Integer) [Experimental Feature] Number of hosts to map onto each hash partition. Setting this to more than one will cause additional conductor services to prepare deployment environments and potentially allow the Ironic cluster to recover more quickly if a conductor instance is terminated.
hash_partition_exponent = 5 (Integer) Exponent to determine number of hash partitions to use when distributing load across conductors. Larger values will result in more even distribution of load and less load when rebalancing the ring, but more memory usage. Number of partitions per conductor is (2^hash_partition_exponent). This determines the granularity of rebalancing: given 10 hosts and an exponent of 2, there are 40 partitions in the ring. A few thousand partitions should make rebalancing smooth in most cases. The default is suitable for up to a few hundred conductors. Too many partitions have a CPU impact.
hash_ring_reset_interval = 180 (Integer) Interval (in seconds) between hash ring resets.
host = localhost (String) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key, and if using ZeroMQ, a valid hostname, FQDN, or IP address.
isolinux_bin = /usr/lib/syslinux/isolinux.bin (String) Path to isolinux binary file.
isolinux_config_template = $pybasedir/common/isolinux_config.template (String) Template file for isolinux configuration file.
my_ip = 127.0.0.1 (String) IP address of this host. If unset, will determine the IP programmatically. If unable to do so, will use “127.0.0.1”.
notification_level = None (String) Specifies the minimum level for which to send notifications. If not set, no notifications will be sent. The default is for this option to be unset.
parallel_image_downloads = False (Boolean) Run image downloads and raw format conversions in parallel.
pybasedir = /usr/lib/python/site-packages/ironic/ironic (String) Directory where the ironic python module is installed.
rootwrap_config = /etc/ironic/rootwrap.conf (String) Path to the rootwrap configuration file to use for running commands as root.
state_path = $pybasedir (String) Top-level directory for maintaining ironic’s state.
tempdir = /tmp (String) Temporary working directory, default is Python temp dir.
[ironic_lib]  
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal.
root_helper = sudo ironic-rootwrap /etc/ironic/rootwrap.conf (String) Command that is prefixed to commands that are run as root. If not specified, no commands are run as root.
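For instance, a small deployment might combine the common options roughly as follows; the IP address is a placeholder:
[DEFAULT]
# Drivers and network interfaces to load at service start-up
enabled_drivers = pxe_ipmitool
enabled_network_interfaces = flat,noop
# IP address of this host (placeholder)
my_ip = 192.0.2.10
# 2^5 = 32 hash partitions per conductor
hash_partition_exponent = 5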
Description of conductor configuration options
Configuration option = Default value Description
[conductor]  
api_url = None (String) URL of Ironic API service. If not set ironic can get the current value from the keystone service catalog.
automated_clean = True (Boolean) Enables or disables automated cleaning. Automated cleaning is a configurable set of steps, such as erasing disk drives, that are performed on the node to ensure it is in a baseline state and ready to be deployed to. This is done after instance deletion as well as during the transition from a “manageable” to “available” state. When enabled, the particular steps performed to clean a node depend on which driver that node is managed by; see the individual driver’s documentation for details. NOTE: The introduction of the cleaning operation causes instance deletion to take significantly longer. In an environment where all tenants are trusted (eg, because there is only one tenant), this option could be safely disabled.
check_provision_state_interval = 60 (Integer) Interval between checks of provision timeouts, in seconds.
clean_callback_timeout = 1800 (Integer) Timeout (seconds) to wait for a callback from the ramdisk doing the cleaning. If the timeout is reached the node will be put in the “clean failed” provision state. Set to 0 to disable timeout.
configdrive_swift_container = ironic_configdrive_container (String) Name of the Swift container to store config drive data. Used when configdrive_use_swift is True.
configdrive_use_swift = False (Boolean) Whether to upload the config drive to Swift.
deploy_callback_timeout = 1800 (Integer) Timeout (seconds) to wait for a callback from a deploy ramdisk. Set to 0 to disable timeout.
force_power_state_during_sync = True (Boolean) During sync_power_state, should the hardware power state be set to the state recorded in the database (True) or should the database be updated based on the hardware state (False).
heartbeat_interval = 10 (Integer) Seconds between conductor heart beats.
heartbeat_timeout = 60 (Integer) Maximum time (in seconds) since the last check-in of a conductor. A conductor is considered inactive when this time has been exceeded.
inspect_timeout = 1800 (Integer) Timeout (seconds) for waiting for node inspection. 0 - unlimited.
node_locked_retry_attempts = 3 (Integer) Number of attempts to grab a node lock.
node_locked_retry_interval = 1 (Integer) Seconds to sleep between node lock attempts.
periodic_max_workers = 8 (Integer) Maximum number of worker threads that can be started simultaneously by a periodic task. Should be less than RPC thread pool size.
power_state_sync_max_retries = 3 (Integer) During sync_power_state failures, limit the number of times Ironic should try syncing the hardware node power state with the node power state in the database.
send_sensor_data = False (Boolean) Enable sending sensor data message via the notification bus
send_sensor_data_interval = 600 (Integer) Seconds between conductor sending sensor data message to ceilometer via the notification bus.
send_sensor_data_types = ALL (List) List of comma separated meter types which need to be sent to Ceilometer. The default value, “ALL”, is a special value meaning send all the sensor data.
sync_local_state_interval = 180 (Integer) When conductors join or leave the cluster, existing conductors may need to update any persistent local state as nodes are moved around the cluster. This option controls how often, in seconds, each conductor will check for nodes that it should “take over”. Set it to a negative value to disable the check entirely.
sync_power_state_interval = 60 (Integer) Interval between syncing the node power state to the database, in seconds.
workers_pool_size = 100 (Integer) The size of the workers greenthread pool. Note that 2 threads will be reserved by the conductor itself for handling heart beats and periodic tasks.
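For example, a single-tenant environment that prefers fast instance deletion and wants sensor reporting might sketch its [conductor] section like this; the interval shown is illustrative:
[conductor]
# Skip automated cleaning; acceptable only when all tenants are trusted
automated_clean = false
# Send sensor data to the notification bus every 10 minutes
send_sensor_data = true
send_sensor_data_interval = 600
# ALL is a special value meaning send all sensor data
send_sensor_data_types = ALL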
Description of console configuration options
Configuration option = Default value Description
[console]  
subprocess_checking_interval = 1 (Integer) Time interval (in seconds) for checking the status of console subprocess.
subprocess_timeout = 10 (Integer) Time (in seconds) to wait for the console subprocess to start.
terminal = shellinaboxd (String) Path to serial console terminal program. Used only by Shell In A Box console.
terminal_cert_dir = None (String) Directory containing the terminal SSL cert (PEM) for serial console access. Used only by Shell In A Box console.
terminal_pid_dir = None (String) Directory for holding terminal pid files. If not specified, the temporary directory will be used.
Description of DRAC configuration options
Configuration option = Default value Description
[drac]  
query_raid_config_job_status_interval = 120 (Integer) Interval (in seconds) between periodic RAID job status checks to determine whether the asynchronous RAID configuration was successfully finished or not.
Description of logging configuration options
Configuration option = Default value Description
[DEFAULT]  
pecan_debug = False (Boolean) Enable pecan debug mode. WARNING: this is insecure and should not be used in a production environment.
Description of deploy configuration options
Configuration option = Default value Description
[deploy]  
continue_if_disk_secure_erase_fails = False (Boolean) Defines what to do if an ATA secure erase operation fails during cleaning in the Ironic Python Agent. If False, the cleaning operation will fail and the node will be put in clean failed state. If True, shred will be invoked and cleaning will continue.
erase_devices_metadata_priority = None (Integer) Priority to run in-band clean step that erases metadata from devices, via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 99 for the GenericHardwareManager). If set to 0, will not run during cleaning.
erase_devices_priority = None (Integer) Priority to run in-band erase devices via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 10 for the GenericHardwareManager). If set to 0, will not run during cleaning.
http_root = /httpboot (String) ironic-conductor node’s HTTP root path.
http_url = None (String) ironic-conductor node’s HTTP server URL. Example: http://192.1.2.3:8080
power_off_after_deploy_failure = True (Boolean) Whether to power off a node after deploy failure. Defaults to True.
shred_final_overwrite_with_zeros = True (Boolean) Whether to write zeros to a node’s block devices after writing random data. This will write zeros to the device even when deploy.shred_random_overwrite_iterations is 0. This option is only used if a device could not be ATA Secure Erased. Defaults to True.
shred_random_overwrite_iterations = 1 (Integer) During shred, overwrite all block devices N times with random data. This is only used if a device could not be ATA Secure Erased. Defaults to 1.
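For instance, to serve images from the ironic-conductor node's local HTTP server and continue cleaning when ATA secure erase fails, a [deploy] section might look like this; the URL and path are placeholders:
[deploy]
# Local HTTP server used for image serving
http_root = /httpboot
http_url = http://192.0.2.10:8080
# Fall back to shred if ATA secure erase fails during cleaning
continue_if_disk_secure_erase_fails = true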
Description of DHCP configuration options
Configuration option = Default value Description
[dhcp]  
dhcp_provider = neutron (String) DHCP provider to use. “neutron” uses Neutron, and “none” uses a no-op provider.
Description of disk partitioner configuration options
Configuration option = Default value Description
[disk_partitioner]  
check_device_interval = 1 (Integer) After Ironic has completed creating the partition table, it checks the attached iSCSI device for activity at this interval (in seconds) before copying the image to the node.
check_device_max_retries = 20 (Integer) The maximum number of times to check that the device is not accessed by another process. If the device is still busy after that, the disk partitioning will be treated as having failed.
[disk_utils]  
bios_boot_partition_size = 1 (Integer) Size of BIOS Boot partition in MiB when configuring GPT partitioned systems for local boot in BIOS.
dd_block_size = 1M (String) Block size to use when writing to the node's disk.
efi_system_partition_size = 200 (Integer) Size of EFI system partition in MiB when configuring UEFI systems for local boot.
iscsi_verify_attempts = 3 (Integer) Maximum attempts to verify an iSCSI connection is active, sleeping 1 second between attempts.
Description of glance configuration options
Configuration option = Default value Description
[glance]  
allowed_direct_url_schemes = (List) A list of URL schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_strategy = keystone (String) Authentication strategy to use when connecting to glance.
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
glance_api_insecure = False (Boolean) Allow to perform insecure SSL (https) requests to glance.
glance_api_servers = None (List) A list of the glance api servers available to ironic. Prefix with https:// for SSL-based glance API servers. Format is [hostname|IP]:port.
glance_cafile = None (String) Optional path to a CA certificate bundle to be used to validate the SSL certificate served by glance. It is used when glance_api_insecure is set to False.
glance_host = $my_ip (String) Default glance hostname or IP address.
glance_num_retries = 0 (Integer) Number of retries when downloading an image from glance.
glance_port = 9292 (Port number) Default glance port.
glance_protocol = http (String) Default protocol to use when connecting to glance. Set to https for SSL.
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
swift_account = None (String) The account that Glance uses to communicate with Swift. The format is “AUTH_uuid”. “uuid” is the UUID for the account configured in the glance-api.conf. Required for temporary URLs when Glance backend is Swift. For example: “AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30”. Swift temporary URL format: “endpoint_url/api_version/[account/]container/object_id”
swift_api_version = v1 (String) The Swift API version to create a temporary URL for. Defaults to “v1”. Swift temporary URL format: “endpoint_url/api_version/[account/]container/object_id”
swift_container = glance (String) The Swift container Glance is configured to store its images in. Defaults to “glance”, which is the default in glance-api.conf. Swift temporary URL format: “endpoint_url/api_version/[account/]container/object_id”
swift_endpoint_url = None (String) The “endpoint” (scheme, hostname, optional port) for the Swift URL of the form “endpoint_url/api_version/[account/]container/object_id”. Do not include trailing “/”. For example, use “https://swift.example.com”. If using RADOS Gateway, endpoint may also contain /swift path; if it does not, it will be appended. Required for temporary URLs.
swift_store_multiple_containers_seed = 0 (Integer) This should match a config by the same name in the Glance configuration file. When set to 0, a single-tenant store will only use one container to store all images. When set to an integer value between 1 and 32, a single-tenant store will use multiple containers to store images, and this value will determine how many containers are created.
swift_temp_url_cache_enabled = False (Boolean) Whether to cache generated Swift temporary URLs. Setting it to true is only useful when an image caching proxy is used. Defaults to False.
swift_temp_url_duration = 1200 (Integer) The length of time in seconds that the temporary URL will be valid for. Defaults to 20 minutes. If some deploys get a 401 response code when trying to download from the temporary URL, try raising this duration. This value must be greater than or equal to the value for swift_temp_url_expected_download_start_delay
swift_temp_url_expected_download_start_delay = 0 (Integer) This is the delay (in seconds) from the time of the deploy request (when the Swift temporary URL is generated) to when the IPA ramdisk starts up and URL is used for the image download. This value is used to check if the Swift temporary URL duration is large enough to let the image download begin. Also if temporary URL caching is enabled this will determine if a cached entry will still be valid when the download starts. swift_temp_url_duration value must be greater than or equal to this option’s value. Defaults to 0.
swift_temp_url_key = None (String) The secret token given to Swift to allow temporary URL downloads. Required for temporary URLs.
temp_url_endpoint_type = swift (String) Type of endpoint to use for temporary URLs. If the Glance backend is Swift, use “swift”; if it is CEPH with RADOS gateway, use “radosgw”.
timeout = None (Integer) Timeout value for http requests
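As an example, serving images through Swift temporary URLs could be configured roughly as follows; the endpoint, account, and key are placeholders:
[glance]
swift_endpoint_url = https://swift.example.com
swift_account = AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30
swift_temp_url_key = PLACEHOLDER_SECRET
# Allow 20 minutes for the ramdisk to start the download
swift_temp_url_duration = 1200
temp_url_endpoint_type = swift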
Description of iBoot Web Power Switch configuration options
Configuration option = Default value Description
[iboot]  
max_retry = 3 (Integer) Maximum retries for iBoot operations
reboot_delay = 5 (Integer) Time (in seconds) to sleep between when rebooting (powering off and on again).
retry_interval = 1 (Integer) Time (in seconds) between retry attempts for iBoot operations
Description of iLO configuration options
Configuration option = Default value Description
[ilo]  
ca_file = None (String) CA certificate file to validate iLO.
clean_priority_clear_secure_boot_keys = 0 (Integer) Priority for clear_secure_boot_keys clean step. This step is not enabled by default. It can be enabled to clear all secure boot keys enrolled with iLO.
clean_priority_erase_devices = None (Integer) DEPRECATED: Priority for erase devices clean step. If unset, it defaults to 10. If set to 0, the step will be disabled and will not run during cleaning. This configuration option is duplicated by [deploy] erase_devices_priority, please use that instead.
clean_priority_reset_bios_to_default = 10 (Integer) Priority for reset_bios_to_default clean step.
clean_priority_reset_ilo = 0 (Integer) Priority for reset_ilo clean step.
clean_priority_reset_ilo_credential = 30 (Integer) Priority for reset_ilo_credential clean step. This step requires “ilo_change_password” parameter to be updated in nodes’s driver_info with the new password.
clean_priority_reset_secure_boot_keys_to_default = 20 (Integer) Priority for reset_secure_boot_keys clean step. This step will reset the secure boot keys to manufacturing defaults.
client_port = 443 (Port number) Port to be used for iLO operations
client_timeout = 60 (Integer) Timeout (in seconds) for iLO operations
default_boot_mode = auto (String) Default boot mode to be used in provisioning when “boot_mode” capability is not provided in the “properties/capabilities” of the node. The default is “auto” for backward compatibility. When “auto” is specified, default boot mode will be selected based on boot mode settings on the system.
power_retry = 6 (Integer) Number of times a power operation needs to be retried
power_wait = 2 (Integer) Amount of time in seconds to wait in between power operations
swift_ilo_container = ironic_ilo_container (String) The Swift iLO container to store data.
swift_object_expiry_timeout = 900 (Integer) Amount of time in seconds for Swift objects to auto-expire.
use_web_server_for_images = False (Boolean) Set this to True to use http web server to host floppy images and generated boot ISO. This requires http_root and http_url to be configured in the [deploy] section of the config file. If this is set to False, then Ironic will use Swift to host the floppy images and generated boot_iso.
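For example, hosting floppy images and the generated boot ISO on the conductor's web server instead of Swift might be sketched as follows; the port and timeout shown are the defaults:
[ilo]
# Serve floppy images and the boot ISO from [deploy] http_root / http_url
use_web_server_for_images = true
client_port = 443
client_timeout = 60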
Description of inspector configuration options
Configuration option = Default value Description
[inspector]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
enabled = False (Boolean) whether to enable inspection using ironic-inspector
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
service_url = None (String) ironic-inspector HTTP endpoint. If this is not set, the service catalog will be used.
status_check_period = 60 (Integer) period (in seconds) to check status of nodes on inspection
timeout = None (Integer) Timeout value for http requests
Description of IPMI configuration options
Configuration option = Default value Description
[ipmi]  
min_command_interval = 5 (Integer) Minimum time, in seconds, between IPMI operations sent to a server. There is a risk with some hardware that setting this too low may cause the BMC to crash. Recommended setting is 5 seconds.
retry_timeout = 60 (Integer) Maximum time in seconds to retry IPMI operations. There is a tradeoff when setting this value. Setting this too low may cause older BMCs to crash and require a hard reset. However, setting too high can cause the sync power state periodic task to hang when there are slow or unresponsive BMCs.
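A deployment with older or fragile BMCs might, for instance, space IPMI operations out more conservatively; the values below are illustrative:
[ipmi]
# Wait longer between IPMI commands to avoid overwhelming the BMC
min_command_interval = 10
retry_timeout = 120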
Description of iRMC configuration options
Configuration option = Default value Description
[irmc]  
auth_method = basic (String) Authentication method to be used for iRMC operations
client_timeout = 60 (Integer) Timeout (in seconds) for iRMC operations
port = 443 (Port number) Port to be used for iRMC operations
remote_image_server = None (String) IP of remote image server
remote_image_share_name = share (String) share name of remote_image_server
remote_image_share_root = /remote_image_share_root (String) Ironic conductor node’s “NFS” or “CIFS” root path
remote_image_share_type = CIFS (String) Share type of virtual media
remote_image_user_domain = (String) Domain name of remote_image_user_name
remote_image_user_name = None (String) User name of remote_image_server
remote_image_user_password = None (String) Password of remote_image_user_name
sensor_method = ipmitool (String) Sensor data retrieval method.
snmp_community = public (String) SNMP community. Required for versions “v1” and “v2c”
snmp_port = 161 (Port number) SNMP port
snmp_security = None (String) SNMP security name. Required for version “v3”
snmp_version = v2c (String) SNMP protocol version
Description of iSCSI configuration options
Configuration option = Default value Description
[iscsi]  
portal_port = 3260 (Port number) The port number on which the iSCSI portal listens for incoming connections.
Description of keystone configuration options
Configuration option = Default value Description
[keystone]  
region_name = None (String) The region used for getting endpoints of OpenStack services.
Description of metrics configuration options
Configuration option = Default value Description
[metrics]  
agent_backend = noop (String) Backend for the agent ramdisk to use for metrics. Default possible backends are “noop” and “statsd”.
agent_global_prefix = None (String) Prefix all metric names sent by the agent ramdisk with this value. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
agent_prepend_host = False (Boolean) Prepend the hostname to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
agent_prepend_host_reverse = True (Boolean) Split the prepended host value by ”.” and reverse it for metrics sent by the agent ramdisk (to better match the reverse hierarchical form of domain names).
agent_prepend_uuid = False (Boolean) Prepend the node’s Ironic uuid to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
backend = noop (String) Backend to use for the metrics system.
global_prefix = None (String) Prefix all metric names with this value. By default, there is no global prefix. The format of metric names is [global_prefix.][host_name.]prefix.metric_name.
prepend_host = False (Boolean) Prepend the hostname to all metric names. The format of metric names is [global_prefix.][host_name.]prefix.metric_name.
prepend_host_reverse = True (Boolean) Split the prepended host value by ”.” and reverse it (to better match the reverse hierarchical form of domain names).
Description of metrics statsd configuration options
Configuration option = Default value Description
[metrics_statsd]  
agent_statsd_host = localhost (String) Host for the agent ramdisk to use with the statsd backend. This must be accessible from networks the agent is booted on.
agent_statsd_port = 8125 (Port number) Port for the agent ramdisk to use with the statsd backend.
statsd_host = localhost (String) Host for use with the statsd backend.
statsd_port = 8125 (Port number) Port to use with the statsd backend.
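For example, routing both conductor and agent-ramdisk metrics to a statsd daemon could be sketched as follows; the host name is a placeholder:
[metrics]
backend = statsd
agent_backend = statsd

[metrics_statsd]
statsd_host = statsd.example.com
statsd_port = 8125
agent_statsd_host = statsd.example.com
agent_statsd_port = 8125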
Description of neutron configuration options
Configuration option = Default value Description
[neutron]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_strategy = keystone (String) Authentication strategy to use when connecting to neutron. Running neutron in noauth mode (related to but not affected by this setting) is insecure and should only be used for testing.
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
cleaning_network_uuid = None (String) Neutron network UUID for the ramdisk to be booted into for cleaning nodes. Required for “neutron” network interface. It is also required if cleaning nodes when using “flat” network interface or “neutron” DHCP provider.
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
port_setup_delay = 0 (Integer) Delay value to wait for Neutron agents to setup sufficient DHCP configuration for port.
provisioning_network_uuid = None (String) Neutron network UUID for the ramdisk to be booted into for provisioning nodes. Required for “neutron” network interface.
retries = 3 (Integer) Client retries in the case of a failed request.
timeout = None (Integer) Timeout value for http requests
url = None (String) URL for connecting to neutron. Default value translates to ‘http://$my_ip:9696‘ when auth_strategy is ‘noauth’, and to discovery from Keystone catalog when auth_strategy is ‘keystone’.
url_timeout = 30 (Integer) Timeout value for connecting to neutron in seconds.
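For example, when the “neutron” network interface is used, the cleaning and provisioning networks must be supplied; a minimal sketch, with placeholder URL and UUIDs, might be:
[neutron]
url = http://192.0.2.20:9696
cleaning_network_uuid = 11111111-2222-3333-4444-555555555555
provisioning_network_uuid = 66666666-7777-8888-9999-aaaaaaaaaaaa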
Description of PXE configuration options
Configuration option = Default value Description
[pxe]  
default_ephemeral_format = ext4 (String) Default file system format for ephemeral partition, if one is created.
image_cache_size = 20480 (Integer) Maximum size (in MiB) of cache for master images, including those in use.
image_cache_ttl = 10080 (Integer) Maximum TTL (in minutes) for old master images in cache.
images_path = /var/lib/ironic/images/ (String) On the ironic-conductor node, directory where images are stored on disk.
instance_master_path = /var/lib/ironic/master_images (String) On the ironic-conductor node, directory where master instance images are stored on disk. Setting to <None> disables image caching.
ip_version = 4 (String) The IP version that will be used for PXE booting. Defaults to 4. EXPERIMENTAL
ipxe_boot_script = $pybasedir/drivers/modules/boot.ipxe (String) On ironic-conductor node, the path to the main iPXE script file.
ipxe_enabled = False (Boolean) Enable iPXE boot.
ipxe_timeout = 0 (Integer) Timeout value (in seconds) for downloading an image via iPXE. Defaults to 0 (no timeout)
ipxe_use_swift = False (Boolean) Download deploy images directly from swift using temporary URLs. If set to false (default), images are downloaded to the ironic-conductor node and served over its local HTTP server. Applicable only when ‘ipxe_enabled’ option is set to true.
pxe_append_params = nofb nomodeset vga=normal (String) Additional append parameters for baremetal PXE boot.
pxe_bootfile_name = pxelinux.0 (String) Bootfile DHCP parameter.
pxe_config_template = $pybasedir/drivers/modules/pxe_config.template (String) On ironic-conductor node, template file for PXE configuration.
tftp_master_path = /tftpboot/master_images (String) On ironic-conductor node, directory where master TFTP images are stored on disk. Setting to <None> disables image caching.
tftp_root = /tftpboot (String) ironic-conductor node’s TFTP root path. The ironic-conductor must have read/write access to this path.
tftp_server = $my_ip (String) IP address of ironic-conductor node’s TFTP server.
uefi_pxe_bootfile_name = bootx64.efi (String) Bootfile DHCP parameter for UEFI boot mode.
uefi_pxe_config_template = $pybasedir/drivers/modules/pxe_grub_config.template (String) On ironic-conductor node, template file for PXE configuration for UEFI boot loader.
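To enable iPXE boot, for instance, the [pxe] options might be combined with the [deploy] HTTP settings shown earlier; the timeout value is illustrative:
[pxe]
ipxe_enabled = true
ipxe_boot_script = $pybasedir/drivers/modules/boot.ipxe
# Give slow networks ten minutes to download an image via iPXE
ipxe_timeout = 600
tftp_root = /tftpboot
tftp_server = $my_ip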
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
Description of SeaMicro configuration options
Configuration option = Default value Description
[seamicro]  
action_timeout = 10 (Integer) Seconds to wait for power action to be completed
max_retry = 3 (Integer) Maximum retries for SeaMicro operations
Description of service catalog configuration options
Configuration option = Default value Description
[service_catalog]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
timeout = None (Integer) Timeout value for http requests
Description of SNMP configuration options
Configuration option = Default value Description
[snmp]  
power_timeout = 10 (Integer) Seconds to wait for power action to be completed
reboot_delay = 0 (Integer) Time (in seconds) to sleep between when rebooting (powering off and on again)
Description of SSH configuration options
Configuration option = Default value Description
[ssh]  
get_vm_name_attempts = 3 (Integer) Number of attempts to try to get VM name used by the host that corresponds to a node’s MAC address.
get_vm_name_retry_interval = 3 (Integer) Number of seconds to wait between attempts to get VM name used by the host that corresponds to a node’s MAC address.
libvirt_uri = qemu:///system (String) libvirt URI.
Description of swift configuration options
Configuration option = Default value Description
[swift]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
swift_max_retries = 2 (Integer) Maximum number of times to retry a Swift request, before failing.
timeout = None (Integer) Timeout value for http requests
Description of VirtualBox configuration options
Configuration option = Default value Description
[virtualbox]  
port = 18083 (Port number) Port on which VirtualBox web service is listening.

New, updated, and deprecated options in Newton for Bare Metal service

New options
Option = default value (Type) Help string
[DEFAULT] default_network_interface = None (StrOpt) Default network interface to be used for nodes that do not have network_interface field set. A complete list of network interfaces present on your system may be found by enumerating the “ironic.hardware.interfaces.network” entrypoint.
[DEFAULT] enabled_network_interfaces = flat, noop (ListOpt) Specify the list of network interfaces to load during service initialization. Missing network interfaces, or network interfaces which fail to initialize, will prevent the conductor service from starting. The option default is a recommended set of production-oriented network interfaces. A complete list of network interfaces present on your system may be found by enumerating the “ironic.hardware.interfaces.network” entrypoint. This value must be the same on all ironic-conductor and ironic-api services, because it is used by ironic-api service to validate a new or updated node’s network_interface value.
[DEFAULT] notification_level = None (StrOpt) Specifies the minimum level for which to send notifications. If not set, no notifications will be sent. The default is for this option to be unset.
[agent] deploy_logs_collect = on_failure (StrOpt) Whether Ironic should collect the deployment logs on deployment failure (on_failure), always or never.
[agent] deploy_logs_local_path = /var/log/ironic/deploy (StrOpt) The path to the directory where the logs should be stored, used when the deploy_logs_storage_backend is configured to “local”.
[agent] deploy_logs_storage_backend = local (StrOpt) The name of the storage backend where the logs will be stored.
[agent] deploy_logs_swift_container = ironic_deploy_logs_container (StrOpt) The name of the Swift container to store the logs, used when the deploy_logs_storage_backend is configured to “swift”.
[agent] deploy_logs_swift_days_to_expire = 30 (IntOpt) Number of days before a log object is marked as expired in Swift. If None, the logs will be kept forever or until manually deleted. Used when the deploy_logs_storage_backend is configured to “swift”.
[api] ramdisk_heartbeat_timeout = 300 (IntOpt) Maximum interval (in seconds) for agent heartbeats.
[api] restrict_lookup = True (BoolOpt) Whether to restrict the lookup API to only nodes in certain states.
[audit] audit_map_file = /etc/ironic/ironic_api_audit_map.conf (StrOpt) Path to audit map file for ironic-api service. Used only when API audit is enabled.
[audit] enabled = False (BoolOpt) Enable auditing of API requests (for ironic-api service).
[audit] ignore_req_list = None (StrOpt) Comma separated list of Ironic REST API HTTP methods to be ignored during audit. For example: auditing will not be done on any GET or POST requests if this is set to “GET,POST”. It is used only when API audit is enabled.
[audit] namespace = openstack (StrOpt) namespace prefix for generated id
[audit_middleware_notifications] driver = None (StrOpt) The Driver to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop. If not specified, then value from oslo_messaging_notifications conf section is used.
[audit_middleware_notifications] topics = None (ListOpt) List of AMQP topics used for OpenStack notifications. If not specified, then value from oslo_messaging_notifications conf section is used.
[audit_middleware_notifications] transport_url = None (StrOpt) A URL representing messaging driver to use for notification. If not specified, we fall back to the same configuration used for RPC.
[deploy] continue_if_disk_secure_erase_fails = False (BoolOpt) Defines what to do if an ATA secure erase operation fails during cleaning in the Ironic Python Agent. If False, the cleaning operation will fail and the node will be put in clean failed state. If True, shred will be invoked and cleaning will continue.
[deploy] erase_devices_metadata_priority = None (IntOpt) Priority to run in-band clean step that erases metadata from devices, via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 99 for the GenericHardwareManager). If set to 0, will not run during cleaning.
[deploy] power_off_after_deploy_failure = True (BoolOpt) Whether to power off a node after deploy failure. Defaults to True.
[deploy] shred_final_overwrite_with_zeros = True (BoolOpt) Whether to write zeros to a node’s block devices after writing random data. This will write zeros to the device even when deploy.shred_random_overwrite_iterations is 0. This option is only used if a device could not be ATA Secure Erased. Defaults to True.
[deploy] shred_random_overwrite_iterations = 1 (IntOpt) During shred, overwrite all block devices N times with random data. This is only used if a device could not be ATA Secure Erased. Defaults to 1.
[drac] query_raid_config_job_status_interval = 120 (IntOpt) Interval (in seconds) between periodic RAID job status checks to determine whether the asynchronous RAID configuration was successfully finished or not.
[glance] auth_section = None (Opt) Config Section from which to load plugin specific options
[glance] auth_type = None (Opt) Authentication type to load
[glance] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPs connections.
[glance] certfile = None (StrOpt) PEM encoded client certificate cert file
[glance] insecure = False (BoolOpt) Verify HTTPS connections.
[glance] keyfile = None (StrOpt) PEM encoded client certificate key file
[glance] timeout = None (IntOpt) Timeout value for http requests
[ilo] ca_file = None (StrOpt) CA certificate file to validate iLO.
[ilo] default_boot_mode = auto (StrOpt) Default boot mode to be used in provisioning when “boot_mode” capability is not provided in the “properties/capabilities” of the node. The default is “auto” for backward compatibility. When “auto” is specified, default boot mode will be selected based on boot mode settings on the system.
[inspector] auth_section = None (Opt) Config Section from which to load plugin specific options
[inspector] auth_type = None (Opt) Authentication type to load
[inspector] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPs connections.
[inspector] certfile = None (StrOpt) PEM encoded client certificate cert file
[inspector] insecure = False (BoolOpt) Verify HTTPS connections.
[inspector] keyfile = None (StrOpt) PEM encoded client certificate key file
[inspector] timeout = None (IntOpt) Timeout value for http requests
[iscsi] portal_port = 3260 (PortOpt) The port number on which the iSCSI portal listens for incoming connections.
[metrics] agent_backend = noop (StrOpt) Backend for the agent ramdisk to use for metrics. Default possible backends are “noop” and “statsd”.
[metrics] agent_global_prefix = None (StrOpt) Prefix all metric names sent by the agent ramdisk with this value. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
[metrics] agent_prepend_host = False (BoolOpt) Prepend the hostname to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
[metrics] agent_prepend_host_reverse = True (BoolOpt) Split the prepended host value by ”.” and reverse it for metrics sent by the agent ramdisk (to better match the reverse hierarchical form of domain names).
[metrics] agent_prepend_uuid = False (BoolOpt) Prepend the node’s Ironic uuid to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
[metrics] backend = noop (StrOpt) Backend to use for the metrics system.
[metrics] global_prefix = None (StrOpt) Prefix all metric names with this value. By default, there is no global prefix. The format of metric names is [global_prefix.][host_name.]prefix.metric_name.
[metrics] prepend_host = False (BoolOpt) Prepend the hostname to all metric names. The format of metric names is [global_prefix.][host_name.]prefix.metric_name.
[metrics] prepend_host_reverse = True (BoolOpt) Split the prepended host value by ”.” and reverse it (to better match the reverse hierarchical form of domain names).
[metrics_statsd] agent_statsd_host = localhost (StrOpt) Host for the agent ramdisk to use with the statsd backend. This must be accessible from networks the agent is booted on.
[metrics_statsd] agent_statsd_port = 8125 (PortOpt) Port for the agent ramdisk to use with the statsd backend.
[metrics_statsd] statsd_host = localhost (StrOpt) Host for use with the statsd backend.
[metrics_statsd] statsd_port = 8125 (PortOpt) Port to use with the statsd backend.
[neutron] auth_section = None (Opt) Config Section from which to load plugin specific options
[neutron] auth_type = None (Opt) Authentication type to load
[neutron] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPs connections.
[neutron] certfile = None (StrOpt) PEM encoded client certificate cert file
[neutron] insecure = False (BoolOpt) Verify HTTPS connections.
[neutron] keyfile = None (StrOpt) PEM encoded client certificate key file
[neutron] port_setup_delay = 0 (IntOpt) Delay value to wait for Neutron agents to setup sufficient DHCP configuration for port.
[neutron] provisioning_network_uuid = None (StrOpt) Neutron network UUID for the ramdisk to be booted into for provisioning nodes. Required for “neutron” network interface.
[neutron] timeout = None (IntOpt) Timeout value for http requests
[oneview] enable_periodic_tasks = True (BoolOpt) Whether to enable the periodic tasks that let the OneView driver be aware when OneView hardware resources are taken and released by Ironic or OneView users, and proactively manage nodes in the clean failed state according to the Dynamic Allocation model of hardware resource allocation in OneView.
[oneview] periodic_check_interval = 300 (IntOpt) Period (in seconds) for periodic tasks to be executed when enable_periodic_tasks=True.
[pxe] ipxe_use_swift = False (BoolOpt) Download deploy images directly from swift using temporary URLs. If set to false (default), images are downloaded to the ironic-conductor node and served over its local HTTP server. Applicable only when ‘ipxe_enabled’ option is set to true.
[service_catalog] auth_section = None (Opt) Config Section from which to load plugin specific options
[service_catalog] auth_type = None (Opt) Authentication type to load
[service_catalog] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPs connections.
[service_catalog] certfile = None (StrOpt) PEM encoded client certificate cert file
[service_catalog] insecure = False (BoolOpt) Verify HTTPS connections.
[service_catalog] keyfile = None (StrOpt) PEM encoded client certificate key file
[service_catalog] timeout = None (IntOpt) Timeout value for http requests
[swift] auth_section = None (Opt) Config Section from which to load plugin specific options
[swift] auth_type = None (Opt) Authentication type to load
[swift] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPs connections.
[swift] certfile = None (StrOpt) PEM encoded client certificate cert file
[swift] insecure = False (BoolOpt) Verify HTTPS connections.
[swift] keyfile = None (StrOpt) PEM encoded client certificate key file
[swift] timeout = None (IntOpt) Timeout value for http requests
New default values
Option Previous default value New default value
[DEFAULT] my_ip 10.0.0.1 127.0.0.1
[neutron] url http://$my_ip:9696 None
[pxe] uefi_pxe_bootfile_name elilo.efi bootx64.efi
[pxe] uefi_pxe_config_template $pybasedir/drivers/modules/elilo_efi_pxe_config.template $pybasedir/drivers/modules/pxe_grub_config.template
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None
[agent] heartbeat_timeout [api] ramdisk_heartbeat_timeout
[deploy] erase_devices_iterations [deploy] shred_random_overwrite_iterations
[keystone_authtoken] cafile [glance] cafile
[keystone_authtoken] cafile [neutron] cafile
[keystone_authtoken] cafile [service_catalog] cafile
[keystone_authtoken] cafile [swift] cafile
[keystone_authtoken] cafile [inspector] cafile
[keystone_authtoken] certfile [service_catalog] certfile
[keystone_authtoken] certfile [neutron] certfile
[keystone_authtoken] certfile [glance] certfile
[keystone_authtoken] certfile [inspector] certfile
[keystone_authtoken] certfile [swift] certfile
[keystone_authtoken] insecure [glance] insecure
[keystone_authtoken] insecure [inspector] insecure
[keystone_authtoken] insecure [swift] insecure
[keystone_authtoken] insecure [service_catalog] insecure
[keystone_authtoken] insecure [neutron] insecure
[keystone_authtoken] keyfile [inspector] keyfile
[keystone_authtoken] keyfile [swift] keyfile
[keystone_authtoken] keyfile [neutron] keyfile
[keystone_authtoken] keyfile [glance] keyfile
[keystone_authtoken] keyfile [service_catalog] keyfile

The Bare Metal service is capable of managing and provisioning physical machines. The configuration file of this module is /etc/ironic/ironic.conf.

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Block Storage service

Introduction to the Block Storage service

The Block Storage service provides persistent block storage resources that Compute instances can consume. This includes secondary attached storage similar to the Amazon Elastic Block Storage (EBS) offering. In addition, you can write images to a Block Storage device for Compute to use as a bootable persistent instance.

The Block Storage service differs slightly from the Amazon EBS offering. The Block Storage service does not provide a shared storage solution like NFS. With the Block Storage service, you can attach a device to only one instance.

The Block Storage service provides:

  • cinder-api - a WSGI app that authenticates and routes requests throughout the Block Storage service. It supports the OpenStack APIs only, although there is a translation that can be done through Compute’s EC2 interface, which calls in to the Block Storage client.
  • cinder-scheduler - schedules and routes requests to the appropriate volume service. Depending upon your configuration, this may be simple round-robin scheduling to the running volume services, or it can be more sophisticated through the use of the Filter Scheduler. The Filter Scheduler is the default and enables filters on things like Capacity, Availability Zone, Volume Types, and Capabilities as well as custom filters.
  • cinder-volume - manages Block Storage devices, specifically the back-end devices themselves.
  • cinder-backup - provides a means to back up a Block Storage volume to OpenStack Object Storage (swift).

The Block Storage service contains the following components:

  • Back-end Storage Devices - the Block Storage service requires some form of back-end storage that the service is built on. The default implementation is to use LVM on a local volume group named “cinder-volumes.” In addition to the base driver implementation, the Block Storage service also provides the means to add support for other storage devices to be utilized such as external RAID arrays or other storage appliances. These back-end storage devices may have custom block sizes when using KVM or QEMU as the hypervisor.

  • Users and Tenants (Projects) - the Block Storage service can be used by many different cloud computing consumers or customers (tenants on a shared system), using role-based access assignments. Roles control the actions that a user is allowed to perform. In the default configuration, most actions do not require a particular role, but this can be configured by the system administrator in the appropriate policy.json file that maintains the rules. A user’s access to particular volumes is limited by tenant, but the user name and password are assigned per user. Key pairs granting access to a volume are enabled per user, but quotas to control resource consumption across available hardware resources are per tenant.

    For tenants, quota controls are available to limit:

    • The number of volumes that can be created.
    • The number of snapshots that can be created.
    • The total number of GBs allowed per tenant (shared between snapshots and volumes).

    You can revise the default quota values with the Block Storage CLI, so the limits placed by quotas are editable by admin users.

  • Volumes, Snapshots, and Backups - the basic resources offered by the Block Storage service are volumes, snapshots (which are derived from volumes), and volume backups:

    • Volumes - allocated block storage resources that can be attached to instances as secondary storage or they can be used as the root store to boot instances. Volumes are persistent R/W block storage devices most commonly attached to the compute node through iSCSI.
    • Snapshots - a read-only point in time copy of a volume. The snapshot can be created from a volume that is currently in use (through the use of --force True) or in an available state. The snapshot can then be used to create a new volume through create from snapshot.
    • Backups - an archived copy of a volume currently stored in Object Storage (swift).

Volume drivers

Ceph RADOS Block Device (RBD)

If you use KVM or QEMU as your hypervisor, you can configure the Compute service to use Ceph RADOS block devices (RBD) for volumes.

Ceph is a massively scalable, open source, distributed storage system. It is composed of an object store, a block store, and a POSIX-compliant distributed file system. The platform can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system. Due to its open-source nature, you can install and use this portable storage platform in public or private clouds.

Figure: Ceph architecture

RADOS

Ceph is based on Reliable Autonomic Distributed Object Store (RADOS). RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:

Object Storage Device (OSD) Daemon
The storage daemon for the RADOS service, which interacts with the OSD (physical or logical storage unit for your data). You must run this daemon on each server in your cluster. Each OSD can have an associated hard disk drive. For performance purposes, pool your hard disk drives with RAID arrays, logical volume management (LVM), or B-tree file system (Btrfs) pooling. By default, the following pools are created: data, metadata, and RBD.
Meta-Data Server (MDS)
Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server.
Monitor (MON)
A lightweight daemon that handles all communications with external applications and clients. It also provides a consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. It checks the state and the consistency of the data. In an ideal setup, you should run at least three ceph-mon daemons on separate servers.

Ceph developers recommend XFS for production deployments, Btrfs for testing, development, and any non-critical deployments. Btrfs has the correct feature set and roadmap to serve Ceph in the long-term, but XFS and ext4 provide the necessary stability for today’s deployments.

Note

If using Btrfs, ensure that you use the correct version (see Ceph Dependencies).

For more information about usable file systems, see ceph.com/ceph-storage/file-system/.

Ways to store, use, and expose data

To store and access your data, you can use the following storage systems:

RADOS
Use as an object, default storage mechanism.
RBD
Use as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object store data objects. It is compatible with the KVM RBD image.
CephFS
Use as a file, POSIX-compliant file system.

Ceph exposes RADOS; you can access it through the following interfaces:

RADOS Gateway
OpenStack Object Storage and Amazon S3-compatible RESTful interface (see RADOS Gateway).
librados
and its related C/C++ bindings
RBD and QEMU-RBD
Linux kernel and QEMU block devices that stripe data across multiple objects.
Driver options

The following table contains the configuration options supported by the Ceph RADOS Block Device driver.

Note

The volume_tmp_dir option has been deprecated and replaced by image_conversion_dir.

Description of Ceph storage configuration options
Configuration option = Default value Description
[DEFAULT]  
rados_connect_timeout = -1 (Integer) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used.
rados_connection_interval = 5 (Integer) Interval value (in seconds) between connection retries to ceph cluster.
rados_connection_retries = 3 (Integer) Number of retries if connection to ceph cluster failed.
rbd_ceph_conf = (String) Path to the ceph configuration file
rbd_cluster_name = ceph (String) The name of ceph cluster
rbd_flatten_volume_from_snapshot = False (Boolean) Flatten volumes created from snapshots to remove dependency from volume to snapshot
rbd_max_clone_depth = 5 (Integer) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning.
rbd_pool = rbd (String) The RADOS pool where rbd volumes are stored
rbd_secret_uuid = None (String) The libvirt uuid of the secret for the rbd_user volumes
rbd_store_chunk_size = 4 (Integer) Volumes will be chunked into objects of this size (in megabytes).
rbd_user = None (String) The RADOS client name for accessing rbd volumes - only set when using cephx authentication
volume_tmp_dir = None (String) Directory where temporary image files are stored when the volume driver does not write them directly to the volume. Warning: this option is now deprecated, please use image_conversion_dir instead.
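Putting these options together, a cinder.conf back end for Ceph RBD might look roughly like the following. This is a sketch only: the pool, user, and secret UUID are placeholders, and it assumes the standard cinder.volume.drivers.rbd.RBDDriver class.

volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# libvirt secret UUID holding the cephx key for rbd_user (placeholder)
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5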
GlusterFS driver

GlusterFS is an open-source scalable distributed file system that is able to grow to petabytes and beyond in size. More information can be found on Gluster’s homepage.

This driver enables the use of GlusterFS in a similar fashion as NFS. It supports basic volume operations, including snapshot and clone.

To use Block Storage with GlusterFS, first set the volume_driver in the cinder.conf file:

volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver

The following table contains the configuration options supported by the GlusterFS driver.

Description of GlusterFS storage configuration options
Configuration option = Default value Description
[DEFAULT]  
glusterfs_mount_point_base = $state_path/mnt (String) Base dir containing mount points for gluster shares.
glusterfs_shares_config = /etc/cinder/glusterfs_shares (String) File with the list of available gluster shares
nas_volume_prov_type = thin (String) Provisioning type that will be used when creating volumes.
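A minimal GlusterFS configuration in cinder.conf, assuming the default shares file and mount base shown above, might therefore be:

volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = $state_path/mnt
nas_volume_prov_type = thin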
LVM

The default volume back end uses local volumes managed by LVM.

This driver supports different transport protocols to attach volumes, currently iSCSI and iSER.

Set the following in your cinder.conf configuration file, and use the following options to configure for iSCSI transport:

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi

Use the following options to configure for the iSER transport:

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iser
Description of LVM configuration options
Configuration option = Default value Description
[DEFAULT]  
lvm_conf_file = /etc/cinder/lvm.conf (String) LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify ‘None’ to not use a conf file even if one exists).
lvm_max_over_subscription_ratio = 1.0 (Floating point) max_over_subscription_ratio setting for the LVM driver. If set, this takes precedence over the general max_over_subscription_ratio option. If None, the general option is used.
lvm_mirrors = 0 (Integer) If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space
lvm_suppress_fd_warnings = False (Boolean) Suppress leaked file descriptor warnings in LVM commands.
lvm_type = default (String) Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported.
volume_group = cinder-volumes (String) Name for the VG that will contain exported volumes
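
As a minimal sketch, assuming a volume group named cinder-volumes already exists, an iSCSI LVM back-end section combining the options above might look like the following; the section name is illustrative:

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm
volume_group = cinder-volumes
iscsi_protocol = iscsi
lvm_type = default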

Caution

When extending an existing volume which has a linked snapshot, the related logical volume is deactivated. This logical volume is automatically reactivated unless auto_activation_volume_list is defined in the LVM configuration file lvm.conf. See the lvm.conf file for more information.

If auto-activated volumes are restricted, then add the cinder volume group to this list:

auto_activation_volume_list = [ "existingVG", "cinder-volumes" ]

This note does not apply for thinly provisioned volumes because they do not need to be deactivated.

NFS driver

The Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. An NFS server exports one or more of its file systems, known as shares. An NFS client can mount these exported shares on its own file system. You can perform file actions on this mounted remote file system as if the file system were local.

How the NFS driver works

The NFS driver, and other drivers based on it, work quite differently than a traditional block storage driver.

The NFS driver does not actually allow an instance to access a storage device at the block level. Instead, files are created on an NFS share and mapped to instances, which emulates a block device. This works in a similar way to QEMU, which stores instances in the /var/lib/nova/instances directory.

How to use the NFS driver

Creating an NFS server is outside the scope of this document.

Configure with one NFS server

This example assumes access to the following NFS server and mount point:

  • 192.168.1.200:/storage

This example demonstrates the usage of this driver with one NFS server.

Set the nas_host option to the IP address or host name of your NFS server, and the nas_share_path option to the NFS export path:

nas_host = 192.168.1.200
nas_share_path = /storage
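
Putting these options together, a complete back-end section might look like the following sketch. The section name is illustrative, and the NfsDriver class path shown is the one commonly used for this driver; verify it against your release:

[nfs-1]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nas_host = 192.168.1.200
nas_share_path = /storage
nfs_mount_point_base = $state_path/mnt
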
Configure with multiple NFS servers

Note

You can use multiple NFS servers with the cinder multi back-end feature. Configure the enabled_backends option with multiple values, and set the nas_host and nas_share_path options for each back end as described above.
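
A hedged sketch of that multi back-end approach follows; the back-end section names and the NfsDriver class path are illustrative:

[DEFAULT]
enabled_backends = nfs-1,nfs-2

[nfs-1]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nas_host = 192.168.1.200
nas_share_path = /storage

[nfs-2]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nas_host = 192.168.1.201
nas_share_path = /storage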

The example below demonstrates another method of using this driver with multiple NFS servers. Multiple servers are not required; one is usually enough.

This example assumes access to the following NFS servers and mount points:

  • 192.168.1.200:/storage
  • 192.168.1.201:/storage
  • 192.168.1.202:/storage
  1. Add your list of NFS servers to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/shares.txt file, then:

    # cat /etc/cinder/shares.txt
    192.168.1.200:/storage
    192.168.1.201:/storage
    192.168.1.202:/storage
    

    Comments are allowed in this file. They begin with a #.

  2. Configure the nfs_mount_point_base option. This is a directory where cinder-volume mounts all NFS shares stored in the shares.txt file. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.

  3. Start the cinder-volume service. /var/lib/cinder/nfs should now contain a directory for each NFS share specified in the shares.txt file. The name of each directory is a hashed name:

    # ls /var/lib/cinder/nfs/
    ...
    46c5db75dc3a3a50a10bfd1a456a9f3f
    ...
    
  4. You can now create volumes as you normally would:

    $ nova volume-create --display-name myvol 5
    # ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
    volume-a8862558-e6d6-4648-b5df-bb84f31c8935
    

This volume can also be attached and deleted just like other volumes. However, snapshotting is not supported.

NFS driver notes
  • cinder-volume manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to only have one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service is needed as well as potentially more than one NFS server.
  • Because data is stored in a file and not actually on a block storage device, you might not see the same IO performance as you would with a traditional block storage driver. Please test accordingly.
  • Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files.

Note

Regular IO flushing and syncing still applies.

Sheepdog driver

Sheepdog is an open-source distributed storage system that provides a virtual storage pool utilizing the internal disks of commodity servers.

Sheepdog scales to several hundred nodes, and has powerful virtual disk management features like snapshotting, cloning, rollback, and thin provisioning.

More information can be found on Sheepdog Project.

This driver enables the use of Sheepdog through Qemu/KVM.

Supported operations

Sheepdog driver supports these operations:

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
Configuration

Set the following option in the cinder.conf file:

volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver

The following table contains the configuration options supported by the Sheepdog driver:

Description of Sheepdog driver configuration options
Configuration option = Default value Description
[DEFAULT]  
sheepdog_store_address = 127.0.0.1 (String) IP address of sheep daemon.
sheepdog_store_port = 7000 (Port number) Port of sheep daemon.
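
For reference, a minimal back-end section using the options above might look like the following sketch; the section name is illustrative and the address and port simply repeat the defaults:

[sheepdog]
volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver
volume_backend_name = sheepdog
sheepdog_store_address = 127.0.0.1
sheepdog_store_port = 7000
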
SambaFS driver

There is a volume back-end for Samba filesystems. Set the following in your cinder.conf file, and use the following options to configure it.

Note

The SambaFS driver requires qemu-img version 1.7 or higher on Linux nodes, and qemu-img version 1.6 or higher on Windows nodes.

volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
Description of Samba volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
smbfs_allocation_info_file_path = $state_path/allocation_data (String) The path of the automatically generated file containing information about volume disk space allocation.
smbfs_default_volume_format = qcow2 (String) Default format that will be used when creating volumes if no volume format is specified.
smbfs_mount_options = noperm,file_mode=0775,dir_mode=0775 (String) Mount options passed to the smbfs client. See mount.cifs man page for details.
smbfs_mount_point_base = $state_path/mnt (String) Base dir containing mount points for smbfs shares.
smbfs_oversub_ratio = 1.0 (Floating point) This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid.
smbfs_shares_config = /etc/cinder/smbfs_shares (String) File with the list of available smbfs shares.
smbfs_sparsed_volumes = True (Boolean) Create volumes as sparsed files which take no space rather than regular files when using raw format, in which case volume creation takes a lot of time.
smbfs_used_ratio = 0.95 (Floating point) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
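
As an illustrative sketch only, the smbfs_shares_config file generally lists one share per line, optionally followed by mount options; the share path and credentials below are assumptions and should be checked against your deployment:

//192.168.1.210/cinder_share -o username=cinder,password=CINDER_SHARE_PASS

A corresponding back-end section in cinder.conf might then look like this (the section name is illustrative):

[smbfs]
volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
volume_backend_name = smbfs
smbfs_shares_config = /etc/cinder/smbfs_shares
smbfs_mount_point_base = $state_path/mnt
smbfs_default_volume_format = qcow2
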
Blockbridge EPS
Introduction

Blockbridge is software that transforms commodity infrastructure into secure multi-tenant storage that operates as a programmable service. It provides automatic encryption, secure deletion, quality of service (QoS), replication, and programmable security capabilities on your choice of hardware. Blockbridge uses micro-segmentation to provide isolation that allows you to concurrently operate OpenStack, Docker, and bare-metal workflows on shared resources. When used with OpenStack, isolated management domains are dynamically created on a per-project basis. All volumes and clones, within and between projects, are automatically cryptographically isolated and implement secure deletion.

Architecture reference

Blockbridge architecture

_images/bb-cinder-fig1.png
Control paths

The Blockbridge driver is packaged with the core distribution of OpenStack. Operationally, it executes in the context of the Block Storage service. The driver communicates with an OpenStack-specific API provided by the Blockbridge EPS platform. Blockbridge optionally communicates with Identity, Compute, and Block Storage services.

Block storage API

Blockbridge is API driven software-defined storage. The system implements a native HTTP API that is tailored to the specific needs of OpenStack. Each Block Storage service operation maps to a single back-end API request that provides ACID semantics. The API is specifically designed to reduce, if not eliminate, the possibility of inconsistencies between the Block Storage service and external storage infrastructure in the event of hardware, software or data center failure.

Extended management

OpenStack users may utilize Blockbridge interfaces to manage replication, auditing, statistics, and performance information on a per-project and per-volume basis. In addition, they can manage low-level data security functions including verification of data authenticity and encryption key delegation. Native integration with the Identity Service allows tenants to use a single set of credentials. Integration with Block storage and Compute services provides dynamic metadata mapping when using Blockbridge management APIs and tools.

Attribute-based provisioning

Blockbridge organizes resources using descriptive identifiers called attributes. Attributes are assigned by administrators of the infrastructure. They are used to describe the characteristics of storage in an application-friendly way. Applications construct queries that describe storage provisioning constraints and the Blockbridge storage stack assembles the resources as described.

Any given instance of a Blockbridge volume driver specifies a query for resources. For example, a query could specify '+ssd +10.0.0.0 +6nines -production iops.reserve=1000 capacity.reserve=30%'. This query is satisfied by selecting SSD resources, accessible on the 10.0.0.0 network, with high resiliency, for non-production workloads, with guaranteed IOPS of 1000 and a storage reservation for 30% of the volume capacity specified at create time. Queries and parameters are completely administrator defined: they reflect the layout, resource, and organizational goals of a specific deployment.

Supported operations
  • Create, delete, clone, attach, and detach volumes
  • Create and delete volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Extend a volume
  • Get volume statistics
Supported protocols

Blockbridge provides iSCSI access to storage. A unique iSCSI data fabric is programmatically assembled when a volume is attached to an instance. A fabric is disassembled when a volume is detached from an instance. Each volume is an isolated SCSI device that supports persistent reservations.

Configuration steps
Create an authentication token

Whenever possible, avoid using password-based authentication. Even if you have created a role-restricted administrative user via Blockbridge, token-based authentication is preferred. You can generate persistent authentication tokens using the Blockbridge command-line tool as follows:

$ bb -H bb-mn authorization create --notes "OpenStack" --restrict none
Authenticating to https://bb-mn/api

Enter user or access token: system
Password for system:
Authenticated; token expires in 3599 seconds.

== Authorization: ATH4762894C40626410
notes                 OpenStack
serial                ATH4762894C40626410
account               system (ACT0762594C40626440)
user                  system (USR1B62094C40626440)
enabled               yes
created at            2015-10-24 22:08:48 +0000
access type           online
token suffix          xaKUy3gw
restrict              none

== Access Token
access token          1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw

*** Remember to record your access token!
Create volume type

Before configuring and enabling the Blockbridge volume driver, register an OpenStack volume type and associate it with a volume_backend_name. In this example, a volume type, ‘Production’, is associated with the volume_backend_name ‘blockbridge_prod’:

$ cinder type-create Production
$ cinder type-key Production volume_backend_name=blockbridge_prod
Specify volume driver

Configure the Blockbridge volume driver in /etc/cinder/cinder.conf. Your volume_backend_name must match the value specified in the cinder type-key command in the previous step.

volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
volume_backend_name = blockbridge_prod
Specify API endpoint and authentication

Configure the API endpoint and authentication. The following example uses an authentication token. You must create your own as described in Create an authentication token.

blockbridge_api_host = [ip or dns of management cluster]
blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
Specify resource query

By default, a single pool is configured (implied) with a default resource query of '+openstack'. Within Blockbridge, datastore resources that advertise the ‘openstack’ attribute will be selected to fulfill OpenStack provisioning requests. If you prefer a more specific query, define a custom pool configuration.

blockbridge_pools = Production: +production +qos iops.reserve=5000

Pools support storage systems that offer multiple classes of service. You may wish to configure multiple pools to implement more sophisticated scheduling capabilities.
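
Because blockbridge_pools is a dictionary option, multiple pools can in principle be exposed from a single back end by listing additional name: query pairs separated by commas. The pool names and queries below are illustrative, and this multi-entry form is an assumption; the full configuration example later in this section instead defines one pool per back end:

blockbridge_pools = Production: +production +qos iops.reserve=5000, Development: +development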

Configuration options
Description of BlockBridge EPS volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
blockbridge_api_host = None (String) IP address/hostname of Blockbridge API.
blockbridge_api_port = None (Integer) Override HTTPS port to connect to Blockbridge API server.
blockbridge_auth_password = None (String) Blockbridge API password (for auth scheme ‘password’)
blockbridge_auth_scheme = token (String) Blockbridge API authentication scheme (token or password)
blockbridge_auth_token = None (String) Blockbridge API token (for auth scheme ‘token’)
blockbridge_auth_user = None (String) Blockbridge API user (for auth scheme ‘password’)
blockbridge_default_pool = None (String) Default pool name if unspecified.
blockbridge_pools = {'OpenStack': '+openstack'} (Dict) Defines the set of exposed pools and their associated backend query strings
Configuration example

cinder.conf example file

[DEFAULT]
enabled_backends = bb_devel,bb_prod

[bb_prod]
volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
volume_backend_name = blockbridge_prod
blockbridge_api_host = [ip or dns of management cluster]
blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
blockbridge_pools = Production: +production +qos iops.reserve=5000

[bb_devel]
volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
volume_backend_name = blockbridge_devel
blockbridge_api_host = [ip or dns of management cluster]
blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
blockbridge_pools = Development: +development
Multiple volume types

Volume types are exposed to tenants, pools are not. To offer multiple classes of storage to OpenStack tenants, you should define multiple volume types. Simply repeat the process above for each desired type. Be sure to specify a unique volume_backend_name and pool configuration for each type. The cinder.conf example included with this documentation illustrates configuration of multiple types.

Testing resources

Blockbridge is freely available for testing purposes and deploys in seconds as a Docker container. This is the same container used to run continuous integration for OpenStack. For more information visit www.blockbridge.io.

CloudByte volume driver
CloudByte Block Storage driver configuration
Description of CloudByte volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
cb_account_name = None (String) CloudByte storage specific account name. This maps to a project name in OpenStack.
cb_add_qosgroup = {'latency': '15', 'iops': '10', 'graceallowed': 'false', 'iopscontrol': 'true', 'memlimit': '0', 'throughput': '0', 'tpcontrol': 'false', 'networkspeed': '0'} (Dict) These values will be used for CloudByte storage’s addQos API call.
cb_apikey = None (String) Driver will use this API key to authenticate against the CloudByte storage’s management interface.
cb_auth_group = None (String) This corresponds to the discovery authentication group in CloudByte storage. Chap users are added to this group. Driver uses the first user found for this group. Default value is None.
cb_confirm_volume_create_retries = 3 (Integer) Will confirm a successful volume creation in CloudByte storage by making this many number of attempts.
cb_confirm_volume_create_retry_interval = 5 (Integer) A retry value in seconds. Will be used by the driver to check if volume creation was successful in CloudByte storage.
cb_confirm_volume_delete_retries = 3 (Integer) Will confirm a successful volume deletion in CloudByte storage by making this many number of attempts.
cb_confirm_volume_delete_retry_interval = 5 (Integer) A retry value in seconds. Will be used by the driver to check if volume deletion was successful in CloudByte storage.
cb_create_volume = {'compression': 'off', 'deduplication': 'off', 'blocklength': '512B', 'sync': 'always', 'protocoltype': 'ISCSI', 'recordsize': '16k'} (Dict) These values will be used for CloudByte storage’s createVolume API call.
cb_tsm_name = None (String) This corresponds to the name of Tenant Storage Machine (TSM) in CloudByte storage. A volume will be created in this TSM.
cb_update_file_system = compression, sync, noofcopies, readonly (List) These values will be used for CloudByte storage’s updateFileSystem API call.
cb_update_qos_group = iops, latency, graceallowed (List) These values will be used for CloudByte storage’s updateQosGroup API call.
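
For orientation only, a back-end section would combine the options above with the generic SAN connection settings. The section name and placeholder values below are illustrative; the exact volume_driver class path, and whether san_ip is used for the management interface, should be taken from the CloudByte documentation:

[cloudbyte]
volume_driver = <CloudByte iSCSI driver class path>
volume_backend_name = cloudbyte
san_ip = <CloudByte management IP>        # assumption: generic SAN host option
cb_apikey = <API key from the CloudByte management interface>
cb_account_name = <CloudByte account mapped to the OpenStack project>
cb_tsm_name = <Tenant Storage Machine name>
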
Coho Data volume driver

The Coho DataStream Scale-Out Storage allows your Block Storage service to scale seamlessly. The architecture consists of commodity storage servers with SDN ToR switches. Leveraging an SDN OpenFlow controller allows you to scale storage horizontally, while avoiding storage and network bottlenecks through intelligent load balancing and parallelized workloads. High-performance PCIe NVMe flash, paired with traditional hard disk drives (HDD) or solid-state drives (SSD), delivers low-latency performance even with highly mixed workloads in large-scale environments.

Coho Data’s storage features include real-time instance level granularity performance and capacity reporting via API or UI, and single-IP storage endpoint access.

Supported operations
  • Create, delete, attach, detach, retype, clone, and extend volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy a volume to an image.
  • Copy an image to a volume.
  • Create a thin provisioned volume.
  • Get volume statistics.
Coho Data QoS support

QoS support for the Coho Data driver includes the ability to set the following capabilities in the OpenStack Block Storage API cinder.api.contrib.qos_specs_manage QoS specs extension module:

  • maxIOPS - The maximum number of IOPS allowed for this volume.
  • maxMBS - The maximum throughput allowed for this volume.

The QoS keys above must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:

$ cinder help qos-create

$ cinder help qos-key

$ cinder help qos-associate
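
For example, a hedged sketch of creating a QoS spec with the keys above and associating it with a volume type (the spec name and IDs are placeholders):

$ cinder qos-create coho-qos maxIOPS=5000 maxMBS=500
$ cinder qos-associate <qos-spec-id> <volume-type-id>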

Note

If you change a volume type with QoS to a new volume type without QoS, the QoS configuration settings will be removed.

System requirements
  • NFS client on the Block storage controller.
Coho Data Block Storage driver configuration
  1. Create cinder volume type.

    $ cinder type-create coho-1
    
  2. Edit the OpenStack Block Storage service configuration file. The following sample, /etc/cinder/cinder.conf, configuration lists the relevant settings for a typical Block Storage service using a single Coho Data storage:

    [DEFAULT]
    enabled_backends = coho-1
    default_volume_type = coho-1
    
    [coho-1]
    volume_driver = cinder.volume.drivers.coho.CohoDriver
    volume_backend_name = coho-1
    nfs_shares_config = /etc/cinder/coho_shares
    nas_secure_file_operations = 'false'
    
  3. Add your list of Coho Datastream NFS addresses to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/coho_shares, then:

    $ cat /etc/cinder/coho_shares
    <coho-nfs-ip>:/<export-path>
    
  4. Restart the cinder-volume service to enable the Coho Data driver.

Description of Coho volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
coho_rpc_port = 2049 (Integer) RPC port to connect to Coho Data MicroArray
CoprHD FC, iSCSI, and ScaleIO drivers

CoprHD is an open source software-defined storage controller and API platform. It enables policy-based management and cloud automation of storage resources for block, object and file storage providers. For more details, see CoprHD.

EMC ViPR Controller is the commercial offering of CoprHD. These same volume drivers can also be considered as EMC ViPR Controller Block Storage drivers.

System requirements

CoprHD version 3.0 is required. Refer to the CoprHD documentation for installation and configuration instructions.

If you are using these drivers to integrate with EMC ViPR Controller, use EMC ViPR Controller 3.0.

Supported operations

The following operations are supported:

  • Create, delete, attach, detach, retype, clone, and extend volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy a volume to an image.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Retype a volume.
  • Get volume statistics.
  • Create, delete, and update consistency groups.
  • Create and delete consistency group snapshots.
Driver options

The following table contains the configuration options specific to the CoprHD volume driver.

Description of Coprhd volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
coprhd_emulate_snapshot = False (Boolean) True | False to indicate if the storage array in CoprHD is VMAX or VPLEX
coprhd_hostname = None (String) Hostname for the CoprHD Instance
coprhd_password = None (String) Password for accessing the CoprHD Instance
coprhd_port = 4443 (Port number) Port for the CoprHD Instance
coprhd_project = None (String) Project to utilize within the CoprHD Instance
coprhd_scaleio_rest_gateway_host = None (String) Rest Gateway IP or FQDN for Scaleio
coprhd_scaleio_rest_gateway_port = 4984 (Port number) Rest Gateway Port for Scaleio
coprhd_scaleio_rest_server_password = None (String) Rest Gateway Password
coprhd_scaleio_rest_server_username = None (String) Username for Rest Gateway
coprhd_tenant = None (String) Tenant to utilize within the CoprHD Instance
coprhd_username = None (String) Username for accessing the CoprHD Instance
coprhd_varray = None (String) Virtual Array to utilize within the CoprHD Instance
scaleio_server_certificate_path = None (String) Server certificate path
scaleio_verify_server_certificate = False (Boolean) verify server certificate
Preparation

This involves setting up the CoprHD environment first and then configuring the CoprHD Block Storage driver.

CoprHD

The CoprHD environment must meet specific configuration requirements to support the OpenStack Block Storage driver.

  • CoprHD users must be assigned a Tenant Administrator role or a Project Administrator role for the Project being used. CoprHD roles are configured by CoprHD Security Administrators. Consult the CoprHD documentation for details.
  • A CoprHD system administrator must execute the following configurations using the CoprHD UI, CoprHD API, or CoprHD CLI:
    • Create CoprHD virtual array
    • Create CoprHD virtual storage pool
    • Virtual Array designated for iSCSI driver must have an IP network created with appropriate IP storage ports
    • Designated tenant for use
    • Designated project for use

Note

Use each back end to manage one virtual array and one virtual storage pool. However, the user can have multiple instances of CoprHD Block Storage driver, sharing the same virtual array and virtual storage pool.

  • A typical CoprHD virtual storage pool will have the following values specified:
    • Storage Type: Block
    • Provisioning Type: Thin
    • Protocol: iSCSI/Fibre Channel(FC)/ScaleIO
    • Multi-Volume Consistency: DISABLED OR ENABLED
    • Maximum Native Snapshots: A value greater than 0 allows the OpenStack user to take Snapshots
CoprHD drivers - Single back end

cinder.conf

  1. Modify /etc/cinder/cinder.conf by adding the following lines, substituting values for your environment:

    [coprhd-iscsi]
    volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
    volume_backend_name = coprhd-iscsi
    coprhd_hostname = <CoprHD-Host-Name>
    coprhd_port = 4443
    coprhd_username = <username>
    coprhd_password = <password>
    coprhd_tenant = <CoprHD-Tenant-Name>
    coprhd_project = <CoprHD-Project-Name>
    coprhd_varray = <CoprHD-Virtual-Array-Name>
    coprhd_emulate_snapshot = True or False, True if the CoprHD vpool has VMAX or VPLEX as the backing storage
    
  2. If you use the ScaleIO back end, add the following lines:

    coprhd_scaleio_rest_gateway_host = <IP or FQDN>
    coprhd_scaleio_rest_gateway_port = 443
    coprhd_scaleio_rest_server_username = <username>
    coprhd_scaleio_rest_server_password = <password>
    scaleio_verify_server_certificate = True or False
    scaleio_server_certificate_path = <path-of-certificate-for-validation>
    
  3. Specify the driver using the enabled_backends parameter:

    enabled_backends = coprhd-iscsi
    

    Note

    To utilize the Fibre Channel driver, replace the volume_driver line above with:

    volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver
    

    Note

    To utilize the ScaleIO driver, replace the volume_driver line above with:

    volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver
    

    Note

    Set coprhd_emulate_snapshot to True if the CoprHD vpool has VMAX or VPLEX as the back-end storage. For these types of back-end storage, when a user tries to create a snapshot, an actual volume gets created in the back end.

  4. Modify the rpc_response_timeout value in /etc/cinder/cinder.conf to at least 5 minutes. If this entry does not already exist within the cinder.conf file, add it in the [DEFAULT] section:

    [DEFAULT]
    ...
    rpc_response_timeout = 300
    
  5. Now, restart the cinder-volume service.

Volume type creation and extra specs

  1. Create OpenStack volume types:

    $ openstack volume type create <typename>
    
  2. Map the OpenStack volume type to the CoprHD virtual pool:

    $ openstack volume type set <typename> --property CoprHD:VPOOL=<CoprHD-PoolName>
    
  3. Map the volume type created to appropriate back-end driver:

    $ openstack volume type set <typename> --property volume_backend_name=<VOLUME_BACKEND_DRIVER>
    
CoprHD drivers - Multiple back-ends

cinder.conf

  1. Add or modify the following entries if you are planning to use multiple back-end drivers:

    enabled_backends = coprhddriver-iscsi,coprhddriver-fc,coprhddriver-scaleio
    
  2. Add the following at the end of the file:

    [coprhddriver-iscsi]
    volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
    volume_backend_name = EMCCoprHDISCSIDriver
    coprhd_hostname = <CoprHD Host Name>
    coprhd_port = 4443
    coprhd_username = <username>
    coprhd_password = <password>
    coprhd_tenant = <CoprHD-Tenant-Name>
    coprhd_project = <CoprHD-Project-Name>
    coprhd_varray = <CoprHD-Virtual-Array-Name>
    
    
    [coprhddriver-fc]
    volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver
    volume_backend_name = EMCCoprHDFCDriver
    coprhd_hostname = <CoprHD Host Name>
    coprhd_port = 4443
    coprhd_username = <username>
    coprhd_password = <password>
    coprhd_tenant = <CoprHD-Tenant-Name>
    coprhd_project = <CoprHD-Project-Name>
    coprhd_varray = <CoprHD-Virtual-Array-Name>
    
    
    [coprhddriver-scaleio]
    volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver
    volume_backend_name = EMCCoprHDScaleIODriver
    coprhd_hostname = <CoprHD Host Name>
    coprhd_port = 4443
    coprhd_username = <username>
    coprhd_password = <password>
    coprhd_tenant = <CoprHD-Tenant-Name>
    coprhd_project = <CoprHD-Project-Name>
    coprhd_varray = <CoprHD-Virtual-Array-Name>
    coprhd_scaleio_rest_gateway_host = <ScaleIO Rest Gateway>
    coprhd_scaleio_rest_gateway_port = 443
    coprhd_scaleio_rest_server_username = <rest gateway username>
    coprhd_scaleio_rest_server_password = <rest gateway password>
    scaleio_verify_server_certificate = True or False
    scaleio_server_certificate_path = <certificate path>
    
  3. Restart the cinder-volume service.

Volume type creation and extra specs

Set up the volume types and the volume-type to volume-backend association:

$ openstack volume type create "CoprHD High Performance ISCSI"
$ openstack volume type set "CoprHD High Performance ISCSI" --property CoprHD:VPOOL="High Performance ISCSI"
$ openstack volume type set "CoprHD High Performance ISCSI" --property volume_backend_name= EMCCoprHDISCSIDriver

$ openstack volume type create "CoprHD High Performance FC"
$ openstack volume type set "CoprHD High Performance FC" --property CoprHD:VPOOL="High Performance FC"
$ openstack volume type set "CoprHD High Performance FC" --property volume_backend_name= EMCCoprHDFCDriver

$ openstack volume type create "CoprHD performance SIO"
$ openstack volume type set "CoprHD performance SIO" --property CoprHD:VPOOL="Scaled Perf"
$ openstack volume type set "CoprHD performance SIO" --property volume_backend_name= EMCCoprHDScaleIODriver
ISCSI driver notes
  • The compute host must be added to the CoprHD along with its ISCSI initiator.
  • The ISCSI initiator must be associated with IP network on the CoprHD.
FC driver notes
  • The compute host must be attached to a VSAN or fabric discovered by CoprHD.
  • There is no need to perform any SAN zoning operations. CoprHD will perform the necessary operations automatically as part of the provisioning process.
ScaleIO driver notes
  • Install the ScaleIO SDC on the compute host.

  • The compute host must be added as the SDC to the ScaleIO MDS using the following command, where the list of MDM IPs starts with the primary MDM and is comma-separated:

    /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip <list of MDM IPs>

    Example:

    /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.247.78.45,10.247.78.46,10.247.78.47
    

This step has to be repeated whenever the SDC (compute host in this case) is rebooted.

Consistency group configuration

To enable the support of consistency group and consistency group snapshot operations, use a text editor to edit the file /etc/cinder/policy.json and change the values of the fields below as specified. After editing the file, restart the c-api service:

"consistencygroup:create" : "",
"consistencygroup:delete": "",
"consistencygroup:get": "",
"consistencygroup:get_all": "",
"consistencygroup:update": "",
"consistencygroup:create_cgsnapshot" : "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",
Names of resources in back-end storage

All resources, such as volumes, consistency groups, snapshots, and consistency group snapshots, use their OpenStack display names for naming in the back-end storage.

Datera drivers
Datera iSCSI driver

The Datera Elastic Data Fabric (EDF) is scale-out storage software that turns standard, commodity hardware into a RESTful API-driven, intent-based, policy-controlled storage fabric for large-scale clouds. The Datera EDF integrates seamlessly with the Block Storage service and provides storage through the iSCSI block protocol. Datera supports all of the Block Storage services.

System requirements, prerequisites, and recommendations
Prerequisites
  • Must be running compatible versions of OpenStack and Datera EDF. Consult the Datera documentation to determine the correct versions.
  • All nodes must have access to Datera EDF through the iSCSI block protocol.
  • All nodes accessing the Datera EDF must have the following packages installed:
    • Linux I/O (LIO)
    • open-iscsi
    • open-iscsi-utils
    • wget
Description of Datera volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
datera_503_interval = 5 (Integer) Interval between 503 retries
datera_503_timeout = 120 (Integer) Timeout for HTTP 503 retry messages
datera_acl_allow_all = False (Boolean) DEPRECATED: True to set acl ‘allow_all’ on volumes created
datera_api_port = 7717 (String) Datera API port.
datera_api_version = 2 (String) Datera API version.
datera_debug = False (Boolean) True to set function arg and return logging
datera_debug_replica_count_override = False (Boolean) ONLY FOR DEBUG/TESTING PURPOSES True to set replica_count to 1
datera_num_replicas = 3 (Integer) DEPRECATED: Number of replicas to create of an inode.
Configuring the Datera volume driver

Modify the /etc/cinder/cinder.conf file for Block Storage service.

  • Enable the Datera volume driver:
[DEFAULT]
# ...
enabled_backends = datera
# ...
  • Optional. Designate Datera as the default back-end:
default_volume_type = datera
  • Create a new section for the Datera back-end definition. The san_ip can be either the Datera Management Network VIP or one of the Datera iSCSI Access Network VIPs depending on the network segregation requirements:
volume_driver = cinder.volume.drivers.datera.DateraDriver
san_ip = <IP_ADDR>            # The OOB Management IP of the cluster
san_login = admin             # Your cluster admin login
san_password = password       # Your cluster admin password
san_is_local = true
datera_num_replicas = 3       # Number of replicas to use for volume
Enable the Datera volume driver
  • Verify the OpenStack control node can reach the Datera san_ip:
$ ping -c 4 <san_IP>
  • Start the Block Storage service on all nodes running the cinder-volume services:
$ service cinder-volume restart

QoS support for the Datera drivers includes the ability to set the following capabilities in QoS Specs

  • read_iops_max – must be positive integer
  • write_iops_max – must be positive integer
  • total_iops_max – must be positive integer
  • read_bandwidth_max – in KB per second, must be positive integer
  • write_bandwidth_max – in KB per second, must be positive integer
  • total_bandwidth_max – in KB per second, must be positive integer
# Create qos spec
$ cinder qos-create DateraBronze total_iops_max=1000 \
  total_bandwidth_max=2000

# Associate qos-spec with volume type
$ cinder qos-associate <qos-spec-id> <volume-type-id>

# Add additional qos values or update existing ones
$ cinder qos-key <qos-spec-id> set read_bandwidth_max=500
Supported operations
  • Create, delete, attach, detach, manage, unmanage, and list volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Support for naming convention changes.
Configuring multipathing

The following configuration is for 3.x Linux kernels; some parameters may differ between Linux distributions. Make the following changes in the multipath.conf file:

defaults {
checker_timer 5
}
devices {
    device {
        vendor "DATERA"
        product "IBLOCK"
        getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --page=0x80 --device=/dev/%n"
        path_grouping_policy group_by_prio
        path_checker tur
        prio alua
        path_selector "queue-length 0"
        hardware_handler "1 alua"
        failback 5
    }
}
blacklist {
    device {
        vendor ".*"
        product ".*"
    }
}
blacklist_exceptions {
    device {
        vendor "DATERA.*"
        product "IBLOCK.*"
    }
}
Dell EqualLogic volume driver

The Dell EqualLogic volume driver interacts with configured EqualLogic arrays and supports various operations.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Clone a volume.
Configuration

The OpenStack Block Storage service supports:

  • Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group Storage Pools, and multiple pools on a single array.

The Dell EqualLogic volume driver’s ability to access the EqualLogic Group is dependent upon the generic block storage driver’s SSH settings in the /etc/cinder/cinder.conf file (see Block Storage service sample configuration files for reference).

Description of Dell EqualLogic volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
eqlx_chap_login = admin (String) Existing CHAP account name. Note that this option is deprecated in favour of “chap_username” as specified in cinder/volume/driver.py and will be removed in next release.
eqlx_chap_password = password (String) Password for specified CHAP account name. Note that this option is deprecated in favour of “chap_password” as specified in cinder/volume/driver.py and will be removed in the next release
eqlx_cli_max_retries = 5 (Integer) Maximum retry count for reconnection. Default is 5.
eqlx_cli_timeout = 30 (Integer) Timeout for the Group Manager cli command execution. Default is 30. Note that this option is deprecated in favour of “ssh_conn_timeout” as specified in cinder/volume/drivers/san/san.py and will be removed in M release.
eqlx_group_name = group-0 (String) Group name to use for creating volumes. Defaults to “group-0”.
eqlx_pool = default (String) Pool in which volumes will be created. Defaults to “default”.
eqlx_use_chap = False (Boolean) Use CHAP authentication for targets. Note that this option is deprecated in favour of “use_chap_auth” as specified in cinder/volume/driver.py and will be removed in next release.
Default (single-instance) configuration

The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Dell EqualLogic Group:

[DEFAULT]
# Required settings

volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip = IP_EQLX
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

# Optional settings

san_thin_provision = true|false
eqlx_use_chap = true|false
eqlx_chap_login = EQLX_UNAME
eqlx_chap_password = EQLX_PW
eqlx_cli_max_retries = 5
san_ssh_port = 22
ssh_conn_timeout = 30
san_private_key = SAN_KEY_PATH
ssh_min_pool_conn = 1
ssh_max_pool_conn = 5

In this example, replace the following variables accordingly:

IP_EQLX
The IP address used to reach the Dell EqualLogic Group through SSH. This field has no default value.
SAN_UNAME
The user name to login to the Group manager via SSH at the san_ip. Default user name is grpadmin.
SAN_PW
The corresponding password of SAN_UNAME. Not used when san_private_key is set. Default password is password.
EQLX_GROUP
The group to be used for a pool where the Block Storage service will create volumes and snapshots. Default group is group-0.
EQLX_POOL
The pool where the Block Storage service will create volumes and snapshots. Default pool is default. This option cannot be used for multiple pools utilized by the Block Storage service on a single Dell EqualLogic Group.
EQLX_UNAME
The CHAP login account for each volume in a pool, if eqlx_use_chap is set to true. Default account name is chapadmin.
EQLX_PW
The corresponding password of EQLX_UNAME. The default password is randomly generated in hexadecimal, so you must set this password manually.
SAN_KEY_PATH (optional)
The filename of the private key used for SSH authentication. This provides password-less login to the EqualLogic Group. Not used when san_password is set. There is no default value.

In addition, enable thin provisioning for SAN volumes using the default san_thin_provision = true setting.

Multiple back-end configuration

The following example shows the typical configuration for a Block Storage service that uses two Dell EqualLogic back ends:

enabled_backends = backend1,backend2
san_ssh_port = 22
ssh_conn_timeout = 30
san_thin_provision = true

[backend1]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend1
san_ip = IP_EQLX1
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

[backend2]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend2
san_ip = IP_EQLX2
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

In this example:

  • Thin provisioning for SAN volumes is enabled (san_thin_provision = true). This is recommended when setting up Dell EqualLogic back ends.
  • Each Dell EqualLogic back-end configuration ([backend1] and [backend2]) has the same required settings as a single back-end configuration, with the addition of volume_backend_name.
  • The san_ssh_port option is set to its default value, 22. This option sets the port used for SSH.
  • The ssh_conn_timeout option is also set to its default value, 30. This option sets the timeout in seconds for CLI commands over SSH.
  • The IP_EQLX1 and IP_EQLX2 refer to the IP addresses used to reach the Dell EqualLogic Group of backend1 and backend2 through SSH, respectively.

For information on configuring multiple back ends, see Configure a multiple-storage back end.

Dell Storage Center Fibre Channel and iSCSI drivers

The Dell Storage Center volume driver interacts with configured Storage Center arrays.

The Dell Storage Center driver manages Storage Center arrays through the Dell Storage Manager (DSM). DSM connection settings and Storage Center options are defined in the cinder.conf file.

Prerequisite: Dell Storage Manager 2015 R1 or later must be used.

Supported operations

The Dell Storage Center volume driver provides the following Cinder volume operations:

  • Create, delete, attach (map), and detach (unmap) volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Create, delete, list and update a consistency group.
  • Create, delete, and list consistency group snapshots.
  • Manage an existing volume.
  • Failover-host for replicated back ends.
  • Create a replication using Live Volume.
Extra spec options

Volume type extra specs can be used to enable a variety of Dell Storage Center options, including selecting Storage Profiles and Replay Profiles, enabling replication, and setting replication options such as Live Volume and Active Replay replication.

Storage Profiles control how Storage Center manages volume data. For a given volume, the selected Storage Profile dictates which disk tier accepts initial writes, as well as how data progression moves data between tiers to balance performance and cost. Predefined Storage Profiles are the most effective way to manage data in Storage Center.

By default, if no Storage Profile is specified in the volume extra specs, the default Storage Profile for the user account configured for the Block Storage driver is used. To use a Storage Profile other than the default, set the extra spec key storagetype:storageprofile to the name of the desired Storage Profile on the Storage Center.

For ease of use from the command line, spaces in Storage Profile names are ignored. As an example, here is how to define two volume types using the High Priority and Low Priority Storage Profiles:

$ cinder type-create "GoldVolumeType"
$ cinder type-key "GoldVolumeType" set storagetype:storageprofile=highpriority
$ cinder type-create "BronzeVolumeType"
$ cinder type-key "BronzeVolumeType" set storagetype:storageprofile=lowpriority

Replay Profiles control how often the Storage Center takes a replay of a given volume and how long those replays are kept. The default profile is the daily profile that sets the replay to occur once a day and to persist for one week.

To use Replay Profiles other than the default daily profile, set the extra spec key storagetype:replayprofiles to the name of the Replay Profile or profiles on the Storage Center.

As an example, here is how to define a volume type using the hourly Replay Profile and another specifying both hourly and the default daily profile:

$ cinder type-create "HourlyType"
$ cinder type-key "HourlyType" set storagetype:replayprofile=hourly
$ cinder type-create "HourlyAndDailyType"
$ cinder type-key "HourlyAndDailyType" set storagetype:replayprofiles=hourly,daily

Note the comma separated string for the HourlyAndDailyType.

Replication for a given volume type is enabled via the extra spec replication_enabled.

To create a volume type that specifies only replication enabled back ends:

$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'

Extra specs can be used to configure replication. In addition to the Replay Profiles above, replication:activereplay can be set to enable replication of the volume’s active replay, and the replication type can be changed to synchronous by setting the replication_type extra spec.

To create a volume type that enables replication of the active replay:

$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
$ cinder type-key "ReplicationType" set replication:activereplay='<is> True'

To create a volume type that enables synchronous replication:

$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
$ cinder type-key "ReplicationType" set replication_type='<in> sync'

To create a volume type that enables replication using Live Volume:

$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
$ cinder type-key "ReplicationType" set replication:livevolume='<is> True'
iSCSI configuration

Use the following instructions to update the configuration file for iSCSI:

default_volume_type = delliscsi
enabled_backends = delliscsi

[delliscsi]
# Name to give this storage back-end
volume_backend_name = delliscsi
# The iSCSI driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
# IP address of DSM
san_ip = 172.23.8.101
# DSM user name
san_login = Admin
# DSM password
san_password = secret
# The Storage Center serial number to use
dell_sc_ssn = 64702

# ==Optional settings==

# The DSM API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder
Fibre Channel configuration

Use the following instructions to update the configuration file for fibre channel:

default_volume_type = dellfc
enabled_backends = dellfc

[dellfc]
# Name to give this storage back-end
volume_backend_name = dellfc
# The FC driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_fc.DellStorageCenterFCDriver
# IP address of the DSM
san_ip = 172.23.8.101
# DSM user name
san_login = Admin
# DSM password
san_password = secret
# The Storage Center serial number to use
dell_sc_ssn = 64702

# ==Optional settings==

# The DSM API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder
Dual DSM

It is possible to specify a secondary DSM to use in case the primary DSM fails.

Configuration is done through the cinder.conf file. Both DSMs have to be configured to manage the same set of Storage Centers for this back end, that is, the dell_sc_ssn and any Storage Centers used for replication or Live Volume.

Add network and credential information to the backend to enable Dual DSM.

[dell]
# The IP address and port of the secondary DSM.
secondary_san_ip = 192.168.0.102
secondary_sc_api_port = 3033
# Specify credentials for the secondary DSM.
secondary_san_login = Admin
secondary_san_password = secret

The driver will use the primary until a failure. At that point, it will attempt to use the secondary. It will continue to use the secondary until the volume service is restarted or the secondary fails, at which point it will attempt to use the primary.

Replication configuration

Add the following to the back-end specification to specify another Storage Center to replicate to.

[dell]
replication_device = target_device_id: 65495, qosnode: cinderqos

The target_device_id is the SSN of the remote Storage Center and the qosnode is the QoS Node setup between the two Storage Centers.

Note that more than one replication_device line can be added. This will slow things down, however.

A volume is only replicated if the volume is of a volume-type that has the extra spec replication_enabled set to <is> True.

Replication notes

This driver supports both standard replication and Live Volume (if supported and licensed). The main difference is that a VM attached to a Live Volume is mapped to both Storage Centers. In the case of a failure of the primary, Live Volume still requires a failover-host to move control of the volume to the second controller.

Existing mappings should work and not require the instance to be remapped but it might need to be rebooted.

Live Volume is more resource intensive than replication. One should be sure to plan accordingly.

Failback

The failover-host command is designed for the case where the primary system is not coming back. If it has been executed and the primary has been restored it is possible to attempt a failback.

Simply specify default as the backend_id.

$ cinder failover-host cinder@delliscsi --backend_id default

Non-trivial heavy lifting is done by this command. It attempts to recover as best it can, but if things have diverged too far it can only do so much. It is also a one-time-only command, so do not reboot or restart the service in the middle of it.

Failover and failback are significant operations under OpenStack Cinder. Be sure to consult with support before attempting.

Server type configuration

This option allows one to set a default Server OS type to use when creating a server definition on the Dell Storage Center.

When attaching a volume to a node, the Dell Storage Center driver creates a server definition on the storage array. This definition includes a Server OS type. The type used by the Dell Storage Center cinder driver is “Red Hat Linux 6.x”. This is a modern operating system definition that supports all the features of an OpenStack node.

Add the following to the back-end specification to specify the Server OS to use when creating a server definition. The server type used must come from the drop down list in the DSM.

[dell]
default_server_os = 'Red Hat Linux 7.x'

Note that this server definition is created once. Changing this setting after the fact will not change an existing definition. The selected Server OS does not have to match the actual OS used on the node.

Driver options

The following table contains the configuration options specific to the Dell Storage Center volume driver.

Description of Dell Storage Center volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
dell_sc_api_port = 3033 (Port number) Dell API port
dell_sc_server_folder = openstack (String) Name of the server folder to use on the Storage Center
dell_sc_ssn = 64702 (Integer) Storage Center System Serial Number
dell_sc_verify_cert = False (Boolean) Enable HTTPS SC certificate verification
dell_sc_volume_folder = openstack (String) Name of the volume folder to use on the Storage Center
dell_server_os = Red Hat Linux 6.x (String) Server OS type to use when creating a new server on the Storage Center.
excluded_domain_ip = None (Unknown) Domain IP to be excluded from iSCSI returns.
secondary_san_ip = (String) IP address of secondary DSM controller
secondary_san_login = Admin (String) Secondary DSM user name
secondary_san_password = (String) Secondary DSM user password
secondary_sc_api_port = 3033 (Port number) Secondary Dell API port
Dot Hill AssuredSAN Fibre Channel and iSCSI drivers

The DotHillFCDriver and DotHillISCSIDriver volume drivers allow Dot Hill arrays to be used for block storage in OpenStack deployments.

System requirements

To use the Dot Hill drivers, the following are required:

  • Dot Hill AssuredSAN array with:
    • iSCSI or FC host interfaces
    • G22x firmware or later
    • Appropriate licenses for the snapshot and copy volume features
  • Network connectivity between the OpenStack host and the array management interfaces
  • HTTPS or HTTP must be enabled on the array
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
Configuring the array
  1. Verify that the array can be managed via an HTTPS connection. HTTP can also be used if dothill_api_protocol=http is placed into the appropriate sections of the cinder.conf file.

    Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.

    If you plan to use vdisks instead of virtual pools, create or identify one or more vdisks to be used for OpenStack storage; typically this will mean creating or setting aside one disk group for each of the A and B controllers.

  2. Edit the cinder.conf file to define a storage back-end entry for each storage pool on the array that will be managed by OpenStack. Each entry consists of a unique section name, surrounded by square brackets, followed by options specified in key=value format.

    • The dothill_backend_name value specifies the name of the storage pool or vdisk on the array.
    • The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that is shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
    • The rest of the options will be repeated for each storage pool in a given array: the appropriate Cinder driver name; IP address or hostname of the array management interface; the username and password of an array user account with manage privileges; and the iSCSI IP addresses for the array if using the iSCSI transport protocol.

    In the examples below, two back ends are defined, one for pool A and one for pool B, and a common volume_backend_name is used so that a single volume type definition can be used to allocate volumes from both pools.

    iSCSI example back-end entries

    [pool-a]
    dothill_backend_name = A
    volume_backend_name = dothill-array
    volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    dothill_iscsi_ips = 10.2.3.4,10.2.3.5
    
    [pool-b]
    dothill_backend_name = B
    volume_backend_name = dothill-array
    volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    dothill_iscsi_ips = 10.2.3.4,10.2.3.5
    

    Fibre Channel example back-end entries

    [pool-a]
    dothill_backend_name = A
    volume_backend_name = dothill-array
    volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
    [pool-b]
    dothill_backend_name = B
    volume_backend_name = dothill-array
    volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
  3. If any dothill_backend_name value refers to a vdisk rather than a virtual pool, add an additional statement dothill_backend_type = linear to that back-end entry. A combined example is shown after this procedure.

  4. If HTTPS is not enabled in the array, include dothill_api_protocol = http in each of the back-end definitions.

  5. If HTTPS is enabled, you can enable certificate verification with the option dothill_verify_certificate=True. You may also use the dothill_verify_certificate_path parameter to specify the path to a CA_BUNDLE file containing CAs other than those in the default list.

  6. Modify the [DEFAULT] section of the cinder.conf file to add an enabled_backends parameter specifying the back-end entries you added, and a default_volume_type parameter specifying the name of a volume type that you will create in the next step.

    Example of [DEFAULT] section changes

    [DEFAULT]
      ...
    enabled_backends = pool-a,pool-b
    default_volume_type = dothill
      ...
    
  7. Create a new volume type for each distinct volume_backend_name value that you added to cinder.conf. The example below assumes that the same volume_backend_name=dothill-array option was specified in all of the entries, and specifies that the volume type dothill can be used to allocate volumes from any of them.

    Example of creating a volume type

    $ cinder type-create dothill
    
    $ cinder type-key dothill set volume_backend_name=dothill-array
    
  8. After modifying cinder.conf, restart the cinder-volume service.
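
The following is an illustrative sketch only, combining the linear-vdisk, protocol, and certificate options from steps 3 through 5 in a single back-end entry; the section name, vdisk name, and certificate path are hypothetical placeholders rather than values from a real array.

[vdisk-a]
dothill_backend_name = vd01
dothill_backend_type = linear
dothill_api_protocol = https
dothill_verify_certificate = True
dothill_verify_certificate_path = /etc/ssl/certs/ca-bundle.crt
volume_backend_name = dothill-array
volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
dothill_iscsi_ips = 10.2.3.4,10.2.3.5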

Driver-specific options

The following table contains the configuration options that are specific to the Dot Hill drivers.

Description of Dot Hill volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
dothill_api_protocol = https (String) DotHill API interface protocol.
dothill_backend_name = A (String) Pool or Vdisk name to use for volume creation.
dothill_backend_type = virtual (String) linear (for Vdisk) or virtual (for Pool).
dothill_iscsi_ips = (List) List of comma-separated target iSCSI IP addresses.
dothill_verify_certificate = False (Boolean) Whether to verify DotHill array SSL certificate.
dothill_verify_certificate_path = None (String) DotHill array SSL certificate path.
EMC ScaleIO Block Storage driver configuration

ScaleIO is a software-only solution that uses existing servers’ local disks and LAN to create a virtual SAN that has all of the benefits of external storage, but at a fraction of the cost and complexity. Using the driver, Block Storage hosts can connect to a ScaleIO Storage cluster.

This section explains how to configure and connect the block storage nodes to a ScaleIO storage cluster.

Support matrix
ScaleIO version Supported Linux operating systems
1.32 CentOS 6.x, CentOS 7.x, SLES 11 SP3, SLES 12
2.0 CentOS 6.x, CentOS 7.x, SLES 11 SP3, SLES 12, Ubuntu 14.04
Deployment prerequisites
  • ScaleIO Gateway must be installed and accessible in the network. For installation steps, refer to the Preparing the installation Manager and the Gateway section in ScaleIO Deployment Guide. See Official documentation.
  • ScaleIO Data Client (SDC) must be installed on all OpenStack nodes.

Note

Ubuntu users must follow the specific instructions in the ScaleIO deployment guide for Ubuntu environments. See the Deploying on Ubuntu servers section in ScaleIO Deployment Guide. See Official documentation.

Official documentation

To find the ScaleIO documentation:

  1. Go to the ScaleIO product documentation page.
  2. From the left-side panel, select the relevant version (1.32 or 2.0).
  3. Search for “ScaleIO Installation Guide 1.32” or “ScaleIO 2.0 Deployment Guide” accordingly.
Supported operations
  • Create, delete, clone, attach, detach, manage, and unmanage volumes
  • Create, delete, manage, and unmanage volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Extend a volume
  • Get volume statistics
  • Create, list, update, and delete consistency groups
  • Create, list, update, and delete consistency group snapshots
ScaleIO QoS support

QoS support for the ScaleIO driver includes the ability to set the following capabilities in the Block Storage API cinder.api.contrib.qos_specs_manage QoS specs extension module:

  • maxIOPS
  • maxIOPSperGB
  • maxBWS
  • maxBWSperGB

The QoS keys above must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:

$ cinder help qos-create

$ cinder help qos-key

$ cinder help qos-associate
maxIOPS
The QoS I/O rate limit. If not set, the I/O rate will be unlimited. The setting must be larger than 10.
maxIOPSperGB
The QoS I/O rate limit. The limit will be calculated by the specified value multiplied by the volume size. The setting must be larger than 10.
maxBWS
The QoS I/O bandwidth rate limit in KB/s. If not set, the I/O bandwidth rate will be unlimited. The setting must be a multiple of 1024.
maxBWSperGB
The QoS I/O bandwidth rate limit in KB/s. The limit will be calculated by the specified value multiplied by the volume size. The setting must be a multiple of 1024.

The driver always chooses the minimum between the QoS keys value and the relevant calculated value of maxIOPSperGB or maxBWSperGB.

Since the limits are per SDC, they will be applied after the volume is attached to an instance, and thus to a compute node/SDC.
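
As an illustrative sketch only, the QoS keys above can be created and associated with a volume type as follows; the spec name sio_qos, the values, and the IDs are placeholders rather than recommendations:

$ cinder qos-create sio_qos maxIOPS=5000 maxBWS=10240

$ cinder qos-associate QOS_SPECS_ID VOLUME_TYPE_ID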

ScaleIO thin provisioning support

The Block Storage driver supports creation of thin-provisioned and thick-provisioned volumes. The provisioning type settings can be added as an extra specification of the volume type, as follows:

provisioning:type = thin|thick

The old specification: sio:provisioning_type is deprecated.
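
For example, a thin-provisioned volume type could be defined as follows (a minimal sketch; the volume type name sio_thin is a placeholder):

$ cinder type-create sio_thin

$ cinder type-key sio_thin set provisioning:type=thin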

Oversubscription

Configure the oversubscription ratio by adding the following parameter under the separate section for ScaleIO:

sio_max_over_subscription_ratio = OVER_SUBSCRIPTION_RATIO

Note

The default value for sio_max_over_subscription_ratio is 10.0.

Oversubscription is calculated correctly by the Block Storage service only if the extra specification provisioning:type appears in the volume type, regardless of the default provisioning type. The maximum oversubscription value supported for ScaleIO is 10.0.

Default provisioning type

If provisioning type settings are not specified in the volume type, the default value is set according to the san_thin_provision option in the configuration file. The default provisioning type will be thin if the option is not specified in the configuration file. To set the default provisioning type thick, set the san_thin_provision option to false in the configuration file, as follows:

san_thin_provision = false

The configuration file is usually located in /etc/cinder/cinder.conf. For a configuration example, see: cinder.conf.

ScaleIO Block Storage driver configuration

Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section of the file in case of a single back end, or under a separate section in case of multiple back ends (for example [ScaleIO]). The configuration file is usually located at /etc/cinder/cinder.conf.

For a configuration example, refer to the example cinder.conf .

ScaleIO driver name

Configure the driver name by adding the following parameter:

volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
ScaleIO MDM server IP

The ScaleIO Meta Data Manager monitors and maintains the available resources and permissions.

To retrieve the MDM server IP address, use the drv_cfg --query_mdms command.

Configure the MDM server IP address by adding the following parameter:

san_ip = ScaleIO GATEWAY IP
ScaleIO Protection Domain name

ScaleIO allows multiple Protection Domains (groups of SDSs that provide backup for each other).

To retrieve the available Protection Domains, use the command scli --query_all and search for the Protection Domains section.

Configure the Protection Domain for newly created volumes by adding the following parameter:

sio_protection_domain_name = ScaleIO Protection Domain
ScaleIO Storage Pool name

A ScaleIO Storage Pool is a set of physical devices in a Protection Domain.

To retrieve the available Storage Pools, use the command scli --query_all and search for available Storage Pools.

Configure the Storage Pool for newly created volumes by adding the following parameter:

sio_storage_pool_name = ScaleIO Storage Pool
ScaleIO Storage Pools

Multiple Storage Pools and Protection Domains can be listed for use by the virtual machines.

To retrieve the available Storage Pools, use the command scli --query_all and search for available Storage Pools.

Configure the available Storage Pools by adding the following parameter:

sio_storage_pools = Comma-separated list of protection domain:storage pool name
ScaleIO user credentials

Block Storage requires a ScaleIO user with administrative privileges. ScaleIO recommends creating a dedicated OpenStack user account that has an administrative user role.

Refer to the ScaleIO User Guide for details on user account management.

Configure the user credentials by adding the following parameters:

san_login = ScaleIO username

san_password = ScaleIO password
Multiple back ends

Configuring multiple storage back ends allows you to create several back-end storage solutions that serve the same Compute resources.

When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.
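
The following is a hypothetical sketch of two ScaleIO back-end sections; the section names, back-end names, and connection values are placeholders. A volume type can then be tied to each back end through its volume_backend_name, as described in the previous sections:

[DEFAULT]
enabled_backends = scaleio-1, scaleio-2

[scaleio-1]
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio-1
san_ip = GATEWAY_IP_1
san_login = SIO_USER
san_password = SIO_PASSWD
sio_protection_domain_name = Domain1
sio_storage_pool_name = Pool1

[scaleio-2]
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio-2
san_ip = GATEWAY_IP_2
san_login = SIO_USER
san_password = SIO_PASSWD
sio_protection_domain_name = Domain2
sio_storage_pool_name = Pool2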

Configuration example

cinder.conf example file

You can update the cinder.conf file by editing the necessary parameters as follows:

[DEFAULT]
enabled_backends = scaleio

[scaleio]
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio
san_ip = GATEWAY_IP
sio_protection_domain_name = Default_domain
sio_storage_pool_name = Default_pool
sio_storage_pools = Domain1:Pool1,Domain2:Pool2
san_login = SIO_USER
san_password = SIO_PASSWD
san_thin_provision = false
Configuration options

The ScaleIO driver supports these configuration options:

Description of EMC SIO volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
sio_max_over_subscription_ratio = 10.0 (Floating point) max_over_subscription_ratio setting for the ScaleIO driver. This replaces the general max_over_subscription_ratio which has no effect in this driver. Maximum value allowed for ScaleIO is 10.0.
sio_protection_domain_id = None (String) Protection Domain ID.
sio_protection_domain_name = None (String) Protection Domain name.
sio_rest_server_port = 443 (String) REST server port.
sio_round_volume_capacity = True (Boolean) Round up volume capacity.
sio_server_certificate_path = None (String) Server certificate path.
sio_storage_pool_id = None (String) Storage Pool ID.
sio_storage_pool_name = None (String) Storage Pool name.
sio_storage_pools = None (String) Storage Pools.
sio_unmap_volume_before_deletion = False (Boolean) Unmap volume before deletion.
sio_verify_server_certificate = False (Boolean) Verify server certificate.
EMC VMAX iSCSI and FC drivers

The EMC VMAX drivers, EMCVMAXISCSIDriver and EMCVMAXFCDriver, support the use of EMC VMAX storage arrays with Block Storage. They both provide equivalent functions and differ only in support for their respective host attachment methods.

The drivers perform volume operations by communicating with the back-end VMAX storage. They use a CIM client written in Python, called PyWBEM, to perform CIM operations over HTTP.

The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back end for VMAX storage operations.

The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports the VMAX storage system.

System requirements

The Cinder driver supports both VMAX-2 and VMAX-3 series.

For VMAX-2 series, SMI-S version V4.6.2.29 (Solutions Enabler 7.6.2.67) or Solutions Enabler 8.1.2 is required.

For VMAX-3 series, Solutions Enabler 8.3 is required. This setup is SSL only; refer to the SSL support section below.

When installing Solutions Enabler, make sure you explicitly add the SMI-S component.

You can download SMI-S from EMC's support web site (login is required). See the EMC SMI-S Provider release notes for installation instructions.

Ensure that there is only one SMI-S (ECOM) server active on the same VMAX array.

Required VMAX software suites for OpenStack

There are five Software Suites available for the VMAX All Flash and Hybrid:

  • Base Suite
  • Advanced Suite
  • Local Replication Suite
  • Remote Replication Suite
  • Total Productivity Pack

OpenStack requires the Advanced Suite and the Local Replication Suite, or the Total Productivity Pack (which includes both), for the VMAX All Flash and Hybrid.

There are four bundled Software Suites for the VMAX2:

  • Advanced Software Suite
  • Base Software Suite
  • Enginuity Suite
  • Symmetrix Management Suite

OpenStack requires the Advanced Software Bundle for the VMAX2.

Alternatively, the VMAX2 optional software packages are:

  • EMC Storage Analytics (ESA)
  • FAST VP
  • Ionix ControlCenter and ProSphere Package
  • Open Replicator for Symmetrix
  • PowerPath
  • RecoverPoint EX
  • SRDF for VMAX 10K
  • Storage Configuration Advisor
  • TimeFinder for VMAX10K

OpenStack requires TimeFinder for VMAX10K for the VMAX2.

Each suite is licensed separately. For further details on how to obtain the relevant license(s), see eLicensing support below.

eLicensing support

To activate your entitlements and obtain your VMAX license files, visit the Service Center on https://support.emc.com, as directed on your License Authorization Code (LAC) letter emailed to you.

  • For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your EMC account representative or authorized reseller.

  • For help with any errors applying license files through Solutions Enabler, contact the EMC Customer Support Center.

  • If you are missing a LAC letter or require further instructions on activating your licenses through the Online Support site, contact EMC’s worldwide Licensing team at licensing@emc.com or call:

    North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.

    EMEA: +353 (0) 21 4879862 and follow the voice prompts.

Supported operations

VMAX drivers support these operations:

  • Create, list, delete, attach, and detach volumes
  • Create, list, and delete volume snapshots
  • Copy an image to a volume
  • Copy a volume to an image
  • Clone a volume
  • Extend a volume
  • Retype a volume (Host assisted volume migration only)
  • Create a volume from a snapshot
  • Create and delete consistency group
  • Create and delete consistency group snapshot
  • Modify consistency group (add/remove volumes)
  • Create consistency group from source (source can only be a CG snapshot)

VMAX drivers also support the following features:

  • Dynamic masking view creation
  • Dynamic determination of the target iSCSI IP address
  • iSCSI multipath support
  • Oversubscription
  • Live Migration

VMAX2:

  • FAST automated storage tiering policy
  • Striped volume creation

VMAX All Flash and Hybrid:

  • Service Level support
  • SnapVX support
  • All Flash support

Note

VMAX All Flash arrays with Solutions Enabler 8.3 have compression enabled by default when associated with the Diamond Service Level. This means volumes added to any newly created storage groups will be compressed.

Setup VMAX drivers
Pywbem Versions
The following table applies to Ubuntu 14.04 (LTS), Ubuntu 16.04 (LTS), Red Hat Enterprise Linux, CentOS, and Fedora.
Pywbem version Python2 (pip) Python2 (native) Python3 (pip) Python3 (native)
0.9.0 No N/A Yes N/A
0.8.4 No N/A Yes N/A
0.7.0 No Yes No Yes

Note

On Python2, use the updated distro version, for example:

# apt-get install python-pywbem

Note

On Python3, use the official pywbem version (V0.9.0 or v0.8.4).

  1. Install the python-pywbem package for your distribution.

    • On Ubuntu:

      # apt-get install python-pywbem
      
    • On openSUSE:

      # zypper install python-pywbem
      
    • On Red Hat Enterprise Linux, CentOS, and Fedora:

      # yum install pywbem
      
  2. Install iSCSI Utilities (for iSCSI drivers only).

    1. Download and configure the Cinder node as an iSCSI initiator.

    2. Install the open-iscsi package.

      • On Ubuntu:

        # apt-get install open-iscsi
        
      • On openSUSE:

        # zypper install open-iscsi
        
      • On Red Hat Enterprise Linux, CentOS, and Fedora:

        # yum install iscsi-initiator-utils
        
    3. Enable the iSCSI driver to start automatically.

  3. Download SMI-S from support.emc.com and install it. Add your VMAX arrays to SMI-S.

    You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S is ESX only. See the EMC SMI-S Provider release notes for more information on supported platforms and installation instructions.

    Note

    You must discover storage arrays on the SMI-S server before you can use the VMAX drivers. Follow instructions in the SMI-S release notes.

    SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and run TestSmiProvider.exe on Windows or ./TestSmiProvider on Linux.

    Use addsys in TestSmiProvider to add an array. Use dv and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers.

  4. Configure Block Storage

    Add the following entries to /etc/cinder/cinder.conf:

    enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC
    
    [CONF_GROUP_ISCSI]
    volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
    cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
    volume_backend_name = ISCSI_backend
    
    [CONF_GROUP_FC]
    volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
    cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
    volume_backend_name = FC_backend
    

    In this example, two back-end configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format /etc/cinder/cinder_emc_config_[confGroup].xml.

    Once the cinder.conf and EMC-specific configuration files have been created, cinder commands need to be issued in order to create and associate OpenStack volume types with the declared volume_backend_names:

    $ cinder type-create VMAX_ISCSI
    $ cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
    $ cinder type-create VMAX_FC
    $ cinder type-key VMAX_FC set volume_backend_name=FC_backend
    

    By issuing these commands, the Block Storage volume type VMAX_ISCSI is associated with the ISCSI_backend, and the type VMAX_FC is associated with the FC_backend.

    Create the /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml file. You do not need to restart the service for this change.

    Add the following lines to the XML file:

    VMAX2
    <?xml version="1.0" encoding="UTF-8" ?>
    <EMC>
      <EcomServerIp>1.1.1.1</EcomServerIp>
      <EcomServerPort>00</EcomServerPort>
      <EcomUserName>user1</EcomUserName>
      <EcomPassword>password1</EcomPassword>
      <PortGroups>
        <PortGroup>OS-PORTGROUP1-PG</PortGroup>
        <PortGroup>OS-PORTGROUP2-PG</PortGroup>
      </PortGroups>
      <Array>111111111111</Array>
      <Pool>FC_GOLD1</Pool>
      <FastPolicy>GOLD1</FastPolicy>
    </EMC>
    
    VMAX All Flash and Hybrid
    <?xml version="1.0" encoding="UTF-8" ?>
    <EMC>
      <EcomServerIp>1.1.1.1</EcomServerIp>
      <EcomServerPort>00</EcomServerPort>
      <EcomUserName>user1</EcomUserName>
      <EcomPassword>password1</EcomPassword>
      <PortGroups>
        <PortGroup>OS-PORTGROUP1-PG</PortGroup>
        <PortGroup>OS-PORTGROUP2-PG</PortGroup>
      </PortGroups>
      <Array>111111111111</Array>
      <Pool>SRP_1</Pool>
      <SLO>Gold</SLO>
      <Workload>OLTP</Workload>
    </EMC>
    

    Where:

EcomServerIp
IP address of the ECOM server which is packaged with SMI-S.
EcomServerPort
Port number of the ECOM server which is packaged with SMI-S.
EcomUserName and EcomPassword
Credentials for the ECOM server.
PortGroups
Supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this back end. Each supplied port group should have a sufficient number and distribution of ports (across directors and switches) to ensure adequate bandwidth and failure protection for the volume connections. PortGroups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the PortGroup list, to evenly distribute load across the set of groups provided. Make sure that the PortGroups set contains either all FC or all iSCSI port groups (for a given back end), as appropriate for the configured driver (iSCSI or FC).
Array
Unique VMAX array serial number.
Pool
Unique pool name within a given array. For back ends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For back ends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy.
FastPolicy
VMAX2 only. Name of the FAST Policy to be used. By including this tag, volumes managed by this back end are treated as under FAST control. Omitting the FastPolicy tag means FAST is not enabled on the provided storage pool.
SLO
VMAX All Flash and Hybrid only. The Service Level Objective (SLO) that manages the underlying storage to provide expected performance. Omitting the SLO tag means that non FAST storage groups will be created instead (storage groups not associated with any service level).
Workload
VMAX All Flash and Hybrid only. When a workload type is added, the latency range is reduced due to the added information. Omitting the Workload tag means the latency range will be the widest for its SLO type.
FC Zoning with VMAX

Zone Manager is required when there is a fabric between the host and array. This is necessary for larger configurations where pre-zoning would be too complex and open-zoning would raise security concerns.

iSCSI with VMAX
  • Make sure the iscsi-initiator-utils package is installed on all Compute nodes.

Note

You can only ping the VMAX iSCSI target ports when there is a valid masking view. An attach operation creates this masking view.

VMAX masking view and group naming info
Masking view names

Masking views are dynamically created by the VMAX FC and iSCSI drivers using the following naming conventions. [protocol] is either I for volumes attached over iSCSI or F for volumes attached over Fibre Channel.

VMAX2

OS-[shortHostName]-[poolName]-[protocol]-MV

VMAX2 (where FAST policy is used)

OS-[shortHostName]-[fastPolicy]-[protocol]-MV

VMAX All Flash and Hybrid

OS-[shortHostName]-[SRP]-[SLO]-[workload]-[protocol]-MV
Initiator group names

For each host that is attached to VMAX volumes using the drivers, an initiator group is created or re-used (per attachment type). All initiators of the appropriate type known for that host are included in the group. At each new attach volume operation, the VMAX driver retrieves the initiators (either WWNNs or IQNs) from OpenStack and adds or updates the contents of the Initiator Group as required. Names are of the following format. [protocol] is either I for volumes attached over iSCSI or F for volumes attached over Fibre Channel.

OS-[shortHostName]-[protocol]-IG

Note

Hosts attaching to OpenStack-managed VMAX storage cannot also attach to storage on the same VMAX that is not managed by OpenStack.

FA port groups

VMAX array FA ports to be used in a new masking view are chosen from the list provided in the EMC configuration file.

Storage group names

As volumes are attached to a host, they are either added to an existing storage group (if it exists) or a new storage group is created and the volume is then added. Storage groups contain volumes created from a pool (either single-pool or FAST-controlled), attached to a single host, over a single connection type (iSCSI or FC). [protocol] is either I for volumes attached over iSCSI or F for volumes attached over Fibre Channel.

VMAX2

OS-[shortHostName]-[poolName]-[protocol]-SG

VMAX2 (where FAST policy is used)

OS-[shortHostName]-[fastPolicy]-[protocol]-SG

VMAX All Flash and Hybrid

OS-[shortHostName]-[SRP]-[SLO]-[Workload]-[protocol]-SG
VMAX2 concatenated or striped volumes

In order to support later expansion of created volumes, the VMAX Block Storage drivers create concatenated volumes as the default layout. If later expansion is not required, users can opt to create striped volumes in order to optimize I/O performance.

Below is an example of how to create striped volumes. First, create a volume type. Then define the extra spec for the volume type storagetype:stripecount representing the number of meta members in the striped volume. The example below means that each volume created under the GoldStriped volume type will be striped and made up of 4 meta members.

$ cinder type-create GoldStriped
$ cinder type-key GoldStriped set volume_backend_name=GOLD_BACKEND
$ cinder type-key GoldStriped set storagetype:stripecount=4
SSL support

Note

The ECOM component in Solutions Enabler enforces SSL in 8.3. By default, this port is 5989.

  1. Get the CA certificate of the ECOM server:

    # openssl s_client -showcerts -connect <ecom_hostname>.lss.emc.com:5989 </dev/null
    
  2. Copy the pem file to the system certificate directory:

    # cp <ecom_hostname>.lss.emc.com.pem /usr/share/ca-certificates/<ecom_hostname>.lss.emc.com.crt
    
  3. Update the CA certificate database with the following command (accept the defaults):

    # dpkg-reconfigure ca-certificates
    
  4. Update /etc/cinder/cinder.conf to reflect SSL functionality by adding the following to the back end block:

    driver_ssl_cert_verify = False
    driver_use_ssl = True
    driver_ssl_cert_path = /opt/stack/<ecom_hostname>.lss.emc.com.pem (Optional if Step 3 and 4 are skipped)
    
  5. Update EcomServerIp to ECOM host name and EcomServerPort to secure port (5989 by default) in /etc/cinder/cinder_emc_config_<conf_group>.xml.

Oversubscription support

Oversubscription support requires /etc/cinder/cinder.conf to be updated with two additional tags, max_over_subscription_ratio and reserved_percentage. In the sample below, the value of 2.0 for max_over_subscription_ratio means that the pools are oversubscribed by a factor of 2, or 200% oversubscribed. The reserved_percentage is the high water mark beyond which the remaining physical space cannot be consumed. For example, if there is only 4% of physical space left and the reserved percentage is 5, the free space will equate to zero. This is a safety mechanism to prevent a scenario where a provisioning request fails due to insufficient raw space.

The parameters max_over_subscription_ratio and reserved_percentage are optional.

To set these parameters, go to the configuration group of the volume type in /etc/cinder/cinder.conf.

[VMAX_ISCSI_SILVER]
cinder_emc_config_file = /etc/cinder/cinder_emc_config_VMAX_ISCSI_SILVER.xml
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
volume_backend_name = VMAX_ISCSI_SILVER
max_over_subscription_ratio = 2.0
reserved_percentage = 10

For the second iteration of over subscription, take into account the EMCMaxSubscriptionPercent property on the pool. This value is the highest that a pool can be oversubscribed.

Scenario 1

If EMCMaxSubscriptionPercent is 200 and the user-defined max_over_subscription_ratio is 2.5, the latter is ignored. Oversubscription is 200%.

Scenario 2

If EMCMaxSubscriptionPercent is 200 and the user-defined max_over_subscription_ratio is 1.5, the ratio equates to 150%, which is less than the value set on the pool. Oversubscription is 150%.

Scenario 3

EMCMaxSubscriptionPercent is 0, meaning there is no upper limit on the pool. The user-defined max_over_subscription_ratio is 1.5. Oversubscription is 150%.

Scenario 4

EMCMaxSubscriptionPercent is 0 and max_over_subscription_ratio is not set by the user. In this case, the recommended default is the upper limit, which is 150%.

Note

If FAST is set and multiple pools are associated with a FAST policy, then the same rules apply. The difference is, the TotalManagedSpace and EMCSubscribedCapacity for each pool associated with the FAST policy are aggregated.

Scenario 5

EMCMaxSubscriptionPercent is 200 on one pool and 300 on another pool. The user-defined max_over_subscription_ratio is 2.5. Oversubscription is 200% on the first pool and 250% on the other.

QoS (Quality of Service) support

Quality of service (QoS) has traditionally been associated with network bandwidth usage. Network administrators set limitations on certain networks in terms of bandwidth usage for clients. This enables them to provide a tiered level of service based on cost. Block Storage (cinder) QoS offers similar functionality based on volume types, setting limits on host storage bandwidth per service offering. Each volume type is tied to specific QoS attributes that are unique to each storage vendor. The VMAX plugin offers limits via the following attributes:

  • By I/O limit per second (IOPS)
  • By limiting throughput per second (MB/S)
  • Dynamic distribution
  • The VMAX offers modification of QoS at the Storage Group level
USE CASE 1 - Default values

Prerequisites - VMAX

  • Host I/O Limit (MB/Sec) - No Limit
  • Host I/O Limit (IO/Sec) - No Limit
  • Set Dynamic Distribution - N/A
Prerequisites - Block Storage (cinder) back end (storage group)
Key Value
maxIOPS 4000
maxMBPS 4000
DistributionType Always
  1. Create QoS Specs with the prerequisite values above:

    cinder qos-create <name> <key=value> [<key=value> ...]
    
    $ cinder qos-create silver maxIOPS=4000 maxMBPS=4000 DistributionType=Always
    
  2. Associate QoS specs with specified volume type:

    cinder qos-associate <qos_specs id> <volume_type_id>
    
    $ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
    
  3. Create volume with the volume type indicated above:

    cinder create [--name <name>]  [--volume-type <volume-type>] size
    
    $ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
    

Outcome - VMAX (storage group)

  • Host I/O Limit (MB/Sec) - 4000
  • Host I/O Limit (IO/Sec) - 4000
  • Set Dynamic Distribution - Always

Outcome - Block Storage (cinder)

Volume is created against volume type and QoS is enforced with the parameters above.

USE CASE 2 - Preset limits

Prerequisites - VMAX

  • Host I/O Limit (MB/Sec) - 2000
  • Host I/O Limit (IO/Sec) - 2000
  • Set Dynamic Distribution - Never
Prerequisites - Block Storage (cinder) back end (storage group)
Key Value
maxIOPS 4000
maxMBPS 4000
DistributionType Always
  1. Create QoS specifications with the prerequisite values above:

    cinder qos-create <name> <key=value> [<key=value> ...]
    
    $ cinder qos-create silver maxIOPS=4000 maxMBPS=4000 DistributionType=Always
    
  2. Associate QoS specifications with specified volume type:

    cinder qos-associate <qos_specs id> <volume_type_id>
    
    $ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
    
  3. Create volume with the volume type indicated above:

    cinder create [--name <name>]  [--volume-type <volume-type>] size
    
    $ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
    

Outcome - VMAX (storage group)

  • Host I/O Limit (MB/Sec) - 4000
  • Host I/O Limit (IO/Sec) - 4000
  • Set Dynamic Distribution - Always

Outcome - Block Storage (cinder)

Volume is created against volume type and QoS is enforced with the parameters above.

USE CASE 3 - Preset limits

Prerequisites - VMAX

  • Host I/O Limit (MB/Sec) - No Limit
  • Host I/O Limit (IO/Sec) - No Limit
  • Set Dynamic Distribution - N/A
Prerequisites - Block Storage (cinder) back end (storage group)
Key Value
DistributionType Always
  1. Create QoS specifications with the prerequisite values above:

    cinder qos-create <name> <key=value> [<key=value> ...]
    
    $ cinder qos-create silver DistributionType=Always
    
  2. Associate QoS specifications with specified volume type:

    cinder qos-associate <qos_specs id> <volume_type_id>
    
    $ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
    
  3. Create volume with the volume type indicated above:

    cinder create [--name <name>]  [--volume-type <volume-type>] size
    
    $ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
    

Outcome - VMAX (storage group)

  • Host I/O Limit (MB/Sec) - No Limit
  • Host I/O Limit (IO/Sec) - No Limit
  • Set Dynamic Distribution - N/A

Outcome - Block Storage (cinder)

Volume is created against volume type and there is no QoS change.

USE CASE 4 - Preset limits

Prerequisites - VMAX

  • Host I/O Limit (MB/Sec) - No Limit
  • Host I/O Limit (IO/Sec) - No Limit
  • Set Dynamic Distribution - N/A
Prerequisites - Block Storage (cinder) back end (storage group)
Key Value
DistributionType OnFailure
  1. Create QoS specifications with the prerequisite values above:

    cinder qos-create <name> <key=value> [<key=value> ...]
    
    $ cinder qos-create silver DistributionType=OnFailure
    
  2. Associate QoS specifications with specified volume type:

    cinder qos-associate <qos_specs id> <volume_type_id>
    
    $ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
    
  3. Create volume with the volume type indicated above:

    cinder create [--name <name>]  [--volume-type <volume-type>] size
    
    $ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
    

Outcome - VMAX (storage group)

  • Host I/O Limit (MB/Sec) - No Limit
  • Host I/O Limit (IO/Sec) - No Limit
  • Set Dynamic Distribution - N/A

Outcome - Block Storage (cinder)

Volume is created against volume type and there is no QoS change.

iSCSI multipathing support
  • Install open-iscsi on all nodes on your system
  • Do not install EMC PowerPath as it cannot co-exist with native multipath software
  • Multipath tools must be installed on all nova compute nodes

On Ubuntu:

# apt-get install open-iscsi           #ensure iSCSI is installed
# apt-get install multipath-tools      #multipath modules
# apt-get install sysfsutils sg3-utils #file system utilities
# apt-get install scsitools            #SCSI tools

On openSUSE and SUSE Linux Enterprise Server:

# zypper install open-iscsi           #ensure iSCSI is installed
# zypper install multipath-tools      #multipath modules
# zypper install sysfsutils sg3-utils #file system utilities
# zypper install scsitools            #SCSI tools

On Red Hat Enterprise Linux and CentOS:

# yum install iscsi-initiator-utils   #ensure iSCSI is installed
# yum install device-mapper-multipath #multipath modules
# yum install sysfsutils sg3-utils    #file system utilities
# yum install scsitools               #SCSI tools
Multipath configuration file

The multipath configuration file may be edited for better management and performance. Log in as a privileged user and make the following changes to /etc/multipath.conf on the Compute (nova) node(s).

devices {
# Device attributes for EMC VMAX
    device {
            vendor "EMC"
            product "SYMMETRIX"
            path_grouping_policy multibus
            getuid_callout "/lib/udev/scsi_id --page=pre-spc3-83 --whitelisted --device=/dev/%n"
            path_selector "round-robin 0"
            path_checker tur
            features "0"
            hardware_handler "0"
            prio const
            rr_weight uniform
            no_path_retry 6
            rr_min_io 1000
            rr_min_io_rq 1
    }
}

You may need to reboot the host after installing the MPIO tools or restart iSCSI and multipath services.

On Ubuntu:

# service open-iscsi restart
# service multipath-tools restart

On openSUSE, SUSE Linux Enterprise Server, Red Hat Enterprise Linux, and CentOS:

# systemctl restart open-iscsi
# systemctl restart multipath-tools
$ lsblk
NAME                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                          8:0    0     1G  0 disk
..360000970000196701868533030303235 (dm-6) 252:6    0     1G  0 mpath
sdb                                          8:16   0     1G  0 disk
..360000970000196701868533030303235 (dm-6) 252:6    0     1G  0 mpath
vda                                        253:0    0     1T  0 disk
OpenStack configurations

On Compute (nova) node, add the following flag in the [libvirt] section of /etc/nova/nova.conf:

iscsi_use_multipath = True

On the cinder controller node, set the multipath flag to true in /etc/cinder/cinder.conf:

use_multipath_for_image_xfer = True

Restart nova-compute and cinder-volume services after the change.

Verify you have multiple initiators available on the compute node for I/O
  1. Create a 3GB VMAX volume.

  2. Create an instance from an image out of native LVM storage or from VMAX storage, for example, from a bootable volume.

  3. Attach the 3GB volume to the new instance:

    $ multipath -ll
    mpath102 (360000970000196700531533030383039) dm-3 EMC,SYMMETRIX
    size=3G features='1 queue_if_no_path' hwhandler='0' wp=rw
    '-+- policy='round-robin 0' prio=1 status=active
    33:0:0:1 sdb 8:16 active ready running
    '- 34:0:0:1 sdc 8:32 active ready running
    
  4. Use the lsblk command to see the multipath device:

    $ lsblk
    NAME                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sdb                                          8:0    0     3G  0 disk
    ..360000970000196700531533030383039 (dm-6) 252:6    0     3G  0 mpath
    sdc                                          8:16   0     3G  0 disk
    ..360000970000196700531533030383039 (dm-6) 252:6    0     3G  0 mpath
    vda
    
Consistency group support

Consistency Groups operations are performed through the CLI using v2 of the cinder API.

/etc/cinder/policy.json may need to be updated to enable new API calls for Consistency groups.

Note

Even though the terminology is ‘Consistency Group’ in OpenStack, a Storage Group is created on the VMAX, and should not be confused with a VMAX Consistency Group which is an SRDF construct. The Storage Group is not associated with any FAST policy.

Operations
  • Create a Consistency Group:

    cinder --os-volume-api-version 2 consisgroup-create [--name <name>]
    [--description <description>] [--availability-zone <availability-zone>]
    <volume-types>
    
    $ cinder --os-volume-api-version 2 consisgroup-create --name bronzeCG2 volume_type_1
    
  • List Consistency Groups:

    cinder consisgroup-list [--all-tenants [<0|1>]]
    
    $ cinder consisgroup-list
    
  • Show a Consistency Group:

    cinder consisgroup-show <consistencygroup>
    
    $ cinder consisgroup-show 38a604b7-06eb-4202-8651-dbf2610a0827
    
  • Update a Consistency Group:

    cinder consisgroup-update [--name <name>] [--description <description>]
    [--add-volumes <uuid1,uuid2,......>] [--remove-volumes <uuid3,uuid4,......>]
    <consistencygroup>
    

    Change name:

    $ cinder consisgroup-update --name updated_name 38a604b7-06eb-4202-8651-dbf2610a0827
    

    Add volume(s) to a Consistency Group:

    $ cinder consisgroup-update --add-volumes af1ae89b-564b-4c7f-92d9-c54a2243a5fe 38a604b7-06eb-4202-8651-dbf2610a0827
    

    Delete volume(s) from a Consistency Group:

    $ cinder consisgroup-update --remove-volumes af1ae89b-564b-4c7f-92d9-c54a2243a5fe 38a604b7-06eb-4202-8651-dbf2610a0827
    
  • Create a snapshot of a Consistency Group:

    cinder cgsnapshot-create [--name <name>] [--description <description>]
    <consistencygroup>
    
    $ cinder cgsnapshot-create 618d962d-2917-4cca-a3ee-9699373e6625
    
  • Delete a snapshot of a Consistency Group:

    cinder cgsnapshot-delete <cgsnapshot> [<cgsnapshot> ...]
    
    $ cinder cgsnapshot-delete 618d962d-2917-4cca-a3ee-9699373e6625
    
  • Delete a Consistency Group:

    cinder consisgroup-delete [--force] <consistencygroup> [<consistencygroup> ...]
    
    $ cinder consisgroup-delete --force 618d962d-2917-4cca-a3ee-9699373e6625
    
  • Create a Consistency group from source (the source can only be a CG snapshot):

    cinder consisgroup-create-from-src [--cgsnapshot <cgsnapshot>]
    [--source-cg <source-cg>] [--name <name>] [--description <description>]
    
    $ cinder consisgroup-create-from-src --source-cg 25dae184-1f25-412b-b8d7-9a25698fdb6d
    
  • You can also create a volume in a consistency group in one step:

    cinder create [--consisgroup-id <consistencygroup-id>] [--name <name>]
    [--description <description>] [--volume-type <volume-type>]
    [--availability-zone <availability-zone>] <size>
    
    $ cinder create --volume-type volume_type_1 --name cgBronzeVol --consisgroup-id 1de80c27-3b2f-47a6-91a7-e867cbe36462 1
    
Workload Planner (WLP)

VMAX Hybrid allows you to manage application storage by using Service Level Objectives (SLO) with policy-based automation rather than the tiering used in the VMAX2. The VMAX Hybrid comes with up to six SLO policies defined. Each has a set of workload characteristics that determine the drive types and mixes that will be used for the SLO. All storage in the VMAX array is virtually provisioned, and all of the pools are created in containers called Storage Resource Pools (SRP). Typically there is only one SRP, although there can be more. Therefore, provisioning targets the same pool, but different SLO/Workload combinations can be provided.

The SLO capacity is retrieved by interfacing with Unisphere Workload Planner (WLP). If you do not set up this relationship then the capacity retrieved is that of the entire SRP. This can cause issues as it can never be an accurate representation of what storage is available for any given SLO and Workload combination.

Enabling WLP on Unisphere
  1. To enable WLP on Unisphere, select the array, and then select Performance > Settings.
  2. Set both the Real Time and the Root Cause Analysis options.
  3. Click Register.

Note

This should be set up ahead of time (allowing for several hours of data collection), so that the Unisphere for VMAX Performance Analyzer can collect rated metrics for each of the supported element types.

Using TestSmiProvider to add statistics access point

After enabling WLP you must then enable SMI-S to gain access to the WLP data:

  1. Connect to the SMI-S Provider using TestSmiProvider.

  2. Navigate to the Active menu.

  3. Type reg and enter the noted responses to the questions:

    (EMCProvider:5989) ? reg
    Current list of statistics Access Points: ?
    Note: The current list will be empty if there are no existing Access Points.
    Add Statistics Access Point {y|n} [n]: y
    HostID [l2se0060.lss.emc.com]: ?
    Note: Enter the Unisphere for VMAX location using a fully qualified Host ID.
    Port [8443]: ?
    Note: The Port default is the Unisphere for VMAX default secure port. If the secure port
    is different for your Unisphere for VMAX setup, adjust this value accordingly.
    User [smc]: ?
    Note: Enter the Unisphere for VMAX username.
    Password [smc]: ?
    Note: Enter the Unisphere for VMAX password.
    
  4. Type reg again to view the current list:

    (EMCProvider:5988) ? reg
    Current list of statistics Access Points:
    HostIDs:
    l2se0060.lss.emc.com
    PortNumbers:
    8443
    Users:
    smc
    Add Statistics Access Point {y|n} [n]: n
    
EMC VNX driver

The EMC VNX driver interacts with a configured VNX array. It supports both iSCSI and FC protocols.

The VNX cinder driver performs volume operations by executing Navisphere CLI (NaviSecCLI), which is a command-line interface used for management, diagnostics, and reporting functions for VNX.

System requirements
  • VNX Operational Environment for Block version 5.32 or higher.
  • VNX Snapshot and Thin Provisioning license should be activated for VNX.
  • Python library storops to interact with VNX.
  • Navisphere CLI v7.32 or higher is installed along with the driver.
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume.
  • Retype a volume.
  • Get volume statistics.
  • Create and delete consistency groups.
  • Create, list, and delete consistency group snapshots.
  • Modify consistency groups.
  • Efficient non-disruptive volume backup.
  • Create a cloned consistency group.
  • Create a consistency group from consistency group snapshots.
  • Replication v2.1 support.
Preparation

This section contains instructions to prepare the Block Storage nodes to use the EMC VNX driver. You should install the Navisphere CLI and ensure you have correct zoning configurations.

Install Navisphere CLI

Navisphere CLI needs to be installed on all Block Storage nodes within an OpenStack deployment. You need to download different versions for different platforms:

Install Python library storops

storops is a Python library that interacts with VNX array through Navisphere CLI. Use the following command to install the storops library:

$ pip install storops
Check array software

Make sure you have the following software installed for certain features:

Required software
Feature Software required
All ThinProvisioning
All VNXSnapshots
FAST cache support FASTCache
Create volume with type compressed Compression
Create volume with type deduplicated Deduplication

You can check the status of your array software on the Software page of Storage System Properties.

Network configuration

For the FC Driver, ensure that FC zoning is properly configured between the hosts and the VNX. See Register FC port with VNX for reference.

For the iSCSI Driver, make sure your VNX iSCSI port is accessible by your hosts. See Register iSCSI port with VNX for reference.

You can set initiator_auto_registration = True to avoid registering the ports manually. See the details of this option in Back-end configuration.

If you are trying to set up multipath, refer to Multipath setup.

Back-end configuration

Make the following changes in the /etc/cinder/cinder.conf file.

Minimum configuration

Here is a sample of a minimum back-end configuration. See the following sections for details of each option. Set storage_protocol = iscsi if the iSCSI protocol is used.

[DEFAULT]
enabled_backends = vnx_array1

[vnx_array1]
san_ip = 10.10.72.41
san_login = sysadmin
san_password = sysadmin
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
initiator_auto_registration = True
storage_protocol = fc
Multiple back-end configuration

Here is a sample of a multiple back-end configuration. See the following sections for details of each option. Set storage_protocol = iscsi if the iSCSI protocol is used.

[DEFAULT]
enabled_backends = backendA, backendB

[backendA]
storage_vnx_pool_names = Pool_01_SAS, Pool_02_FLASH
san_ip = 10.10.72.41
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
initiator_auto_registration = True
storage_protocol = fc

[backendB]
storage_vnx_pool_names = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
initiator_auto_registration = True
storage_protocol = fc

The value of the option storage_protocol can be either fc or iscsi, which is case insensitive.

For more details on multiple back ends, see Configure multiple-storage back ends

Required configurations

IP of the VNX Storage Processors

Specify SP A or SP B IP to connect:

san_ip = <IP of VNX Storage Processor>

VNX login credentials

There are two ways to specify the credentials.

  • Use plain text username and password.

    Supply the plain-text username and password as follows:

    san_login = <VNX account with administrator role>
    san_password = <password for VNX account>
    storage_vnx_authentication_type = global
    

    Valid values for storage_vnx_authentication_type are: global (default), local, and ldap.

  • Use Security file.

    This approach avoids the plain text password in your cinder configuration file. Supply a security file as below:

    storage_vnx_security_file_dir = <path to security file>
    

Check Unisphere CLI user guide or Authenticate by security file for how to create a security file.

Path to your Unisphere CLI

Specify the absolute path to your naviseccli:

naviseccli_path = /opt/Navisphere/bin/naviseccli

Driver’s storage protocol

  • For the FC Driver, add the following option:

    volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
    storage_protocol = fc
    
  • For iSCSI Driver, add the following option:

    volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
    storage_protocol = iscsi
    
Optional configurations
VNX pool names

Specify the list of pools to be managed, separated by commas. They should already exist in VNX.

storage_vnx_pool_names = pool 1, pool 2

If this value is not specified, all pools of the array will be used.

Initiator auto registration

When initiator_auto_registration is set to True and the option io_port_list is not specified in the cinder.conf file, the driver will automatically register the initiators to all working target ports of the VNX array during volume attaching. (The driver will skip initiators that have already been registered.)

If the user wants to register the initiators with some specific ports but not register with the other ports, this functionality should be disabled.

When a comma-separated list is given to io_port_list, the driver will only register the initiator to the ports specified in the list and will only return target ports that belong to io_port_list, instead of all target ports.

  • Example for FC ports:

    io_port_list = a-1,B-3
    

    a or B is the Storage Processor; the numbers 1 and 3 are the Port IDs.

  • Example for iSCSI ports:

    io_port_list = a-1-0,B-3-0
    

    a or B is the Storage Processor; the first numbers 1 and 3 are the Port IDs, and the second number 0 is the Virtual Port ID.

Note

  • Rather than being deregistered, registered ports are simply bypassed, whether or not they appear in io_port_list.
  • The driver will raise an exception if ports in io_port_list do not exist in VNX during startup.
Force delete volumes in storage group

Some available volumes may remain in a storage group on the VNX array due to OpenStack timeout issues, but the VNX array does not allow the user to delete volumes that are in a storage group. The option force_delete_lun_in_storagegroup is introduced to allow the user to delete the available volumes in this situation.

When force_delete_lun_in_storagegroup is set to True in the back-end section, the driver will move the volumes out of the storage groups and then delete them if the user tries to delete the volumes that remain in the storage group on the VNX array.

The default value of force_delete_lun_in_storagegroup is False.
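
For example, a back-end section could enable this behavior as follows (a minimal illustration):

force_delete_lun_in_storagegroup = True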

Over subscription in thin provisioning

Oversubscription allows the sum of all volumes' capacities (the provisioned capacity) to be larger than the pool's total capacity.

max_over_subscription_ratio in the back-end section is the ratio of provisioned capacity over total capacity.

The default value of max_over_subscription_ratio is 20.0, which means the provisioned capacity can be 20 times the total capacity. If the value of this ratio is set larger than 1.0, the provisioned capacity can exceed the total capacity.
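
For example, the following line in the back-end section halves the default ratio (the value shown is illustrative, not a recommendation):

max_over_subscription_ratio = 10.0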

Storage group automatic deletion

For volume attaching, the driver maintains a storage group on the VNX for each compute node hosting the VM instances that consume VNX Block Storage (using the compute node's host name as the storage group's name). All the volumes attached to the VM instances in a compute node will be put into the storage group. If destroy_empty_storage_group is set to True, the driver will remove the empty storage group after its last volume is detached. For data safety, it is not recommended to set destroy_empty_storage_group=True unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required for operation synchronization for this behavior.

Initiator auto deregistration

Enabling storage group automatic deletion is the precondition of this function. If initiator_auto_deregistration is set to True, the driver will deregister all FC and iSCSI initiators of the host after its storage group is deleted.
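
A minimal sketch of the two options in a back-end section, keeping in mind the lock_path caveat above:

destroy_empty_storage_group = True
initiator_auto_deregistration = True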

FC SAN auto zoning

The EMC VNX driver supports FC SAN auto zoning when ZoneManager is configured and zoning_mode is set to fabric in cinder.conf. For ZoneManager configuration, refer to Fibre Channel Zone Manager.
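
For example, fabric zoning is enabled with the following line in cinder.conf (a minimal sketch; the Zone Manager itself must be configured separately, see Fibre Channel Zone Manager):

zoning_mode = fabric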

Volume number threshold

In VNX, there is a limitation on the number of pool volumes that can be created in the system. When the limitation is reached, no more pool volumes can be created even if there is remaining capacity in the storage pool. In other words, if the scheduler dispatches a volume creation request to a back end that has free capacity but reaches the volume limitation, the creation fails.

The default value of check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, the pool-based back end will check the limit and will report 0 free capacity to the scheduler if the limit is reached, so that the scheduler can skip pool-based back ends that have run out of pool volumes.
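
For example, the check can be turned on in the back-end section as follows (a minimal illustration):

check_max_pool_luns_threshold = True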

iSCSI initiators

iscsi_initiators is a dictionary of IP addresses of the iSCSI initiator ports on the OpenStack Compute and Block Storage nodes that want to connect to VNX via iSCSI. If this option is configured, the driver will leverage this information to find an accessible iSCSI target portal for the initiator when attaching a volume. Otherwise, the iSCSI target portal will be chosen in a relatively random way.

Note

This option is only valid for iSCSI driver.

Here is an example: VNX will connect host1 with 10.0.0.1 and 10.0.0.2, and it will connect host2 with 10.0.0.3.

The key name (host1 in the example) should be the output of hostname command.

iscsi_initiators = {"host1":["10.0.0.1", "10.0.0.2"],"host2":["10.0.0.3"]}
Default timeout

Specify the timeout in minutes for operations like LUN migration, LUN creation, etc. For example, LUN migration is a typical long-running operation, which depends on the LUN size and the load of the array. An upper bound appropriate to the specific deployment can be set to avoid unnecessarily long waits.

The default value for this option is infinite.

default_timeout = 60
Max LUNs per storage group

The max_luns_per_storage_group option specifies the maximum number of LUNs in a storage group. The default value is 255, which is also the maximum value supported by VNX.
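
For example (the value shown is simply the documented default and maximum):

max_luns_per_storage_group = 255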

Ignore pool full threshold

If ignore_pool_full_threshold is set to True, the driver forces LUN creation even if the pool full threshold is reached. The default is False.
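
For example, to allow LUN creation beyond the pool full threshold, add the following to the back-end section (a minimal sketch):

ignore_pool_full_threshold = True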

Extra spec options

Extra specs are used in volume types created in Block Storage to define the preferred properties of the volume.

The Block Storage scheduler uses extra specs to find a suitable back end for the volume, and the Block Storage driver creates the volume based on the properties specified by the extra specs.

Use the following command to create a volume type:

$ cinder type-create "demoVolumeType"

Use the following command to update the extra spec of a volume type:

$ cinder type-key "demoVolumeType" set provisioning:type=thin thick_provisioning_support='<is> True'

The following sections describe the VNX extra keys.

Provisioning type
  • Key: provisioning:type

  • Possible Values:

    • thick

      Volume is fully provisioned.

      Run the following commands to create a thick volume type:

      $ cinder type-create "ThickVolumeType"
      $ cinder type-key "ThickVolumeType" set provisioning:type=thick thick_provisioning_support='<is> True'
      
    • thin

      Volume is virtually provisioned.

      Run the following commands to create a thin volume type:

      $ cinder type-create "ThinVolumeType"
      $ cinder type-key "ThinVolumeType" set provisioning:type=thin thin_provisioning_support='<is> True'
      
    • deduplicated

      Volume is thin and deduplication is enabled. The administrator must configure the system-level deduplication settings on the VNX. To create a deduplicated volume, the VNX Deduplication license must be activated on the VNX, and deduplication_support='<is> True' must be specified so the Block Storage scheduler finds a proper volume back end.

      Run the following commands to create a deduplicated volume type:

      $ cinder type-create "DeduplicatedVolumeType"
      $ cinder type-key "DeduplicatedVolumeType" set provisioning:type=deduplicated deduplication_support='<is> True'
      
    • compressed

      Volume is thin and compression is enabled. The administrator must configure the system-level compression settings on the VNX. To create a compressed volume, the VNX Compression license must be activated on the VNX, and compression_support='<is> True' must be specified so the Block Storage scheduler finds a proper volume back end. VNX does not support creating snapshots on a compressed volume.

      Run the following commands to create a compressed volume type:

      $ cinder type-create "CompressedVolumeType"
      $ cinder type-key "CompressedVolumeType" set provisioning:type=compressed compression_support='<is> True'
      
  • Default: thick

Note

provisioning:type replaces the old spec key storagetype:provisioning, which has been obsolete since the Mitaka release.

Storage tiering support
  • Key: storagetype:tiering
  • Possible values:
    • StartHighThenAuto
    • Auto
    • HighestAvailable
    • LowestAvailable
    • NoMovement
  • Default: StartHighThenAuto

VNX supports fully automated storage tiering, which requires the FAST license to be activated on the VNX. The OpenStack administrator can use the extra spec key storagetype:tiering to set the tiering policy of a volume, and the key fast_support='<is> True' to let the Block Storage scheduler find a volume back end that manages a VNX with the FAST license activated. The five supported values for the extra spec key storagetype:tiering are listed above.

Run the following commands to create a volume type with tiering policy:

$ cinder type-create "ThinVolumeOnAutoTier"
$ cinder type-key "ThinVolumeOnAutoTier" set provisioning:type=thin storagetype:tiering=Auto fast_support='<is> True'

Note

The tiering policy cannot be applied to a deduplicated volume. The tiering policy of a deduplicated LUN aligns with the settings of the pool.

FAST cache support
  • Key: fast_cache_enabled
  • Possible values:
    • True
    • False
  • Default: False

VNX has a FAST Cache feature, which requires the FAST Cache license to be activated on the VNX. The volume is created on a back end with FAST Cache enabled when '<is> True' is specified.
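
For example, a volume type that requests FAST Cache could be created as follows (the type name FASTCacheType is illustrative):

$ cinder type-create "FASTCacheType"
$ cinder type-key "FASTCacheType" set fast_cache_enabled='<is> True'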

Pool name
  • Key: pool_name
  • Possible values: name of the storage pool managed by cinder
  • Default: None

If the user wants to create a volume on a certain storage pool in a back end that manages multiple pools, a volume type with an extra spec that specifies the storage pool should be created first; the user can then use this volume type to create the volume.

Run the following commands to create the volume type:

$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41
Obsolete extra specs

Note

DO NOT use the following obsolete extra spec keys:

  • storagetype:provisioning
  • storagetype:pool
Advanced features
Snap copy
  • Metadata Key: snapcopy
  • Possible Values:
    • True or true
    • False or false
  • Default: False

The VNX driver supports snap copy, which accelerates the process of creating a copied volume.

By default, the driver does a full data copy when creating a volume from a snapshot or cloning a volume. This is time-consuming, especially for large volumes. When snap copy is used, the driver creates a snapshot and mounts it as a volume for these two operations, which makes them nearly instant even for large volumes.

To enable this functionality, append --metadata snapcopy=True when creating a cloned volume or creating a volume from a snapshot.

$ cinder create --source-volid <source-volid> --name "cloned_volume" --metadata snapcopy=True

Or

$ cinder create --snapshot-id <snapshot-id> --name "vol_from_snapshot" --metadata snapcopy=True

The newly created volume is a snap copy instead of a full copy. If a full copy is needed, retype or migrate can be used to convert the snap-copy volume to a full-copy volume, which may be time-consuming.
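
For example, a retype with an on-demand migration policy can trigger the conversion (a sketch; the target volume type is assumed to map to the desired back end):

$ cinder retype --migration-policy on-demand <volume> <new-volume-type>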

You can determine whether the volume is a snap-copy volume or not by showing its metadata. If the snapcopy in metadata is True or true, the volume is a snap-copy volume. Otherwise, it is a full-copy volume.

$ cinder metadata-show <volume>

Constraints

  • The number of snap-copy volumes created from a single source volume is limited to 255 at one point in time.
  • A source volume that has snap-copy volumes cannot be deleted or migrated.
  • A snap-copy volume is changed to a full-copy volume after host-assisted or storage-assisted migration.
  • A snap-copy volume cannot be added to a consistency group because of a VNX limitation.
Efficient non-disruptive volume backup

The default implementation in Block Storage for non-disruptive volume backup is not efficient since a cloned volume will be created during backup.

The efficient backup approach creates a snapshot for the volume and connects this snapshot (a mount point in VNX) to the Block Storage host for volume backup. This eliminates the migration time involved in a volume clone.

Constraints

  • Backup creation for a snap-copy volume is not allowed if the volume status is in-use, since a snapshot cannot be taken of such a volume.
Configurable migration rate

The VNX cinder driver leverages the native LUN migration of the VNX. LUN migration is involved in cloning, migrating, retyping, and creating a volume from a snapshot. When the administrator sets migrate_rate in a volume's metadata, the VNX driver starts the migration with the specified rate. The available values for migrate_rate are high, asap, low and medium.

The following is an example to set migrate_rate to asap:

$ cinder metadata <volume-id> set migrate_rate=asap

Once set, any cinder volume operation involving VNX LUN migration uses this value as the migration rate. To restore the default migration rate, unset the metadata as follows:

$ cinder metadata <volume-id> unset migrate_rate

Note

Do not use the asap migration rate when the system is in production, as the normal host I/O may be interrupted. Use asap only when the system is offline (free of any host-level I/O).

Replication v2.1 support

Cinder introduced Replication v2.1 support in Mitaka. It supports fail-over and fail-back replication for a specific back end. In the VNX cinder driver, MirrorView is used to set up replication for the volume.

To enable this feature, add the following configuration to the driver section in cinder.conf:

replication_device = backend_id:<secondary VNX serial number>,
                     san_ip:192.168.1.2,
                     san_login:admin,
                     san_password:admin,
                     naviseccli_path:/opt/Navisphere/bin/naviseccli,
                     storage_vnx_authentication_type:global,
                     storage_vnx_security_file_dir:

Currently, only synchronized-mode MirrorView is supported, and one volume can have only one secondary storage system. Therefore, only one replication_device can be present in the driver configuration section.

To create a replication enabled volume, you need to create a volume type:

$ cinder type-create replication-type
$ cinder type-key replication-type set replication_enabled="<is> True"

Then create a volume with the above volume type:

$ cinder create --volume-type replication-type --name replication-volume 1

Supported operations

  • Create volume

  • Create cloned volume

  • Create volume from snapshot

  • Fail-over volume:

    $ cinder failover-host --backend_id <secondary VNX serial number> <hostname>
    
  • Fail-back volume:

    $ cinder failover-host --backend_id default <hostname>
    

Requirements

  • The two VNX systems must be in the same domain.
  • For iSCSI MirrorView, the user needs to set up the iSCSI connection before enabling replication in Cinder.
  • For FC MirrorView, the user needs to zone specific FC ports from the two VNX systems together.
  • The MirrorView Sync enabler (MirrorView/S) must be installed on both systems.
  • The write intent log must be enabled on both VNX systems.

For more information on how to configure MirrorView, refer to: MirrorView-Knowledgebook:-Releases-30-–-33

Best practice
Multipath setup

Enabling multipath volume access is recommended for robust data access. The major configuration includes:

  1. Install multipath-tools, sysfsutils and sg3-utils on the nodes hosting the Nova-Compute and Cinder-Volume services. Check your distribution's documentation for specific installation steps. On Red Hat based distributions, the packages are device-mapper-multipath, sysfsutils and sg3_utils.
  2. Specify use_multipath_for_image_xfer=true in the cinder.conf file for each FC/iSCSI back end.
  3. Specify iscsi_use_multipath=True in the libvirt section of the nova.conf file. This option is valid for both the iSCSI and FC drivers.

For multipath-tools, here is an EMC recommended sample of /etc/multipath.conf file.

In the sample, user_friendly_names is set to no, which is also its default value. It is not recommended to set it to yes because it may break operations such as VM live migration.

blacklist {
    # Skip the files under /dev that are definitely not FC/iSCSI devices
    # Different system may need different customization
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"

    # Skip LUNZ device from VNX
    device {
        vendor "DGC"
        product "LUNZ"
        }
}

defaults {
    user_friendly_names no
    flush_on_last_del yes
}

devices {
    # Device attributed for EMC CLARiiON and VNX series ALUA
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker emc_clariion
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}

Note

When multipath is used in OpenStack, multipath faulty devices may appear on Nova-Compute nodes due to various issues (Bug 1336683 is a typical example).

A solution to completely avoid faulty devices has not been found yet. faulty_device_cleanup.py mitigates this issue when VNX iSCSI storage is used. Cloud administrators can deploy the script on all Nova-Compute nodes and use a cron job to run it periodically on each Nova-Compute node so that faulty devices do not stay around for long. Refer to VNX faulty device cleanup for detailed usage and the script.

Restrictions and limitations
iSCSI port cache

The EMC VNX iSCSI driver caches iSCSI port information. After changing the iSCSI port configuration, restart the cinder-volume service or wait for the cache to refresh (the interval is configured by periodic_interval in the cinder.conf file) before performing any volume attachment operation. Otherwise, the attachment may fail because stale iSCSI port information is used.
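
For example, the refresh interval is controlled by the periodic_interval option in cinder.conf (the value below is illustrative, in seconds):

periodic_interval = 60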

No extending for volume with snapshots

VNX does not support extending a thick volume that has a snapshot. If the user tries to extend such a volume, its status changes to error_extending.

Limitations for deploying cinder on a compute node

It is not recommended to deploy the driver on a compute node if cinder upload-to-image --force True is used against an in-use volume. Otherwise, cinder upload-to-image --force True will terminate the VM instance's data access to the volume.

Storage group with host names in VNX

When the driver notices that there is no existing storage group that has the host name as the storage group name, it will create the storage group and also add the compute node’s or Block Storage node’s registered initiators into the storage group.

If the driver notices that the storage group already exists, it will assume that the registered initiators have also been put into it and skip the operations above for better performance.

It is recommended that the storage administrator does not create the storage group manually and instead relies on the driver for the preparation. If the storage administrator needs to create the storage group manually for some special requirements, the correct registered initiators should be put into the storage group as well (otherwise the following volume attaching operations will fail).

EMC storage-assisted volume migration

The EMC VNX driver supports storage-assisted volume migration. When the user starts a migration with cinder migrate --force-host-copy False <volume_id> <host> or cinder migrate <volume_id> <host>, cinder tries to leverage the VNX's native volume migration functionality.

In the following scenarios, VNX storage-assisted volume migration is not triggered:

  • In-use volume migration between back ends with different storage protocols, for example, FC and iSCSI.
  • Volume is to be migrated across arrays.
Appendix
Authenticate by security file

VNX credentials are necessary when the driver connects to the VNX system. Credentials in global, local and ldap scopes are supported. There are two approaches to providing the credentials.

The recommended approach is to use a Navisphere CLI security file, which avoids storing plain-text credentials in the configuration file. The following instructions describe how to do this.

  1. Find out the Linux user ID of the cinder-volume processes. The following steps assume the cinder-volume service runs under the account cinder.

  2. Run su as root user.

  3. In the /etc/passwd file, change cinder:x:113:120::/var/lib/cinder:/bin/false to cinder:x:113:120::/var/lib/cinder:/bin/bash. (This temporary change makes step 4 work.)

  4. Save the credentials on behalf of cinder user to a security file (assuming the array credentials are admin/admin in global scope). In the command below, the -secfilepath switch is used to specify the location to save the security file.

    # su -l cinder -c '/opt/Navisphere/bin/naviseccli \
      -AddUserSecurity -user admin -password admin -scope 0 -secfilepath <location>'
    
  5. Change cinder:x:113:120::/var/lib/cinder:/bin/bash back to cinder:x:113:120::/var/lib/cinder:/bin/false in /etc/passwd file.

  6. Remove the credential options san_login, san_password and storage_vnx_authentication_type from the cinder.conf file (normally /etc/cinder/cinder.conf). Add the option storage_vnx_security_file_dir and set its value to the directory path of the security file generated in the above step, as shown in the example after this list. Omit this option if -secfilepath is not used in the above step.

  7. Restart the cinder-volume service to validate the change.
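
For example, the back-end section might contain the following (a sketch; the directory path is an assumed location for the security file generated above):

storage_vnx_security_file_dir = /etc/secfile/array1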

Register FC port with VNX

This configuration is only required when initiator_auto_registration=False.

To access VNX storage, the Compute nodes should be registered on VNX first if initiator auto registration is not enabled.

To perform Copy Image to Volume and Copy Volume to Image operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.

The steps below are for the compute nodes. Follow the same steps for the Block Storage nodes as well. (The steps can be skipped if initiator auto registration is enabled.)

  1. Assume that 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 is the WWN of an FC initiator port of the compute node whose host name is myhost1 and whose IP address is 10.10.61.1. Register 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 in Unisphere:
  2. Log in to Unisphere, go to FNM0000000000 > Hosts > Initiators.
  3. Refresh and wait until the initiator 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 with SP Port A-1 appears.
  4. Click the Register button, select CLARiiON/VNX and enter the host name (which is the output of the hostname command) and IP address:
    • Hostname: myhost1
    • IP: 10.10.61.1
    • Click Register.
  5. Then host 10.10.61.1 will appear under Hosts > Host List as well.
  6. Register the WWN with more ports if needed.
Register iSCSI port with VNX

This configuration is only required when initiator_auto_registration=False.

To access VNX storage, the compute nodes should be registered on VNX first if initiator auto registration is not enabled.

To perform Copy Image to Volume and Copy Volume to Image operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.

The steps below are for the compute nodes. Follow the same steps for the Block Storage nodes as well. (The steps can be skipped if initiator auto registration is enabled.)

  1. On the compute node with IP address 10.10.61.1 and host name myhost1, execute the following commands (assuming 10.10.61.35 is the iSCSI target):

    1. Start the iSCSI initiator service on the node:

      # /etc/init.d/open-iscsi start
      
    2. Discover the iSCSI target portals on VNX:

      # iscsiadm -m discovery -t st -p 10.10.61.35
      
    3. Change directory to /etc/iscsi:

      # cd /etc/iscsi
      
    4. Find out the iqn of the node:

      # more initiatorname.iscsi
      
  2. Log in to VNX from the compute node using the target corresponding to the SPA port:

    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
    
  3. Assume iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the compute node. Register iqn.1993-08.org.debian:01:1a2b3c4d5f6g in Unisphere:

    1. Log in to Unisphere, go to FNM0000000000 > Hosts > Initiators.
    2. Refresh and wait until the initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.
    3. Click the Register button, select CLARiiON/VNX and enter the host name (which is the output of the hostname command) and IP address:
      • Hostname: myhost1
      • IP: 10.10.61.1
      • Click Register.
    4. Then host 10.10.61.1 will appear under Hosts > Host List as well.
  4. Log out iSCSI on the node:

    # iscsiadm -m node -u
    
  5. Log in to VNX from the compute node using the target corresponding to the SPB port:

    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l
    
  6. In Unisphere, register the initiator with the SPB port.

  7. Log out iSCSI on the node:

    # iscsiadm -m node -u
    
  8. Register the iqn with more ports if needed.

EMC XtremIO Block Storage driver configuration

The high performance XtremIO All Flash Array (AFA) offers Block Storage services to OpenStack. Using the driver, OpenStack Block Storage hosts can connect to an XtremIO Storage cluster.

This section explains how to configure and connect the block storage nodes to an XtremIO storage cluster.

Support matrix

XtremIO version 4.x is supported.

Supported operations
  • Create, delete, clone, attach, and detach volumes.
  • Create and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Extend a volume.
  • Manage and unmanage a volume.
  • Manage and unmanage a snapshot.
  • Get volume statistics.
  • Create, modify, delete, and list consistency groups.
  • Create, modify, delete, and list snapshots of consistency groups.
  • Create consistency group from consistency group or consistency group snapshot.
  • Volume Migration (host assisted)
XtremIO Block Storage driver configuration

Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section in the case of a single back end, or under a separate section in the case of multiple back ends (for example [XTREMIO]). The configuration file is usually located at /etc/cinder/cinder.conf.

Description of EMC XtremIO volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
xtremio_array_busy_retry_count = 5 (Integer) Number of retries in case array is busy
xtremio_array_busy_retry_interval = 5 (Integer) Interval between retries in case array is busy
xtremio_cluster_name = (String) XMS cluster id in multi-cluster environment
xtremio_volumes_per_glance_cache = 100 (Integer) Number of volumes created from each cached glance image

For a configuration example, refer to the Configuration example section.

XtremIO driver name

Configure the driver name by setting the following parameter in the cinder.conf file:

  • For iSCSI:

    volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
    
  • For Fibre Channel:

    volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
    
XtremIO management server (XMS) IP

To retrieve the management IP, use the show-xms CLI command.

Configure the management IP by adding the following parameter:

san_ip = XMS Management IP
XtremIO cluster name

In XtremIO version 4.0, a single XMS can manage multiple cluster back ends. In such setups, the administrator is required to specify the cluster name (in addition to the XMS IP). Each cluster must be defined as a separate back end.

To retrieve the cluster name, run the show-clusters CLI command.

Configure the cluster name by adding the following parameter:

xtremio_cluster_name = Cluster-Name

Note

When a single cluster is managed in XtremIO version 4.0, the cluster name is not required.

XtremIO user credentials

OpenStack Block Storage requires an XtremIO XMS user with administrative privileges. XtremIO recommends creating a dedicated OpenStack user account that holds an administrative user role.

Refer to the XtremIO User Guide for details on user account management.

Create an XMS account using either the XMS GUI or the add-user-account CLI command.

Configure the user credentials by adding the following parameters:

san_login = XMS username
san_password = XMS username password
Multiple back ends

Configuring multiple storage back ends enables you to create several back-end storage solutions that serve the same OpenStack Compute resources.

When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.
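
For example, two XtremIO clusters might be configured as separate back ends as follows (a sketch; the section names and values are illustrative):

[DEFAULT]
enabled_backends = XtremIO-1, XtremIO-2

[XtremIO-1]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIO-1

[XtremIO-2]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster02
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIO-2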

Setting thin provisioning and multipathing parameters

To support thin provisioning and multipathing in the XtremIO Array, the following parameters from the Nova and Cinder configuration files should be modified as follows:

  • Thin Provisioning

    All XtremIO volumes are thin provisioned. The default value of 20 should be maintained for the max_over_subscription_ratio parameter.

    The use_cow_images parameter in the nova.conf file should be set to False as follows:

    use_cow_images = False
    
  • Multipathing

    The use_multipath_for_image_xfer parameter in the cinder.conf file should be set to True as follows:

    use_multipath_for_image_xfer = True
    
Image service optimization

Limit the number of copies (XtremIO snapshots) taken from each image cache.

xtremio_volumes_per_glance_cache = 100

The default value is 100. A value of 0 ignores the limit and defers to the array maximum as the effective limit.

SSL certification

To enable SSL certificate validation, modify the following option in the cinder.conf file:

driver_ssl_cert_verify = true

By default, SSL certificate validation is disabled.

To specify a non-default path to CA_Bundle file or directory with certificates of trusted CAs:

driver_ssl_cert_path = Certificate path
Configuring CHAP

The XtremIO Block Storage driver supports CHAP initiator authentication and discovery.

If CHAP initiator authentication is required, set the CHAP Authentication mode to initiator.

To set the CHAP initiator mode using CLI, run the following XMCLI command:

$ modify-chap chap-authentication-mode=initiator

If CHAP initiator discovery is required, set the CHAP discovery mode to initiator.

To set the CHAP initiator discovery mode using CLI, run the following XMCLI command:

$ modify-chap chap-discovery-mode=initiator

The CHAP initiator modes can also be set via the XMS GUI.

Refer to XtremIO User Guide for details on CHAP configuration via GUI and CLI.

The CHAP initiator authentication and discovery credentials (username and password) are generated automatically by the Block Storage driver. Therefore, there is no need to configure the initial CHAP credentials manually in XMS.

Configuration example

You can update the cinder.conf file by editing the necessary parameters as follows:

[DEFAULT]
enabled_backends = XtremIO

[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIOAFA
Fujitsu ETERNUS DX driver

The Fujitsu ETERNUS DX driver provides FC and iSCSI support for the ETERNUS DX S3 series.

The driver performs volume operations by communicating with ETERNUS DX. It uses a CIM client in Python called PyWBEM to perform CIM operations over HTTP.

You can specify a RAID Group or a Thin Provisioning Pool (TPP) in the ETERNUS DX as a storage pool.

System requirements

Supported storages:

  • ETERNUS DX60 S3
  • ETERNUS DX100 S3/DX200 S3
  • ETERNUS DX500 S3/DX600 S3
  • ETERNUS DX8700 S3/DX8900 S3
  • ETERNUS DX200F

Requirements:

  • Firmware version V10L30 or later is required.
  • The multipath environment with ETERNUS Multipath Driver is unsupported.
  • An Advanced Copy Feature license is required to create a snapshot and a clone.
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume. (*1)
  • Get volume statistics.

(*1): Extending a volume is possible only when you use a TPP as the storage pool.

Preparation
Package installation

Install the python-pywbem package for your distribution.

  • On Ubuntu:

    # apt-get install python-pywbem
    
  • On openSUSE:

    # zypper install python-pywbem
    
  • On Red Hat Enterprise Linux, CentOS, and Fedora:

    # yum install pywbem
    
ETERNUS DX setup

Perform the following steps using ETERNUS Web GUI or ETERNUS CLI.

Note

  • The following operations require an account that has the Admin role.
  • For detailed operations, refer to ETERNUS Web GUI User’s Guide or ETERNUS CLI User’s Guide for ETERNUS DX S3 series.
  1. Create an account for communication with cinder controller.

  2. Enable the SMI-S of ETERNUS DX.

  3. Register an Advanced Copy Feature license and configure copy table size.

  4. Create a storage pool for volumes.

  5. (Optional) If you want to create snapshots on a different storage pool for volumes, create a storage pool for snapshots.

  6. Create a Snap Data Pool Volume (SDPV) to enable the Snap Data Pool (SDP) used for snapshot creation.

  7. Configure storage ports used for OpenStack.

    • Set those storage ports to CA mode.

    • Enable the host-affinity settings of those storage ports.

      (ETERNUS CLI command for enabling host-affinity settings):

      CLI> set fc-parameters -host-affinity enable -port <CM#><CA#><Port#>
      CLI> set iscsi-parameters -host-affinity enable -port <CM#><CA#><Port#>
      
  8. Ensure there is a LAN connection between the cinder controller and the MNT port of the ETERNUS DX, and a SAN connection between the compute nodes and the CA ports of the ETERNUS DX.

Configuration
  1. Edit cinder.conf.

    Add the following entries to /etc/cinder/cinder.conf:

    FC entries:

    volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
    cinder_eternus_config_file = /etc/cinder/eternus_dx.xml
    

    iSCSI entries:

    volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver
    cinder_eternus_config_file = /etc/cinder/eternus_dx.xml
    

    If cinder_eternus_config_file is not specified, the parameter defaults to /etc/cinder/cinder_fujitsu_eternus_dx.xml.

  2. Create a driver configuration file.

    Create a driver configuration file in the file path specified as cinder_eternus_config_file in cinder.conf, and add parameters to the file as below:

    FC configuration:

    <?xml version='1.0' encoding='UTF-8'?>
    <FUJITSU>
    <EternusIP>0.0.0.0</EternusIP>
    <EternusPort>5988</EternusPort>
    <EternusUser>smisuser</EternusUser>
    <EternusPassword>smispassword</EternusPassword>
    <EternusPool>raid5_0001</EternusPool>
    <EternusSnapPool>raid5_0001</EternusSnapPool>
    </FUJITSU>
    

    iSCSI configuration:

    <?xml version='1.0' encoding='UTF-8'?>
    <FUJITSU>
    <EternusIP>0.0.0.0</EternusIP>
    <EternusPort>5988</EternusPort>
    <EternusUser>smisuser</EternusUser>
    <EternusPassword>smispassword</EternusPassword>
    <EternusPool>raid5_0001</EternusPool>
    <EternusSnapPool>raid5_0001</EternusSnapPool>
    <EternusISCSIIP>1.1.1.1</EternusISCSIIP>
    <EternusISCSIIP>1.1.1.2</EternusISCSIIP>
    <EternusISCSIIP>1.1.1.3</EternusISCSIIP>
    <EternusISCSIIP>1.1.1.4</EternusISCSIIP>
    </FUJITSU>
    

    Where:

    EternusIP

    IP address for the SMI-S connection of the ETERNUS DX.

    Enter the IP address of the MNT port of the ETERNUS DX.

    EternusPort

    Port number for the SMI-S connection port of the ETERNUS DX.

    EternusUser

    User name for the SMI-S connection of the ETERNUS DX.

    EternusPassword

    Password for the SMI-S connection of the ETERNUS DX.

    EternusPool

    Storage pool name for volumes.

    Enter RAID Group name or TPP name in the ETERNUS DX.

    EternusSnapPool

    Storage pool name for snapshots.

    Enter RAID Group name in the ETERNUS DX.

    EternusISCSIIP (Multiple setting allowed)

    iSCSI connection IP address of the ETERNUS DX.

    Note

    • For EternusSnapPool, you can specify only a RAID Group name; you cannot specify a TPP name.
    • You can specify the same RAID Group name for EternusPool and EternusSnapPool if you create volumes and snapshots on the same storage pool.
Configuration example
  1. Edit cinder.conf:

    [DEFAULT]
    enabled_backends = DXFC, DXISCSI
    
    [DXFC]
    volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
    cinder_eternus_config_file = /etc/cinder/fc.xml
    volume_backend_name = FC
    
    [DXISCSI]
    volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver
    cinder_eternus_config_file = /etc/cinder/iscsi.xml
    volume_backend_name = ISCSI
    
  2. Create the driver configuration files fc.xml and iscsi.xml.

  3. Create a volume type and set extra specs to the type:

    $ cinder type-create DX_FC
    $ cinder type-key DX_FC set volume_backend_name=FC
    $ cinder type-create DX_ISCSI
    $ cinder type-key DX_ISCSI set volume_backend_name=ISCSI
    

    By issuing these commands, the volume type DX_FC is associated with the FC back end, and the type DX_ISCSI is associated with the iSCSI back end.

Hitachi NAS Platform iSCSI and NFS drivers

These OpenStack Block Storage volume drivers provide iSCSI and NFS support for Hitachi NAS Platform (HNAS) models 3080, 3090, 4040, 4060, 4080, and 4100 with NAS OS 12.2 or higher.

Supported operations

The NFS and iSCSI drivers support these operations:

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.
  • Manage and unmanage a volume.
  • Manage and unmanage snapshots (HNAS NFS only)
HNAS storage requirements

Before using iSCSI and NFS services, use the HNAS configuration and management GUI (SMU) or SSC CLI to configure HNAS to work with the drivers. Additionally:

  1. General:
  • It is mandatory to have at least one storage pool, one EVS and one file system to run any of the HNAS drivers.
  • HNAS drivers report the space allocated to the file systems to cinder, so when creating a file system, make sure it has enough space to fit your needs.
  • The file system used should not be created as a replication target and should be mounted.
  • It is possible to configure HNAS drivers to use distinct EVSs and file systems, but all compute nodes and controllers in the cloud must have access to the EVSs.
  2. For NFS:
  • Create NFS exports, choose a path for them (it must be different from /) and set the Show snapshots option to hide and disable access.
  • For each export used, set the option norootsquash in the share Access configuration so Block Storage services can change the permissions of its volumes. For example, "* (rw, norootsquash)".
  • Make sure that all computes and controllers have R/W access to the shares used by cinder HNAS driver.
  • In order to use the hardware accelerated features of HNAS NFS, we recommend setting max-nfs-version to 3. Refer to Hitachi NAS Platform command line reference to see how to configure this option.
  3. For iSCSI:
  • You must set an iSCSI domain to EVS.
Block Storage host requirements

The HNAS drivers are supported for Red Hat Enterprise Linux OpenStack Platform, SUSE OpenStack Cloud, and Ubuntu OpenStack. The following packages must be installed in all compute, controller and storage (if any) nodes:

  • nfs-utils for Red Hat Enterprise Linux OpenStack Platform
  • nfs-client for SUSE OpenStack Cloud
  • nfs-common, libc6-i386 for Ubuntu OpenStack
Package installation

If you are installing the driver from an RPM or DEB package, follow the steps below:

  1. Install the dependencies:

    In Red Hat:

    # yum install nfs-utils nfs-utils-lib
    

    Or in Ubuntu:

    # apt-get install nfs-common
    

    Or in SUSE:

    # zypper install nfs-client
    

    If you are using Ubuntu 12.04, you also need to install libc6-i386:

    # apt-get install libc6-i386
    
  2. Configure the driver as described in the Driver configuration section.

  3. Restart all Block Storage services (volume, scheduler, and backup).

Driver configuration

HNAS supports a variety of storage options and file system capabilities, which are selected through the definition of volume types combined with the use of multiple back ends and multiple services. Each back end can be configured with up to 4 service pools, which can be mapped to cinder volume types.

The configuration for the driver is read from the back-end sections of the cinder.conf. Each back-end section must have the appropriate configurations to communicate with your HNAS back end, such as the IP address of the HNAS EVS that is hosting your data, HNAS SSH access credentials, the configuration of each of the services in that back end, and so on. You can find examples of such configurations in the Configuration example section.

Note

The HNAS cinder drivers still support the XML configuration used in older versions, but we recommend configuring them only through the cinder.conf file, since the XML configuration file is deprecated as of the Newton release.

Note

We do not recommend the use of the same NFS export or file system (iSCSI driver) for different back ends. If possible, configure each back end to use a different NFS export/file system.

The following is the definition of each configuration option that can be used in a HNAS back-end section in the cinder.conf file:

Configuration options in cinder.conf
Option Type Default Description
volume_backend_name Optional N/A A name that identifies the back end and can be used as an extra-spec to redirect the volumes to the referenced back end.
volume_driver Required N/A The python module path to the HNAS volume driver python class. When installing through the rpm or deb packages, you should configure this to cinder.volume.drivers.hitachi.hnas_iscsi.HNASISCSIDriver for the iSCSI back end or cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver for the NFS back end.
nfs_shares_config Required (only for NFS) /etc/cinder/nfs_shares Path to the nfs_shares file. This is required by the base cinder generic NFS driver and therefore also required by the HNAS NFS driver. This file should list, one per line, every NFS share being used by the back end. For example, all the values found in the configuration keys hnas_svcX_hdp in the HNAS NFS back-end sections.
hnas_mgmt_ip0 Required N/A HNAS management IP address. Should be the IP address of the Admin EVS. It is also the IP through which you access the web SMU administration frontend of HNAS.
hnas_chap_enabled Optional (iSCSI only) True Boolean tag used to enable CHAP authentication protocol for iSCSI driver.
hnas_username Required N/A HNAS SSH username
hds_hnas_nfs_config_file | hds_hnas_iscsi_config_file Optional (deprecated) /opt/hds/hnas/cinder_[nfs|iscsi]_conf.xml Path to the deprecated XML configuration file (only required if using the XML file)
hnas_cluster_admin_ip0 Optional (required only for HNAS multi-farm setups) N/A The IP of the HNAS farm admin. If your SMU controls more than one system or cluster, this option must be set with the IP of the desired node. This is different for HNAS multi-cluster setups, which does not require this option to be set.
hnas_ssh_private_key Optional N/A Path to the SSH private key used to authenticate to the HNAS SMU. Only required if you do not want to set hnas_password.
hnas_ssh_port Optional 22 Port on which HNAS is listening for SSH connections
hnas_password Required (unless hnas_ssh_private_key is provided) N/A HNAS password
hnas_svcX_hdp [1] Required (at least 1) N/A HDP (export or file system) where the volumes will be created. Use exports paths for the NFS backend or the file system names for the iSCSI backend (note that when using the file system name, it does not contain the IP addresses of the HDP)
hnas_svcX_iscsi_ip Required (only for iSCSI) N/A The IP of the EVS that contains the file system specified in hnas_svcX_hdp
hnas_svcX_volume_type Required N/A A unique string that is used to refer to this pool within the context of cinder. You can tell cinder to put volumes of a specific volume type into this back end, within this pool. See, Service Labels and Configuration example sections for more details.
[1]Replace X with a number from 0 to 3 (keep the sequence when configuring the driver)
Service labels

HNAS driver supports differentiated types of service using the service labels. It is possible to create up to 4 types of them for each back end. (For example gold, platinum, silver, ssd, and so on).

After creating the services in the cinder.conf configuration file, you need to configure one cinder volume_type per service. Each volume_type must have the metadata service_label with the same name configured in the hnas_svcX_volume_type option of that service. See the Configuration example section for more details. If the volume_type is not set, cinder selects the service pool with the largest available free space, or uses other criteria configured in the scheduler filters.

$ cinder type-create default
$ cinder type-key default set service_label=default
$ cinder type-create platinum-tier
$ cinder type-key platinum-tier set service_label=platinum
Multi-backend configuration

You can deploy multiple OpenStack HNAS Driver instances (back ends) that each controls a separate HNAS or a single HNAS. If you use multiple cinder back ends, remember that each cinder back end can host up to 4 services. Each back-end section must have the appropriate configurations to communicate with your HNAS back end, such as the IP address of the HNAS EVS that is hosting your data, HNAS SSH access credentials, the configuration of each of the services in that back end, and so on. You can find examples of such configurations in the Configuration example section.

If you want the volumes from a volume_type to be placed on a specific back end, you must configure an extra spec in the volume_type with the value of the volume_backend_name option from that back end.

For a multiple NFS back ends configuration, each back end should have a separate nfs_shares_config option and a separate nfs_shares file (for example, nfs_shares1, nfs_shares2) with the desired shares listed on separate lines.
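
For example, two NFS back ends might be configured as follows (a sketch; the section names, paths and back-end names are illustrative):

[hnas-nfs1]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
nfs_shares_config = /home/cinder/nfs_shares1
volume_backend_name = hnas_nfs_backend1

[hnas-nfs2]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
nfs_shares_config = /home/cinder/nfs_shares2
volume_backend_name = hnas_nfs_backend2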

SSH configuration

Note

As of the Newton OpenStack release, the user can no longer run the driver using a locally installed instance of the SSC utility package. Instead, all communications with the HNAS back end are handled through SSH.

You can use your username and password to authenticate the Block Storage node to the HNAS back end. In order to do that, simply configure hnas_username and hnas_password in your back end section within the cinder.conf file.

For example:

[hnas-backend]
…
hnas_username = supervisor
hnas_password = supervisor

Alternatively, the HNAS cinder driver also supports SSH authentication through public key. To configure that:

  1. If you do not already have an SSH key pair, create one on the Block Storage node (leave the passphrase empty):

    $ mkdir -p /opt/hitachi/ssh
    $ ssh-keygen -f /opt/hitachi/ssh/hnaskey
    
  2. Change the owner of the key to cinder (or the user the volume service will be run as):

    # chown -R cinder.cinder /opt/hitachi/ssh
    
  3. Create the directory ssh_keys in the SMU server:

    $ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
    
  4. Copy the public key to the ssh_keys directory:

    $ scp /opt/hitachi/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
    
  5. Access the SMU server:

    $ ssh [manager|supervisor]@<smu-ip>
    
  6. Run the command to register the SSH keys:

    $ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
    
  7. Check the communication with HNAS in the Block Storage node:

    For multi-farm HNAS:

    $ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
    

    Or, for Single-node/Multi-Cluster:

    $ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc localhost df -a'
    
  8. Configure your back-end section in cinder.conf to use the SSH private key:

    [hnas-backend]
    …
    hnas_ssh_private_key = /opt/hitachi/ssh/hnaskey
    
Managing volumes

If there are existing volumes on HNAS that you want to import to cinder, it is possible to use the manage volume feature to do this. The manage action on an existing volume is very similar to a volume creation. It creates a volume entry in the cinder database, but instead of creating a new volume in the back end, it only adds a link to an existing volume.

Note

This is an admin-only feature, and you must be logged in as a user with admin rights to use it.

For NFS:

  1. Under the System > Volumes tab, choose the option Manage Volume.
  2. Fill the fields Identifier, Host, Volume Name, and Volume Type with volume information to be managed:
    • Identifier: ip:/type/volume_name (For example: 172.24.44.34:/silver/volume-test)
    • Host: host@backend-name#pool_name (For example: ubuntu@hnas-nfs#test_silver)
    • Volume Name: volume_name (For example: volume-test)
    • Volume Type: choose a type of volume (For example: silver)

For iSCSI:

  1. Under the System > Volumes tab, choose the option Manage Volume.
  2. Fill the fields Identifier, Host, Volume Name, and Volume Type with volume information to be managed:
    • Identifier: filesystem-name/volume-name (For example: filesystem-test/volume-test)
    • Host: host@backend-name#pool_name (For example: ubuntu@hnas-iscsi#test_silver)
    • Volume Name: volume_name (For example: volume-test)
    • Volume Type: choose a type of volume (For example: silver)

By CLI:

$ cinder manage [--id-type <id-type>][--name <name>][--description <description>]
[--volume-type <volume-type>][--availability-zone <availability-zone>]
[--metadata [<key=value> [<key=value> ...]]][--bootable] <host> <identifier>

Example:

For NFS:

$ cinder manage --name volume-test --volume-type silver
ubuntu@hnas-nfs#test_silver 172.24.44.34:/silver/volume-test

For iSCSI:

$ cinder manage --name volume-test --volume-type silver
ubuntu@hnas-iscsi#test_silver filesystem-test/volume-test
Managing snapshots

The manage snapshots feature works very similarly to the manage volumes feature currently supported by the HNAS cinder drivers. If you have a volume already managed by cinder that has snapshots not managed by cinder, you can use manage snapshots to import these snapshots and link them with their original volume.

Note

For the HNAS NFS cinder driver, the snapshots of volumes are clones of volumes that were created using file-clone-create, not the HNAS snapshot-* feature. Refer to the HNAS user documentation for details about these two features.

Currently, the manage snapshots function does not support importing snapshots (generally created by the storage's file-clone operation) without parent volumes, or when the parent volume is in-use. In this case, the manage volumes feature should be used to import the snapshot as a normal cinder volume.

Also, this is an admin-only feature, and you must be logged in as a user with admin rights to use it.

Note

Although there is a verification to prevent importing snapshots using non-related volumes as parents, it is possible to manage a snapshot using any related cloned volume. So, when managing a snapshot, it is extremely important to make sure that you are using the correct parent volume.

For NFS:

$ cinder snapshot-manage <volume> <identifier>
  • Identifier: evs_ip:/export_name/snapshot_name (For example: 172.24.44.34:/export1/snapshot-test)
  • Volume: Parent volume ID (For example: 061028c0-60cf-499f-99e2-2cd6afea081f)

Example:

$ cinder snapshot-manage 061028c0-60cf-499f-99e2-2cd6afea081f 172.24.44.34:/export1/snapshot-test

Note

This feature is currently available only for HNAS NFS Driver.

Configuration example

Below are configuration examples for both NFS and iSCSI backends:

  1. HNAS NFS Driver

    1. For HNAS NFS driver, create this section in your cinder.conf file:

      [hnas-nfs]
      volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
      nfs_shares_config = /home/cinder/nfs_shares
      volume_backend_name = hnas_nfs_backend
      hnas_username = supervisor
      hnas_password = supervisor
      hnas_mgmt_ip0 = 172.24.44.15
      
      hnas_svc0_volume_type = nfs_gold
      hnas_svc0_hdp = 172.24.49.21:/gold_export
      
      hnas_svc1_volume_type = nfs_platinum
      hnas_svc1_hdp = 172.24.49.21:/silver_platinum
      
      hnas_svc2_volume_type = nfs_silver
      hnas_svc2_hdp = 172.24.49.22:/silver_export
      
      hnas_svc3_volume_type = nfs_bronze
      hnas_svc3_hdp = 172.24.49.23:/bronze_export
      
    2. Add it to the enabled_backends list, under the DEFAULT section of your cinder.conf file:

      [DEFAULT]
      enabled_backends = hnas-nfs
      
    3. Add the configured exports to the nfs_shares file:

      172.24.49.21:/gold_export
      172.24.49.21:/silver_platinum
      172.24.49.22:/silver_export
      172.24.49.23:/bronze_export
      
    4. Register a volume type with cinder and associate it with this backend:

      $cinder type-create hnas_nfs_gold
      $cinder type-key hnas_nfs_gold set volume_backend_name=hnas_nfs_backend service_label=nfs_gold
      $cinder type-create hnas_nfs_platinum
      $cinder type-key hnas_nfs_platinum set  volume_backend_name=hnas_nfs_backend service_label=nfs_platinum
      $cinder type-create hnas_nfs_silver
      $cinder type-key hnas_nfs_silver set volume_backend_name=hnas_nfs_backend service_label=nfs_silver
      $cinder type-create hnas_nfs_bronze
      $cinder type-key hnas_nfs_bronze set volume_backend_name=hnas_nfs_backend service_label=nfs_bronze
      
  2. HNAS iSCSI Driver

    1. For HNAS iSCSI driver, create this section in your cinder.conf file:

      [hnas-iscsi]
      volume_driver = cinder.volume.drivers.hitachi.hnas_iscsi.HNASISCSIDriver
      volume_backend_name = hnas_iscsi_backend
      hnas_username = supervisor
      hnas_password = supervisor
      hnas_mgmt_ip0 = 172.24.44.15
      hnas_chap_enabled = True
      
      hnas_svc0_volume_type = iscsi_gold
      hnas_svc0_hdp = FS-gold
      hnas_svc0_iscsi_ip = 172.24.49.21
      
      hnas_svc1_volume_type = iscsi_platinum
      hnas_svc1_hdp = FS-platinum
      hnas_svc1_iscsi_ip = 172.24.49.21
      
      hnas_svc2_volume_type = iscsi_silver
      hnas_svc2_hdp = FS-silver
      hnas_svc2_iscsi_ip = 172.24.49.22
      
      hnas_svc3_volume_type = iscsi_bronze
      hnas_svc3_hdp = FS-bronze
      hnas_svc3_iscsi_ip = 172.24.49.23
      
    2. Add it to the enabled_backends list, under the DEFAULT section of your cinder.conf file:

      [DEFAULT]
      enabled_backends = hnas-nfs, hnas-iscsi
      
    3. Register a volume type with cinder and associate it with this backend:

      $cinder type-create hnas_iscsi_gold
      $cinder type-key hnas_iscsi_gold set volume_backend_name=hnas_iscsi_backend service_label=iscsi_gold
      $cinder type-create hnas_iscsi_platinum
      $cinder type-key hnas_iscsi_platinum set volume_backend_name=hnas_iscsi_backend service_label=iscsi_platinum
      $cinder type-create hnas_iscsi_silver
      $cinder type-key hnas_iscsi_silver set volume_backend_name=hnas_iscsi_backend service_label=iscsi_silver
      $cinder type-create hnas_iscsi_bronze
      $cinder type-key hnas_iscsi_bronze set volume_backend_name=hnas_iscsi_backend service_label=iscsi_bronze
      
Additional notes and limitations
  • The get_volume_stats() function always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.

  • After changing the configuration on the storage node, the Block Storage driver must be restarted.

  • On Red Hat, if the system is configured to use SELinux, you need to set virt_use_nfs = on for the NFS driver to work properly.

    # setsebool -P virt_use_nfs on
    
  • It is not possible to manage a volume if there is a slash (/) or a colon (:) in the volume name.

  • File system auto-expansion: Although supported, we do not recommend using file systems with auto-expansion setting enabled because the scheduler uses the file system capacity reported by the driver to determine if new volumes can be created. For instance, in a setup with a file system that can expand to 200GB but is at 100GB capacity, with 10GB free, the scheduler will not allow a 15GB volume to be created. In this case, manual expansion would have to be triggered by an administrator. We recommend always creating the file system at the maximum capacity or periodically expanding the file system manually.

  • iSCSI driver limitations: The iSCSI driver has a limit of 1024 volumes attached to instances.

  • The hnas_svcX_volume_type option must be unique for a given back end.

  • SSC simultaneous connections limit: In very busy environments, if two or more volume hosts are configured to use the same storage, some requests (create, delete and so on) may fail and be retried (5 attempts by default) due to an HNAS connection limitation (a maximum of 5 simultaneous connections).

Hitachi storage volume driver

The Hitachi storage volume driver provides iSCSI and Fibre Channel support for Hitachi storage systems.

System requirements

Supported storages:

  • Hitachi Virtual Storage Platform G1000 (VSP G1000)
  • Hitachi Virtual Storage Platform (VSP)
  • Hitachi Unified Storage VM (HUS VM)
  • Hitachi Unified Storage 100 Family (HUS 100 Family)

Required software:

  • RAID Manager Ver 01-32-03/01 or later for VSP G1000/VSP/HUS VM

  • Hitachi Storage Navigator Modular 2 (HSNM2) Ver 27.50 or later for HUS 100 Family

    Note

    HSNM2 needs to be installed under /usr/stonavm.

Required licenses:

  • Hitachi In-System Replication Software for VSP G1000/VSP/HUS VM
  • (Mandatory) ShadowImage in-system replication for HUS 100 Family
  • (Optional) Copy-on-Write Snapshot for HUS 100 Family

Additionally, the pexpect package is required.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Manage and unmanage volume snapshots.
  • Create a volume from a snapshot.
  • Copy a volume to an image.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.
Configuration
Set up Hitachi storage

You need to specify settings as described below. For details about each step, see the user's guide of the storage device. Use storage administration software such as Storage Navigator to set up the storage device so that LDEVs and host groups can be created and deleted, and LDEVs can be connected to the server and asynchronously copied.

  1. Create a Dynamic Provisioning pool.
  2. Connect the ports at the storage to the controller node and compute nodes.
  3. For VSP G1000/VSP/HUS VM, set port security to enable for the ports at the storage.
  4. For HUS 100 Family, set Host Group security or iSCSI target security to ON for the ports at the storage.
  5. For the ports at the storage, create host groups (iSCSI targets) whose names begin with HBSD- for the controller node and each compute node. Then register a WWN (or initiator IQN) for the controller node and each compute node.
  6. For VSP G1000/VSP/HUS VM, perform the following:
    • Create a storage device account belonging to the Administrator User Group. (To use multiple storage devices, create the same account name for all the target storage devices, and specify the same resource group and permissions.)
    • Create a command device (In-Band), and set user authentication to ON.
    • Register the created command device to the host group for the controller node.
    • To use the Thin Image function, create a pool for Thin Image.
  7. For HUS 100 Family, perform the following:
    • Use the auunitaddauto command to register the unit name and controller of the storage device to HSNM2.
    • When connecting via iSCSI, if you are using CHAP certification, specify the same user and password as that used for the storage port.
Set up Hitachi Gigabit Fibre Channel adaptor

Change a parameter of the hfcldd driver and update the initramfs file if the Hitachi Gigabit Fibre Channel adaptor is used:

# /opt/hitachi/drivers/hba/hfcmgr -E hfc_rport_lu_scan 1
# dracut -f initramfs-KERNEL_VERSION.img KERNEL_VERSION
# reboot
Set up Hitachi storage volume driver
  1. Create a directory:

    # mkdir /var/lock/hbsd
    # chown cinder:cinder /var/lock/hbsd
    
  2. Create volume type and volume key.

    This example shows that HUS100_SAMPLE is created as a volume type and hus100_backend is registered as a volume key:

    $ cinder type-create HUS100_SAMPLE
    $ cinder type-key HUS100_SAMPLE set volume_backend_name=hus100_backend
    
  3. Specify any identical volume type name and volume key.

    To confirm the created volume type, please execute the following command:

    $ cinder extra-specs-list
    
  4. Edit the /etc/cinder/cinder.conf file as follows.

    If you use Fibre Channel:

    volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
    

    If you use iSCSI:

    volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
    

    Also, set volume_backend_name created by cinder type-key command:

    volume_backend_name = hus100_backend
    

    This table shows configuration options for Hitachi storage volume driver.

    Description of Hitachi storage volume driver configuration options
    Configuration option = Default value Description
    [DEFAULT]  
    hitachi_add_chap_user = False (Boolean) Add CHAP user
    hitachi_async_copy_check_interval = 10 (Integer) Interval to check copy asynchronously
    hitachi_auth_method = None (String) iSCSI authentication method
    hitachi_auth_password = HBSD-CHAP-password (String) iSCSI authentication password
    hitachi_auth_user = HBSD-CHAP-user (String) iSCSI authentication username
    hitachi_copy_check_interval = 3 (Integer) Interval to check copy
    hitachi_copy_speed = 3 (Integer) Copy speed of storage system
    hitachi_default_copy_method = FULL (String) Default copy method of storage system
    hitachi_group_range = None (String) Range of group number
    hitachi_group_request = False (Boolean) Request for creating HostGroup or iSCSI Target
    hitachi_horcm_add_conf = True (Boolean) Add to HORCM configuration
    hitachi_horcm_numbers = 200,201 (String) Instance numbers for HORCM
    hitachi_horcm_password = None (String) Password of storage system for HORCM
    hitachi_horcm_resource_lock_timeout = 600 (Integer) Timeout until a resource lock is released, in seconds. The value must be between 0 and 7200.
    hitachi_horcm_user = None (String) Username of storage system for HORCM
    hitachi_ldev_range = None (String) Range of logical device of storage system
    hitachi_pool_id = None (Integer) Pool ID of storage system
    hitachi_serial_number = None (String) Serial number of storage system
    hitachi_target_ports = None (String) Control port names for HostGroup or iSCSI Target
    hitachi_thin_pool_id = None (Integer) Thin pool ID of storage system
    hitachi_unit_name = None (String) Name of an array unit
    hitachi_zoning_request = False (Boolean) Request for FC Zone creating HostGroup
    hnas_chap_enabled = True (Boolean) Whether the chap authentication is enabled in the iSCSI target or not.
    hnas_cluster_admin_ip0 = None (String) The IP of the HNAS cluster admin. Required only for HNAS multi-cluster setups.
    hnas_mgmt_ip0 = None (IP) Management IP address of HNAS. This can be any IP in the admin address on HNAS or the SMU IP.
    hnas_password = None (String) HNAS password.
    hnas_ssc_cmd = ssc (String) Command to communicate to HNAS.
    hnas_ssh_port = 22 (Port number) Port to be used for SSH authentication.
    hnas_ssh_private_key = None (String) Path to the SSH private key used to authenticate in HNAS SMU.
    hnas_svc0_hdp = None (String) Service 0 HDP
    hnas_svc0_iscsi_ip = None (IP) Service 0 iSCSI IP
    hnas_svc0_volume_type = None (String) Service 0 volume type
    hnas_svc1_hdp = None (String) Service 1 HDP
    hnas_svc1_iscsi_ip = None (IP) Service 1 iSCSI IP
    hnas_svc1_volume_type = None (String) Service 1 volume type
    hnas_svc2_hdp = None (String) Service 2 HDP
    hnas_svc2_iscsi_ip = None (IP) Service 2 iSCSI IP
    hnas_svc2_volume_type = None (String) Service 2 volume type
    hnas_svc3_hdp = None (String) Service 3 HDP
    hnas_svc3_iscsi_ip = None (IP) Service 3 iSCSI IP
    hnas_svc3_volume_type = None (String) Service 3 volume type
    hnas_username = None (String) HNAS username.
  5. Restart the Block Storage service.

    When the startup is done, “MSGID0003-I: The storage backend can be used.” is output into /var/log/cinder/volume.log as follows:

    2014-09-01 10:34:14.169 28734 WARNING cinder.volume.drivers.hitachi.
    hbsd_common [req-a0bb70b5-7c3f-422a-a29e-6a55d6508135 None None]
    MSGID0003-I: The storage backend can be used. (config_group: hus100_backend)
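Putting the settings from steps 2 to 4 together, a minimal iSCSI back-end configuration might look like the following sketch. The unit name, pool ID, and port names are placeholders only; take the real values from your storage setup and the option table above.

volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
volume_backend_name = hus100_backend
# Placeholder values: the array unit registered in HSNM2, the Dynamic
# Provisioning pool ID, and the storage ports to use.
hitachi_unit_name = HUS110_UNIT
hitachi_pool_id = 0
hitachi_target_ports = 0A,1A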
    
HPE 3PAR Fibre Channel and iSCSI drivers

The HPE3PARFCDriver and HPE3PARISCSIDriver drivers, which are based on the Block Storage service (Cinder) plug-in architecture, run volume operations by communicating with the HPE 3PAR storage system over HTTP, HTTPS, and SSH connections. The HTTP and HTTPS communications use python-3parclient, which is available from the Python Package Index (PyPI).

For information about how to manage HPE 3PAR storage systems, see the HPE 3PAR user documentation.

System requirements

To use the HPE 3PAR drivers, install the following software and components on the HPE 3PAR storage system:

  • HPE 3PAR Operating System software version 3.1.3 MU1 or higher.
    • Deduplication provisioning requires SSD disks and HPE 3PAR Operating System software version 3.2.1 MU1 or higher.
    • Enabling Flash Cache Policy requires the following:
      • Array must contain SSD disks.
      • HPE 3PAR Operating System software version 3.2.1 MU2 or higher.
      • python-3parclient version 4.2.0 or newer.
      • Array must have the Adaptive Flash Cache license installed.
      • Flash Cache must be enabled on the array with the CLI command createflashcache SIZE, where size must be in 16 GB increments. For example, createflashcache 128g will create 128 GB of Flash Cache for each node pair in the array.
    • The Dynamic Optimization license is required to support any feature that results in a volume changing provisioning type or CPG. This may apply to the volume migrate, retype and manage commands.
    • The Virtual Copy License is required to support any feature that involves volume snapshots. This applies to the volume snapshot-* commands.
  • HPE 3PAR drivers will now check the licenses installed on the array and disable driver capabilities based on available licenses. This will apply to thin provisioning, QoS support and volume replication.
  • HPE 3PAR Web Services API Server must be enabled and running.
  • One Common Provisioning Group (CPG).
  • Additionally, you must install python-3parclient version 4.2.0 or newer from the Python Package Index (PyPI) on the system that runs the Block Storage service volume drivers.
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
  • Manage and unmanage a snapshot.
  • Replicate host volumes.
  • Fail-over host volumes.
  • Fail-back host volumes.
  • Create, delete, update, snapshot, and clone consistency groups.
  • Create and delete consistency group snapshots.
  • Create a consistency group from a consistency group snapshot or another group.

Volume type support for both HPE 3PAR drivers includes the ability to set the following capabilities in the OpenStack Block Storage API cinder.api.contrib.types_extra_specs volume type extra specs extension module:

  • hpe3par:snap_cpg
  • hpe3par:provisioning
  • hpe3par:persona
  • hpe3par:vvs
  • hpe3par:flash_cache

To work with the default filter scheduler, the key values are case sensitive and scoped with hpe3par:. For information about how to set the key-value pairs and associate them with a volume type, run the following command:

$ cinder help type-key

Note

Cloned volumes support only the extra specs keys cpg, snap_cpg, provisioning, and vvs; the others are ignored. In addition, the comments section of the cloned volume in the HPE 3PAR StoreServ storage array is not populated.

If volume types are not used or a particular key is not set for a volume type, the following defaults are used:

  • hpe3par:cpg - Defaults to the hpe3par_cpg setting in the cinder.conf file.
  • hpe3par:snap_cpg - Defaults to the hpe3par_cpg_snap setting in the cinder.conf file. If hpe3par_cpg_snap is not set, it defaults to the hpe3par_cpg setting.
  • hpe3par:provisioning - Defaults to thin provisioning, the valid values are thin, full, and dedup.
  • hpe3par:persona - Defaults to the 2 - Generic-ALUA persona. The valid values are:
    • 1 - Generic
    • 2 - Generic-ALUA
    • 3 - Generic-legacy
    • 4 - HPUX-legacy
    • 5 - AIX-legacy
    • 6 - EGENERA
    • 7 - ONTAP-legacy
    • 8 - VMware
    • 9 - OpenVMS
    • 10 - HPUX
    • 11 - WindowsServer
  • hpe3par:flash_cache - Defaults to false, the valid values are true and false.
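For example, a volume type (the name here is hypothetical) that overrides two of these defaults could be created as follows:

$ cinder type-create 3par-thin
$ cinder type-key 3par-thin set hpe3par:provisioning=thin hpe3par:flash_cache=true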

QoS support for both HPE 3PAR drivers includes the ability to set the following capabilities in the OpenStack Block Storage API cinder.api.contrib.qos_specs_manage qos specs extension module:

  • minBWS
  • maxBWS
  • minIOPS
  • maxIOPS
  • latency
  • priority

The QoS keys above no longer need to be scoped, but they must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:

$ cinder help qos-create

$ cinder help qos-key

$ cinder help qos-associate

The following keys require that the HPE 3PAR StoreServ storage array has a Priority Optimization license installed.

hpe3par:vvs
The virtual volume set name that has been predefined by the Administrator with quality of service (QoS) rules associated to it. If you specify extra_specs hpe3par:vvs, the qos_specs minIOPS, maxIOPS, minBWS, and maxBWS settings are ignored.
minBWS
The QoS I/O issue bandwidth minimum goal in MBs. If not set, the I/O issue bandwidth rate has no minimum goal.
maxBWS
The QoS I/O issue bandwidth rate limit in MBs. If not set, the I/O issue bandwidth rate has no limit.
minIOPS
The QoS I/O issue count minimum goal. If not set, the I/O issue count has no minimum goal.
maxIOPS
The QoS I/O issue count rate limit. If not set, the I/O issue count rate has no limit.
latency
The latency goal in milliseconds.
priority
The priority of the QoS rule over other rules. If not set, the priority is normal, valid values are low, normal and high.

Note

Since the Icehouse release, minIOPS and maxIOPS must be used together to set I/O limits. Similarly, minBWS and maxBWS must be used together. If only one is set, the other will be set to the same value.
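As an illustration with hypothetical names and limits, QoS specs can be created and associated with a volume type as follows; replace QOS_SPECS_ID and VOLUME_TYPE_ID with the IDs reported by cinder qos-list and cinder type-list:

$ cinder qos-create 3par-qos minIOPS=100 maxIOPS=1000 minBWS=50 maxBWS=200
$ cinder qos-associate QOS_SPECS_ID VOLUME_TYPE_ID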

The following key requires that the HPE 3PAR StoreServ storage array has an Adaptive Flash Cache license installed.

  • hpe3par:flash_cache - The flash-cache policy, which can be turned on and off by setting the value to true or false.

LDAP authentication is supported if the 3PAR is configured to do so.

Enable the HPE 3PAR Fibre Channel and iSCSI drivers

The HPE3PARFCDriver and HPE3PARISCSIDriver are installed with the OpenStack software.

  1. Install the python-3parclient Python package on the OpenStack Block Storage system.

    $ pip install 'python-3parclient>=4.0,<5.0'
    
  2. Verify that the HPE 3PAR Web Services API server is enabled and running on the HPE 3PAR storage system.

    1. Log onto the HP 3PAR storage system with administrator access.

      $ ssh 3paradm@<HP 3PAR IP Address>
      
    2. View the current state of the Web Services API Server.

      $ showwsapi
      -Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
      Enabled   Active Enabled       8008        Enabled       8080       1.1
      
    3. If the Web Services API Server is disabled, start it.

      $ startwsapi
      
  3. If the HTTP or HTTPS state is disabled, enable one of them.

    $ setwsapi -http enable
    

    or

    $ setwsapi -https enable
    

    Note

    To stop the Web Services API Server, use the stopwsapi command. For other options, run the setwsapi -h command.

  4. If you are not using an existing CPG, create a CPG on the HPE 3PAR storage system to be used as the default location for creating volumes.

  5. Make the following changes in the /etc/cinder/cinder.conf file.

    # 3PAR WS API Server URL
    hpe3par_api_url=https://10.10.0.141:8080/api/v1
    
    # 3PAR username with the 'edit' role
    hpe3par_username=edit3par
    
    # 3PAR password for the user specified in hpe3par_username
    hpe3par_password=3parpass
    
    # 3PAR CPG to use for volume creation
    hpe3par_cpg=OpenStackCPG_RAID5_NL
    
    # IP address of SAN controller for SSH access to the array
    san_ip=10.10.22.241
    
    # Username for SAN controller for SSH access to the array
    san_login=3paradm
    
    # Password for SAN controller for SSH access to the array
    san_password=3parpass
    
    # FIBRE CHANNEL(uncomment the next line to enable the FC driver)
    # volume_driver=cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
    
    # iSCSI (uncomment the next line to enable the iSCSI driver and
    # hpe3par_iscsi_ips or iscsi_ip_address)
    #volume_driver=cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
    
    # iSCSI multiple port configuration
    # hpe3par_iscsi_ips=10.10.220.253:3261,10.10.222.234
    
    # Still available for single port iSCSI configuration
    #iscsi_ip_address=10.10.220.253
    
    
    # Enable HTTP debugging to 3PAR
    hpe3par_debug=False
    
    # Enable CHAP authentication for iSCSI connections.
    hpe3par_iscsi_chap_enabled=false
    
    # The CPG to use for Snapshots for volumes. If empty hpe3par_cpg will be
    # used.
    hpe3par_cpg_snap=OpenStackSNAP_CPG
    
    # Time in hours to retain a snapshot. You can't delete it before this
    # expires.
    hpe3par_snapshot_retention=48
    
    # Time in hours when a snapshot expires and is deleted. This must be
    # larger than retention.
    hpe3par_snapshot_expiration=72
    
    # The ratio of oversubscription when thin provisioned volumes are
    # involved. Default ratio is 20.0, this means that a provisioned
    # capacity can be 20 times of the total physical capacity.
    max_over_subscription_ratio=20.0
    
    # This flag represents the percentage of reserved back-end capacity.
    reserved_percentage=15
    

    Note

    You can enable only one driver on each cinder instance unless you enable multiple back-end support. See the Cinder multiple back-end support instructions to enable this feature.

    Note

    You can configure one or more iSCSI addresses by using the hpe3par_iscsi_ips option. Separate multiple IP addresses with a comma (,). When you configure multiple addresses, the driver selects the iSCSI port with the fewest active volumes at attach time. The 3PAR array does not allow the default port 3260 to be changed, so IP ports need not be specified.

  6. Save the changes to the cinder.conf file and restart the cinder-volume service.

The HPE 3PAR Fibre Channel and iSCSI drivers are now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.

The following table contains all the configuration options supported by the HPE 3PAR Fibre Channel and iSCSI drivers.

Description of HPE 3PAR Fibre Channel and iSCSI drivers configuration options
Configuration option = Default value Description
[DEFAULT]  
hpe3par_api_url = (String) 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1
hpe3par_cpg = OpenStack (List) List of the CPG(s) to use for volume creation
hpe3par_cpg_snap = (String) The CPG to use for Snapshots for volumes. If empty the userCPG will be used.
hpe3par_debug = False (Boolean) Enable HTTP debugging to 3PAR
hpe3par_iscsi_chap_enabled = False (Boolean) Enable CHAP authentication for iSCSI connections.
hpe3par_iscsi_ips = (List) List of target iSCSI addresses to use.
hpe3par_password = (String) 3PAR password for the user specified in hpe3par_username
hpe3par_snapshot_expiration = (String) The time in hours when a snapshot expires and is deleted. This must be larger than retention.
hpe3par_snapshot_retention = (String) The time in hours to retain a snapshot. You can’t delete it before this expires.
hpe3par_username = (String) 3PAR username with the ‘edit’ role
HPE LeftHand/StoreVirtual driver

The HPELeftHandISCSIDriver is based on the Block Storage service plug-in architecture. Volume operations are run by communicating with the HPE LeftHand/StoreVirtual system over HTTPS or SSH connections. HTTPS communications use python-lefthandclient, which is available from the Python Package Index (PyPI).

The HPELeftHandISCSIDriver can be configured to run using a REST client to communicate with the array. For performance improvements and new functionality the python-lefthandclient must be downloaded, and HP LeftHand/StoreVirtual Operating System software version 11.5 or higher is required on the array. To configure the driver in standard mode, see HPE LeftHand/StoreVirtual REST driver.

For information about how to manage HPE LeftHand/StoreVirtual storage systems, see the HPE LeftHand/StoreVirtual user documentation.

HPE LeftHand/StoreVirtual REST driver

This section describes how to configure the HPE LeftHand/StoreVirtual Block Storage driver.

System requirements

To use the HPE LeftHand/StoreVirtual driver, do the following:

  • Install LeftHand/StoreVirtual Operating System software version 11.5 or higher on the HPE LeftHand/StoreVirtual storage system.
  • Create a cluster group.
  • Install the python-lefthandclient version 2.1.0 from the Python Package Index on the system with the enabled Block Storage service volume drivers.
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
  • Manage and unmanage a snapshot.
  • Replicate host volumes.
  • Fail-over host volumes.
  • Fail-back host volumes.
  • Create, delete, update, and snapshot consistency groups.

When you use back-end assisted volume migration, both source and destination clusters must be in the same HPE LeftHand/StoreVirtual management group. The HPE LeftHand/StoreVirtual array uses native LeftHand APIs to migrate the volume. A volume cannot be migrated while it is attached or while it has snapshots.

Volume type support for the driver includes the ability to set the following capabilities in the Block Storage API cinder.api.contrib.types_extra_specs volume type extra specs extension module.

  • hpelh:provisioning
  • hpelh:ao
  • hpelh:data_pl

To work with the default filter scheduler, the key-value pairs are case-sensitive and scoped with hpelh:. For information about how to set the key-value pairs and associate them with a volume type, run the following command:

$ cinder help type-key
  • The following keys require specific configuration on the HPE LeftHand/StoreVirtual storage array:

    hpelh:ao

    The HPE LeftHand/StoreVirtual storage array must be configured for Adaptive Optimization.

    hpelh:data_pl

    The HPE LeftHand/StoreVirtual storage array must be able to support the Data Protection level specified by the extra spec.

  • If volume types are not used or a particular key is not set for a volume type, the following defaults are used:

    hpelh:provisioning

    Defaults to thin provisioning. The valid values are thin and full.

    hpelh:ao

    Defaults to true. The valid values are true and false.

    hpelh:data_pl

    Defaults to r-0, Network RAID-0 (None). The valid values are:

    • r-0, Network RAID-0 (None)
    • r-5, Network RAID-5 (Single Parity)
    • r-10-2, Network RAID-10 (2-Way Mirror)
    • r-10-3, Network RAID-10 (3-Way Mirror)
    • r-10-4, Network RAID-10 (4-Way Mirror)
    • r-6, Network RAID-6 (Dual Parity)
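For example (the type name is hypothetical), a volume type requesting full provisioning and a 2-way mirror data protection level could be created as follows:

$ cinder type-create lefthand-gold
$ cinder type-key lefthand-gold set hpelh:provisioning=full hpelh:data_pl=r-10-2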
Enable the HPE LeftHand/StoreVirtual iSCSI driver

The HPELeftHandISCSIDriver is installed with the OpenStack software.

  1. Install the python-lefthandclient Python package on the OpenStack Block Storage system.

    $ pip install 'python-lefthandclient>=2.1,<3.0'
    
  2. If you are not using an existing cluster, create a cluster on the HPE LeftHand storage system to be used as the cluster for creating volumes.

  3. Make the following changes in the /etc/cinder/cinder.conf file:

    # LeftHand WS API Server URL
    hpelefthand_api_url=https://10.10.0.141:8081/lhos
    
    # LeftHand Super user username
    hpelefthand_username=lhuser
    
    # LeftHand Super user password
    hpelefthand_password=lhpass
    
    # LeftHand cluster to use for volume creation
    hpelefthand_clustername=ClusterLefthand
    
    # LeftHand iSCSI driver
    volume_driver=cinder.volume.drivers.hpe.hpe_lefthand_iscsi.HPELeftHandISCSIDriver
    
    # Should CHAP authentication be used (default=false)
    hpelefthand_iscsi_chap_enabled=false
    
    # Enable HTTP debugging to LeftHand (default=false)
    hpelefthand_debug=false
    
    # The ratio of oversubscription when thin provisioned volumes are
    # involved. Default ratio is 20.0, this means that a provisioned capacity
    # can be 20 times of the total physical capacity.
    max_over_subscription_ratio=20.0
    
    # This flag represents the percentage of reserved back-end capacity.
    reserved_percentage=15
    

    You can enable only one driver on each cinder instance unless you enable multiple back end support. See the Cinder multiple back end support instructions to enable this feature.

    If the hpelefthand_iscsi_chap_enabled option is set to true, the driver will associate randomly-generated CHAP secrets with all hosts on the HPE LeftHand/StoreVirtual system. OpenStack Compute nodes use these secrets when creating iSCSI connections.

    Important

    CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This communication should be secured to ensure that CHAP secrets are not discovered.

    Note

    CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is enabled, hosts will not be able to access the storage without the generated secrets.

  4. Save the changes to the cinder.conf file and restart the cinder-volume service.

The HPE LeftHand/StoreVirtual driver is now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.

Note

Previous versions implemented an HPE LeftHand/StoreVirtual CLIQ driver that enabled Block Storage service driver configuration in legacy mode. This driver is removed from the Mitaka release onwards.

HP MSA Fibre Channel and iSCSI drivers

The HPMSAFCDriver and HPMSAISCSIDriver Cinder drivers allow HP MSA 2040 or 1040 arrays to be used for Block Storage in OpenStack deployments.

System requirements

To use the HP MSA drivers, the following are required:

  • HP MSA 2040 or 1040 array with:
    • iSCSI or FC host interfaces
    • G22x firmware or later
  • Network connectivity between the OpenStack host and the array management interfaces
  • HTTPS or HTTP must be enabled on the array
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
Configuring the array
  1. Verify that the array can be managed via an HTTPS connection. HTTP can also be used if hpmsa_api_protocol=http is placed into the appropriate sections of the cinder.conf file.

    Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.

    If you plan to use vdisks instead of virtual pools, create or identify one or more vdisks to be used for OpenStack storage; typically this will mean creating or setting aside one disk group for each of the A and B controllers.

  2. Edit the cinder.conf file to define a storage back end entry for each storage pool on the array that will be managed by OpenStack. Each entry consists of a unique section name, surrounded by square brackets, followed by options specified in a key=value format.

    • The hpmsa_backend_name value specifies the name of the storage pool or vdisk on the array.
    • The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that is shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
    • The rest of the options will be repeated for each storage pool in a given array: the appropriate Cinder driver name; IP address or host name of the array management interface; the username and password of an array user account with manage privileges; and the iSCSI IP addresses for the array if using the iSCSI transport protocol.

    In the examples below, two back ends are defined, one for pool A and one for pool B, and a common volume_backend_name is used so that a single volume type definition can be used to allocate volumes from both pools.

    iSCSI example back-end entries

    [pool-a]
    hpmsa_backend_name = A
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5
    
    [pool-b]
    hpmsa_backend_name = B
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5
    

    Fibre Channel example back-end entries

    [pool-a]
    hpmsa_backend_name = A
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
    [pool-b]
    hpmsa_backend_name = B
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
  3. If any volume_backend_name value refers to a vdisk rather than a virtual pool, add an additional statement hpmsa_backend_type = linear to that back end entry.

  4. If HTTPS is not enabled in the array, include hpmsa_api_protocol = http in each of the back-end definitions.

  5. If HTTPS is enabled, you can enable certificate verification with the option hpmsa_verify_certificate=True. You may also use the hpmsa_verify_certificate_path parameter to specify the path to a CA_BUNDLE file containing CAs other than those in the default list.

  6. Modify the [DEFAULT] section of the cinder.conf file to add an enabled_backends parameter specifying the back-end entries you added, and a default_volume_type parameter specifying the name of a volume type that you will create in the next step.

    Example of [DEFAULT] section changes

    [DEFAULT]
    enabled_backends = pool-a,pool-b
    default_volume_type = hpmsa
    
  7. Create a new volume type for each distinct volume_backend_name value that you added in the cinder.conf file. The example below assumes that the same volume_backend_name=hpmsa-array option was specified in all of the entries, and specifies that the volume type hpmsa can be used to allocate volumes from any of them.

    Example of creating a volume type

    $ cinder type-create hpmsa
    $ cinder type-key hpmsa set volume_backend_name=hpmsa-array
    
  8. After modifying the cinder.conf file, restart the cinder-volume service.

Driver-specific options

The following table contains the configuration options that are specific to the HP MSA drivers.

Description of HP MSA volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
hpmsa_api_protocol = https (String) HPMSA API interface protocol.
hpmsa_backend_name = A (String) Pool or Vdisk name to use for volume creation.
hpmsa_backend_type = virtual (String) linear (for Vdisk) or virtual (for Pool).
hpmsa_iscsi_ips = (List) List of comma-separated target iSCSI IP addresses.
hpmsa_verify_certificate = False (Boolean) Whether to verify HPMSA array SSL certificate.
hpmsa_verify_certificate_path = None (String) HPMSA array SSL certificate path.
Huawei volume driver

The Huawei volume driver provides Block Storage functions such as logical volumes and snapshots for virtual machines (VMs) in OpenStack, and supports both the iSCSI and Fibre Channel protocols.

Version mappings

The following table describes the version mappings among the Block Storage driver, Huawei storage system and OpenStack:

Version mappings among the Block Storage driver and Huawei storage system
Description: Create, delete, expand, attach, detach, manage, and unmanage volumes; create, delete, manage, unmanage, and backup a snapshot; create, delete, and update a consistency group; create and delete a cgsnapshot; copy an image to a volume; copy a volume to an image; create a volume from a snapshot; clone a volume; QoS.
Storage System Version:
  • OceanStor T series V2R2 C00/C20/C30
  • OceanStor V3 V3R1C10/C20, V3R2C10, V3R3C00
  • OceanStor 2200V3 V300R005C00
  • OceanStor 2600V3 V300R005C00
  • OceanStor 18500/18800 V1R1C00/C20/C30, V3R3C00

Description: Volume Migration; Auto zoning; SmartTier; SmartCache; Smart Thin/Thick; Replication V2.1.
Storage System Version:
  • OceanStor T series V2R2 C00/C20/C30
  • OceanStor V3 V3R1C10/C20, V3R2C10, V3R3C00
  • OceanStor 2200V3 V300R005C00
  • OceanStor 2600V3 V300R005C00
  • OceanStor 18500/18800 V1R1C00/C20/C30

Description: SmartPartition.
Storage System Version:
  • OceanStor T series V2R2 C00/C20/C30
  • OceanStor V3 V3R1C10/C20, V3R2C10, V3R3C00
  • OceanStor 2600V3 V300R005C00
  • OceanStor 18500/18800 V1R1C00/C20/C30

Block Storage driver installation and deployment
  1. Before installation, delete any existing installation files of the Huawei OpenStack driver. The default path may be: /usr/lib/python2.7/dist-packages/cinder/volume/drivers/huawei.

    Note

    In this example, the version of Python is 2.7. If another version is used, make corresponding changes to the driver path.

  2. Copy the Block Storage driver to the Block Storage driver installation directory. Refer to step 1 to find the default directory.

  3. Refer to chapter Volume driver configuration to complete the configuration.

  4. After configuration, restart the cinder-volume service.

  5. Check the status of services using the cinder service-list command. If the State of cinder-volume is up, that means cinder-volume is okay.

    # cinder service-list
    +------------------+-----------------+------+---------+-------+----------------------------+-----------------+
    | Binary           | Host            | Zone | Status  | State | Updated_at                 | Disabled Reason |
    +------------------+-----------------+------+---------+-------+----------------------------+-----------------+
    | cinder-scheduler | controller      | nova | enabled | up    | 2016-02-01T16:26:00.000000 | -               |
    | cinder-volume    | controller@v3r3 | nova | enabled | up    | 2016-02-01T16:25:53.000000 | -               |
    +------------------+-----------------+------+---------+-------+----------------------------+-----------------+
    
    
Volume driver configuration

This section describes how to configure the Huawei volume driver for either iSCSI storage or Fibre Channel storage.

Pre-requisites

When creating a volume from image, install the multipath tool and add the following configuration keys in the [DEFAULT] configuration group of the /etc/cinder/cinder.conf file:

use_multipath_for_image_xfer = True
enforce_multipath_for_image_xfer = True

To configure the volume driver, follow the steps below:

  1. In /etc/cinder, create a Huawei-customized driver configuration file. The file format is XML.

  2. Change the name of the driver configuration file based on the site requirements, for example, cinder_huawei_conf.xml.

  3. Configure parameters in the driver configuration file.

    Each product has its own value for the Product parameter under the Storage XML block. The full XML file, with the appropriate Product parameter, is shown below:

      <?xml version="1.0" encoding="UTF-8"?>
         <config>
            <Storage>
               <Product>PRODUCT</Product>
               <Protocol>iSCSI</Protocol>
               <ControllerIP1>x.x.x.x</ControllerIP1>
               <UserName>xxxxxxxx</UserName>
               <UserPassword>xxxxxxxx</UserPassword>
            </Storage>
            <LUN>
               <LUNType>xxx</LUNType>
               <StripUnitSize>xxx</StripUnitSize>
               <WriteType>xxx</WriteType>
               <MirrorSwitch>xxx</MirrorSwitch>
               <Prefetch Type="xxx" Value="xxx" />
               <StoragePool Name="xxx" />
               <StoragePool Name="xxx" />
            </LUN>
            <iSCSI>
               <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
               <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
            </iSCSI>
            <Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
         </config>
    
    The corresponding Product values for each product are as below:
    
    • For T series V2

      <Product>TV2</Product>
      
    • For V3

      <Product>V3</Product>
      
    • For OceanStor 18000 series

      <Product>18000</Product>
      

    The Protocol value to be used is iSCSI for iSCSI and FC for Fibre Channel as shown below:

    # For iSCSI
    <Protocol>iSCSI</Protocol>
    
    # For Fibre channel
    <Protocol>FC</Protocol>
    

    Note

    For details about the parameters in the configuration file, see the Configuration file parameters section.

  4. Configure the cinder.conf file.

    In the [DEFAULT] block of /etc/cinder/cinder.conf, add the following contents:

    • volume_driver indicates the loaded driver.
    • cinder_huawei_conf_file indicates the specified Huawei-customized configuration file.
    • hypermetro_devices indicates the list of remote storage devices for which Hypermetro is to be used.

    The content to add to the [DEFAULT] block of /etc/cinder/cinder.conf, with the appropriate volume_driver and remote storage device values for your product, is as below:

    volume_driver = VOLUME_DRIVER
    cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
    hypermetro_devices = {STORAGE_DEVICE1, STORAGE_DEVICE2....}
    

    Note

    By default, the value for hypermetro_devices is None.

    The volume_driver value for each product is as below:

    # For iSCSI
    volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiISCSIDriver
    
    # For FC
    volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
    
  5. Run the service cinder-volume restart command to restart the Block Storage service.

Configuring iSCSI Multipathing

To configure iSCSI Multipathing, follow the steps below:

  1. Create a port group on the storage device using the DeviceManager and add service links that require multipathing into the port group.

  2. Log in to the storage device using CLI commands and enable the multiport discovery switch in the multipathing.

    developer:/>change iscsi discover_multiport switch=on
    
  3. Add the port group settings in the Huawei-customized driver configuration file and configure the port group name needed by an initiator.

    <iSCSI>
       <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
       <Initiator Name="xxxxxx" TargetPortGroup="xxxx" />
    </iSCSI>
    
  4. Enable the multipathing switch of the Compute service module.

    Add iscsi_use_multipath = True in [libvirt] of /etc/nova/nova.conf.

  5. Run the service nova-compute restart command to restart the nova-compute service.

Configuring CHAP and ALUA

On a public network, any application server whose IP address resides on the same network segment as the storage system's iSCSI host port can access the storage system and perform read and write operations on it. This poses a risk to the data security of the storage system. To secure access to the storage system, you can configure CHAP authentication to control application servers' access to the storage system.

Adjust the driver configuration file as follows:

<Initiator ALUA="xxx" CHAPinfo="xxx" Name="xxx" TargetIP="x.x.x.x"/>

ALUA indicates a multipathing mode: 0 indicates that ALUA is disabled, and 1 indicates that ALUA is enabled. CHAPinfo indicates the user name and password authenticated by CHAP, separated by a semicolon (;), for example mmuser;mm-user@storage.
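For example (all values below are placeholders), an initiator entry with ALUA and CHAP enabled might look like this:

<Initiator ALUA="1" CHAPinfo="mmuser;mm-user@storage" Name="iqn.1993-08.org.debian:01:abcdef123456" TargetIP="192.168.100.10"/>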

Configuring multiple storage

Multiple storage systems configuration example:

enabled_backends = v3_fc, 18000_fc
[v3_fc]
volume_driver = cinder.volume.drivers.huawei.huawei_t.HuaweiFCDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_v3_fc.xml
volume_backend_name = HuaweiTFCDriver
[18000_fc]
volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_18000_fc.xml
volume_backend_name = HuaweiFCDriver
Configuration file parameters

This section describes mandatory and optional configuration file parameters of the Huawei volume driver.

Mandatory parameters
Parameter Default value Description Applicable to
Product - Type of a storage product. Possible values are TV2, 18000 and V3. All
Protocol - Type of a connection protocol. The possible value is either 'iSCSI' or 'FC'. All
RestURL - Access address of the REST interface, https://x.x.x.x/devicemanager/rest/. The value x.x.x.x indicates the management IP address. OceanStor 18000 uses the preceding setting, and V2 and V3 require you to add the port number 8088, for example, https://x.x.x.x:8088/deviceManager/rest/. If you need to configure multiple RestURLs, separate them by semicolons (;). T series V2, V3, 18000
UserName - User name of a storage administrator. All
UserPassword - Password of a storage administrator. All
StoragePool - Name of a storage pool to be used. If you need to configure multiple storage pools, separate them by semicolons (;). All

Note

The value of StoragePool cannot contain Chinese characters.

Optional parameters
Parameter Default value Description Applicable to
LUNType Thin Type of the LUNs to be created. The value can be Thick or Thin. All
WriteType 1 Cache write type, possible values are: 1 (write back), 2 (write through), and 3 (mandatory write back). All
MirrorSwitch 1 Cache mirroring or not, possible values are: 0 (without mirroring) or 1 (with mirroring). All
LUNcopyWaitInterval 5 After LUN copy is enabled, the plug-in frequently queries the copy progress. You can set a value to specify the query interval. T series V2, V3, 18000
Timeout 432000 Timeout interval for waiting for the LUN copy of a storage device to complete. The unit is second. T series V2, V3, 18000
Initiator Name - Name of a compute node initiator. All
Initiator TargetIP - IP address of the iSCSI port provided for compute nodes. All
Initiator TargetPortGroup - IP address of the iSCSI target port that is provided for compute nodes. T series V2, V3, 18000
DefaultTargetIP - Default IP address of the iSCSI target port that is provided for compute nodes. All
OSType Linux Operating system of the Nova compute node’s host. All
HostIP - IP address of the Nova compute node’s host. All

Important

The Initiator Name, Initiator TargetIP, and Initiator TargetPortGroup are iSCSI parameters and therefore not applicable to FC.

IBM GPFS volume driver

IBM General Parallel File System (GPFS) is a cluster file system that provides concurrent access to file systems from multiple nodes. The storage provided by these nodes can be direct attached, network attached, SAN attached, or a combination of these methods. GPFS provides many features beyond common data access, including data replication, policy based storage management, and space efficient file snapshot and clone operations.

How the GPFS driver works

The GPFS driver enables the use of GPFS in a fashion similar to that of the NFS driver. With the GPFS driver, instances do not actually access a storage device at the block level. Instead, volume backing files are created in a GPFS file system and mapped to instances, where they emulate a block device.

Note

GPFS software must be installed and running on nodes where Block Storage and Compute services run in the OpenStack environment. A GPFS file system must also be created and mounted on these nodes before starting the cinder-volume service. The details of these GPFS specific steps are covered in GPFS: Concepts, Planning, and Installation Guide and GPFS: Administration and Programming Reference.

Optionally, the Image service can be configured to store images on a GPFS file system. When a Block Storage volume is created from an image, if both image data and volume data reside in the same GPFS file system, the data from image file is moved efficiently to the volume file using copy-on-write optimization strategy.

Enable the GPFS driver

To use the Block Storage service with the GPFS driver, first set the volume_driver in the cinder.conf file:

volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver

The following table contains the configuration options supported by the GPFS driver.

Note

The gpfs_images_share_mode flag is only valid if the Image Service is configured to use GPFS with the gpfs_images_dir flag. When the value of this flag is copy_on_write, the paths specified by the gpfs_mount_point_base and gpfs_images_dir flags must both reside in the same GPFS file system and in the same GPFS file set.
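As a minimal sketch, assuming a GPFS file system mounted at a placeholder path and the Image service also storing images in GPFS, the related cinder.conf settings might look like this:

volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver
# Placeholder paths; with copy_on_write both must reside in the same
# GPFS file system and file set.
gpfs_mount_point_base = /gpfs/fs1/cinder/volumes
gpfs_images_dir = /gpfs/fs1/images
gpfs_images_share_mode = copy_on_write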

Volume creation options

It is possible to specify additional volume configuration options on a per-volume basis by specifying volume metadata. The volume is created using the specified options. Changing the metadata after the volume is created has no effect. The following table lists the volume creation options supported by the GPFS volume driver.

Description of GPFS storage configuration options
Configuration option = Default value Description
[DEFAULT]  
gpfs_images_dir = None (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS.
gpfs_images_share_mode = None (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: “copy” specifies that a full copy of the image is made; “copy_on_write” specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently.
gpfs_max_clone_depth = 0 (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth.
gpfs_mount_point_base = None (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored.
gpfs_sparse_volumes = True (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time.
gpfs_storage_pool = system (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used.
nas_host = (String) IP address or Hostname of NAS system.
nas_login = admin (String) User name to connect to NAS system.
nas_password = (String) Password to connect to NAS system.
nas_private_key = (String) Filename of private key to use for SSH authentication.
nas_ssh_port = 22 (Port number) SSH port to use to connect to NAS system.

This example shows the creation of a 50GB volume with an ext4 file system labeled newfs and direct IO enabled:

$ cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name volume_1 50
Operational notes for GPFS driver

Volume snapshots are implemented using the GPFS file clone feature. Whenever a new snapshot is created, the snapshot file is efficiently created as a read-only clone parent of the volume, and the volume file uses copy-on-write optimization strategy to minimize data movement.

Similarly when a new volume is created from a snapshot or from an existing volume, the same approach is taken. The same approach is also used when a new volume is created from an Image service image, if the source image is in raw format, and gpfs_images_share_mode is set to copy_on_write.

The GPFS driver supports encrypted volume back end feature. To encrypt a volume at rest, specify the extra specification gpfs_encryption_rest = True.
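For example, one way to attach this extra specification to a volume type (the type name is hypothetical):

$ cinder type-create gpfs-encrypted
$ cinder type-key gpfs-encrypted set gpfs_encryption_rest=True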

IBM Storwize family and SVC volume driver

The volume management driver for Storwize family and SAN Volume Controller (SVC) provides OpenStack Compute instances with access to IBM Storwize family or SVC storage systems.

Configure the Storwize family and SVC system
Network configuration

The Storwize family or SVC system must be configured for iSCSI, Fibre Channel, or both.

If using iSCSI, each Storwize family or SVC node should have at least one iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP address associated with the volume’s preferred node (if available) to attach the volume to the instance, otherwise it uses the first available iSCSI IP address of the system. The driver obtains the iSCSI IP address directly from the storage system. You do not need to provide these iSCSI IP addresses directly to the driver.

Note

If using iSCSI, ensure that the compute nodes have iSCSI network access to the Storwize family or SVC system.

If using Fibre Channel (FC), each Storwize family or SVC node should have at least one WWPN port configured. The driver uses all available WWPNs to attach the volume to the instance. The driver obtains the WWPNs directly from the storage system. You do not need to provide these WWPNs directly to the driver.

Note

If using FC, ensure that the compute nodes have FC connectivity to the Storwize family or SVC system.

iSCSI CHAP authentication

If using iSCSI for data access and the storwize_svc_iscsi_chap_enabled option is set to True, the driver associates randomly-generated CHAP secrets with all hosts on the Storwize family system. The compute nodes use these secrets when creating iSCSI connections.

Warning

CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is enabled, hosts will not be able to access the storage without the generated secrets.

Note

Not all OpenStack Compute drivers support CHAP authentication. Please check compatibility before using.

Note

CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This communication should be secured to ensure that CHAP secrets are not discovered.

Configure storage pools

The IBM Storwize/SVC driver can allocate volumes in multiple pools. The pools should be created in advance and be provided to the driver using the storwize_svc_volpool_name configuration flag in the form of a comma-separated list. For the complete list of configuration flags, see Storwize family and SVC driver options in cinder.conf.
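For example, two hypothetical pools could be provided to the driver as follows:

storwize_svc_volpool_name = POOL1,POOL2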

Configure user authentication for the driver

The driver requires access to the Storwize family or SVC system management interface, and communicates with it using SSH. Provide the Storwize family or SVC management IP using the san_ip flag, and the management port using the san_ssh_port flag. By default, the port value is 22 (SSH). You can also set a secondary management IP using the storwize_san_secondary_ip flag.

Note

Make sure the compute node running the cinder-volume management driver has SSH network access to the storage system.

To allow the driver to communicate with the Storwize family or SVC system, you must provide the driver with a user on the storage system. The driver has two authentication methods: password-based authentication and SSH key pair authentication. The user should have an Administrator role. It is suggested to create a new user for the management driver. Please consult with your storage and security administrator regarding the preferred authentication method and how passwords or SSH keys should be stored in a secure manner.

Note

When creating a new user on the Storwize or SVC system, make sure the user belongs to the Administrator group or to another group that has an Administrator role.

If using password authentication, assign a password to the user on the Storwize or SVC system. The driver configuration flags for the user and password are san_login and san_password, respectively.

If you are using the SSH key pair authentication, create SSH private and public keys using the instructions below or by any other method. Associate the public key with the user by uploading the public key: select the choose file option in the Storwize family or SVC management GUI under SSH public key. Alternatively, you may associate the SSH public key using the command-line interface; details can be found in the Storwize and SVC documentation. The private key should be provided to the driver using the san_private_key configuration flag.
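As a sketch with placeholder values, the management access flags look like this; only one of san_password or san_private_key is required, and the private key is preferred if both are set:

san_ip = 192.168.0.10
san_ssh_port = 22
san_login = cinderuser
# Password-based authentication
san_password = cinderpass
# Or SSH key pair authentication
# san_private_key = /etc/cinder/storwize_key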

Create a SSH key pair with OpenSSH

You can create an SSH key pair using OpenSSH, by running:

$ ssh-keygen -t rsa

The command prompts for a file to save the key pair. For example, if you select key as the filename, two files are created: key and key.pub. The key file holds the private SSH key and key.pub holds the public SSH key.

The command also prompts for a pass phrase, which should be empty.

The private key file should be provided to the driver using the san_private_key configuration flag. The public key should be uploaded to the Storwize family or SVC system using the storage management GUI or command-line interface.

Note

Ensure that Cinder has read permissions on the private key file.

Configure the Storwize family and SVC driver
Enable the Storwize family and SVC driver

Set the volume driver to the Storwize family and SVC driver by setting the volume_driver option in the cinder.conf file as follows:

iSCSI:

volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver

FC:

volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver
Storwize family and SVC driver options in cinder.conf

The following options specify default values for all volumes. Some can be overridden using volume types, which are described below.

List of configuration flags for Storwize storage and SVC driver
Flag name Type Default Description
san_ip Required   Management IP or host name
san_ssh_port Optional 22 Management port
san_login Required   Management login username
san_password Required [1]   Management login password
san_private_key Required   Management login SSH private key
storwize_svc_volpool_name Required   Default pool name for volumes
storwize_svc_vol_rsize Optional 2 Initial physical allocation (percentage) [2]
storwize_svc_vol_warning Optional 0 (disabled) Space allocation warning threshold (percentage)
storwize_svc_vol_autoexpand Optional True Enable or disable volume auto expand [3]
storwize_svc_vol_grainsize Optional 256 Volume grain size in KB
storwize_svc_vol_compression Optional False Enable or disable Real-time Compression [4]
storwize_svc_vol_easytier Optional True Enable or disable Easy Tier [5]
storwize_svc_vol_iogrp Optional 0 The I/O group in which to allocate vdisks
storwize_svc_flashcopy_timeout Optional 120 FlashCopy timeout threshold [6] (seconds)
storwize_svc_iscsi_chap_enabled Optional True Configure CHAP authentication for iSCSI connections
storwize_svc_multihost_enabled Optional True Enable mapping vdisks to multiple hosts [7]
storwize_svc_vol_nofmtdisk Optional False Enable or disable fast format [8]
[1]The authentication requires either a password (san_password) or SSH private key (san_private_key). One must be specified. If both are specified, the driver uses only the SSH private key.
[2]The driver creates thin-provisioned volumes by default. The storwize_svc_vol_rsize flag defines the initial physical allocation percentage for thin-provisioned volumes, or if set to -1, the driver creates full allocated volumes. More details about the available options are available in the Storwize family and SVC documentation.
[3]Defines whether thin-provisioned volumes can be auto expanded by the storage system. A value of True means that auto expansion is enabled, and a value of False disables auto expansion. Details about this option can be found in the -autoexpand flag of the Storwize family and SVC command-line interface mkvdisk command.
[4]Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work.
[5]Defines whether Easy Tier is used for the volumes created with OpenStack. Details on EasyTier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work.
[6]The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes).
[7]This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
[8]Defines whether or not the fast formatting of thick-provisioned volumes is disabled at creation. The default value is False and a value of True means that fast format is disabled. Details about this option can be found in the -nofmtdisk flag of the Storwize family and SVC command-line interface mkvdisk command.
Description of IBM Storwize driver configuration options
Configuration option = Default value Description
[DEFAULT]  
storwize_san_secondary_ip = None (String) Specifies secondary management IP or hostname to be used if san_ip is invalid or becomes inaccessible.
storwize_svc_allow_tenant_qos = False (Boolean) Allow tenants to specify QOS on create
storwize_svc_flashcopy_rate = 50 (Integer) Specifies the Storwize FlashCopy copy rate to be used when creating a full volume copy. The default rate is 50, and the valid rates are 1-100.
storwize_svc_flashcopy_timeout = 120 (Integer) Maximum number of seconds to wait for FlashCopy to be prepared.
storwize_svc_iscsi_chap_enabled = True (Boolean) Configure CHAP authentication for iSCSI connections (Default: Enabled)
storwize_svc_multihostmap_enabled = True (Boolean) DEPRECATED: This option no longer has any effect. It is deprecated and will be removed in the next release.
storwize_svc_multipath_enabled = False (Boolean) Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
storwize_svc_stretched_cluster_partner = None (String) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored. Example: “pool2”
storwize_svc_vol_autoexpand = True (Boolean) Storage system autoexpand parameter for volumes (True/False)
storwize_svc_vol_compression = False (Boolean) Storage system compression option for volumes
storwize_svc_vol_easytier = True (Boolean) Enable Easy Tier for volumes
storwize_svc_vol_grainsize = 256 (Integer) Storage system grain size parameter for volumes (32/64/128/256)
storwize_svc_vol_iogrp = 0 (Integer) The I/O group in which to allocate volumes
storwize_svc_vol_nofmtdisk = False (Boolean) Specifies that the volume not be formatted during creation.
storwize_svc_vol_rsize = 2 (Integer) Storage system space-efficiency parameter for volumes (percentage)
storwize_svc_vol_warning = 0 (Integer) Storage system threshold for volume capacity warnings (percentage)
storwize_svc_volpool_name = volpool (List) Comma separated list of storage system storage pools for volumes.
Placement with volume types

The IBM Storwize/SVC driver exposes capabilities that can be added to the extra specs of volume types, and used by the filter scheduler to determine placement of new volumes. Make sure to prefix these keys with capabilities: to indicate that the scheduler should use them. The following extra specs are supported:

  • capabilities:volume_backend_name - Specify a specific back-end where the volume should be created. The back-end name is a concatenation of the name of the IBM Storwize/SVC storage system as shown in lssystem, an underscore, and the name of the pool (mdisk group). For example:

    capabilities:volume_backend_name=myV7000_openstackpool
    
  • capabilities:compression_support - Specify a back-end according to compression support. A value of True should be used to request a back-end that supports compression, and a value of False will request a back-end that does not support compression. If you do not have constraints on compression support, do not set this key. Note that specifying True does not enable compression; it only requests that the volume be placed on a back-end that supports compression. Example syntax:

    capabilities:compression_support='<is> True'
    
  • capabilities:easytier_support - Similar semantics as the compression_support key, but for specifying according to support of the Easy Tier feature. Example syntax:

    capabilities:easytier_support='<is> True'
    
  • capabilities:storage_protocol - Specifies the connection protocol used to attach volumes of this type to instances. Legal values are iSCSI and FC. This extra specs value is used for both placement and setting the protocol used for this volume. In the example syntax, note <in> is used as opposed to <is> which is used in the previous examples.

    capabilities:storage_protocol='<in> FC'
    
Configure per-volume creation options

Volume types can also be used to pass options to the IBM Storwize/SVC driver, which override the default values set in the configuration file. Unlike the previous examples, where the capabilities scope was used to pass parameters to the Cinder scheduler, options are passed to the IBM Storwize/SVC driver with the drivers scope.

The following extra specs keys are supported by the IBM Storwize/SVC driver:

  • rsize
  • warning
  • autoexpand
  • grainsize
  • compression
  • easytier
  • multipath
  • iogrp

These keys have the same semantics as their counterparts in the configuration file. They are set similarly; for example, rsize=2 or compression=False.

Example: Volume types

In the following example, we create a volume type to specify a controller that supports iSCSI and compression, to use iSCSI when attaching the volume, and to enable compression:

$ cinder type-create compressed
$ cinder type-key compressed set capabilities:storage_protocol='<in> iSCSI' capabilities:compression_support='<is> True' drivers:compression=True

We can then create a 50GB volume using this type:

$ cinder create --display-name "compressed volume" --volume-type compressed 50

Volume types can be used, for example, to provide users with different

  • performance levels (such as allocating entirely on an HDD tier, using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD tier)
  • resiliency levels (such as allocating volumes in pools with different RAID levels)
  • features (such as enabling or disabling Real-time Compression)
QOS

The Storwize driver provides QOS support for storage volumes by controlling the I/O amount. QOS is enabled by editing the /etc/cinder/cinder.conf file and setting the storwize_svc_allow_tenant_qos option to True.

There are three ways to set the Storwize IOThrottling parameter for storage volumes (a sketch of the first approach follows this list):

  • Add the qos:IOThrottling key into a QOS specification and associate it with a volume type.
  • Add the qos:IOThrottling key into an extra specification with a volume type.
  • Add the qos:IOThrottling key to the storage volume metadata.
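
As a minimal sketch of the first approach, a QOS specification carrying the qos:IOThrottling key can be created and associated with a volume type using the cinder client. The specification name, volume type name, and throttling value below are illustrative placeholders only; QOS_SPEC_ID and VOLUME_TYPE_ID are the IDs reported by cinder qos-list and cinder type-list:

$ cinder qos-create storwize_qos qos:IOThrottling=1000
$ cinder type-create limited_iops
$ cinder qos-associate QOS_SPEC_ID VOLUME_TYPE_ID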

Note

If you are changing a volume type with QOS to a new volume type without QOS, the QOS configuration settings will be removed.

Operational notes for the Storwize family and SVC driver
Migrate volumes

In the context of OpenStack Block Storage’s volume migration feature, the IBM Storwize/SVC driver enables the storage’s virtualization technology. When migrating a volume from one pool to another, the volume will appear in the destination pool almost immediately, while the storage moves the data in the background.

Note

To enable this feature, both pools involved in a given volume migration must have the same values for extent_size. If the pools have different values for extent_size, the data will still be moved directly between the pools (not host-side copy), but the operation will be synchronous.

Extend volumes

The IBM Storwize/SVC driver allows for extending a volume’s size, but only for volumes without snapshots.

Snapshots and clones

Snapshots are implemented using FlashCopy with no background copy (space-efficient). Volume clones (volumes created from existing volumes) are implemented with FlashCopy, but with background copy enabled. This means that volume clones are independent, full copies. While this background copy is taking place, attempting to delete or extend the source volume will result in that operation waiting for the copy to complete.

Volume retype

The IBM Storwize/SVC driver enables you to modify volume types. When you modify volume types, you can also change these extra specs properties:

  • rsize
  • warning
  • autoexpand
  • grainsize
  • compression
  • easytier
  • iogrp
  • nofmtdisk

Note

When you change the rsize, grainsize or compression properties, volume copies are asynchronously synchronized on the array.

Note

To change the iogrp property, IBM Storwize/SVC firmware version 6.4.0 or later is required.

IBM Storage volume driver

The IBM Storage Driver for OpenStack is a Block Storage driver that supports IBM XIV, IBM Spectrum Accelerate, IBM FlashSystem A9000, IBM FlashSystem A9000R, and IBM DS8000 storage systems over Fibre Channel and iSCSI.

Set the following in your cinder.conf file, and use the options in the table below to configure the driver; an example stanza follows the table.

volume_driver = cinder.volume.drivers.ibm.ibm_storage.IBMStorageDriver
Description of IBM Storage driver configuration options
Configuration option = Default value Description
[DEFAULT]  
proxy = storage.proxy.IBMStorageProxy (String) Proxy driver that connects to the IBM Storage Array
san_clustername = (String) Cluster name to use for creating volumes
san_ip = (String) IP address of SAN controller
san_login = admin (String) Username for SAN controller
san_password = (String) Password for SAN controller
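
For example, a minimal configuration using the options above might look like the following. This is a sketch only: the proxy class shown is the documented default, and the address, credentials, and cluster name are placeholders that must match your IBM storage environment.

[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.ibm_storage.IBMStorageDriver
# Proxy driver that connects to the IBM Storage Array (default value shown)
proxy = storage.proxy.IBMStorageProxy
# Placeholder management address and credentials
san_ip = 10.0.0.20
san_login = admin
san_password = IBM_STORAGE_PASSWORD
# Placeholder cluster name used for creating volumes
san_clustername = cluster_1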

Note

To use the IBM Storage Driver for OpenStack you must download and install the package. For more information, see IBM Support Portal - Select Fixes.

For full documentation, see IBM Knowledge Center.

IBM FlashSystem volume driver

The volume driver for FlashSystem provides OpenStack Block Storage hosts with access to IBM FlashSystems.

Configure FlashSystem
Configure storage array

The volume driver requires a pre-defined array. You must create an array on the FlashSystem before using the volume driver. An existing array can also be used and existing data will not be deleted.

Note

FlashSystem can only create one array, so no configuration option is needed for the IBM FlashSystem driver to assign it.

Configure user authentication for the driver

The driver requires access to the FlashSystem management interface using SSH. It should be provided with the FlashSystem management IP using the san_ip flag, and the management port should be provided by the san_ssh_port flag. By default, the port value is configured to be port 22 (SSH).

Note

Make sure the node running the cinder-volume service has SSH network access to the storage system.

Using password authentication, assign a password to the user on the FlashSystem. For more detail, see the driver configuration flags for the user and password here: Enable IBM FlashSystem FC driver or Enable IBM FlashSystem iSCSI driver.

IBM FlashSystem FC driver
Data Path configuration

Using Fibre Channel (FC), each FlashSystem node should have at least one WWPN port configured. If the flashsystem_multipath_enabled flag is set to True in the Block Storage service configuration file, the driver uses all available WWPNs to attach the volume to the instance. If the flag is not set, the driver uses the WWPN associated with the volume’s preferred node (if available). Otherwise, it uses the first available WWPN of the system. The driver obtains the WWPNs directly from the storage system. You do not need to provide these WWPNs to the driver.

Note

Using FC, ensure that the block storage hosts have FC connectivity to the FlashSystem.

Enable IBM FlashSystem FC driver

Set the volume driver to the FlashSystem driver by setting the volume_driver option in the cinder.conf configuration file, as follows:

volume_driver = cinder.volume.drivers.ibm.flashsystem_fc.FlashSystemFCDriver

To enable the IBM FlashSystem FC driver, configure the following options in the cinder.conf configuration file; an example back-end definition follows the table:

List of configuration flags for IBM FlashSystem FC driver
Flag name Type Default Description
san_ip Required   Management IP or host name
san_ssh_port Optional 22 Management port
san_login Required   Management login user name
san_password Required   Management login password
flashsystem_connection_protocol Required   Connection protocol should be set to FC
flashsystem_multipath_enabled Required   Enable multipath for FC connections
flashsystem_multihost_enabled Optional True Enable mapping vdisks to multiple hosts [1]
[1]This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
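
Combining the flags above, a back-end definition for the FC driver might look like the following sketch. The address and credentials are placeholders, and the multipath and multihost values are example settings only:

[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.flashsystem_fc.FlashSystemFCDriver
# Placeholder management address and SSH credentials
san_ip = 192.168.1.50
san_ssh_port = 22
san_login = superuser
san_password = FLASHSYSTEM_PASSWORD
flashsystem_connection_protocol = FC
flashsystem_multipath_enabled = False
flashsystem_multihost_enabled = True
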
IBM FlashSystem iSCSI driver
Network configuration

Using iSCSI, each FlashSystem node should have at least one iSCSI port configured. The iSCSI IP addresses of the IBM FlashSystem can be obtained from the FlashSystem GUI or CLI. For more information, see the appropriate IBM Redbook for the FlashSystem.

Note

Using iSCSI, ensure that the compute nodes have iSCSI network access to the IBM FlashSystem.

Enable IBM FlashSystem iSCSI driver

Set the volume driver to the FlashSystem driver by setting the volume_driver option in the cinder.conf configuration file, as follows:

volume_driver = cinder.volume.drivers.ibm.flashsystem_iscsi.FlashSystemISCSIDriver

To enable the IBM FlashSystem iSCSI driver, configure the following options in the cinder.conf configuration file; an example back-end definition follows the table:

List of configuration flags for IBM FlashSystem iSCSI driver
Flag name Type Default Description
san_ip Required   Management IP or host name
san_ssh_port Optional 22 Management port
san_login Required   Management login user name
san_password Required   Management login password
flashsystem_connection_protocol Required   Connection protocol should be set to iSCSI
flashsystem_multihost_enabled Optional True Enable mapping vdisks to multiple hosts [2]
iscsi_ip_address Required   Set to one of the iSCSI IP addresses obtained from the FlashSystem GUI or CLI [3]
flashsystem_iscsi_portid Required   Set to the id of the iscsi_ip_address obtained from the FlashSystem GUI or CLI [4]
[2]This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
[3]On the cluster of the FlashSystem, the iscsi_ip_address column is the seventh column IP_address of the output of lsportip.
[4]On the cluster of the FlashSystem, port ID column is the first column id of the output of lsportip, not the sixth column port_id.
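
Similarly, a back-end definition for the iSCSI driver might look like the following sketch. The management address, credentials, iSCSI address, and port ID are placeholders that must be taken from your FlashSystem (see the lsportip notes above):

[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.flashsystem_iscsi.FlashSystemISCSIDriver
# Placeholder management address and SSH credentials
san_ip = 192.168.1.50
san_ssh_port = 22
san_login = superuser
san_password = FLASHSYSTEM_PASSWORD
flashsystem_connection_protocol = iSCSI
flashsystem_multihost_enabled = True
# Placeholder iSCSI data address and port ID from the lsportip output
iscsi_ip_address = 192.168.2.60
flashsystem_iscsi_portid = 1
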
Limitations and known issues

IBM FlashSystem only works when:

open_access_enabled=off
Supported operations

These operations are supported:

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.
  • Manage and unmanage a volume.
ITRI DISCO volume driver
Supported operations

The DISCO driver supports the following features:

  • Volume create and delete
  • Volume attach and detach
  • Snapshot create and delete
  • Create volume from snapshot
  • Get volume stats
  • Copy image to volume
  • Copy volume to image
  • Clone volume
  • Extend volume
Configuration options
Description of Disco volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
clone_check_timeout = 3600 (Integer) How long we check whether a clone is finished before we give up
clone_volume_timeout = 680 (Integer) Create clone volume timeout.
disco_client = 127.0.0.1 (IP) The IP of DMS client socket server
disco_client_port = 9898 (Port number) The port to connect DMS client socket server
disco_wsdl_path = /etc/cinder/DISCOService.wsdl (String) Path to the wsdl file to communicate with DISCO request manager
restore_check_timeout = 3600 (Integer) How long we check whether a restore is finished before we give up
retry_interval = 1 (Integer) How long we wait before retrying to get an item detail
Kaminario K2 all-flash array iSCSI and FC OpenStack Block Storage drivers

Kaminario’s K2 all-flash array leverages a unique software-defined architecture that delivers highly valued predictable performance, scalability and cost-efficiency.

Kaminario’s K2 all-flash iSCSI and FC arrays can be used in OpenStack Block Storage to provide block storage, using the KaminarioISCSIDriver and KaminarioFCDriver classes respectively.

Driver requirements
  • Kaminario’s K2 all-flash iSCSI and/or FC array
  • K2 REST API version >= 2.2.0
  • krest python library should be installed on the Block Storage node using sudo pip install krest
  • The Block Storage Node should also have a data path to the K2 array for the following operations:
    • Create a volume from snapshot
    • Clone a volume
    • Copy volume to image
    • Copy image to volume
    • Retype ‘dedup without replication’<->’nodedup without replication’
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Retype a volume.
  • Manage and unmanage a volume.
  • Replicate volume with failover and failback support to K2 array.
Configure Kaminario iSCSI/FC back end
  1. Edit the /etc/cinder/cinder.conf file and define a configuration group for iSCSI/FC back end.

    [DEFAULT]
    enabled_backends = kaminario
    
    # Use DriverFilter in combination of other filters to use 'filter_function'
    # scheduler_default_filters = DriverFilter,CapabilitiesFilter
    
    [kaminario]
    # Management IP of Kaminario K2 All-Flash iSCSI/FC array
    san_ip = 10.0.0.10
    # Management username of Kaminario K2 All-Flash iSCSI/FC array
    san_login = username
    # Management password of Kaminario K2 All-Flash iSCSI/FC array
    san_password = password
    # Enable Kaminario K2 iSCSI/FC driver
    volume_driver = cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver
    # volume_driver = cinder.volume.drivers.kaminario.kaminario_fc.KaminarioFCDriver
    
    # Backend name
    volume_backend_name = kaminario
    
    # K2 driver calculates max_oversubscription_ratio on setting below
    # option as True. Default value is False
    # auto_calc_max_oversubscription_ratio = False
    
    # Set a limit on total number of volumes to be created on K2 array, for example:
    # filter_function = "capabilities.total_volumes < 250"
    
    # For replication, replication_device must be set and the replication peer must be configured
    # on the primary and the secondary K2 arrays
    # Syntax:
    #     replication_device = backend_id:<s-array-ip>,login:<s-username>,password:<s-password>,rpo:<value>
    # where:
    #     s-array-ip is the secondary K2 array IP
    #     rpo must be either 60(1 min) or multiple of 300(5 min)
    # Example:
    # replication_device = backend_id:10.0.0.50,login:kaminario,password:kaminario,rpo:300
    
    # Suppress requests library SSL certificate warnings on setting this option as True
    # Default value is 'False'
    # suppress_requests_ssl_warnings = False
    
  2. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.

Driver options

The following table contains the configuration options that are specific to the Kaminario K2 FC and iSCSI Block Storage drivers.

Description of Kaminario volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
kaminario_nodedup_substring = K2-nodedup (String) DEPRECATED: If volume-type name contains this substring nodedup volume will be created, otherwise dedup volume will be created. This option is deprecated in favour of ‘kaminario:thin_prov_type’ in extra-specs and will be removed in the next release.
Lenovo Fibre Channel and iSCSI drivers

The LenovoFCDriver and LenovoISCSIDriver Cinder drivers allow Lenovo S3200 or S2200 arrays to be used for block storage in OpenStack deployments.

System requirements

To use the Lenovo drivers, the following are required:

  • Lenovo S3200 or S2200 array with:
    • iSCSI or FC host interfaces
    • G22x firmware or later
  • Network connectivity between the OpenStack host and the array management interfaces
  • HTTPS or HTTP must be enabled on the array
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
Configuring the array
  1. Verify that the array can be managed using an HTTPS connection. HTTP can also be used if lenovo_api_protocol=http is placed into the appropriate sections of the cinder.conf file.

    Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.

  2. Edit the cinder.conf file to define a storage back-end entry for each storage pool on the array that will be managed by OpenStack. Each entry consists of a unique section name, surrounded by square brackets, followed by options specified in key=value format.

    • The lenovo_backend_name value specifies the name of the storage pool on the array.
    • The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that’s shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
    • The rest of the options will be repeated for each storage pool in a given array: the appropriate Cinder driver name; IP address or host name of the array management interface; the username and password of an array user account with manage privileges; and the iSCSI IP addresses for the array if using the iSCSI transport protocol.

    In the examples below, two back ends are defined, one for pool A and one for pool B, and a common volume_backend_name is used so that a single volume type definition can be used to allocate volumes from both pools.

    Example: iSCSI example back-end entries

    [pool-a]
    lenovo_backend_name = A
    volume_backend_name = lenovo-array
    volume_driver = cinder.volume.drivers.lenovo.lenovo_iscsi.LenovoISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    lenovo_iscsi_ips = 10.2.3.4,10.2.3.5
    
    [pool-b]
    lenovo_backend_name = B
    volume_backend_name = lenovo-array
    volume_driver = cinder.volume.drivers.lenovo.lenovo_iscsi.LenovoISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    lenovo_iscsi_ips = 10.2.3.4,10.2.3.5
    

    Example: Fibre Channel example back-end entries

    [pool-a]
    lenovo_backend_name = A
    volume_backend_name = lenovo-array
    volume_driver = cinder.volume.drivers.lenovo.lenovo_fc.LenovoFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
    [pool-b]
    lenovo_backend_name = B
    volume_backend_name = lenovo-array
    volume_driver = cinder.volume.drivers.lenovo.lenovo_fc.LenovoFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
  3. If HTTPS is not enabled in the array, include lenovo_api_protocol = http in each of the back-end definitions.

  4. If HTTPS is enabled, you can enable certificate verification with the option lenovo_verify_certificate=True. You may also use the lenovo_verify_certificate_path parameter to specify the path to a CA_BUNDLE file containing CAs other than those in the default list.

  5. Modify the [DEFAULT] section of the cinder.conf file to add an enabled_backends parameter specifying the back-end entries you added, and a default_volume_type parameter specifying the name of a volume type that you will create in the next step.

    Example: [DEFAULT] section changes

    [DEFAULT]
    ...
    enabled_backends = pool-a,pool-b
    default_volume_type = lenovo
    ...
    
  6. Create a new volume type for each distinct volume_backend_name value that you added to the cinder.conf file. The example below assumes that the same volume_backend_name=lenovo-array option was specified in all of the entries, and specifies that the volume type lenovo can be used to allocate volumes from any of them.

    Example: Creating a volume type

    $ cinder type-create lenovo
    $ cinder type-key lenovo set volume_backend_name=lenovo-array
    
  7. After modifying the cinder.conf file, restart the cinder-volume service.

Driver-specific options

The following table contains the configuration options that are specific to the Lenovo drivers.

Description of Lenovo volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
lenovo_api_protocol = https (String) Lenovo api interface protocol.
lenovo_backend_name = A (String) Pool or Vdisk name to use for volume creation.
lenovo_backend_type = virtual (String) linear (for VDisk) or virtual (for Pool).
lenovo_iscsi_ips = (List) List of comma-separated target iSCSI IP addresses.
lenovo_verify_certificate = False (Boolean) Whether to verify Lenovo array SSL certificate.
lenovo_verify_certificate_path = None (String) Lenovo array SSL certificate path.
NetApp unified driver

The NetApp unified driver is a Block Storage driver that supports multiple storage families and protocols. A storage family corresponds to storage systems built on different NetApp technologies such as clustered Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. The storage protocol refers to the protocol used to initiate data storage and access operations on those storage systems, such as iSCSI and NFS. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family using a specified storage protocol. The driver also supports oversubscription (over-provisioning) when thin-provisioned Block Storage volumes are in use on an E-Series back end. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and protocols.

Note

With the Juno release of OpenStack, Block Storage has introduced the concept of storage pools, in which a single Block Storage back end may present one or more logical storage resource pools from which Block Storage will select a storage location when provisioning volumes.

In releases prior to Juno, the NetApp unified driver contained some scheduling logic that determined which NetApp storage container (namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for E-Series) a new Block Storage volume would be placed into.

With the introduction of pools, all scheduling logic is performed completely within the Block Storage scheduler, as each NetApp storage container is directly exposed to the Block Storage scheduler as a storage pool. Previously, the NetApp unified driver presented an aggregated view to the scheduler and made a final placement decision as to which NetApp storage container the Block Storage volume would be provisioned into.

NetApp clustered Data ONTAP storage family

The NetApp clustered Data ONTAP storage family represents a configuration group which provides Compute instances access to clustered Data ONTAP storage systems. At present it can be configured in Block Storage to work with iSCSI and NFS storage protocols.

NetApp iSCSI configuration for clustered Data ONTAP

The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems. It provisions and manages the SAN block storage entity, which is a NetApp LUN that can be accessed using the iSCSI protocol.

The iSCSI configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.

Configuration options

Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in the cinder.conf file as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password

Note

To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Description of NetApp cDOT iSCSI driver configuration options
Configuration option = Default value Description
[DEFAULT]  
netapp_login = None (String) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_lun_space_reservation = enabled (String) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand.
netapp_partner_backend_name = None (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 (Floating point) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of “reserved_percentage” in the Mitaka release.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.

Note

If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.

Note

The driver supports iSCSI CHAP uni-directional authentication. To enable it, set the use_chap_auth option to True.

Tip

For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

NetApp NFS configuration for clustered Data ONTAP

The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system for provisioning and managing OpenStack volumes on NFS exports provided by the clustered Data ONTAP system that are accessed using the NFS protocol.

The NFS configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.

Configuration options

Configure the volume driver, storage family, and storage protocol to NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
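
The file referenced by nfs_shares_config lists the NFS exports that the driver may use, one per line in host:/export form. The following is a sketch with placeholder addresses and export paths; the entries must point at FlexVol volumes exported by the configured Vserver:

192.168.10.10:/vol_cinder_1
192.168.10.10:/vol_cinder_2
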
Description of NetApp cDOT NFS driver configuration options
Configuration option = Default value Description
[DEFAULT]  
expiry_thres_minutes = 720 (Integer) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_copyoffload_tool_path = None (String) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file.
netapp_host_type = None (String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None (String) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_partner_backend_name = None (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
thres_avl_size_perc_start = 20 (Integer) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 (Integer) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.

Note

Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: Description of NFS storage configuration options.

Note

If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.

NetApp NFS Copy Offload client

A feature was added in the Icehouse release of the NetApp unified driver that enables Image service images to be efficiently copied to a destination Block Storage volume. When the Block Storage and Image service are configured to use the NetApp NFS Copy Offload client, a controller-side copy will be attempted before reverting to downloading the image from the Image service. This improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the host(s) running the Image and Block Storage services. This is due to the copy operation being performed completely within the storage cluster.

The NetApp NFS Copy Offload client can be used in either of the following scenarios:

  • The Image service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume and the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image service. Both FlexVols must be located within the same cluster.
  • The source image from the Image service has already been cached in an NFS image cache within a Block Storage back end. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVols must be located within the same cluster.

To use this feature, you must configure the Image service, as follows:

  • Set the default_store configuration option to file.
  • Set the filesystem_store_datadir configuration option to the path to the Image service NFS export.
  • Set the show_image_direct_url configuration option to True.
  • Set the show_multiple_locations configuration option to True.
  • Set the filesystem_store_metadata_file configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image service.

To use this feature, you must also configure the Block Storage service, as follows (a combined configuration sketch appears after the Important note):

  • Set the netapp_copyoffload_tool_path configuration option to the path to the NetApp Copy Offload binary.

  • Set the glance_api_version configuration option to 2.

    Important

    This feature requires that:

    • The storage system must have Data ONTAP v8.2 or greater installed.
    • The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client.
    • To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM.
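
The following sketch combines the settings listed above. The section placement within glance-api.conf and the file paths shown are assumptions that may vary by release and deployment, and the copy offload binary path is a placeholder for wherever the downloaded tool is installed:

# glance-api.conf (Image service; section placement may vary by release)
[DEFAULT]
show_image_direct_url = True
show_multiple_locations = True

[glance_store]
default_store = file
# Placeholder paths for the NFS-backed image store and its metadata file
filesystem_store_datadir = /var/lib/glance/images
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json

# cinder.conf (Block Storage service)
# Placeholder path to the downloaded NetApp copy offload binary
netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64
glance_api_version = 2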

Tip

To download the NetApp copy offload binary to be utilized in conjunction with the netapp_copyoffload_tool_path configuration option, please visit the Utility Toolchest page at the NetApp Support portal (login is required).

Tip

For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

NetApp-supported extra specs for clustered Data ONTAP

Extra specs enable vendors to specify extra filter criteria. The Block Storage scheduler uses the specs when the scheduler determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with a clustered Data ONTAP storage system, you can leverage extra specs with Block Storage volume types to ensure that Block Storage volumes are created on storage back ends that have certain properties. An example of this is when you configure QoS, mirroring, or compression for a storage back end.

Extra specs are associated with Block Storage volume types. When users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements, for example, back ends that have the required available space or that report the requested extra specs. Use the specs in the following table to configure volumes. Define Block Storage volume types by using the cinder type-key command; an example appears after the table.

Description of extra specs options for NetApp Unified Driver with Clustered Data ONTAP
Extra spec Type Description
netapp_raid_type String Limit the candidate volume list based on one of the following raid types: raid4, raid_dp.
netapp_disk_type String Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD.
netapp:qos_policy_group [1] String Specify the name of a QoS policy group, which defines measurable Service Level Objectives, that should be applied to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object within Data ONTAP is defined before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume.
netapp_mirrored Boolean Limit the candidate volume list to only the ones that are mirrored on the storage controller.
netapp_unmirrored [2] Boolean Limit the candidate volume list to only the ones that are not mirrored on the storage controller.
netapp_dedup Boolean Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller.
netapp_nodedup Boolean Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller.
netapp_compression Boolean Limit the candidate volume list to only the ones that have compression enabled on the storage controller.
netapp_nocompression Boolean Limit the candidate volume list to only the ones that have compression disabled on the storage controller.
netapp_thin_provisioned Boolean Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
netapp_thick_provisioned Boolean Limit the candidate volume list to only the ones that support thick provisioning on the storage controller.
[1]Please note that this extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.
[2]In the Juno release, these negative-assertion extra specs are formally deprecated by the NetApp unified driver. Instead of using the deprecated negative-assertion extra specs (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false.
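
For example, a volume type that restricts scheduling to thin-provisioned, compression-enabled back ends and attaches a QoS policy group could be defined as follows. The type name and policy group name are illustrative only, and the policy group must already exist in Data ONTAP:

$ cinder type-create netapp-gold
$ cinder type-key netapp-gold set netapp_thin_provisioned=true netapp_compression=true netapp:qos_policy_group=openstack_gold
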
NetApp Data ONTAP operating in 7-Mode storage family

The NetApp Data ONTAP operating in 7-Mode storage family represents a configuration group which provides Compute instances access to 7-Mode storage systems. At present it can be configured in Block Storage to work with iSCSI and NFS storage protocols.

NetApp iSCSI configuration for Data ONTAP operating in 7-Mode

The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for provisioning and managing the SAN block storage entity, that is, a LUN which can be accessed using the iSCSI protocol.

The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack to a Data ONTAP operating in 7-Mode storage system, and it does not require additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.

Configuration options

Configure the volume driver, storage family and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in the cinder.conf file as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password

Note

To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Description of NetApp 7-Mode iSCSI driver configuration options
Configuration option = Default value Description
[DEFAULT]  
netapp_login = None (String) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 (Floating point) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of “reserved_percentage” in the Mitaka release.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vfiler = None (String) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system.

Note

The driver supports iSCSI CHAP uni-directional authentication. To enable it, set the use_chap_auth option to True.

Tip

For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

NetApp NFS configuration for Data ONTAP operating in 7-Mode

The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to a Data ONTAP operating in 7-Mode storage system for provisioning and managing OpenStack volumes on NFS exports provided by that storage system, which are then accessed using the NFS protocol.

The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface from Block Storage to the Data ONTAP operating in 7-Mode instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.

Configuration options

Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in the cinder.conf file as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
Description of NetApp 7-Mode NFS driver configuration options
Configuration option = Default value Description
[DEFAULT]  
expiry_thres_minutes = 720 (Integer) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_login = None (String) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vfiler = None (String) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system.
thres_avl_size_perc_start = 20 (Integer) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 (Integer) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.

Note

Additional NetApp NFS configuration options are shared with the generic NFS driver. For a description of these, see Description of NFS storage configuration options.

Tip

For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

NetApp E-Series storage family

The NetApp E-Series storage family represents a configuration group which provides OpenStack compute instances access to E-Series storage systems. At present it can be configured in Block Storage to work with the iSCSI storage protocol.

NetApp iSCSI configuration for E-Series

The NetApp iSCSI configuration for E-Series is an interface from OpenStack to E-Series storage systems. It provisions and manages the SAN block storage entity, which is a NetApp LUN which can be accessed using the iSCSI protocol.

The iSCSI configuration for E-Series is an interface from Block Storage to the E-Series proxy instance and as such requires the deployment of the proxy instance in order to achieve the desired functionality. The driver uses REST APIs to interact with the E-Series proxy instance, which in turn interacts directly with the E-Series controllers.

The use of multipath and DM-MP is required when using the Block Storage driver for E-Series. In order for Block Storage and OpenStack Compute to take advantage of multiple paths, the following configuration options must be correctly configured (see the sketch after this list):

  • The use_multipath_for_image_xfer option should be set to True in the cinder.conf file within the driver-specific stanza (for example, [myDriver]).
  • The iscsi_use_multipath option should be set to True in the nova.conf file within the [libvirt] stanza.
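
As a sketch, the two settings map to the configuration files as shown below; [myDriver] is the example stanza name used above:

# cinder.conf
[myDriver]
use_multipath_for_image_xfer = True

# nova.conf
[libvirt]
iscsi_use_multipath = True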

Configuration options

Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, E-Series, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in the cinder.conf file as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
netapp_controller_ips = 1.2.3.4,5.6.7.8
netapp_sa_password = arrayPassword
netapp_storage_pools = pool1,pool2
use_multipath_for_image_xfer = True

Note

To use the E-Series driver, you must override the default value of netapp_storage_family with eseries.

To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Description of NetApp E-Series driver configuration options
Configuration option = Default value Description
[DEFAULT]  
netapp_controller_ips = None (String) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning.
netapp_enable_multiattach = False (Boolean) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host.
netapp_host_type = None (String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None (String) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
netapp_sa_password = None (String) Password for the NetApp E-Series storage array.
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_webservice_path = /devmgr/v2 (String) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application.

Tip

For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

NetApp-supported extra specs for E-Series

Extra specs enable vendors to specify extra filter criteria. The Block Storage scheduler uses the specs when the scheduler determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with an E-Series storage system, you can leverage extra specs with Block Storage volume types to ensure that Block Storage volumes are created on storage back ends that have certain properties. An example of this is when you configure thin provisioning for a storage back end.

Extra specs are associated with Block Storage volume types. When users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements, for example, back ends that have the required available space or the matching extra specs. Use the specs in the following table to configure volumes. Define Block Storage volume types by using the cinder type-key command; an example follows the table.

Description of extra specs options for NetApp Unified Driver with E-Series
Extra spec Type Description
netapp_thin_provisioned Boolean Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
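
For example, the following commands create a volume type (eseries-thin is an arbitrary example name) and require thin provisioning through the extra spec listed above:

$ cinder type-create eseries-thin
$ cinder type-key eseries-thin set netapp_thin_provisioned=true
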
Upgrading prior NetApp drivers to the NetApp unified driver

NetApp introduced a new unified block storage driver in Havana for configuring different storage families and storage protocols. This requires defining an upgrade path for NetApp drivers which existed in releases prior to Havana. This section covers the upgrade configuration for NetApp drivers to the new unified configuration and a list of deprecated NetApp drivers.

Upgraded NetApp drivers

This section describes how to update Block Storage configuration from a pre-Havana release to the unified driver format.

  • NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier):

    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
    

    NetApp unified driver configuration:

    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = iscsi
    
  • NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier):

    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
    

    NetApp unified driver configuration:

    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
    
  • NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier):

    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
    

    NetApp unified driver configuration:

    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_7mode
    netapp_storage_protocol = iscsi
    
  • NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier):

    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
    

    NetApp unified driver configuration:

    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_7mode
    netapp_storage_protocol = nfs
    
Deprecated NetApp drivers

This section lists the NetApp drivers in earlier releases that are deprecated in Havana.

  • NetApp iSCSI driver for clustered Data ONTAP:

    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
    
  • NetApp NFS driver for clustered Data ONTAP:

    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
    
  • NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller:

    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
    
  • NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller:

    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
    

Note

For support information on deprecated NetApp drivers in the Havana release, visit the NetApp OpenStack Deployment and Operations Guide.

Nimble Storage volume driver

Nimble Storage fully integrates with the OpenStack platform through the Nimble Cinder driver, allowing a host to configure and manage Nimble Storage array features through Block Storage interfaces.

Support for the Liberty release is available from Nimble OS 2.3.8 or later.

Supported operations
  • Create, delete, clone, attach, and detach volumes
  • Create and delete volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Extend a volume
  • Get volume statistics
  • Manage and unmanage a volume
  • Enable encryption and a default performance policy for a volume type through extra specs
  • Force backup of an in-use volume

Note

The Nimble Storage implementation uses iSCSI only. Fibre Channel is not supported.

Nimble Storage driver configuration

Update the file /etc/cinder/cinder.conf with the given configuration.

For a basic (single back-end) configuration, add the parameters within the [default] section as follows.

[default]
san_ip = NIMBLE_MGMT_IP
san_login = NIMBLE_USER
san_password = NIMBLE_PASSWORD
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver

For a multiple back-end configuration, for example, a configuration that supports multiple Nimble Storage arrays or a single Nimble Storage array together with arrays from other vendors, use the following parameters.

[default]
enabled_backends = Nimble-Cinder

[Nimble-Cinder]
san_ip = NIMBLE_MGMT_IP
san_login = NIMBLE_USER
san_password = NIMBLE_PASSWORD
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver
volume_backend_name = NIMBLE_BACKEND_NAME

For a multiple back-end configuration, create a Nimble Storage volume type and associate it with a back-end name as follows.

Note

Single back-end configuration users do not need to create the volume type.

$ cinder type-create NIMBLE_VOLUME_TYPE
$ cinder type-key NIMBLE_VOLUME_TYPE set volume_backend_name=NIMBLE_BACKEND_NAME

This section explains the variables used above:

NIMBLE_MGMT_IP
Management IP address of Nimble Storage array/group.
NIMBLE_USER
Nimble Storage account login with at least power user (admin) privilege if RBAC is used.
NIMBLE_PASSWORD
Password of the admin account for the Nimble array.
NIMBLE_BACKEND_NAME
A volume back-end name which is specified in the cinder.conf file. This is also used while assigning a back-end name to the Nimble volume type.
NIMBLE_VOLUME_TYPE
The Nimble volume type which is created from the CLI and associated with NIMBLE_BACKEND_NAME.

Note

Restart the cinder-api, cinder-scheduler, and cinder-volume services after updating the cinder.conf file.

Nimble driver extra spec options

The Nimble volume driver also supports the following extra spec options:

‘nimble:encryption’=’yes’
Used to enable encryption for a volume-type.
‘nimble:perfpol-name’=PERF_POL_NAME
PERF_POL_NAME is the name of a performance policy which exists on the Nimble array and should be enabled for every volume in a volume type.
‘nimble:multi-initiator’=’true’
Used to enable multi-initiator access for a volume-type.

These extra-specs can be enabled by using the following command:

$ cinder type-key VOLUME_TYPE set KEY=VALUE

VOLUME_TYPE is the Nimble volume type and KEY and VALUE are the options mentioned above.
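
For example, assuming the NIMBLE_VOLUME_TYPE created earlier and a performance policy named PERF_POL_NAME that already exists on the Nimble array:

$ cinder type-key NIMBLE_VOLUME_TYPE set nimble:encryption=yes
$ cinder type-key NIMBLE_VOLUME_TYPE set nimble:perfpol-name=PERF_POL_NAME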

Configuration options

The Nimble storage driver supports these configuration options:

Description of Nimble driver configuration options
Configuration option = Default value Description
[DEFAULT]  
nimble_pool_name = default (String) Nimble Controller pool name
nimble_subnet_label = * (String) Nimble Subnet Label
NexentaStor 4.x NFS and iSCSI drivers

NexentaStor is an Open Source-driven Software-Defined Storage (OpenSDS) platform delivering unified file (NFS and SMB) and block (FC and iSCSI) storage services. NexentaStor runs on industry standard hardware, scales from tens of terabytes to petabyte configurations, and includes all data management functionality by default.

For NexentaStor 4.x user documentation, visit https://nexenta.com/products/downloads/nexentastor.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume.
  • Change volume type.
Nexenta iSCSI driver

The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store Compute volumes. Every Compute volume is represented by a single zvol in a predefined Nexenta namespace. The Nexenta iSCSI volume driver should work with all versions of NexentaStor.

The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A volume and an enclosing namespace must be created for all iSCSI volumes to be accessed through the volume driver. This should be done as specified in the release-specific NexentaStor documentation.

The NexentaStor Appliance iSCSI driver is selected using the normal procedures for one or multiple backend volume drivers.

You must configure these items for each NexentaStor appliance that the iSCSI volume driver controls:

  1. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # Enable Nexenta iSCSI driver
    volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
    
    # IP address of NexentaStor host (string value)
    nexenta_host=HOST-IP
    
    # Username for NexentaStor REST (string value)
    nexenta_user=USERNAME
    
    # Port for Rest API (integer value)
    nexenta_rest_port=8457
    
    # Password for NexentaStor REST (string value)
    nexenta_password=PASSWORD
    
    # Volume on NexentaStor appliance (string value)
    nexenta_volume=volume_name
    

Note

nexenta_volume represents a zpool, which is called a volume on the NS appliance. It must be pre-created before enabling the driver.

  2. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.
Nexenta NFS driver

The Nexenta NFS driver allows you to use a NexentaStor appliance to store Compute volumes via NFS. Every Compute volume is represented by a single NFS file within a shared directory.

While the NFS protocols standardize file access for users, they do not standardize administrative actions such as taking snapshots or replicating file systems. The OpenStack Volume Drivers bring a common interface to these operations. The Nexenta NFS driver implements these standard actions using the ZFS management plane that is already deployed on NexentaStor appliances.

The Nexenta NFS volume driver should work with all versions of NexentaStor. The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A single-parent file system must be created for all virtual disk directories supported for OpenStack. This directory must be created and exported on each NexentaStor appliance. This should be done as specified in the release-specific NexentaStor documentation.

You must configure these items for each NexentaStor appliance that the NFS volume driver controls:

  1. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # Enable Nexenta NFS driver
    volume_driver=cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
    
    # Path to shares config file
    nexenta_shares_config=/home/ubuntu/shares.cfg
    

    Note

    Add your list of Nexenta NFS servers to the file you specified with the nexenta_shares_config option. For example, this is how this file should look:

    192.168.1.200:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.200:8457
    192.168.1.201:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.201:8457
    192.168.1.202:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.202:8457
    

Each line in this file represents an NFS share. The first part of the line is the NFS share URL, and the second part is the connection URL to the NexentaStor Appliance.

Driver options

The Nexenta driver supports these options:

Description of Nexenta driver configuration options
Configuration option = Default value Description
[DEFAULT]  
nexenta_blocksize = 4096 (Integer) Block size for datasets
nexenta_chunksize = 32768 (Integer) NexentaEdge iSCSI LUN object chunk size
nexenta_client_address = (String) NexentaEdge iSCSI Gateway client address for non-VIP service
nexenta_dataset_compression = on (String) Compression value for new ZFS folders.
nexenta_dataset_dedup = off (String) Deduplication value for new ZFS folders.
nexenta_dataset_description = (String) Human-readable description for the folder.
nexenta_host = (String) IP address of Nexenta SA
nexenta_iscsi_target_portal_port = 3260 (Integer) Nexenta target portal port
nexenta_mount_point_base = $state_path/mnt (String) Base directory that contains NFS share mount points
nexenta_nbd_symlinks_dir = /dev/disk/by-path (String) NexentaEdge logical path of directory to store symbolic links to NBDs
nexenta_nms_cache_volroot = True (Boolean) If set True cache NexentaStor appliance volroot option value.
nexenta_password = nexenta (String) Password to connect to Nexenta SA
nexenta_rest_port = 8080 (Integer) HTTP port to connect to Nexenta REST API server
nexenta_rest_protocol = auto (String) Use http or https for REST connection (default auto)
nexenta_rrmgr_compression = 0 (Integer) Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best compression.
nexenta_rrmgr_connections = 2 (Integer) Number of TCP connections.
nexenta_rrmgr_tcp_buf_size = 4096 (Integer) TCP Buffer size in KiloBytes.
nexenta_shares_config = /etc/cinder/nfs_shares (String) File with the list of available nfs shares
nexenta_sparse = False (Boolean) Enables or disables the creation of sparse datasets
nexenta_sparsed_volumes = True (Boolean) Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time.
nexenta_target_group_prefix = cinder/ (String) Prefix for iSCSI target groups on SA
nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder- (String) IQN prefix for iSCSI targets
nexenta_user = admin (String) User name to connect to Nexenta SA
nexenta_volume = cinder (String) SA Pool that holds all volumes
NexentaStor 5.x NFS and iSCSI drivers

NexentaStor is an Open Source-driven Software-Defined Storage (OpenSDS) platform delivering unified file (NFS and SMB) and block (FC and iSCSI) storage services. NexentaStor runs on industry standard hardware, scales from tens of terabytes to petabyte configurations, and includes all data management functionality by default.

For NexentaStor user documentation, visit: http://docs.nexenta.com/.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume.
  • Change volume type.
iSCSI driver

The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A pool and an enclosing namespace must be created for all iSCSI volumes to be accessed through the volume driver. This should be done as specified in the release-specific NexentaStor documentation.

The NexentaStor Appliance iSCSI driver is selected using the normal procedures for one or multiple back-end volume drivers.

You must configure these items for each NexentaStor appliance that the iSCSI volume driver controls:

  1. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # Enable Nexenta iSCSI driver
    volume_driver=cinder.volume.drivers.nexenta.ns5.iscsi.NexentaISCSIDriver
    
    # IP address of NexentaStor host (string value)
    nexenta_host=HOST-IP
    
    # Port for Rest API (integer value)
    nexenta_rest_port=8080
    
    # Username for NexentaStor Rest (string value)
    nexenta_user=USERNAME
    
    # Password for NexentaStor Rest (string value)
    nexenta_password=PASSWORD
    
    # Pool on NexentaStor appliance (string value)
    nexenta_volume=volume_name
    
    # Name of a parent Volume group where cinder created zvols will reside (string value)
    nexenta_volume_group = iscsi
    

    Note

    nexenta_volume represents a zpool, which is called a pool on the NS 5.x appliance. It must be pre-created before enabling the driver.

    The volume group does not need to be pre-created; the driver will create it if it does not exist.

  2. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.

NFS driver

The Nexenta NFS driver allows you to use a NexentaStor appliance to store Compute volumes via NFS. Every Compute volume is represented by a single NFS file within a shared directory.

While the NFS protocols standardize file access for users, they do not standardize administrative actions such as taking snapshots or replicating file systems. The OpenStack Volume Drivers bring a common interface to these operations. The Nexenta NFS driver implements these standard actions using the ZFS management plane that already is deployed on NexentaStor appliances.

The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A single-parent file system must be created for all virtual disk directories supported for OpenStack. Create and export the directory on each NexentaStor appliance.

You must configure these items for each NexentaStor appliance that the NFS volume driver controls:

  1. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # Enable Nexenta NFS driver
    volume_driver=cinder.volume.drivers.nexenta.ns5.nfs.NexentaNfsDriver
    
    # IP address or Hostname of NexentaStor host (string value)
    nas_host=HOST-IP
    
    # Port for Rest API (integer value)
    nexenta_rest_port=8080
    
    # Path to parent filesystem (string value)
    nas_share_path=POOL/FILESYSTEM
    
    # Specify NFS version
    nas_mount_options=vers=4
    
  2. Create a file system on the appliance and share it via NFS. For example:

    "securityContexts": [
       {"readWriteList": [{"allow": true, "etype": "fqnip", "entity": "1.1.1.1"}],
        "root": [{"allow": true, "etype": "fqnip", "entity": "1.1.1.1"}],
        "securityModes": ["sys"]}]
    
  3. Create ACL for the filesystem. For example:

    {"type": "allow",
    "principal": "everyone@",
    "permissions": ["list_directory","read_data","add_file","write_data",
    "add_subdirectory","append_data","read_xattr","write_xattr","execute",
    "delete_child","read_attributes","write_attributes","delete","read_acl",
    "write_acl","write_owner","synchronize"],
    "flags": ["file_inherit","dir_inherit"]}
    
Driver options

The Nexenta driver supports these options:

Description of NexentaStor 5 driver configuration options
Configuration option = Default value Description
[DEFAULT]  
nexenta_dataset_compression = on (String) Compression value for new ZFS folders.
nexenta_dataset_dedup = off (String) Deduplication value for new ZFS folders.
nexenta_dataset_description = (String) Human-readable description for the folder.
nexenta_host = (String) IP address of Nexenta SA
nexenta_iscsi_target_portal_port = 3260 (Integer) Nexenta target portal port
nexenta_mount_point_base = $state_path/mnt (String) Base directory that contains NFS share mount points
nexenta_ns5_blocksize = 32 (Integer) Block size for datasets
nexenta_rest_port = 8080 (Integer) HTTP port to connect to Nexenta REST API server
nexenta_rest_protocol = auto (String) Use http or https for REST connection (default auto)
nexenta_sparse = False (Boolean) Enables or disables the creation of sparse datasets
nexenta_sparsed_volumes = True (Boolean) Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time.
nexenta_user = admin (String) User name to connect to Nexenta SA
nexenta_volume = cinder (String) SA Pool that holds all volumes
nexenta_volume_group = iscsi (String) Volume group for ns5
NexentaEdge NBD & iSCSI drivers

NexentaEdge is designed from the ground-up to deliver high performance Block and Object storage services and limitless scalability to next generation OpenStack clouds, petabyte scale active archives and Big Data applications. NexentaEdge runs on shared nothing clusters of industry standard Linux servers, and builds on Nexenta IP and patent pending Cloud Copy On Write (CCOW) technology to break new ground in terms of reliability, functionality and cost efficiency.

For NexentaEdge user documentation, visit http://docs.nexenta.com.

iSCSI driver

The NexentaEdge cluster must be installed and configured according to the relevant Nexenta documentation. A cluster, tenant, and bucket must be pre-created, as well as an iSCSI service on the NexentaEdge gateway node.

The NexentaEdge iSCSI driver is selected using the normal procedures for one or multiple back-end volume drivers.

You must configure these items for each NexentaEdge cluster that the iSCSI volume driver controls:

  1. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # Enable Nexenta iSCSI driver
    volume_driver = cinder.volume.drivers.nexenta.nexentaedge.iscsi.NexentaEdgeISCSIDriver
    
    # Specify the ip address for Rest API (string value)
    nexenta_rest_address = MANAGEMENT-NODE-IP
    
    # Port for Rest API (integer value)
    nexenta_rest_port=8080
    
    # Protocol used for Rest calls (string value, default=http)
    nexenta_rest_protocol = http
    
    # Username for NexentaEdge Rest (string value)
    nexenta_user=USERNAME
    
    # Password for NexentaEdge Rest (string value)
    nexenta_password=PASSWORD
    
    # Path to bucket containing iSCSI LUNs (string value)
    nexenta_lun_container = CLUSTER/TENANT/BUCKET
    
    # Name of pre-created iSCSI service (string value)
    nexenta_iscsi_service = SERVICE-NAME
    
    # IP address of the gateway node attached to iSCSI service above or
    # virtual IP address if an iSCSI Storage Service Group is configured in
    # HA mode (string value)
    nexenta_client_address = GATEWAY-NODE-IP
    
  2. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
NBD driver

As an alternative to using the iSCSI, Amazon S3, or OpenStack Swift protocols, NexentaEdge can provide access to cluster storage via a Network Block Device (NBD) interface.

The NexentaEdge cluster must be installed and configured according to the relevant Nexenta documentation. A cluster, tenant, and bucket must be pre-created. The driver requires the NexentaEdge service to run on the hypervisor (Nova) node. That node must sit on the Replicast network and run only the NexentaEdge service; it does not require physical disks.

You must configure these items for each NexentaEdge cluster that the NBD volume driver controls:

  1. Make the following changes on data node /etc/cinder/cinder.conf file.

    # Enable Nexenta NBD driver
    volume_driver = cinder.volume.drivers.nexenta.nexentaedge.nbd.NexentaEdgeNBDDriver
    
    # Specify the ip address for Rest API (string value)
    nexenta_rest_address = MANAGEMENT-NODE-IP
    
    # Port for Rest API (integer value)
    nexenta_rest_port = 8080
    
    # Protocol used for Rest calls (string value, default=http)
    nexenta_rest_protocol = http
    
    # Username for NexentaEdge Rest (string value)
    nexenta_rest_user = USERNAME
    
    # Password for NexentaEdge Rest (string value)
    nexenta_rest_password = PASSWORD
    
    # Path to bucket containing iSCSI LUNs (string value)
    nexenta_lun_container = CLUSTER/TENANT/BUCKET
    
    # Path to directory to store symbolic links to block devices
    # (string value, default=/dev/disk/by-path)
    nexenta_nbd_symlinks_dir = /PATH/TO/SYMBOLIC/LINKS
    
  2. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
Driver options

The Nexenta driver supports these options:

Description of NexentaEdge driver configuration options
Configuration option = Default value Description
[DEFAULT]  
nexenta_blocksize = 4096 (Integer) Block size for datasets
nexenta_chunksize = 32768 (Integer) NexentaEdge iSCSI LUN object chunk size
nexenta_client_address = (String) NexentaEdge iSCSI Gateway client address for non-VIP service
nexenta_iscsi_service = (String) NexentaEdge iSCSI service name
nexenta_iscsi_target_portal_port = 3260 (Integer) Nexenta target portal port
nexenta_lun_container = (String) NexentaEdge logical path of bucket for LUNs
nexenta_rest_address = (String) IP address of NexentaEdge management REST API endpoint
nexenta_rest_password = nexenta (String) Password to connect to NexentaEdge
nexenta_rest_port = 8080 (Integer) HTTP port to connect to Nexenta REST API server
nexenta_rest_protocol = auto (String) Use http or https for REST connection (default auto)
nexenta_rest_user = admin (String) User name to connect to NexentaEdge
ProphetStor Fibre Channel and iSCSI drivers

ProphetStor Fibre Channel and iSCSI drivers add support for ProphetStor Flexvisor through the Block Storage service. ProphetStor Flexvisor enables commodity x86 hardware as software-defined storage leveraging well-proven ZFS for disk management to provide enterprise grade storage services such as snapshots, data protection with different RAID levels, replication, and deduplication.

The DPLFCDriver and DPLISCSIDriver drivers run volume operations by communicating with the ProphetStor storage system over HTTPS.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
Enable the Fibre Channel or iSCSI drivers

The DPLFCDriver and DPLISCSIDriver are installed with the OpenStack software.

  1. Query the storage pool ID to configure dpl_pool in the cinder.conf file.

    1. Log on to the storage system with administrator access.

      $ ssh root@STORAGE_IP_ADDRESS
      
    2. View the current usable pool id.

      $ flvcli show pool list
      - d5bd40b58ea84e9da09dcf25a01fdc07 : default_pool_dc07
      
    3. Use d5bd40b58ea84e9da09dcf25a01fdc07 to configure dpl_pool in the /etc/cinder/cinder.conf file.

      Note

      Other management commands can be referenced with the help command flvcli -h.

  2. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # IP address of SAN controller (string value)
    san_ip=STORAGE IP ADDRESS
    
    # Username for SAN controller (string value)
    san_login=USERNAME
    
    # Password for SAN controller (string value)
    san_password=PASSWORD
    
    # Use thin provisioning for SAN volumes? (boolean value)
    san_thin_provision=true
    
    # The port that the iSCSI daemon is listening on. (integer value)
    iscsi_port=3260
    
    # DPL pool uuid in which DPL volumes are stored. (string value)
    dpl_pool=d5bd40b58ea84e9da09dcf25a01fdc07
    
    # DPL port number. (integer value)
    dpl_port=8357
    
    # Uncomment one of the next two options to enable Fibre Channel or iSCSI
    # FIBRE CHANNEL(uncomment the next line to enable the FC driver)
    #volume_driver=cinder.volume.drivers.prophetstor.dpl_fc.DPLFCDriver
    # iSCSI (uncomment the next line to enable the iSCSI driver)
    #volume_driver=cinder.volume.drivers.prophetstor.dpl_iscsi.DPLISCSIDriver
    
  3. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.

The ProphetStor Fibre Channel or iSCSI drivers are now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.

The following table contains the options supported by the ProphetStor storage driver.

Description of ProphetStor Fibre Channel and iSCSI drivers configuration options
Configuration option = Default value Description
[DEFAULT]  
dpl_pool = (String) DPL pool uuid in which DPL volumes are stored.
dpl_port = 8357 (Port number) DPL port number.
iscsi_port = 3260 (Port number) The port that the iSCSI daemon is listening on
san_ip = (String) IP address of SAN controller
san_login = admin (String) Username for SAN controller
san_password = (String) Password for SAN controller
san_thin_provision = True (Boolean) Use thin provisioning for SAN volumes?
Pure Storage iSCSI and Fibre Channel volume drivers

The Pure Storage FlashArray volume drivers for OpenStack Block Storage interact with configured Pure Storage arrays and support various operations.

Support for iSCSI storage protocol is available with the PureISCSIDriver Volume Driver class, and Fibre Channel with PureFCDriver.

All drivers are compatible with Purity FlashArrays that support the REST API version 1.2, 1.3, or 1.4 (Purity 4.0.0 and newer).

Limitations and known issues

If you do not set up the nodes hosting instances to use multipathing, all network connectivity will use a single physical port on the array. In addition to significantly limiting the available bandwidth, this means you do not have the high-availability and non-disruptive upgrade benefits provided by FlashArray. Multipathing must be used to take advantage of these benefits.

Supported operations
  • Create, delete, attach, detach, retype, clone, and extend volumes.
  • Create a volume from snapshot.
  • Create, list, and delete volume snapshots.
  • Create, list, update, and delete consistency groups.
  • Create, list, and delete consistency group snapshots.
  • Manage and unmanage a volume.
  • Manage and unmanage a snapshot.
  • Get volume statistics.
  • Create a thin provisioned volume.
  • Replicate volumes to remote Pure Storage array(s).
Configure OpenStack and Purity

You need to configure both your Purity array and your OpenStack cluster.

Note

These instructions assume that the cinder-api and cinder-scheduler services are installed and configured in your OpenStack cluster.

Configure the OpenStack Block Storage service

In these steps, you will edit the cinder.conf file to configure the OpenStack Block Storage service to enable multipathing and to use the Pure Storage FlashArray as back-end storage.

  1. Install Pure Storage PyPI module. A requirement for the Pure Storage driver is the installation of the Pure Storage Python SDK version 1.4.0 or later from PyPI.

    $ pip install purestorage
    
  2. Retrieve an API token from Purity. The OpenStack Block Storage service configuration requires an API token from Purity. Actions performed by the volume driver use this token for authorization. Also, Purity logs the volume driver’s actions as being performed by the user who owns this API token.

    If you created a Purity user account that is dedicated to managing your OpenStack Block Storage volumes, copy the API token from that user account.

    Use the appropriate create or list command below to display and copy the Purity API token:

    • To create a new API token:

      $ pureadmin create --api-token USER
      

      The following is an example output:

      $ pureadmin create --api-token pureuser
      Name      API Token                             Created
      pureuser  902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9  2014-08-04 14:50:30
      
    • To list an existing API token:

      $ pureadmin list --api-token --expose USER
      

      The following is an example output:

      $ pureadmin list --api-token --expose pureuser
      Name      API Token                             Created
      pureuser  902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9  2014-08-04 14:50:30
      
  3. Copy the API token retrieved (902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9 from the examples above) to use in the next step.

  4. Edit the OpenStack Block Storage service configuration file. The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Pure Storage array:

    [DEFAULT]
    enabled_backends = puredriver-1
    default_volume_type = puredriver-1
    
    [puredriver-1]
    volume_backend_name = puredriver-1
    volume_driver = PURE_VOLUME_DRIVER
    san_ip = IP_PURE_MGMT
    pure_api_token = PURE_API_TOKEN
    use_multipath_for_image_xfer = True
    

    Replace the following variables accordingly:

    PURE_VOLUME_DRIVER

    Use either cinder.volume.drivers.pure.PureISCSIDriver for iSCSI or cinder.volume.drivers.pure.PureFCDriver for Fibre Channel connectivity.

    IP_PURE_MGMT

    The IP address of the Pure Storage array’s management interface or a domain name that resolves to that IP address.

    PURE_API_TOKEN

    The Purity Authorization token that the volume driver uses to perform volume management on the Pure Storage array.

Note

The volume driver automatically creates Purity host objects for initiators as needed. If CHAP authentication is enabled via the use_chap_auth setting, you must ensure there are no manually created host objects with IQNs that will be used by the OpenStack Block Storage service. The driver will only modify credentials on hosts that it manages.

Note

If using the PureFCDriver it is recommended to use the OpenStack Block Storage Fibre Channel Zone Manager.

Volume auto-eradication

To enable auto-eradication of deleted volumes, snapshots, and consistency groups on deletion, modify the following option in the cinder.conf file:

pure_eradicate_on_delete = true

By default, auto-eradication is disabled and all deleted volumes, snapshots, and consistency groups are retained on the Pure Storage array in a recoverable state for 24 hours from time of deletion.

SSL certification

To enable SSL certificate validation, modify the following option in the cinder.conf file:

driver_ssl_cert_verify = true

By default, SSL certificate validation is disabled.

To specify a non-default path to a CA_Bundle file or a directory with certificates of trusted CAs:

driver_ssl_cert_path = Certificate path

Note

This requires the use of Pure Storage Python SDK > 1.4.0.

Replication configuration

Add the following to the back-end specification to specify another Flash Array to replicate to:

[puredriver-1]
replication_device = backend_id:PURE2_NAME,san_ip:IP_PURE2_MGMT,api_token:PURE2_API_TOKEN

Where PURE2_NAME is the name of the remote Pure Storage system, IP_PURE2_MGMT is the management IP address of the remote array, and PURE2_API_TOKEN is the Purity Authorization token of the remote array.

Note that more than one replication_device line can be added to allow for multi-target device replication.

A volume is only replicated if the volume is of a volume-type that has the extra spec replication_enabled set to <is> True.

To create a volume type that specifies replication to remote back ends:

$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'

The following table contains the optional configuration parameters available for replication configuration with the Pure Storage array.

Option Description Default
pure_replica_interval_default Snapshot replication interval in seconds. 900
pure_replica_retention_short_term_default Retain all snapshots on target for this time (in seconds). 14400
pure_replica_retention_long_term_per_day_default Retain how many snapshots for each day. 3
pure_replica_retention_long_term_default Retain snapshots per day on target for this time (in days). 7
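
As a sketch, these parameters are set in the back-end stanza alongside the replication_device entry; the values shown below are simply the defaults from the table above:

[puredriver-1]
replication_device = backend_id:PURE2_NAME,san_ip:IP_PURE2_MGMT,api_token:PURE2_API_TOKEN
pure_replica_interval_default = 900
pure_replica_retention_short_term_default = 14400
pure_replica_retention_long_term_per_day_default = 3
pure_replica_retention_long_term_default = 7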

Note

replication-failover is only supported from the primary array to any of the multiple secondary arrays, but subsequent replication-failover is only supported back to the original primary array.

Automatic thin-provisioning/oversubscription ratio

To enable this feature, which calculates the array oversubscription ratio as (total provisioned/actual used), add the following option in the cinder.conf file:

[puredriver-1]
pure_automatic_max_oversubscription_ratio = True

By default, this is disabled and we honor the hard-coded configuration option max_over_subscription_ratio.

Note

Arrays with very good data reduction rates (compression/data deduplication/thin provisioning) can get very large oversubscription rates applied.

Scheduling metrics

A large number of metrics are reported by the volume driver, which can be useful in implementing more control over volume placement in multi-backend environments using the driver filter and weigher methods.

Metrics reported include, but are not limited to:

total_capacity_gb
free_capacity_gb
provisioned_capacity
total_volumes
total_snapshots
total_hosts
total_pgroups
writes_per_sec
reads_per_sec
input_per_sec
output_per_sec
usec_per_read_op
usec_per_write_op
queue_depth

Note

All total metrics include non-OpenStack managed objects on the array.

In conjunction with QOS extra-specs, you can create very complex algorithms to manage volume placement. More detailed documentation on this is available in other external documentation.
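
As an illustration only, the following cinder.conf snippet uses the generic filter_function and goodness_function back-end options (part of the Block Storage driver filter and weigher support, not specific to this driver) with two of the metrics listed above; the threshold and formula are arbitrary examples and assume the driver-reported metrics are exposed under capabilities:

[puredriver-1]
# only consider this back end while it hosts fewer than 500 volumes
filter_function = "capabilities.total_volumes < 500"
# prefer the back end with the shallower queue (higher score is better)
goodness_function = "100 - capabilities.queue_depth"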

Quobyte driver

The Quobyte volume driver enables storing Block Storage service volumes on a Quobyte storage back end. Block Storage service back ends are mapped to Quobyte volumes and individual Block Storage service volumes are stored as files on a Quobyte volume. Selection of the appropriate Quobyte volume is done by the aforementioned back end configuration that specifies the Quobyte volume explicitly.

Note

Note the dual use of the term volume in the context of Block Storage service volumes and in the context of Quobyte volumes.

For more information see the Quobyte support webpage.

Supported operations

The Quobyte volume driver supports the following volume operations:

  • Create, delete, attach, and detach volumes
  • Secure NAS operation (starting with the Mitaka release, secure NAS operation is optional but remains the default)
  • Create and delete a snapshot
  • Create a volume from a snapshot
  • Extend a volume
  • Clone a volume
  • Copy a volume to image
  • Generic volume migration (no back end optimization)

Note

When running VM instances off Quobyte volumes, ensure that the Quobyte Compute service driver has been configured in your OpenStack cloud.

Configuration

To activate the Quobyte volume driver, configure the corresponding volume_driver parameter:

volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver

The following table contains the configuration options supported by the Quobyte driver:

Description of Quobyte volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
quobyte_client_cfg = None (String) Path to a Quobyte Client configuration file.
quobyte_mount_point_base = $state_path/mnt (String) Base dir containing the mount point for the Quobyte volume.
quobyte_qcow2_volumes = True (Boolean) Create volumes as QCOW2 files rather than raw files.
quobyte_sparsed_volumes = True (Boolean) Create volumes as sparse files which take no space. If set to False, the volume is created as a regular file; in that case, volume creation takes a lot of time.
quobyte_volume_url = None (String) URL to the Quobyte volume e.g., quobyte://<DIR host>/<volume name>
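
A minimal back-end stanza might look like the following sketch; the back-end name is arbitrary and the volume URL placeholders must be replaced with your own Quobyte registry host and volume name:

[DEFAULT]
enabled_backends = quobyte-1

[quobyte-1]
volume_backend_name = quobyte-1
volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver
quobyte_volume_url = quobyte://<DIR host>/<volume name>
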
Scality SOFS driver

The Scality SOFS volume driver interacts with configured sfused mounts.

The Scality SOFS driver manages volumes as sparse files stored on a Scality Ring through sfused. Ring connection settings and sfused options are defined in the cinder.conf file and the configuration file pointed to by the scality_sofs_config option, typically /etc/sfused.conf.

Supported operations

The Scality SOFS volume driver provides the following Block Storage volume operations:

  • Create, delete, attach (map), and detach (unmap) volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Backup a volume.
  • Restore backup to new or existing volume.
Configuration

Use the following instructions to update the cinder.conf configuration file:

[DEFAULT]
enabled_backends = scality-1

[scality-1]
volume_driver = cinder.volume.drivers.scality.ScalityDriver
volume_backend_name = scality-1

scality_sofs_config = /etc/sfused.conf
scality_sofs_mount_point = /cinder
scality_sofs_volume_dir = cinder/volumes
Compute configuration

Use the following instructions to update the nova.conf configuration file:

[libvirt]
scality_sofs_mount_point = /cinder
scality_sofs_config = /etc/sfused.conf
Description of Scality SOFS volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
scality_sofs_config = None (String) Path or URL to Scality SOFS configuration file
scality_sofs_mount_point = $state_path/scality (String) Base dir where Scality SOFS shall be mounted
scality_sofs_volume_dir = cinder/volumes (String) Path from Scality SOFS root to volume dir
SolidFire

The SolidFire Cluster is a high performance all SSD iSCSI storage device that provides massive scale out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set and modify, during operation, specific QoS levels on a volume-by-volume basis. The SolidFire cluster offers this along with de-duplication, compression, and an architecture that takes full advantage of SSDs.

To configure the use of a SolidFire cluster with Block Storage, modify your cinder.conf file as follows:

volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182         # the address of your MVIP
san_login = sfadmin           # your cluster admin login
san_password = sfpassword     # your cluster admin password
sf_account_prefix = ''        # prefix for tenant account creation on solidfire cluster

Warning

Older versions of the SolidFire driver (prior to Icehouse) created a unique account prefixed with $cinder-volume-service-hostname-$tenant-id on the SolidFire cluster for each tenant. Unfortunately, this account formation resulted in issues for High Availability (HA) installations and installations where the cinder-volume service can move to a new node. The current default implementation does not experience this issue as no prefix is used. For installations created on a prior release, the OLD default behavior can be configured by using the keyword hostname in sf_account_prefix.

Note

The SolidFire driver creates names for volumes on the back end using the format UUID-<cinder-id>. This works well, but there is a possibility of a UUID collision for customers running multiple clouds against the same cluster. In Mitaka the ability was added to eliminate the possibility of collisions by introducing the sf_volume_prefix configuration variable. On the SolidFire cluster each volume will be labeled with the prefix, providing the ability to configure unique volume names for each cloud. The default prefix is ‘UUID-‘.

Changing the setting on an existing deployment will result in the existing volumes being inaccessible. To introduce this change to an existing deployment, it is recommended to add the cluster as if it were a second back end and disable new deployments to the current back end, as sketched below.
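
A hedged sketch of that approach, reusing the example credentials from above and an arbitrary mycloud- prefix, keeps the existing stanza unchanged and adds a second stanza whose sf_volume_prefix applies only to newly created volumes:

[DEFAULT]
enabled_backends = solidfire-legacy, solidfire-new

[solidfire-legacy]
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182
san_login = sfadmin
san_password = sfpassword
# existing volumes keep the old default prefix
sf_volume_prefix = UUID-

[solidfire-new]
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182
san_login = sfadmin
san_password = sfpassword
# example prefix for volumes created by the new back end
sf_volume_prefix = mycloud-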

Description of SolidFire driver configuration options
Configuration option = Default value Description
[DEFAULT]  
sf_account_prefix = None (String) Create SolidFire accounts with this prefix. Any string can be used here, but the string “hostname” is special and will create a prefix using the cinder node hostname (previous default behavior). The default is NO prefix.
sf_allow_template_caching = True (Boolean) Create an internal cache of copies of images when a bootable volume is created to eliminate fetches from glance and qemu-conversion on subsequent calls.
sf_allow_tenant_qos = False (Boolean) Allow tenants to specify QOS on create
sf_api_port = 443 (Port number) SolidFire API port. Useful if the device api is behind a proxy on a different port.
sf_emulate_512 = True (Boolean) Set 512 byte emulation on volume creation;
sf_enable_vag = False (Boolean) Utilize volume access groups on a per-tenant basis.
sf_enable_volume_mapping = True (Boolean) Create an internal mapping of volume IDs and account. Optimizes lookups and performance at the expense of memory, very large deployments may want to consider setting to False.
sf_svip = None (String) Overrides default cluster SVIP with the one specified. This is required for deployments that have implemented the use of VLANs for iSCSI networks in their cloud.
sf_template_account_name = openstack-vtemplate (String) Account name on the SolidFire Cluster to use as owner of template/cache volumes (created if does not exist).
sf_volume_prefix = UUID- (String) Create SolidFire volumes with this prefix. Volume names are of the form <sf_volume_prefix><cinder-volume-id>. The default is to use a prefix of ‘UUID-‘.
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Retype a volume.
  • Manage and unmanage a volume.

QoS support for the SolidFire drivers includes the ability to set the following capabilities in the OpenStack Block Storage API cinder.api.contrib.qos_specs_manage qos specs extension module:

  • minIOPS - The minimum number of IOPS guaranteed for this volume. Default = 100.
  • maxIOPS - The maximum number of IOPS allowed for this volume. Default = 15,000.
  • burstIOPS - The maximum number of IOPS allowed over a short period of time. Default = 15,000.

The QoS keys above no longer need to be scoped, but they must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:

$ cinder help qos-create

$ cinder help qos-key

$ cinder help qos-associate
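
For example (the specs name solidfire-gold and the IOPS values are arbitrary, and QOS_SPECS_ID and VOLUME_TYPE_ID are placeholders for the ID of the created specs and of an existing volume type):

$ cinder qos-create solidfire-gold minIOPS=1000 maxIOPS=10000 burstIOPS=15000
$ cinder qos-associate QOS_SPECS_ID VOLUME_TYPE_ID
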
Synology DSM volume driver

The SynoISCSIDriver volume driver allows Synology NAS to be used for Block Storage (cinder) in OpenStack deployments. Information on OpenStack Block Storage volumes is available in the DSM Storage Manager.

System requirements

The Synology driver has the following requirements:

  • DSM version 6.0.2 or later.
  • Your Synology NAS model must support advanced file LUN, iSCSI Target, and snapshot features. Refer to the Support List for applied models.

Note

The DSM driver is available in the OpenStack Newton release.

Supported operations
  • Create, delete, clone, attach, and detach volumes.
  • Create and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Extend a volume.
  • Get volume statistics.
Driver configuration

Edit the /etc/cinder/cinder.conf file on your volume driver host.

The Synology driver uses a volume in the Synology NAS as the back end of Block Storage. Every time you create a new Block Storage volume, the system creates an advanced file LUN in your Synology volume to be used for this new Block Storage volume.

The following example shows how to use different Synology NAS servers as the back end. If you want to use all volumes on your Synology NAS, add another section with the volume number to differentiate between volumes within the same Synology NAS.

[default]
enabled_backends = ds1515pV1, ds1515pV2, rs3017xsV1, others

[ds1515pV1]
# configuration for volume 1 in DS1515+

[ds1515pV2]
# configuration for volume 2 in DS1515+

[rs3017xsV1]
# configuration for volume 1 in RS3017xs

Each section indicates the volume number and the way in which the connection is established. Below is an example of a basic configuration:

[Your_Section_Name]

# Required settings
volume_driver = cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver
iscsi_protocol = iscsi
iscsi_ip_address = DS_IP
synology_admin_port = DS_PORT
synology_username = DS_USER
synology_password = DS_PW
synology_pool_name = DS_VOLUME

# Optional settings
volume_backend_name = VOLUME_BACKEND_NAME
iscsi_secondary_ip_addresses = IP_ADDRESSES
driver_use_ssl = True
use_chap_auth = True
chap_username = CHAP_USER_NAME
chap_password = CHAP_PASSWORD
DS_PORT
This is the port for DSM management. The default values for DSM are 5000 (HTTP) and 5001 (HTTPS). To use HTTPS connections, you must set driver_use_ssl = True.
DS_IP
This is the IP address of your Synology NAS.
DS_USER
This is the account of any DSM administrator.
DS_PW
This is the password for DS_USER.
DS_VOLUME
This is the volume you want to use as the storage pool for the Block Storage service. The format is volume[0-9]+, and the number is the same as the volume number in DSM.

Note

If you set driver_use_ssl as True, synology_admin_port must be an HTTPS port.
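
As an illustrative sketch only, a back end that uses HTTPS and the first DSM volume might look like this (all values shown are examples):

[ds1515pV1]
volume_driver = cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver
iscsi_protocol = iscsi
iscsi_ip_address = 192.168.0.10
synology_admin_port = 5001
synology_username = admin
synology_password = ADMIN_PASSWORD
synology_pool_name = volume1
driver_use_ssl = True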

Configuration options

The Synology DSM driver supports the following configuration options:

Description of Synology volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
pool_type = default (String) Pool type, like sata-2copy.
synology_admin_port = 5000 (Port number) Management port for Synology storage.
synology_device_id = None (String) Device ID used to skip the one-time password check when logging in to Synology storage if OTP is enabled.
synology_one_time_pass = None (String) One time password of administrator for logging in Synology storage if OTP is enabled.
synology_password = (String) Password of administrator for logging in Synology storage.
synology_pool_name = (String) Volume on Synology storage to be used for creating lun.
synology_ssl_verify = True (Boolean) Do certificate validation or not if $driver_use_ssl is True
synology_username = admin (String) Administrator of Synology storage.
Tintri

Tintri VMstore is a smart storage that sees, learns, and adapts for cloud and virtualization. The Tintri Block Storage driver interacts with configured VMstore running Tintri OS 4.0 and above. It supports various operations using Tintri REST APIs and NFS protocol.

To configure the use of a Tintri VMstore with Block Storage, perform the following actions:

  1. Edit the etc/cinder/cinder.conf file and set the cinder.volume.drivers.tintri options:

    volume_driver=cinder.volume.drivers.tintri.TintriDriver
    # Mount options passed to the nfs client. See section of the
    # nfs man page for details. (string value)
    nfs_mount_options = vers=3,lookupcache=pos
    
    #
    # Options defined in cinder.volume.drivers.tintri
    #
    
    # The hostname (or IP address) for the storage system (string
    # value)
    tintri_server_hostname = {Tintri VMstore Management IP}
    
    # User name for the storage system (string value)
    tintri_server_username = {username}
    
    # Password for the storage system (string value)
    tintri_server_password = {password}
    
    # API version for the storage system (string value)
    # tintri_api_version = v310
    
    # Following options needed for NFS configuration
    # File with the list of available nfs shares (string value)
    # nfs_shares_config = /etc/cinder/nfs_shares
    
    # Tintri driver will clean up unused image snapshots. With the following
    # option, users can configure how long unused image snapshots are
    # retained. Default retention policy is 30 days
    # tintri_image_cache_expiry_days = 30
    
    # Path to NFS shares file storing images.
    # Users can store Glance images in the NFS share of the same VMstore
    # mentioned in the following file. These images need to have additional
    # metadata ``provider_location`` configured in Glance, which should point
    # to the NFS share path of the image.
    # This option will enable Tintri driver to directly clone from Glance
    # image stored on same VMstore (rather than downloading image
    # from Glance)
    # tintri_image_shares_config = <Path to image NFS share>
    #
    # For example:
    # Glance image metadata
    # provider_location =>
    # nfs://<data_ip>/tintri/glance/84829294-c48b-4e16-a878-8b2581efd505
    
  2. Edit the /etc/nova/nova.conf file and set the nfs_mount_options:

    nfs_mount_options = vers=3
    
  3. Edit the /etc/cinder/nfs_shares file and add the Tintri VMstore mount points associated with the configured VMstore management IP in the cinder.conf file:

    {vmstore_data_ip}:/tintri/{submount1}
    {vmstore_data_ip}:/tintri/{submount2}
    
Description of Tintri volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
tintri_api_version = v310 (String) API version for the storage system
tintri_image_cache_expiry_days = 30 (Integer) Delete unused image snapshots older than mentioned days
tintri_image_shares_config = None (String) Path to image nfs shares file
tintri_server_hostname = None (String) The hostname (or IP address) for the storage system
tintri_server_password = None (String) Password for the storage system
tintri_server_username = None (String) User name for the storage system
Violin Memory 7000 Series FSP volume driver

The OpenStack V7000 driver package from Violin Memory adds Block Storage service support for Violin 7300 Flash Storage Platforms (FSPs) and 7700 FSP controllers.

The driver package release can be used with any OpenStack Liberty deployment for all 7300 FSPs and 7700 FSP controllers running Concerto 7.5.3 and later using Fibre Channel HBAs.

System requirements

To use the Violin driver, the following are required:

  • Violin 7300/7700 series FSP with:

    • Concerto OS version 7.5.3 or later
    • Fibre channel host interfaces
  • The Violin block storage driver: This driver implements the block storage API calls. The driver is included with the OpenStack Liberty release.

  • The vmemclient library: This is the Violin Array Communications library, which communicates with the Flash Storage Platform through a REST-like interface. The client can be installed using the Python pip installer tool. Further information on vmemclient can be found on PyPI.

    pip install vmemclient
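
    To confirm that the library is importable by the Python environment that runs the cinder-volume service, the following optional check can be used; it assumes the module name matches the package name:

    python -c "import vmemclient"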
    
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.

Note

Listed operations are supported for thick, thin, and dedup LUNs, with the exception of cloning. Cloning operations are supported only on thick LUNs.

Driver configuration

Once the array is configured as per the installation guide, edit the cinder configuration file to add or modify the parameters. The driver currently supports only Fibre Channel configuration.

Fibre channel configuration

Set the following in your cinder.conf configuration file, replacing the variables using the guide in the following section:

volume_driver = cinder.volume.drivers.violin.v7000_fcp.V7000FCPDriver
volume_backend_name = vmem_violinfsp
extra_capabilities = VMEM_CAPABILITIES
san_ip = VMEM_MGMT_IP
san_login = VMEM_USER_NAME
san_password = VMEM_PASSWORD
use_multipath_for_image_xfer = true
Configuration parameters

Description of configuration value placeholders:

VMEM_CAPABILITIES
User-defined capabilities: a JSON-formatted string specifying key-value pairs (string value). The supported capabilities are dedup and thin. Listing these capabilities in the cinder.conf file indicates that this back end should be selected when creating LUNs whose associated volume type has dedup or thin extra_specs specified. For example, if the FSP is configured to support dedup LUNs, set the associated driver capabilities to: {"dedup":"True","thin":"True"}.
VMEM_MGMT_IP
External IP address or host name of the Violin 7300 Memory Gateway.
VMEM_USER_NAME
Log-in user name for the Violin 7300 Memory Gateway or 7700 FSP controller. This user must have administrative rights on the array or controller.
VMEM_PASSWORD
Log-in user’s password.
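
Putting the placeholder values together, a filled-in back-end section might look like the following sketch. The IP address and credentials are illustrative only, and the extra_capabilities value matches the dedup and thin example above:

volume_driver = cinder.volume.drivers.violin.v7000_fcp.V7000FCPDriver
volume_backend_name = vmem_violinfsp
extra_capabilities = {"dedup":"True","thin":"True"}
san_ip = 192.0.2.20
san_login = admin
san_password = VMEM_ADMIN_PASSWORD
use_multipath_for_image_xfer = true
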
Virtuozzo Storage driver

Virtuozzo Storage is a fault-tolerant distributed storage system that is optimized for virtualization workloads. The Virtuozzo Storage driver lets the Block Storage service use it as a back end. Set the following in your cinder.conf file, and use the options below to configure it.

volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
Description of Virtuozzo Storage volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
vzstorage_default_volume_format = raw (String) Default format that will be used when creating volumes if no volume format is specified.
vzstorage_mount_options = None (List) Mount options passed to the vzstorage client. See the pstorage-mount man page for details.
vzstorage_mount_point_base = $state_path/mnt (String) Base dir containing mount points for vzstorage shares.
vzstorage_shares_config = /etc/cinder/vzstorage_shares (String) File with the list of available vzstorage shares.
vzstorage_sparsed_volumes = True (Boolean) Create volumes as sparse files, which take no space, rather than regular files, when using the raw format; creating regular files takes a lot of time.
vzstorage_used_ratio = 0.95 (Floating point) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
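
As a minimal sketch, a dedicated Virtuozzo Storage back end combining the driver setting with the options above might look like the following. The stanza and back-end names are illustrative, and /etc/cinder/vzstorage_shares must list the shares available in your cluster:

[DEFAULT]
enabled_backends = vzstorage

[vzstorage]
volume_backend_name = vzstorage
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
vzstorage_shares_config = /etc/cinder/vzstorage_shares
vzstorage_default_volume_format = raw
vzstorage_sparsed_volumes = True
vzstorage_used_ratio = 0.95
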
VMware VMDK driver

Use the VMware VMDK driver to enable management of OpenStack Block Storage volumes on vCenter-managed data stores. Volumes are backed by VMDK files on data stores that use any VMware-compatible storage technology, such as NFS, iSCSI, Fibre Channel, and vSAN.

Note

The VMware VMDK driver requires vCenter version 5.1 at minimum.

Functional context

The VMware VMDK driver connects to vCenter, through which it can dynamically access all the data stores visible from the ESX hosts in the managed cluster.

When you create a volume, the VMDK driver creates the backing VMDK file on demand. The VMDK file creation completes only when the volume is subsequently attached to an instance, because the data stores visible to that instance determine where to place the volume. In other words, the volume must be attached to a target instance before the service creates its VMDK file.

The running vSphere VM is automatically reconfigured to attach the VMDK file as an extra disk. Once attached, you can log in to the running vSphere VM to rescan and discover this extra disk.
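
For example, with the openstack command-line client (the exact commands are assumptions here; adjust them for the client versions in your deployment), the backing VMDK file is placed on a data store only after the volume is first attached:

$ openstack volume create --size 1 vmdk-demo
$ openstack server add volume INSTANCE_NAME vmdk-demo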

With the update to ESX version 6.0, the VMDK driver now supports NFS version 4.1.

Configuration

The recommended volume driver for OpenStack Block Storage is the VMware vCenter VMDK driver. When you configure the driver, you must match it with the appropriate OpenStack Compute driver from VMware and both drivers must point to the same server.

In the nova.conf file, use this option to define the Compute driver: